# s-Goodness for Low-Rank Matrix Recovery

**Authors:** Lingchen Kong; Levent Tunçel; Naihua Xiu
**Journal:** Abstract and Applied Analysis (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101974

---

## Abstract

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally 𝒩𝒫-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic s-goodness constants, γs and γ^s, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.

---

## Body

## 1. Introduction

Low-rank matrix recovery (LMR for short) is a rank minimization problem (RMP) with linear constraints, or the affine matrix rank minimization problem, which is defined as follows: (1) minimize rank(X), subject to 𝒜X=b, where X∈ℝm×n is the matrix variable, 𝒜:ℝm×n→ℝp is a linear transformation, and b∈ℝp. Although specific instances can often be solved by specialized algorithms, the LMR is 𝒩𝒫-hard. A popular approach for solving LMR in the systems and control community is to minimize the trace of a positive semidefinite matrix variable instead of its rank (see, e.g., [1, 2]). A generalization of this approach to nonsymmetric matrices, introduced by Fazel et al. [3], is the famous convex relaxation of LMR (1), called nuclear norm minimization (NNM): (2) min ∥X∥* s.t. 𝒜X=b, where ∥X∥* is the nuclear norm of X, that is, the sum of its singular values. When m=n and the matrix X:=Diag(x), x∈ℝn, is diagonal, the LMR (1) reduces to sparse signal recovery (SSR), which is the so-called cardinality minimization problem (CMP): (3) min ∥x∥0 s.t. Φx=b, where ∥x∥0 denotes the number of nonzero entries in the vector x and Φ∈ℝm×n is a given sensing matrix. A well-known heuristic for SSR is the ℓ1-norm minimization relaxation (basis pursuit problem): (4) min ∥x∥1 s.t. Φx=b, where ∥x∥1 is the ℓ1-norm of x, that is, the sum of absolute values of its entries.
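To make the relaxation (2) concrete, here is a minimal numerical sketch, assuming NumPy and CVXPY are available; the random instance, the representation of 𝒜 by p sample matrices Ai with 𝒜X=(〈Ai,X〉)i, and all variable names are illustrative, not from the paper. The diagonal special case of the same construction is basis pursuit (4).

```python
# Hedged sketch of nuclear norm minimization (2) on a random instance.
import numpy as np
import cvxpy as cp

m, n, p, s = 8, 6, 45, 2
rng = np.random.default_rng(0)

# Ground-truth s-rank matrix W; the linear map A(X) = (<A_i, X>)_{i=1..p}
# is given by p random matrices A_i, and b = A(W).
W = rng.standard_normal((m, s)) @ rng.standard_normal((s, n))
As = [rng.standard_normal((m, n)) for _ in range(p)]
b = np.array([np.sum(Ai * W) for Ai in As])

# NNM (2): min ||X||_*  s.t.  A(X) = b. With enough generic measurements,
# the recovery is typically exact; this is an illustration, not a guarantee.
X = cp.Variable((m, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(As, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print("relative recovery error:",
      np.linalg.norm(X.value - W) / np.linalg.norm(W))
```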
LMR problems have many applications, and they have appeared in the literature of a diverse set of fields including signal and image processing, statistics, computer vision, and system identification and control. For more details, see the recent paper [4]. LMR and NNM have been the focus of some recent research in the optimization community; see, for example, [4–15]. Although there are many papers dealing with algorithms for NNM, such as interior-point methods, fixed point and Bregman iterative methods, and proximal point methods, there are fewer papers dealing with the conditions that guarantee the success of low-rank matrix recovery via NNM. For instance, following the program laid out in the work of Candès and Tao in compressed sensing (CS; see, e.g., [16–18]), Recht et al. [4] provided a certain restricted isometry property (RIP) condition on the linear transformation which guarantees that the minimum nuclear norm solution is the minimum rank solution. Recht et al. [14, 19] gave the null space property (NSP), which characterizes a particular property of the null space of the linear transformation and is also discussed by Oymak et al. [20, 21]. Note that NSP states a necessary and sufficient condition for exactly recovering the low-rank matrix via nuclear norm minimization. Recently, Chandrasekaran et al. [22] showed that a fixed s-rank matrix X0 can be recovered if and only if the null space of 𝒜 does not intersect the tangent cone of the nuclear norm ball at X0.

In the setting of CS, there are other characterizations of the sensing matrix, in addition to RIP and null space properties, under which ℓ1-norm minimization can be guaranteed to yield an optimal solution to SSR; see, for example, [23–26]. In particular, Juditsky and Nemirovski [24] established necessary and sufficient conditions for a sensing matrix to be "s-good", that is, to allow exact ℓ1-recovery of sparse signals with s nonzero entries when no measurement noise is present. They also demonstrated that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact SSR and to efficiently computable upper bounds on those s for which a given sensing matrix is s-good. Furthermore, they established instructive links between s-goodness and RIP in the CS context. One may wonder whether we can generalize the s-goodness concept to LMR and still maintain many of the nice properties, as done in [24]. Here, we deal with this issue. Our approach is based on the singular value decomposition (SVD) of a matrix and the partition technique generalized from CS. In the next section, following Juditsky and Nemirovski's terminology, we propose definitions of s-goodness and of the G-numbers, γs and γ^s, of a linear transformation in LMR, and then we provide some basic properties of G-numbers. In Section 3, we characterize s-goodness of a linear transformation in LMR via G-numbers. We consider the connections between s-goodness, NSP, and RIP in Section 4. We eventually obtain that δ2s<0.472 ⇒ 𝒜 satisfies NSP ⇔ γ^s(𝒜)<1/2 ⇔ γs(𝒜)<1 ⇔ 𝒜 is s-good.

Let W∈ℝm×n, r:=min{m,n}, and let W=UDiag(σ(W))VT be an SVD of W, where U∈ℝm×r, V∈ℝn×r, and Diag(σ(W)) is the diagonal matrix of σ(W)=(σ1(W),…,σr(W))T, the vector of the singular values of W. Also let Ξ(W) denote the set of pairs of matrices (U,V) in the SVD of W; that is, (5) Ξ(W):={(U,V): U∈ℝm×r, V∈ℝn×r, W=UDiag(σ(W))VT}. For s∈{0,1,2,…,r}, we say W∈ℝm×n is an s-rank matrix to mean that the rank of W is no more than s. For an s-rank matrix W, it is convenient to take W=Um×sWsVn×sT as its SVD, where Um×s∈ℝm×s, Vn×s∈ℝn×s are orthogonal matrices and Ws=Diag((σ1(W),…,σs(W))T). For a vector y∈ℝp, let ∥·∥d be the dual norm of ∥·∥, specified by ∥y∥d:=maxv{〈v,y〉:∥v∥≤1}. In particular, ∥·∥∞ is the dual norm of ∥·∥1 for a vector. Let ∥X∥ denote the spectral (operator) norm of a matrix X∈ℝm×n, that is, the largest singular value of X. In fact, ∥X∥ is the dual norm of ∥X∥*. Let ∥X∥F:=√〈X,X〉=√Tr(XTX) be the Frobenius norm of X, which is equal to the ℓ2-norm of the vector of its singular values. We denote by XT the transpose of X. For a linear transformation 𝒜:ℝm×n→ℝp, we denote by 𝒜*:ℝp→ℝm×n the adjoint of 𝒜.
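The following NumPy fragment is a hedged illustration of the notation just fixed: it computes σ(W) and the nuclear, spectral, and Frobenius norms, and checks numerically that 〈X,W〉 attains ∥W∥* at X=UVT, consistent with ∥·∥ being the dual norm of ∥·∥*.

```python
# Illustrative check of the notation: sigma(W), ||W||_*, ||W||, ||W||_F.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 4))

U, sigma, Vt = np.linalg.svd(W, full_matrices=False)  # W = U Diag(sigma) V^T
nuclear = sigma.sum()                 # ||W||_*: sum of singular values
spectral = sigma[0]                   # ||W||: largest singular value
frob = np.sqrt((sigma ** 2).sum())    # ||W||_F = l2 norm of sigma

assert np.isclose(frob, np.linalg.norm(W, "fro"))
# Duality of ||.|| and ||.||_*: <X, W> <= ||X|| ||W||_*, tight at X = U V^T.
X = U @ Vt
assert np.isclose(np.sum(X * W), nuclear)
```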
## 2. Definitions and Basic Properties

### 2.1. Definitions

We first go over some concepts related to s-goodness of the linear transformation in LMR (RMP). These are extensions of those given for SSR (CMP) in [24].

Definition 1. Let 𝒜:ℝm×n→ℝp be a linear transformation and s∈{0,1,2,…,r}. One says that 𝒜 is s-good if, for every s-rank matrix W∈ℝm×n, W is the unique optimal solution to the optimization problem (6) minX∈ℝm×n{∥X∥*: 𝒜X=𝒜W}.

We denote by s*(𝒜) the largest integer s for which 𝒜 is s-good. Clearly, s*(𝒜)∈{0,1,…,r}. To characterize s-goodness we introduce two useful s-goodness constants, γs and γ^s. We call γs and γ^s G-numbers.

Definition 2. Let 𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞], and s∈{0,1,2,…,r}. Then we have the following. (i) The G-number γs(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with singular value decomposition X=Um×sVn×sT (i.e., s nonzero singular values, all equal to 1), there exists a vector y∈ℝp such that (7) ∥y∥d≤β, 𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×s Um×(r-s)], V=[Vn×s Vn×(r-s)] are orthogonal matrices, and (8) σi(𝒜*y)=1 if σi(X)=1, and σi(𝒜*y)∈[0,γ] if σi(X)=0, for i∈{1,2,…,r}. If no such y exists for some X as above, we set γs(𝒜,β)=+∞. (ii) The G-number γ^s(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that 𝒜*y and X share the same orthogonal row and column spaces and (9) ∥y∥d≤β, ∥𝒜*y-X∥≤γ. If no such y exists for some X as above, we set γ^s(𝒜,β)=+∞. To be compatible with the special case given in [24], we write γs(𝒜), γ^s(𝒜) instead of γs(𝒜,+∞), γ^s(𝒜,+∞), respectively.

From the above definition, we easily see that the set of values that γ takes is closed. Thus, when γs(𝒜,β)<+∞, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that (10) ∥y∥d≤β, σi(𝒜*y)=1 if σi(X)=1, and σi(𝒜*y)∈[0,γs(𝒜,β)] if σi(X)=0, for i∈{1,2,…,r}. Similarly, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y^∈ℝp such that 𝒜*y^ and X share the same orthogonal row and column spaces and (11) ∥y^∥d≤β, ∥𝒜*y^-X∥≤γ^s(𝒜,β). Observing that the set {𝒜*y: ∥y∥d≤β} is convex, we obtain that if γs(𝒜,β)<+∞, then for every matrix X with at most s nonzero singular values and ∥X∥≤1 there exist vectors y satisfying (10) and vectors y^ satisfying (11).
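A small sketch of the objects appearing in Definition 2: matrices X=Um×sVn×sT with s nonzero singular values, all equal to 1, can be generated from orthonormal factors. The helper name below is hypothetical and the construction is only illustrative.

```python
# Construct X = U_{m x s} V_{n x s}^T with s unit singular values.
import numpy as np

def unit_rank_s_matrix(m, n, s, rng):
    """Random X in R^{m x n} whose nonzero singular values are s ones."""
    # Orthonormal factors via QR of Gaussian matrices (reduced QR).
    U, _ = np.linalg.qr(rng.standard_normal((m, s)))
    V, _ = np.linalg.qr(rng.standard_normal((n, s)))
    return U @ V.T

rng = np.random.default_rng(2)
X = unit_rank_s_matrix(6, 5, 2, rng)
print(np.round(np.linalg.svd(X, compute_uv=False), 6))  # -> [1. 1. 0. 0. 0.]
```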
### 2.2. Basic Properties of G-Numbers

In order to characterize the s-goodness of a linear transformation 𝒜, we study the basic properties of G-numbers. We begin with the result that the G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β.

Proposition 3. For every linear transformation 𝒜 and every s∈{0,1,…,r}, the G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β∈[0,+∞].

Proof. We only need to demonstrate that the quantity γs(𝒜,β) is a convex nonincreasing function of β∈[0,+∞]. It is evident from the definition that γs(𝒜,β) is nonincreasing for given 𝒜, s. It remains to show that γs(𝒜,β) is a convex function of β. In other words, for every pair β1,β2∈[0,+∞], we need to verify that (12) γs(𝒜,αβ1+(1-α)β2) ≤ αγs(𝒜,β1)+(1-α)γs(𝒜,β2) for all α∈[0,1]. The above inequality follows immediately if one of β1, β2 is +∞. Thus, we may assume β1,β2∈[0,+∞). From the argument around (10) and the definition of γs(𝒜,·), we know that for every matrix X=UDiag(σ(X))VT with s nonzero singular values, all equal to 1, there exist vectors y1,y2∈ℝp such that, for k∈{1,2}, (13) ∥yk∥d≤βk, σi(𝒜*yk)=1 if σi(X)=1, and σi(𝒜*yk)∈[0,γs(𝒜,βk)] if σi(X)=0, for i∈{1,2,…,r}. It is immediate from (13) that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2. Moreover, from the above information on the singular values of 𝒜*y1, 𝒜*y2, we may write 𝒜*yk=X+Yk, k∈{1,2}, such that (14) XTYk=0, XYkT=0, rank(Yk)≤r-s, ∥Yk∥≤γs(𝒜,βk). This implies that for every α∈[0,1], (15) XT[αY1+(1-α)Y2]=0, X[αY1+(1-α)Y2]T=0, and hence rank[αY1+(1-α)Y2]≤r-s and X and [αY1+(1-α)Y2] have orthogonal row and column spaces. Thus, noting that 𝒜*[αy1+(1-α)y2]=X+αY1+(1-α)Y2, we obtain that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2 and (16) σi(𝒜*(αy1+(1-α)y2))=1 if σi(X)=1, and σi(𝒜*(αy1+(1-α)y2))=σi(αY1+(1-α)Y2) if σi(X)=0, for every α∈[0,1]. Combining this with the fact that (17) ∥αY1+(1-α)Y2∥ ≤ α∥Y1∥+(1-α)∥Y2∥ ≤ αγs(𝒜,β1)+(1-α)γs(𝒜,β2), we obtain the desired conclusion.

The following observation, that the G-numbers γs(𝒜,β), γ^s(𝒜,β) are nondecreasing in s, is immediate.

Proposition 4. For every s′≤s, one has γs′(𝒜,β)≤γs(𝒜,β) and γ^s′(𝒜,β)≤γ^s(𝒜,β).

We further investigate the relationship between the G-numbers γs(𝒜,β) and γ^s(𝒜,β).

Proposition 5. Let 𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞], and s∈{0,1,2,…,r}. Then one has (18) γ:=γs(𝒜,β)<1 ⇒ γ^s(𝒜,(1/(1+γ))β)=γ/(1+γ)<1/2; γ^:=γ^s(𝒜,β)<1/2 ⇒ γs(𝒜,(1/(1-γ^))β)=γ^/(1-γ^)<1.

Proof. Let γ:=γs(𝒜,β)<1. Then, for every matrix Z∈ℝm×n with s nonzero singular values, all equal to 1, there exists y∈ℝp, ∥y∥d≤β, such that 𝒜*y=Z+W, where ∥W∥≤γ and W and Z have orthogonal row and column spaces. For a given pair Z, y as above, take y~:=(1/(1+γ))y. Then we have ∥y~∥d≤(1/(1+γ))β and (19) ∥𝒜*y~-Z∥ ≤ max{1-1/(1+γ), γ/(1+γ)} = γ/(1+γ), where the first term under the maximum comes from the fact that 𝒜*y and Z agree on the subspace corresponding to the nonzero singular values of Z. Therefore, we obtain (20) γ^s(𝒜,(1/(1+γ))β) ≤ γ/(1+γ) < 1/2.

Now, we assume that γ^:=γ^s(𝒜,β)<1/2. Fix orthogonal matrices U∈ℝm×r, V∈ℝn×r. For an s-element subset J of the index set {1,2,…,r}, we define a set SJ with respect to the orthogonal matrices U, V as (21) SJ:={x∈ℝr: ∃y∈ℝp, ∥y∥d≤β, 𝒜*y=UDiag(x)VT with |xi|≤γ^ for i∈J-}. In the above, J- denotes the complement of J. It is immediately seen that SJ is a closed convex set in ℝr. Moreover, we have the following claim.

Claim 1. SJ contains the ∥·∥∞-ball of radius (1-γ^) centered at the origin in ℝr.

Proof. Note that SJ is closed and convex. Moreover, SJ is the direct sum of its projections onto the pair of subspaces (22) LJ:={x∈ℝr: xi=0, i∈J-} and its orthogonal complement LJ⊥={x∈ℝr: xi=0, i∈J}. Let Q denote the projection of SJ onto LJ. Then Q is closed and convex (because of the direct sum property above and the fact that SJ is closed and convex). Note that LJ can be naturally identified with ℝs, and our claim is that the image Q-⊂ℝs of Q under this identification contains the ∥·∥∞-ball Bs of radius (1-γ^) centered at the origin in ℝs. For a contradiction, suppose Bs is not contained in Q-. Then there exists v∈Bs∖Q-. Since Q- is closed and convex, by a separating hyperplane theorem there exists a vector u∈ℝs, ∥u∥1=1, such that (23) uTv>uTv′ for every v′∈Q-. Let z∈ℝr be defined by (24) zi:=sign(ui) if i∈J (under the identification of LJ with ℝs), and zi:=0 otherwise. By definition of γ^=γ^s(𝒜,β), for the s-rank matrix UDiag(z)VT there exists y∈ℝp such that ∥y∥d≤β and (25) 𝒜*y=UDiag(z)VT+W, where W and UDiag(z)VT have the same orthogonal row and column spaces, ∥𝒜*y-UDiag(z)VT∥≤γ^, and ∥σ(𝒜*y)-z∥∞≤γ^. Together with the definitions of SJ and Q-, this means that Q- contains a vector v- with |v-i - sign(ui)|≤γ^ for all i∈{1,2,…,s}. Therefore, (26) uTv- ≥ ∑i=1s |ui|(1-γ^) = (1-γ^)∥u∥1 = 1-γ^. By v∈Bs and the definition of u, we obtain (27) 1-γ^ ≥ ∥v∥∞ = ∥u∥1∥v∥∞ ≥ uTv > uTv- ≥ 1-γ^, where the strict inequality follows from the facts that v-∈Q- and u separates v from Q-. The above string of inequalities is a contradiction, and hence the desired claim holds.
Using the above claim, we conclude that for every J⊆{1,2,…,r} with cardinality s, there exists an x∈SJ such that xi=(1-γ^) for all i∈J. From the definition of SJ, we obtain that there exists y∈ℝp with ∥y∥d≤(1/(1-γ^))β such that (28) 𝒜*y=UDiag(σ(𝒜*y))VT, where σi(𝒜*y)=(1/(1-γ^))xi=1 if i∈J, and σi(𝒜*y)≤γ^/(1-γ^) if i∈J-. Thus, we obtain that (29) γ^:=γ^s(𝒜,β)<1/2 ⇒ γs(𝒜,(1/(1-γ^))β) ≤ γ^/(1-γ^) < 1. To conclude the proof, we need to prove that the inequalities we established, (30) γ^s(𝒜,(1/(1+γ))β) ≤ γ/(1+γ) and γs(𝒜,(1/(1-γ^))β) ≤ γ^/(1-γ^), hold with equality. This is straightforward by an argument similar to the one in the proof of [24, Theorem 1]. We omit it for the sake of brevity.

We end this section with a simple argument which illustrates that, for a given pair (𝒜,s), γs(𝒜,β)=γs(𝒜) and γ^s(𝒜,β)=γ^s(𝒜) for all β large enough.

Proposition 6. Let 𝒜:ℝm×n→ℝp be a linear transformation and β∈[0,+∞]. Assume that for some ρ>0, the image of the unit ∥·∥*-ball in ℝm×n under the mapping X↦𝒜X contains the ball B={x∈ℝp: ∥x∥1≤ρ}. Then for every s∈{1,2,…,r}, (31) β≥1/ρ, γs(𝒜)<1 ⇒ γs(𝒜,β)=γs(𝒜); β≥1/ρ, γ^s(𝒜)<1/2 ⇒ γ^s(𝒜,β)=γ^s(𝒜).

Proof. Fix s∈{1,2,…,r}. We only need to show the first implication. Let γ:=γs(𝒜)<1. Then for every matrix W∈ℝm×n with SVD W=Um×sVn×sT, there exists a vector y∈ℝp such that (32) 𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×s Um×(r-s)], V=[Vn×s Vn×(r-s)] are orthogonal matrices, and (33) σi(𝒜*y)=1 if σi(W)=1, and σi(𝒜*y)∈[0,γ] if σi(W)=0, for i∈{1,2,…,r}. Clearly, ∥𝒜*y∥≤1. That is, (34) 1 ≥ ∥𝒜*y∥ = maxX∈ℝm×n{〈X,𝒜*y〉: ∥X∥*≤1} = maxX∈ℝm×n{〈u,y〉: u=𝒜X, ∥X∥*≤1}. From the inclusion assumption, we obtain that (35) maxX∈ℝm×n{〈u,y〉: u=𝒜X, ∥X∥*≤1} ≥ maxu∈ℝp{〈u,y〉: ∥u∥1≤ρ} = ρ∥y∥∞ = ρ∥y∥d. Combining the above two strings of relations, we derive the desired conclusion.

## 3. s-Goodness and G-Numbers

We first give the following characterization of s-goodness of a linear transformation 𝒜 via the G-number γs(𝒜), which explains the importance of γs(𝒜) in LMR.

Theorem 7. Let 𝒜:ℝm×n→ℝp be a linear transformation, and let s∈{0,1,2,…,r}. Then 𝒜 is s-good if and only if γs(𝒜)<1.

Proof. Suppose 𝒜 is s-good. Let W∈ℝm×n be a matrix of rank s∈{1,2,…,r}. Without loss of generality, let W=Um×sWsVn×sT be its SVD, where Um×s∈ℝm×s, Vn×s∈ℝn×s are orthogonal matrices and Ws=Diag((σ1(W),…,σs(W))T). By the definition of s-goodness of 𝒜, W is the unique solution to the optimization problem (6). Using the first-order optimality conditions, we obtain that there exists y∈ℝp such that the function fy(X)=∥X∥*-yT[𝒜X-𝒜W] attains its minimum value over X∈ℝm×n at X=W. So 0∈∂fy(W), or 𝒜*y∈∂∥W∥*. Using the fact (see, e.g., [27]) (36) ∂∥W∥*={Um×sVn×sT+M: W and M have orthogonal row and column spaces, and ∥M∥≤1}, it follows that there exist matrices Um×(r-s), Vn×(r-s) such that 𝒜*y=UDiag(σi(𝒜*y))VT, where U=[Um×s Um×(r-s)], V=[Vn×s Vn×(r-s)] are orthogonal matrices and (37) σi(𝒜*y)=1 if i∈J, and σi(𝒜*y)∈[0,1] if i∈J-, where J:={i: σi(W)≠0} and J-:={1,2,…,r}∖J. Therefore, the optimal objective value of the optimization problem (38) miny,γ{γ: 𝒜*y∈∂∥W∥*, σi(𝒜*y)=1 if i∈J, σi(𝒜*y)∈[0,γ] if i∈J-} is at most one. For the given W with its SVD W=Um×sWsVn×sT, let (39) Π:={M∈ℝm×n: the SVD of M is M=[Um×s U-m×(r-s)] [0s, 0; 0, σ(M)] [Vn×s V-n×(r-s)]T}. It is easy to see that Π is a subspace, and its normal cone (in the sense of variational analysis; see, e.g., [28] for details) is specified by Π⊥. Thus, the above problem (38) is equivalent to the following convex optimization problem with a set constraint: (40) miny,M{∥M∥: 𝒜*y-Um×sVn×sT-M=0, M∈Π}. We will show that the optimal value is less than 1. For a contradiction, suppose that the optimal value is one. Then, by [28, Theorem 10.1 and Exercise 10.52], there exists a Lagrange multiplier D∈ℝm×n such that the function (41) L(y,M)=∥M∥+〈D,𝒜*y-Um×sVn×sT-M〉+δΠ(M) has unconstrained minimum in (y,M) equal to 1, where δΠ(·) is the indicator function of Π. Let (y*,M*) be an optimal solution. Then, by the optimality condition 0∈∂L, we obtain that (42) 0∈∂yL(y*,M*), 0∈∂ML(y*,M*). Direct calculation yields that (43) 𝒜D=0, 0∈-D+∂∥M*∥+Π⊥. Then there exist DJ∈Π⊥ and DJ-∈∂∥M*∥ such that D=DJ+DJ-. Notice that [29, Corollary 6.4] implies that for DJ-∈∂∥M*∥, we have DJ-∈Π and ∥DJ-∥*≤1. Therefore, 〈D,Um×sVn×sT〉=〈DJ,Um×sVn×sT〉 and 〈D,M*〉=〈DJ-,M*〉. Moreover, 〈DJ-,M*〉≤∥M*∥ by the definition of the dual norm of ∥·∥. This, together with the facts 𝒜D=0, DJ∈Π⊥, and DJ-∈∂∥M*∥⊆Π, yields (44) L(y*,M*) = ∥M*∥-〈DJ-,M*〉+〈D,𝒜*y*〉-〈DJ,Um×sVn×sT〉+δΠ(M*) ≥ -〈DJ,Um×sVn×sT〉+δΠ(M*). Thus, the minimum value of L(y,M) is attained, with L(y*,M*)=-〈DJ,Um×sVn×sT〉, when M*∈Π and 〈DJ-,M*〉=∥M*∥; we obtain that ∥DJ-∥*=1. By assumption, 1=L(y*,M*)=-〈DJ,Um×sVn×sT〉; that is, ∑i=1s(Um×sTDVn×s)ii=-1. Without loss of generality, let the SVD of the optimal M* be M*=U~[0s, 0; 0, σ(M*)]V~T, where U~:=[Um×s U~m×(r-s)] and V~:=[Vn×s V~n×(r-s)]. From the above arguments, we obtain that (i) 𝒜D=0, (ii) ∑i=1s(Um×sTDVn×s)ii=∑i∈J(U~TDV~)ii=-1, (iii) ∑i∈J-(U~TDV~)ii=1. Clearly, for every t∈ℝ, the matrices Xt:=W+tD are feasible in (6). Note that (45) W=Um×sWsVn×sT=[Um×s U~m×(r-s)][Ws, 0; 0, 0][Vn×s V~n×(r-s)]T. Then ∥W∥*=∥U~TWV~∥*=Tr(U~TWV~). From the above equations, we obtain that ∥Xt∥*=∥W∥* for all small enough t>0 (since σi(W)>0, i∈{1,2,…,s}).
Noting that W is the unique optimal solution to (6), we have Xt=W, which means that (U~TDV~)ii=0 for i∈J. This is a contradiction, and hence the desired conclusion holds.

We next prove that 𝒜 is s-good if γs(𝒜)<1. That is, we let W be an s-rank matrix and show that W is the unique optimal solution to (6). Without loss of generality, let W be a matrix of rank s′≠0 with SVD Um×s′Ws′Vn×s′T, where Um×s′∈ℝm×s′, Vn×s′∈ℝn×s′ are orthogonal matrices and Ws′=Diag((σ1(W),…,σs′(W))T). It follows from Proposition 4 that γs′(𝒜)≤γs(𝒜)<1. By the definition of γs′(𝒜), there exists y∈ℝp such that 𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×s′ Um×(r-s′)], V=[Vn×s′ Vn×(r-s′)], and (46) σi(𝒜*y)=1 if σi(W)≠0, and σi(𝒜*y)∈[0,1) if σi(W)=0. Now, we have the optimization problem of minimizing the function (47) f(X)=∥X∥*-yT[𝒜X-𝒜W]=∥X∥*-〈𝒜*y,X〉+∥W∥* over all X∈ℝm×n such that 𝒜X=𝒜W. Note that 〈𝒜*y,X〉≤∥X∥* by ∥𝒜*y∥≤1 and the definition of the dual norm. So f(X)≥∥X∥*-∥X∥*+∥W∥*=∥W∥*, and this function attains its unconstrained minimum in X at X=W. Hence X=W is an optimal solution to (6). It remains to show that this optimal solution is unique. Let Z be another optimal solution to the problem. Then f(Z)-f(W)=∥Z∥*-yT𝒜Z=∥Z∥*-〈𝒜*y,Z〉=0. This, together with the fact ∥𝒜*y∥≤1, implies that there exist SVDs for 𝒜*y and Z such that (48) 𝒜*y=U~Diag(σ(𝒜*y))V~T, Z=U~Diag(σ(Z))V~T, where U~∈ℝm×r and V~∈ℝn×r are orthogonal matrices, and σi(Z)=0 if σi(𝒜*y)≠1. Thus, since σi(𝒜*y)<1 for all i∈{s′+1,…,r}, we must have σi(Z)=σi(W)=0 for those indices. By the two forms of the SVD of 𝒜*y above, Um×s′Vn×s′T=U~m×s′V~n×s′T, where U~m×s′, V~n×s′ are the corresponding submatrices of U~, V~, respectively. Without loss of generality, let (49) U=[u1,u2,…,ur], V=[v1,v2,…,vr], U~=[u~1,u~2,…,u~r], V~=[v~1,v~2,…,v~r], where uj=u~j and vj=v~j for each index j∈{i: σi(𝒜*y)=0, i∈{s′+1,…,r}}. Then we have (50) Z=∑i=1s′ σi(Z)u~iv~iT, W=∑i=1s′ σi(W)uiviT. From Um×s′Vn×s′T=U~m×s′V~n×s′T, we obtain that (51) ∑i=s′+1r σi(𝒜*y)u~iv~iT=∑i=s′+1r σi(𝒜*y)uiviT. Therefore, we deduce (52) ∑i=s′+1, σi(𝒜*y)≠0r σi(𝒜*y)u~iv~iT + ∑i=s′+1, σi(𝒜*y)=0r u~iv~iT = ∑i=s′+1, σi(𝒜*y)≠0r σi(𝒜*y)uiviT + ∑i=s′+1, σi(𝒜*y)=0r uiviT =: Ω. Clearly, the rank of Ω is no less than r-s′≥r-s. From the orthogonality property of U, V and U~, V~, we easily derive that (53) ΩTu~iv~iT=0, ΩTuiviT=0 for all i∈{1,2,…,s′}. Thus, we obtain ΩT(Z-W)=0, which implies that the rank of the matrix Z-W is no more than s. Since γs(𝒜)<1, there exists y~ such that (54) σi(𝒜*y~)=1 if σi(Z-W)≠0, and σi(𝒜*y~)∈[0,1) if σi(Z-W)=0. Therefore, 0=y~T𝒜(Z-W)=〈𝒜*y~,Z-W〉=∥Z-W∥*. Then Z=W.

For the G-number γ^s(𝒜), we directly obtain the following equivalent characterization of s-goodness from Proposition 5 and Theorem 7.

Theorem 8. Let 𝒜:ℝm×n→ℝp be a linear transformation, and s∈{1,2,…,r}. Then 𝒜 is s-good if and only if γ^s(𝒜)<1/2.
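As a quick hedged illustration of Proposition 5 and of the matching thresholds in Theorems 7 and 8: the two G-numbers determine each other through γ^=γ/(1+γ) and γ=γ^/(1-γ^), so γ<1 holds exactly when γ^<1/2. The function names below are purely illustrative.

```python
# Numeric check of the threshold correspondence in Proposition 5 / Theorems 7-8.
def gamma_hat_from_gamma(gamma: float) -> float:
    return gamma / (1.0 + gamma)        # gamma < 1  <=>  gamma_hat < 1/2

def gamma_from_gamma_hat(gamma_hat: float) -> float:
    return gamma_hat / (1.0 - gamma_hat)

for g in [0.0, 0.3, 0.9, 0.999]:
    gh = gamma_hat_from_gamma(g)
    assert gh < 0.5 and abs(gamma_from_gamma_hat(gh) - g) < 1e-9
```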
## 4. s-Goodness, NSP, and RIP

This section deals with the connections between s-goodness, the null space property (NSP), and the restricted isometry property (RIP). We start by establishing the equivalence of NSP and the condition γ^s(𝒜)<1/2. Here, we say 𝒜 satisfies NSP if for every nonzero matrix X∈Null(𝒜) with SVD X=UDiag(σ(X))VT we have (55) ∑i=1s σi(X) < ∑i=s+1r σi(X). For further details, see, for example, [14, 19–21] and references therein.

Proposition 9. For the linear transformation 𝒜, γ^s(𝒜)<1/2 if and only if 𝒜 satisfies NSP.

Proof. We first give an equivalent representation of the G-number γ^s(𝒜,β). We define a compact convex set first: (56) Ps:={Z∈ℝm×n: ∥Z∥*≤s, ∥Z∥≤1}. Let Bβ:={y∈ℝp: ∥y∥d≤β} and B:={X∈ℝm×n: ∥X∥≤1}. By definition, γ^s(𝒜,β) is the smallest γ such that the closed convex set Cγ,β:=𝒜*Bβ+γB contains all matrices with s nonzero singular values, all equal to 1. Equivalently, Cγ,β contains the convex hull of these matrices, namely, Ps. Note that γ satisfies the inclusion Ps⊆Cγ,β if and only if for every X∈ℝm×n, (57) maxZ∈Ps〈Z,X〉 ≤ maxY∈Cγ,β〈Y,X〉 = maxy∈ℝp,W∈ℝm×n{〈X,𝒜*y〉+γ〈X,W〉: ∥y∥d≤β, ∥W∥≤1} = β∥𝒜X∥+γ∥X∥*. For the above, we adopt the convention that whenever β=+∞, β∥𝒜X∥ is defined to be +∞ or 0 depending on whether ∥𝒜X∥>0 or ∥𝒜X∥=0. Thus, Ps⊆Cγ,β if and only if maxZ∈Ps{〈Z,X〉-β∥𝒜X∥}≤γ∥X∥* for every X. Using the homogeneity of this last relation with respect to X, the above is equivalent to (58) maxZ,X{〈Z,X〉-β∥𝒜X∥: Z∈Ps, ∥X∥*≤1} ≤ γ. Therefore, we obtain γ^s(𝒜,β)=maxZ,X{〈Z,X〉-β∥𝒜X∥: Z∈Ps, ∥X∥*≤1}. Furthermore, (59) γ^s(𝒜)=maxZ,X{〈Z,X〉: Z∈Ps, ∥X∥*≤1, 𝒜X=0}.

For X∈ℝm×n with 𝒜X=0, let X=UDiag(σ(X))VT be its SVD. Then we obtain the sum of the s largest singular values of X as (60) ∥X∥s,*=maxZ∈Ps〈Z,X〉. From (59), we immediately obtain that γ^s(𝒜) is the best upper bound on ∥X∥s,* over matrices X∈Null(𝒜) with ∥X∥*≤1. Therefore, γ^s(𝒜)<1/2 implies that the maximum value of the ∥·∥s,*-norm over matrices X∈Null(𝒜) with ∥X∥*=1 is less than 1/2; that is, ∑i=1s σi(X) < (1/2)∑i=1r σi(X). Thus ∑i=1s σi(X) < ∑i=s+1r σi(X), and hence 𝒜 satisfies NSP. Conversely, if 𝒜 satisfies NSP, then the maximum in (59) is attained over its compact feasible set at some X∈Null(𝒜) with ∥X∥*≤1, and (55) gives ∥X∥s,*<(1/2)∥X∥*≤1/2; hence γ^s(𝒜)<1/2. Thus 𝒜 satisfies NSP if and only if γ^s(𝒜)<1/2.
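The representation (59) suggests a simple randomized check, sketched below under stated assumptions (SciPy's null_space routine, 𝒜 stored as a p×mn matrix acting on the column-major vec(X); all names illustrative). Sampling Null(𝒜) yields a lower bound on γ^s(𝒜); any sampled value of ∥X∥s,*/∥X∥* that is at least 1/2 certifies, via Proposition 9 and Theorem 8, that 𝒜 is not s-good. This is only a one-sided test, not a computation of γ^s(𝒜).

```python
# Monte Carlo lower bound on gamma_hat_s(A) via null-space sampling, per (59)-(60).
import numpy as np
from scipy.linalg import null_space

def gamma_hat_lower_bound(A, m, n, s, trials=2000, seed=0):
    """A: p x (m*n) matrix representing the linear map on vec(X)."""
    rng = np.random.default_rng(seed)
    N = null_space(A)                       # orthonormal basis of Null(A)
    best = 0.0
    for _ in range(trials):
        X = (N @ rng.standard_normal(N.shape[1])).reshape(m, n, order="F")
        sigma = np.linalg.svd(X, compute_uv=False)  # descending order
        # ||X||_{s,*} / ||X||_* = value of <Z,X> at the maximizer, with ||X||_*=1.
        best = max(best, sigma[:s].sum() / sigma.sum())
    return best

rng = np.random.default_rng(3)
m, n, p, s = 6, 5, 24, 1
A = rng.standard_normal((p, m * n))
print("lower bound on gamma_hat_s(A):", gamma_hat_lower_bound(A, m, n, s))
```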
Next, we consider the connection between restricted isometry constants and the G-numbers of the linear transformation in LMR. It is well known that, for a nonsingular matrix (transformation) T∈ℝp×p, the RIP constants of 𝒜 and T𝒜 can be very different, as shown by Zhang [30] for the vector case. However, the s-goodness properties of 𝒜 and T𝒜 are always the same for a nonsingular transformation T∈ℝp×p (i.e., s-goodness enjoys scale invariance in this sense). Recall that the s-restricted isometry constant δs of a linear transformation 𝒜 is defined as the smallest constant such that the following holds for all s-rank matrices X∈ℝm×n: (61) (1-δs)∥X∥F^2 ≤ ∥𝒜X∥2^2 ≤ (1+δs)∥X∥F^2. In this case, we say 𝒜 possesses the RI(δs)-property (RIP), as in the CS context. For details, see [4, 31–34] and the references therein.

Proposition 10. Let 𝒜:ℝm×n→ℝp be a linear transformation and s∈{0,1,2,…,r}. For any nonsingular transformation T∈ℝp×p, γ^s(𝒜)=γ^s(T𝒜).

Proof. It follows from the nonsingularity of T that {X: 𝒜X=0}={X: T𝒜X=0}. Then, by the equivalent representation (59) of the G-number, (62) γ^s(𝒜)=maxZ,X{〈Z,X〉: Z∈Ps, ∥X∥*≤1, 𝒜X=0}=maxZ,X{〈Z,X〉: Z∈Ps, ∥X∥*≤1, T𝒜X=0}=γ^s(T𝒜).

For the RIP constant δ2s, Oymak et al. [21] gave the currently best bound, δ2s<0.472, by proposing a general technique for translating results from SSR to LMR. Together with the above arguments, we immediately obtain the following theorem.

Theorem 11. δ2s<0.472 ⇒ 𝒜 satisfies NSP ⇔ γ^s(𝒜)<1/2 ⇔ γs(𝒜)<1 ⇔ 𝒜 is s-good.

Proof. It follows from [21, Theorem 1], Proposition 9, and Theorems 7 and 8.

The above theorem says that s-goodness is a necessary and sufficient condition for recovering the low-rank solution exactly via nuclear norm minimization.

## 5. Conclusion

In this paper, we have shown that s-goodness of the linear transformation in LMR is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization, and that it is equivalent to the null space property. Our analysis is based on the two characteristic s-goodness constants, γs and γ^s, and on variational properties of matrix norms in convex optimization. This shows that s-goodness is an elegant concept for low-rank matrix recovery, although γs and γ^s may not be easy to compute. Development of efficiently computable bounds on these quantities is left to future work. Even though we develop and use techniques based on optimization, convex analysis, and geometry, we do not provide explicit analogues of the results of Donoho [35], where necessary and sufficient conditions for the vector recovery special case were derived based on the geometric notions of face preservation and neighborliness. The corresponding generalization to low-rank recovery is not known, the closest existing result currently being [22]. Moreover, it is also important to consider the semidefinite relaxation (SDR) for rank minimization with a positive semidefinite constraint, since the SDR convexifies nonconvex or discrete optimization problems by removing the rank-one constraint. Another future research topic is to extend the main results and the techniques in this paper to the SDR.
--- ## Abstract Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally𝒩𝒫 hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic s-goodness constants, γs and γ^s, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via the nuclear norm minimization. --- ## Body ## 1. Introduction Low-rank matrix recovery (LMR for short) is a rank minimization problem (RMP) with linear constraints or the affine matrix rank minimization problem which is defined as follows: (1)minimizerank(X),subjectto𝒜X=b, where X∈ℝm×n is the matrix variable, 𝒜:ℝm×n→ℝp is a linear transformation, and b∈ℝp. Although specific instances can often be solved by specialized algorithms, the LMR is 𝒩𝒫 hard. A popular approach for solving LMR in the systems and control community is to minimize the trace of a positive semidefinite matrix variable instead of its rank (see, e.g., [1, 2]). A generalization of this approach to nonsymmetric matrices introduced by Fazel et al. [3] is the famous convex relaxation of LMR (1), which is called nuclear norm minimization (NNM): (2)min∥X∥*s.t.𝒜X=b, where ∥X∥* is the nuclear norm of X, that is, the sum of its singular values. When m=n and the matrix X:=Diag(x), x∈ℝn, is diagonal, the LMR (1) reduces to sparse signal recovery (SSR), which is the so-called cardinality minimization problem (CMP): (3)min∥x∥0s.t.Φx=b, where ∥x∥0 denotes the number of nonzero entries in the vector x and Φ∈ℝm×n is a given sensing matrix. A well-known heuristic for SSR is the ℓ1-norm minimization relaxation (basis pursuit problem): (4)min∥x∥1s.t.Φx=b, where ∥x∥1 is the ℓ1-norm of x, that is, the sum of absolute values of its entries.LMR problems have many applications and they appeared in the literature of a diverse set of fields including signal and image processing, statistics, computer vision, and system identification and control. For more details, see the recent paper [4]. LMR and NNM have been the focus of some recent research in the optimization community, see; for example, [4–15]. Although there are many papers dealing with algorithms for NNM such as interior-point methods, fixed point and Bregman iterative methods, and proximal point methods, there are fewer papers dealing with the conditions that guarantee the success of the low-rank matrix recovery via NNM. For instance, following the program laid out in the work of Candès and Tao in compressed sensing (CS, see, e.g., [16–18]), Recht et al. [4] provided a certain restricted isometry property (RIP) condition on the linear transformation which guarantees that the minimum nuclear norm solution is the minimum rank solution. Recht et al. [14, 19] gave the null space property (NSP) which characterizes a particular property of the null space of the linear transformation, which is also discussed by Oymak et al. [20, 21]. 
Note that NSP states a necessary and sufficient condition for exactly recovering the low-rank matrix via nuclear norm minimization. Recently, Chandrasekaran et al. [22] proposed that a fixed s-rank matrix X0 can be recovered if and only if the null space of 𝒜 does not intersect the tangent cone of the nuclear norm ball at X0.In the setting of CS, there are other characterizations of the sensing matrix, under whichℓ1-norm minimization can be guaranteed to yield an optimal solution to SSR, in addition to RIP and null-space properties, see; for example, [23–26]. In particular, Juditsky and Nemirovski [24] established necessary and sufficient conditions for a Sensing matrix to be “s-good” to allow for exact ℓ1-recovery of sparse signals with s nonzero entries when no measurement noise is present. They also demonstrated that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact SSR and to efficiently computable upper bounds on those s for which a given sensing matrix is s-good. Furthermore, they established instructive links between s-goodness and RIP in the CS context. One may wonder whether we can generalize the s-goodness concept to LMR and still maintain many of the nice properties as done in [24]. Here, we deal with this issue. Our approach is based on the singular value decomposition (SVD) of a matrix and the partition technique generalized from CS. In the next section, following Juditsky and Nemirovski’s terminology, we propose definitions of s-goodness and G-numbers, γs and γ^s, of a linear transformation in LMR and then we provide some basic properties of G-numbers. In Section 3, we characterize s-goodness of a linear transformation in LMR via G-numbers. We consider the connections between the s-goodness, NSP, and RIP in Section 4. We eventually obtain that δ2s<0.472⇒𝒜 satisfying NSP⇔γ^s(𝒜)<1/2⇔γs(𝒜)<1⇔𝒜 is s-good.LetW∈ℝm×n, r:=min{m,n}, and let W=UDiag(σ(W))VT be an SVD of W, where U∈ℝm×r, V∈ℝn×r, and Diag(σ(W)) is the diagonal matrix of σ(W)=(σ1(W),…,σr(W))T which is the vector of the singular values of W. Also let Ξ(W) denote the set of pairs of matrices (U,V) in the SVD of W; that is, (5)Ξ(W):={(σ(W))VT(U,V):U∈ℝm×r,V∈ℝn×r,W=UDiag(σ(W))VT}. For s∈{0,1,2,…,r}, we say W∈ℝm×n is an s-rank matrix to mean that the rank of W is no more than s. For an s-rank matrix W, it is convenient to take W=Um×sWsVn×sT as its SVD where Um×s∈ℝm×s, Vn×s∈ℝn×s are orthogonal matrices and Ws=Diag((σ1(W),…,σs(W))T). For a vector y∈ℝp, let ∥·∥d be the dual norm of ∥·∥ specified by ∥y∥d:=maxv{〈v,y〉:∥v∥≤1}. In particular, ∥·∥∞ is the dual norm of ∥·∥1 for a vector. Let ∥X∥ denote the spectral or the operator norm of a matrix X∈ℝm×n, that is, the largest singular value of X. In fact, ∥X∥ is the dual norm of ∥X∥*. Let ∥X∥F:=〈X,X〉=Tr(XTX) be the Frobenius norm of X, which is equal to the ℓ2-norm of the vector of its singular values. We denote by XT the transpose of X. For a linear transformation 𝒜:ℝm×n→ℝp, we denote by 𝒜*:ℝp→ℝm×n the adjoint of 𝒜. ## 2. Definitions and Basic Properties ### 2.1. Definitions We first go over some concepts related tos-goodness of the linear transformation in LMR (RMP). These are extensions of those given for SSR (CMP) in [24].Definition 1. Let𝒜:ℝm×n→ℝp be a linear transformation and s∈{0,1,2,…,r}. One says that 𝒜 is s-good, if for every s-rank matrix W∈ℝm×n, W is the unique optimal solution to the optimization problem (6)minX∈ℝm×n{∥X∥*:𝒜X=𝒜W}.We denote bys*(𝒜) the largest integer s for which 𝒜 is s-good. Clearly, s*(𝒜)∈{0,1,…,r}. 
To characterize s-goodness we introduce two useful s-goodness constants: γs and γ^s. We call γs and γ^sG-numbers.Definition 2. Let𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞] and s∈{0,1,2,…,r}. Then we have the following. (i)G-number γs(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with singular value decomposition X=Um×sVn×sT (i.e., s nonzero singular values, all equal to 1), there exists a vector y∈ℝp such that (7)∥y∥d≤β,𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×sUm×(r-s)], V=[Vn×sVn×(r-s)] are orthogonal matrices, and (8)σi(𝒜*y){=1,ifσi(X)=1,∈[0,γ],ifσi(X)=0,i∈{1,2,…,r}. If there does not exist such y for some X as above, we set γs(𝒜,β)=+∞. (ii)G-number γ^s(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that 𝒜*y and X share the same orthogonal row and column spaces: (9)∥y∥d≤β,∥𝒜*y-X∥≤γ. If there does not exist such y for some X as above, we set γs(𝒜,β)=+∞ and to be compatible with the special case given by [24] we write γs(𝒜), γ^s(𝒜) instead of γs(𝒜,+∞), γ^s(𝒜,+∞), respectively.From the above definition, we easily see that the set of values thatγ takes is closed. Thus, when γs(𝒜,β)<+∞, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that(10)∥y∥d≤β,σi(𝒜*y){=1,ifσi(X)=1,∈[0,γs(𝒜,β)],ifσi(X)=0,i∈{1,2,…,r}.Similarly, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y^∈ℝp such that 𝒜*y^ and X share the same orthogonal row and column spaces: (11)∥y^∥d≤β,∥𝒜*y^-X∥≤γ^s(𝒜,β). Observing that the set {𝒜*y:∥y∥d≤β} is convex, we obtain that if γs(𝒜,β)<+∞ then for every matrix X with at most s nonzero singular values and ∥X∥≤1 there exist vectors y satisfying (10) and there exist vectors y^ satisfying (11). ### 2.2. Basic Properties ofG-Numbers In order to characterize thes-goodness of a linear transformation 𝒜, we study the basic properties of G-numbers. We begin with the result that G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β.Proposition 3. For every linear transformation𝒜 and every s∈{0,1,…,r}, G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β∈[0,+∞].Proof. We only need to demonstrate that the quantityγs(𝒜,β) is a convex nonincreasing function of β∈[0,+∞]. It is evident from the definition that γs(𝒜,β) is nonincreasing for given 𝒜,s. It remains to show that γs(𝒜,β) is a convex function of β. In other words, for every pair β1,β2∈[0,+∞], we need to verify that (12)γs(𝒜,αβ1+(1-α)β2)≤αγs(𝒜,β1)+(1-α)γs(𝒜,β2),∀α∈[0,1]. The above inequality follows immediately if one of β1, β2 is +∞. Thus, we may assume β1,β2∈[0,+∞). In fact, from the argument around (10) and the definition of γs(𝒜,·), we know that for every matrix X=UDiag(σ(X))VT with s nonzero singular values, all equal to 1, there exist vectors y1,y2∈ℝp such that for k∈{1,2}(13)∥yk∥d≤βk,σi(𝒜*yk){=1,ifσi(X)=1,∈[0,γs(𝒜,βk)],ifσi(X)=0,i∈{1,2,…,r}.It is immediate from (13) that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2. Moreover, from the above information on the singular values of 𝒜*y1, 𝒜*y2, we may set 𝒜*yk=X+Yk, k∈{1,2} such that (14)XTYk=0,XYkT=0,rank(Yk)≤r-s,∥Yk∥≤γs(𝒜,βk). This implies that for every α∈[0,1](15)XT[αY1+(1-α)Y2]=0,X[αY1+(1-α)Y2]T=0, and hence rank[αY1+(1-α)Y2]≤r-s, X, and [αY1+(1-α)Y2] have orthogonal row and column spaces. Thus, noting that 𝒜*[αy1+(1-α)y2]=X+αY1+(1-α)Y2, we obtain that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2 and (16)σi(𝒜*(αy1+(1-α)y2))={1,ifσi(X)=1,σi(αY1+(1-α)Y2),ifσi(X)=0, for every α∈[0,1]. 
Combining this with the fact (17)∥αY1+(1-α)Y2∥≤α∥Y1∥+(1-α)∥Y2∥≤αγs(𝒜,β1)+(1-α)γs(𝒜,β2), we obtain the desired conclusion.The following observation thatG-numbers γs(𝒜,β), γ^s(𝒜,β) are nondecreasing in s is immediate.Proposition 4. For everys′≤s, one has γs′(𝒜,β)≤γs(𝒜,β), γ^s′(𝒜,β)≤γ^s(𝒜,β).We further investigate the relationship between theG-numbers γs(𝒜,β) and γ^s(𝒜,β).Proposition 5. Let𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞], and s∈{0,1,2,…,r}. Then one has (18)γ:=γs(𝒜,β)<1⇒γ^s(𝒜,11+γβ)=γ1+γ<12,γ^:=γ^s(𝒜,β)<12⇒γs(𝒜,11-γ^β)=γ^1-γ^<1.Proof. Letγ:=γs(𝒜,β)<1. Then, for every matrix Z∈ℝm×n with s nonzero singular values, all equal to 1, there exists y∈ℝp, ∥y∥d≤β, such that 𝒜*y=Z+W, where ∥W∥≤γ and W and Z have orthogonal row and column spaces. For a given pair Z,y as above, take y~:=(1/(1+γ))y. Then we have ∥y~∥d≤(1/(1+γ))β and (19)∥𝒜*y~-Z∥≤max{1-11+γ,γ1+γ}=γ1+γ, where the first term under the maximum comes from the fact that 𝒜*y and Z agree on the subspace corresponding to the nonzero singular values of Z. Therefore, we obtain (20)γ^s(𝒜,11+γβ)≤γ1+γ<12. Now, we assume that γ^:=γ^s(𝒜,β)<1/2. Fix orthogonal matrices U∈ℝm×r, V∈ℝn×r. For an s-element subset J of the index set {1,2,…,r}, we define a set SJ with respect to orthogonal matrices U,V as (21)SJ:={VTJ-x∈ℝr:∃y∈ℝp,∥y∥d≤β,𝒜*y=UDiag(x)VTwith|xi|≤γ^fori∈J-}. In the above, J- denotes the complement of J. It is immediately seen that SJ is a closed convex set in ℝr. Moreover, we have the following Claim 1. SJ contains the ∥·∥∞-ball of radius (1-γ^) centered at the origin in ℝr. Proof. Note that SJ is closed and convex. Moreover, SJ is the direct sum of its projections onto the pair of subspaces (22)LJ:={x∈ℝr:xi=0,i∈J-}anditsorthogonalcomplementLJ⊥={x∈ℝr:xi=0,i∈J}. Let Q denote the projection of SJ onto LJ. Then, Q is closed and convex (because of the direct sum property above and the fact that SJ is closed and convex). Note that LJ can be naturally identified with ℝs, and our claim is the image Q-⊂ℝs of Q under this identification that contains the ∥·∥∞-ball Bs of radius (1-γ^) centered at the origin in ℝs. For a contradiction, suppose Bs is not contained in Q-. Then there exists v∈Bs∖Q-. Since Q- is closed and convex, by a separating hyperplane theorem, there exists a vector u∈ℝs,  ∥u∥1=1 such that (23)uTv>uTv′foreveryv′∈Q-. Let z∈ℝr be defined by (24)zi:={1,i∈J,0,otherwise. By definition of γ^=γ^s(𝒜,β), for s-rank matrix UDiag(z)VT, there exists y∈ℝp such that ∥y∥d≤β and (25)𝒜*y=UDiag(z)VT+W, where W and UDiag(z)VT have the same orthogonal row and column spaces, ∥𝒜*y-UDiag(z)VT∥≤γ^ and ∥σ(𝒜*y)-z∥∞≤γ^. Together with the definitions of SJ and Q-, this means that Q- contains a vector v- with |v-i-sign(ui)|≤γ^, ∀i∈{1,2,…,s}. Therefore, (26)uTv-≥∑i=1s‍|ui|(1-γ^)=(1-γ^)∥u∥1=1-γ^. By v∈Bs and the definition of u, we obtain (27)1-γ^≥∥v∥∞=∥u∥1∥v∥∞≥uTv>uTv-≥1-γ^, where the strict inequality follows from the facts that v-∈Q- and u separates v from Q-. The above string of inequalities is a contradiction, and hence the desired claim holds. Using the above claim, we conclude that for everyJ⊆{1,2,…,r} with cardinality s, there exists an x∈SJ such that xi=(1-γ^), for all i∈J. From the definition of SJ, we obtain that there exists y∈ℝp with ∥y∥d≤(1-γ^)-1β such that (28)𝒜*y=UDiag(σ(𝒜*y))VT, where σi(𝒜*y)=(1-γ^)-1xi=1 if i∈J, and σi(𝒜*y)i≤(1-γ^)-1γ^ if i∈J-. Thus, we obtain that (29)γ^s:=γ^s(𝒜,β)<12⇒γs(𝒜,11-γ^β)≤γ^1-γ^<1. 
To conclude the proof, we need to prove that the inequalities we established(30)γ^s(𝒜,11+γ^β)≤γ1+γ,γs(𝒜,11-γ^β)≤γ^1+γ^ are both equations. This is straightforward by an argument similar to the one in the proof of [24, Theorem  1]. We omit it for the sake of brevity.We end this section with a simple argument which illustrates that for a given pair (𝒜,s), γs(𝒜,β)=γs(𝒜) and γ^s(𝒜,β)=γ^s(𝒜), for all β large enough.Proposition 6. Let𝒜:ℝm×n→ℝp be a linear transformation and β∈[0,+∞]. Assume that for some ρ>0, the image of the unit ∥·∥*-ball in ℝm×n under the mapping X↦𝒜X contains the ball B={x∈ℝp:∥x∥1≤ρ}. Then for every s∈{1,2,…,r}(31)β≥1ρ,γs(𝒜)<1⇒γs(𝒜,β)=γs(𝒜),β≥1ρ,γ^s(𝒜)<12⇒γ^s(𝒜,β)=γ^s(𝒜).Proof. Fixs∈{1,2,…,r}. We only need to show the first implication. Let γ:=γs(𝒜)<1. Then for every matrix W∈ℝm×n with its SVD W=Um×sVn×sT, there exists a vector y∈ℝp such that (32)∥y∥d≤β,𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×sUm×(r-s)], V=[Vn×sVn×(r-s)] are orthogonal matrices, and (33)σi(𝒜*y){=1,ifσi(W)=1,∈[0,γ],ifσi(W)=0,i∈{1,2,…,r}. Clearly, ∥𝒜*y∥≤1. That is, (34)1≥∥𝒜*y∥=maxX∈ℝm×n{〈X,𝒜*y〉:∥X∥*≤1}=maxX∈ℝm×n{〈u,y〉:u=𝒜X,∥X∥*≤1}. From the inclusion assumption, we obtain that (35)maxX∈ℝm×n{〈u,y〉:u=𝒜X,∥X∥*≤1}≥maxu∈ℝp{〈u,y〉:∥u∥1≤ρ}=ρ∥y∥∞=ρ∥y∥d. Combining the above two strings of relations, we derive the desired conclusion. ## 2.1. Definitions We first go over some concepts related tos-goodness of the linear transformation in LMR (RMP). These are extensions of those given for SSR (CMP) in [24].Definition 1. Let𝒜:ℝm×n→ℝp be a linear transformation and s∈{0,1,2,…,r}. One says that 𝒜 is s-good, if for every s-rank matrix W∈ℝm×n, W is the unique optimal solution to the optimization problem (6)minX∈ℝm×n{∥X∥*:𝒜X=𝒜W}.We denote bys*(𝒜) the largest integer s for which 𝒜 is s-good. Clearly, s*(𝒜)∈{0,1,…,r}. To characterize s-goodness we introduce two useful s-goodness constants: γs and γ^s. We call γs and γ^sG-numbers.Definition 2. Let𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞] and s∈{0,1,2,…,r}. Then we have the following. (i)G-number γs(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with singular value decomposition X=Um×sVn×sT (i.e., s nonzero singular values, all equal to 1), there exists a vector y∈ℝp such that (7)∥y∥d≤β,𝒜*y=UDiag(σ(𝒜*y))VT, where U=[Um×sUm×(r-s)], V=[Vn×sVn×(r-s)] are orthogonal matrices, and (8)σi(𝒜*y){=1,ifσi(X)=1,∈[0,γ],ifσi(X)=0,i∈{1,2,…,r}. If there does not exist such y for some X as above, we set γs(𝒜,β)=+∞. (ii)G-number γ^s(𝒜,β) is the infimum of γ≥0 such that for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that 𝒜*y and X share the same orthogonal row and column spaces: (9)∥y∥d≤β,∥𝒜*y-X∥≤γ. If there does not exist such y for some X as above, we set γs(𝒜,β)=+∞ and to be compatible with the special case given by [24] we write γs(𝒜), γ^s(𝒜) instead of γs(𝒜,+∞), γ^s(𝒜,+∞), respectively.From the above definition, we easily see that the set of values thatγ takes is closed. Thus, when γs(𝒜,β)<+∞, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y∈ℝp such that(10)∥y∥d≤β,σi(𝒜*y){=1,ifσi(X)=1,∈[0,γs(𝒜,β)],ifσi(X)=0,i∈{1,2,…,r}.Similarly, for every matrix X∈ℝm×n with s nonzero singular values, all equal to 1, there exists a vector y^∈ℝp such that 𝒜*y^ and X share the same orthogonal row and column spaces: (11)∥y^∥d≤β,∥𝒜*y^-X∥≤γ^s(𝒜,β). 
Observing that the set {𝒜*y:∥y∥d≤β} is convex, we obtain that if γs(𝒜,β)<+∞ then for every matrix X with at most s nonzero singular values and ∥X∥≤1 there exist vectors y satisfying (10) and there exist vectors y^ satisfying (11). ## 2.2. Basic Properties ofG-Numbers In order to characterize thes-goodness of a linear transformation 𝒜, we study the basic properties of G-numbers. We begin with the result that G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β.Proposition 3. For every linear transformation𝒜 and every s∈{0,1,…,r}, G-numbers γs(𝒜,β) and γ^s(𝒜,β) are convex nonincreasing functions of β∈[0,+∞].Proof. We only need to demonstrate that the quantityγs(𝒜,β) is a convex nonincreasing function of β∈[0,+∞]. It is evident from the definition that γs(𝒜,β) is nonincreasing for given 𝒜,s. It remains to show that γs(𝒜,β) is a convex function of β. In other words, for every pair β1,β2∈[0,+∞], we need to verify that (12)γs(𝒜,αβ1+(1-α)β2)≤αγs(𝒜,β1)+(1-α)γs(𝒜,β2),∀α∈[0,1]. The above inequality follows immediately if one of β1, β2 is +∞. Thus, we may assume β1,β2∈[0,+∞). In fact, from the argument around (10) and the definition of γs(𝒜,·), we know that for every matrix X=UDiag(σ(X))VT with s nonzero singular values, all equal to 1, there exist vectors y1,y2∈ℝp such that for k∈{1,2}(13)∥yk∥d≤βk,σi(𝒜*yk){=1,ifσi(X)=1,∈[0,γs(𝒜,βk)],ifσi(X)=0,i∈{1,2,…,r}.It is immediate from (13) that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2. Moreover, from the above information on the singular values of 𝒜*y1, 𝒜*y2, we may set 𝒜*yk=X+Yk, k∈{1,2} such that (14)XTYk=0,XYkT=0,rank(Yk)≤r-s,∥Yk∥≤γs(𝒜,βk). This implies that for every α∈[0,1](15)XT[αY1+(1-α)Y2]=0,X[αY1+(1-α)Y2]T=0, and hence rank[αY1+(1-α)Y2]≤r-s, X, and [αY1+(1-α)Y2] have orthogonal row and column spaces. Thus, noting that 𝒜*[αy1+(1-α)y2]=X+αY1+(1-α)Y2, we obtain that ∥αy1+(1-α)y2∥d≤αβ1+(1-α)β2 and (16)σi(𝒜*(αy1+(1-α)y2))={1,ifσi(X)=1,σi(αY1+(1-α)Y2),ifσi(X)=0, for every α∈[0,1]. Combining this with the fact (17)∥αY1+(1-α)Y2∥≤α∥Y1∥+(1-α)∥Y2∥≤αγs(𝒜,β1)+(1-α)γs(𝒜,β2), we obtain the desired conclusion.The following observation thatG-numbers γs(𝒜,β), γ^s(𝒜,β) are nondecreasing in s is immediate.Proposition 4. For everys′≤s, one has γs′(𝒜,β)≤γs(𝒜,β), γ^s′(𝒜,β)≤γ^s(𝒜,β).We further investigate the relationship between theG-numbers γs(𝒜,β) and γ^s(𝒜,β).Proposition 5. Let𝒜:ℝm×n→ℝp be a linear transformation, β∈[0,+∞], and s∈{0,1,2,…,r}. Then one has (18)γ:=γs(𝒜,β)<1⇒γ^s(𝒜,11+γβ)=γ1+γ<12,γ^:=γ^s(𝒜,β)<12⇒γs(𝒜,11-γ^β)=γ^1-γ^<1.Proof. Letγ:=γs(𝒜,β)<1. Then, for every matrix Z∈ℝm×n with s nonzero singular values, all equal to 1, there exists y∈ℝp, ∥y∥d≤β, such that 𝒜*y=Z+W, where ∥W∥≤γ and W and Z have orthogonal row and column spaces. For a given pair Z,y as above, take y~:=(1/(1+γ))y. Then we have ∥y~∥d≤(1/(1+γ))β and (19)∥𝒜*y~-Z∥≤max{1-11+γ,γ1+γ}=γ1+γ, where the first term under the maximum comes from the fact that 𝒜*y and Z agree on the subspace corresponding to the nonzero singular values of Z. Therefore, we obtain (20)γ^s(𝒜,11+γβ)≤γ1+γ<12. Now, we assume that γ^:=γ^s(𝒜,β)<1/2. Fix orthogonal matrices U∈ℝm×r, V∈ℝn×r. For an s-element subset J of the index set {1,2,…,r}, we define a set SJ with respect to orthogonal matrices U,V as (21)SJ:={VTJ-x∈ℝr:∃y∈ℝp,∥y∥d≤β,𝒜*y=UDiag(x)VTwith|xi|≤γ^fori∈J-}. In the above, J- denotes the complement of J. It is immediately seen that SJ is a closed convex set in ℝr. Moreover, we have the following Claim 1. SJ contains the ∥·∥∞-ball of radius (1-γ^) centered at the origin in ℝr. Proof. Note that SJ is closed and convex. 
## 3. s-Goodness and G-Numbers

We first give the following characterization of the s-goodness of a linear transformation $\mathcal{A}$ via the G-number $\gamma_s(\mathcal{A})$, which explains the importance of $\gamma_s(\mathcal{A})$ in LMR.

Theorem 7. Let $\mathcal{A}:\mathbb{R}^{m\times n}\to\mathbb{R}^p$ be a linear transformation and let $s\in\{0,1,2,\dots,r\}$. Then $\mathcal{A}$ is s-good if and only if $\gamma_s(\mathcal{A})<1$.

Proof. Suppose $\mathcal{A}$ is s-good. Let $W\in\mathbb{R}^{m\times n}$ be a matrix of rank $s\in\{1,2,\dots,r\}$.
Without loss of generality, let $W=U_{m\times s}W_sV_{n\times s}^T$ be its SVD, where $U_{m\times s}\in\mathbb{R}^{m\times s}$, $V_{n\times s}\in\mathbb{R}^{n\times s}$ are orthogonal matrices and $W_s=\mathrm{Diag}((\sigma_1(W),\dots,\sigma_s(W))^T)$. By the definition of s-goodness of $\mathcal{A}$, $W$ is the unique solution to the optimization problem (6). Using the first-order optimality conditions, we obtain that there exists $y\in\mathbb{R}^p$ such that the function $f_y(X)=\|X\|_*-y^T[\mathcal{A}X-\mathcal{A}W]$ attains its minimum value over $X\in\mathbb{R}^{m\times n}$ at $X=W$. So $0\in\partial f_y(W)$, that is, $\mathcal{A}^*y\in\partial\|W\|_*$. Using the fact (see, e.g., [27])

$$\partial\|W\|_*=\left\{U_{m\times s}V_{n\times s}^T+M:\ W\text{ and }M\text{ have orthogonal row and column spaces, and }\|M\|\le1\right\},\tag{36}$$

it follows that there exist matrices $U_{m\times(r-s)}$, $V_{n\times(r-s)}$ such that $\mathcal{A}^*y=U\,\mathrm{Diag}(\sigma(\mathcal{A}^*y))\,V^T$, where $U=[U_{m\times s}\ U_{m\times(r-s)}]$, $V=[V_{n\times s}\ V_{n\times(r-s)}]$ are orthogonal matrices and

$$\sigma_i(\mathcal{A}^*y)\begin{cases}=1,&\text{if }i\in J,\\ \in[0,1],&\text{if }i\in\bar J,\end{cases}\tag{37}$$

where $J:=\{i:\sigma_i(W)\ne0\}$ and $\bar J:=\{1,2,\dots,r\}\setminus J$. Therefore, the optimal objective value of the optimization problem

$$\min_{y,\gamma}\left\{\gamma:\ \mathcal{A}^*y\in\partial\|W\|_*,\ \sigma_i(\mathcal{A}^*y)=1\ \text{for }i\in J,\ \sigma_i(\mathcal{A}^*y)\in[0,\gamma]\ \text{for }i\in\bar J\right\}\tag{38}$$

is at most one. For the given $W$ with its SVD $W=U_{m\times s}W_sV_{n\times s}^T$, let

$$\Pi:=\left\{M\in\mathbb{R}^{m\times n}:\ M=[\,U_{m\times s}\ \bar U_{m\times(r-s)}\,]\begin{pmatrix}0_s&0\\0&\mathrm{Diag}(\sigma(M))\end{pmatrix}[\,V_{n\times s}\ \bar V_{n\times(r-s)}\,]^T\ \text{is an SVD of }M\right\}.\tag{39}$$

It is easy to see that $\Pi$ is a subspace and that its normal cone (in the sense of variational analysis; see, e.g., [28] for details) is $\Pi^\perp$. Thus, problem (38) is equivalent to the following convex optimization problem with a set constraint:

$$\min_{y,M}\left\{\|M\|:\ \mathcal{A}^*y-U_{m\times s}V_{n\times s}^T-M=0,\ M\in\Pi\right\}.\tag{40}$$

We will show that the optimal value is less than 1. For a contradiction, suppose that the optimal value is one. Then, by [28, Theorem 10.1 and Exercise 10.52], there exists a Lagrange multiplier $D\in\mathbb{R}^{m\times n}$ such that the function

$$L(y,M)=\|M\|+\langle D,\mathcal{A}^*y-U_{m\times s}V_{n\times s}^T-M\rangle+\delta_\Pi(M)\tag{41}$$

has unconstrained minimum in $(y,M)$ equal to 1, where $\delta_\Pi(\cdot)$ is the indicator function of $\Pi$. Let $(y^*,M^*)$ be an optimal solution. Then, by the optimality condition $0\in\partial L$, we obtain

$$0\in\partial_yL(y^*,M^*),\qquad 0\in\partial_ML(y^*,M^*).\tag{42}$$

Direct calculation yields

$$\mathcal{A}D=0,\qquad 0\in-D+\partial\|M^*\|+\Pi^\perp.\tag{43}$$

Then there exist $D_J\in\Pi^\perp$ and $D_{\bar J}\in\partial\|M^*\|$ such that $D=D_J+D_{\bar J}$. Notice that [29, Corollary 6.4] implies that for $D_{\bar J}\in\partial\|M^*\|$ we have $D_{\bar J}\in\Pi$ and $\|D_{\bar J}\|_*\le1$. Therefore, $\langle D,U_{m\times s}V_{n\times s}^T\rangle=\langle D_J,U_{m\times s}V_{n\times s}^T\rangle$ and $\langle D,M^*\rangle=\langle D_{\bar J},M^*\rangle$. Moreover, $\langle D_{\bar J},M^*\rangle\le\|M^*\|$ by the definition of the dual norm of $\|\cdot\|$. This, together with the facts $\mathcal{A}D=0$, $D_J\in\Pi^\perp$, and $D_{\bar J}\in\partial\|M^*\|\subseteq\Pi$, yields

$$L(y^*,M^*)=\|M^*\|-\langle D_{\bar J},M^*\rangle+\langle D,\mathcal{A}^*y^*\rangle-\langle D_J,U_{m\times s}V_{n\times s}^T\rangle+\delta_\Pi(M^*)\ge-\langle D_J,U_{m\times s}V_{n\times s}^T\rangle+\delta_\Pi(M^*).\tag{44}$$

Thus, the minimum value of $L(y,M)$ is attained, with $L(y^*,M^*)=-\langle D_J,U_{m\times s}V_{n\times s}^T\rangle$, when $M^*\in\Pi$ and $\langle D_{\bar J},M^*\rangle=\|M^*\|$; we obtain $\|D_{\bar J}\|_*=1$. By assumption, $1=L(y^*,M^*)=-\langle D_J,U_{m\times s}V_{n\times s}^T\rangle$; that is, $\sum_{i=1}^s(U_{m\times s}^TDV_{n\times s})_{ii}=-1$. Without loss of generality, let the SVD of the optimal $M^*$ be $M^*=\tilde U\begin{pmatrix}0_s&0\\0&\mathrm{Diag}(\sigma(M^*))\end{pmatrix}\tilde V^T$, where $\tilde U:=[U_{m\times s}\ \tilde U_{m\times(r-s)}]$ and $\tilde V:=[V_{n\times s}\ \tilde V_{n\times(r-s)}]$. From the above arguments, we obtain that

(i) $\mathcal{A}D=0$;
(ii) $\sum_{i=1}^s(U_{m\times s}^TDV_{n\times s})_{ii}=\sum_{i\in J}(\tilde U^TD\tilde V)_{ii}=-1$;
(iii) $\sum_{i\in\bar J}(\tilde U^TD\tilde V)_{ii}=1$.

Clearly, for every $t\in\mathbb{R}$, the matrices $X_t:=W+tD$ are feasible in (6). Note that

$$W=U_{m\times s}W_sV_{n\times s}^T=[\,U_{m\times s}\ \tilde U_{m\times(r-s)}\,]\begin{pmatrix}W_s&0\\0&0\end{pmatrix}[\,V_{n\times s}\ \tilde V_{n\times(r-s)}\,]^T.\tag{45}$$

Then $\|W\|_*=\|\tilde U^TW\tilde V\|_*=\mathrm{Tr}(\tilde U^TW\tilde V)$. From the above equations, we obtain that $\|X_t\|_*=\|W\|_*$ for all small enough $t>0$ (since $\sigma_i(W)>0$ for $i\in\{1,2,\dots,s\}$). Noting that $W$ is the unique optimal solution to (6), we must have $X_t=W$, which implies $(\tilde U^TD\tilde V)_{ii}=0$ for $i\in J$. This contradicts (ii), and hence the desired conclusion holds.

We next prove that $\mathcal{A}$ is s-good if $\gamma_s(\mathcal{A})<1$. That is, we let $W$ be an s-rank matrix and show that $W$ is the unique optimal solution to (6). Without loss of generality, let $W$ be a matrix of rank $s'\ne0$ with SVD $U_{m\times s'}W_{s'}V_{n\times s'}^T$, where $U_{m\times s'}\in\mathbb{R}^{m\times s'}$, $V_{n\times s'}\in\mathbb{R}^{n\times s'}$ are orthogonal matrices and $W_{s'}=\mathrm{Diag}((\sigma_1(W),\dots,\sigma_{s'}(W))^T)$. It follows from Proposition 4 that $\gamma_{s'}(\mathcal{A})\le\gamma_s(\mathcal{A})<1$.
By the definition of $\gamma_{s'}(\mathcal{A})$, there exists $y\in\mathbb{R}^p$ such that $\mathcal{A}^*y=U\,\mathrm{Diag}(\sigma(\mathcal{A}^*y))\,V^T$, where $U=[U_{m\times s'}\ U_{m\times(r-s')}]$, $V=[V_{n\times s'}\ V_{n\times(r-s')}]$, and

$$\sigma_i(\mathcal{A}^*y)\begin{cases}=1,&\text{if }\sigma_i(W)\ne0,\\ \in[0,1),&\text{if }\sigma_i(W)=0.\end{cases}\tag{46}$$

Now consider minimizing the function

$$f(X)=\|X\|_*-y^T[\mathcal{A}X-\mathcal{A}W]=\|X\|_*-\langle\mathcal{A}^*y,X\rangle+\|W\|_*\tag{47}$$

over all $X\in\mathbb{R}^{m\times n}$ such that $\mathcal{A}X=\mathcal{A}W$. Note that $\langle\mathcal{A}^*y,X\rangle\le\|X\|_*$, by $\|\mathcal{A}^*y\|\le1$ and the definition of the dual norm. So $f(X)\ge\|X\|_*-\|X\|_*+\|W\|_*=\|W\|_*$, and this function attains its unconstrained minimum in $X$ at $X=W$. Hence $X=W$ is an optimal solution to (6). It remains to show that this optimal solution is unique. Let $Z$ be another optimal solution to the problem. Then $f(Z)-f(W)=\|Z\|_*-y^T\mathcal{A}Z=\|Z\|_*-\langle\mathcal{A}^*y,Z\rangle=0$. This, together with the fact $\|\mathcal{A}^*y\|\le1$, implies that there exist SVDs of $\mathcal{A}^*y$ and $Z$ sharing singular vectors:

$$\mathcal{A}^*y=\tilde U\,\mathrm{Diag}(\sigma(\mathcal{A}^*y))\,\tilde V^T,\qquad Z=\tilde U\,\mathrm{Diag}(\sigma(Z))\,\tilde V^T,\tag{48}$$

where $\tilde U\in\mathbb{R}^{m\times r}$ and $\tilde V\in\mathbb{R}^{n\times r}$ are orthogonal matrices, and $\sigma_i(Z)=0$ whenever $\sigma_i(\mathcal{A}^*y)\ne1$. Thus, since $\sigma_i(\mathcal{A}^*y)<1$ for all $i\in\{s'+1,\dots,r\}$, we must have $\sigma_i(Z)=\sigma_i(W)=0$ for those indices. By the two forms of the SVD of $\mathcal{A}^*y$ above, $U_{m\times s'}V_{n\times s'}^T=\tilde U_{m\times s'}\tilde V_{n\times s'}^T$, where $\tilde U_{m\times s'}$, $\tilde V_{n\times s'}$ are the corresponding submatrices of $\tilde U$, $\tilde V$, respectively. Without loss of generality, let

$$U=[u_1,u_2,\dots,u_r],\quad V=[v_1,v_2,\dots,v_r],\quad \tilde U=[\tilde u_1,\tilde u_2,\dots,\tilde u_r],\quad \tilde V=[\tilde v_1,\tilde v_2,\dots,\tilde v_r],\tag{49}$$

where $u_j=\tilde u_j$ and $v_j=\tilde v_j$ for every index $j\in\{i:\sigma_i(\mathcal{A}^*y)=0,\ i\in\{s'+1,\dots,r\}\}$. Then we have

$$Z=\sum_{i=1}^{s'}\sigma_i(Z)\,\tilde u_i\tilde v_i^T,\qquad W=\sum_{i=1}^{s'}\sigma_i(W)\,u_iv_i^T.\tag{50}$$

From $U_{m\times s'}V_{n\times s'}^T=\tilde U_{m\times s'}\tilde V_{n\times s'}^T$, we obtain

$$\sum_{i=s'+1}^r\sigma_i(\mathcal{A}^*y)\,\tilde u_i\tilde v_i^T=\sum_{i=s'+1}^r\sigma_i(\mathcal{A}^*y)\,u_iv_i^T.\tag{51}$$

Therefore, we deduce

$$\sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)\ne0}}^r\sigma_i(\mathcal{A}^*y)\,\tilde u_i\tilde v_i^T+\sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)=0}}^r\tilde u_i\tilde v_i^T=\sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)\ne0}}^r\sigma_i(\mathcal{A}^*y)\,u_iv_i^T+\sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)=0}}^ru_iv_i^T=:\Omega.\tag{52}$$

Clearly, the rank of $\Omega$ is no less than $r-s'\ge r-s$. From the orthogonality properties of $U,V$ and $\tilde U,\tilde V$, we easily derive

$$\Omega^T\tilde u_i\tilde v_i^T=0,\qquad \Omega^Tu_iv_i^T=0,\qquad \forall i\in\{1,2,\dots,s'\}.\tag{53}$$

Thus we obtain $\Omega^T(Z-W)=0$, which implies that the rank of the matrix $Z-W$ is no more than $s$. Since $\gamma_s(\mathcal{A})<1$, there exists $\tilde y$ such that

$$\sigma_i(\mathcal{A}^*\tilde y)\begin{cases}=1,&\text{if }\sigma_i(Z-W)\ne0,\\ \in[0,1),&\text{if }\sigma_i(Z-W)=0.\end{cases}\tag{54}$$

Therefore, $0=\tilde y^T\mathcal{A}(Z-W)=\langle\mathcal{A}^*\tilde y,Z-W\rangle=\|Z-W\|_*$, and hence $Z=W$.

For the G-number $\hat\gamma_s(\mathcal{A})$, we directly obtain the following equivalent characterization of s-goodness from Proposition 5 and Theorem 7.

Theorem 8. Let $\mathcal{A}:\mathbb{R}^{m\times n}\to\mathbb{R}^p$ be a linear transformation and $s\in\{1,2,\dots,r\}$. Then $\mathcal{A}$ is s-good if and only if $\hat\gamma_s(\mathcal{A})<1/2$.
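Theorems 7 and 8 characterize exactly when the convex program (6) recovers every s-rank matrix. As a hedged illustration (not from the paper), the sketch below solves (6) for one random rank-s matrix with a random Gaussian $\mathcal{A}$ using the cvxpy package; the sizes are arbitrary, and a single random trial can only probe, never certify, s-goodness:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, s, p = 8, 8, 2, 50                     # illustrative sizes

W = rng.standard_normal((m, s)) @ rng.standard_normal((s, n))  # rank-s matrix
A = rng.standard_normal((p, m * n))          # A acts on vec(X), column-major
b = A @ W.flatten(order="F")

# Nuclear norm minimization (6): min ||X||_*  s.t.  A vec(X) = A vec(W).
X = cp.Variable((m, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), [A @ cp.vec(X) == b])
problem.solve()

print("recovery error:", np.linalg.norm(X.value - W, "fro"))
```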
## 4. s-Goodness, NSP, and RIP

This section deals with the connections between s-goodness, the null space property (NSP), and the restricted isometry property (RIP). We start by establishing the equivalence of NSP and the condition $\hat\gamma_s(\mathcal{A})<1/2$. Here, we say that $\mathcal{A}$ satisfies NSP if for every nonzero matrix $X\in\mathrm{Null}(\mathcal{A})$ with SVD $X=U\,\mathrm{Diag}(\sigma(X))\,V^T$ we have

$$\sum_{i=1}^s\sigma_i(X)<\sum_{i=s+1}^r\sigma_i(X).\tag{55}$$

For further details, see, for example, [14, 19–21] and the references therein.

Proposition 9. For the linear transformation $\mathcal{A}$, $\hat\gamma_s(\mathcal{A})<1/2$ if and only if $\mathcal{A}$ satisfies NSP.

Proof. We first give an equivalent representation of the G-number $\hat\gamma_s(\mathcal{A},\beta)$. Define the compact convex set

$$P_s:=\{Z\in\mathbb{R}^{m\times n}:\|Z\|_*\le s,\ \|Z\|\le1\}.\tag{56}$$

Let $B_\beta:=\{y\in\mathbb{R}^p:\|y\|_d\le\beta\}$ and $B:=\{X\in\mathbb{R}^{m\times n}:\|X\|\le1\}$. By definition, $\hat\gamma_s(\mathcal{A},\beta)$ is the smallest $\gamma$ such that the closed convex set $C_{\gamma,\beta}:=\mathcal{A}^*B_\beta+\gamma B$ contains all matrices with $s$ nonzero singular values, all equal to 1; equivalently, $C_{\gamma,\beta}$ contains the convex hull of these matrices, namely $P_s$. Note that $\gamma$ satisfies the inclusion $P_s\subseteq C_{\gamma,\beta}$ if and only if for every $X\in\mathbb{R}^{m\times n}$

$$\max_{Z\in P_s}\langle Z,X\rangle\le\max_{Y\in C_{\gamma,\beta}}\langle Y,X\rangle=\max_{y,W}\{\langle X,\mathcal{A}^*y\rangle+\gamma\langle X,W\rangle:\|y\|_d\le\beta,\ \|W\|\le1\}=\beta\|\mathcal{A}X\|+\gamma\|X\|_*.\tag{57}$$

Here we adopt the convention that, whenever $\beta=+\infty$, $\beta\|\mathcal{A}X\|$ is $+\infty$ if $\|\mathcal{A}X\|>0$ and $0$ if $\|\mathcal{A}X\|=0$. Thus, $P_s\subseteq C_{\gamma,\beta}$ if and only if $\max_{Z\in P_s}\{\langle Z,X\rangle-\beta\|\mathcal{A}X\|\}\le\gamma\|X\|_*$ for all $X$. Using the homogeneity of this last relation with respect to $X$, it is equivalent to

$$\max_{Z,X}\{\langle Z,X\rangle-\beta\|\mathcal{A}X\|:\ Z\in P_s,\ \|X\|_*\le1\}\le\gamma.\tag{58}$$

Therefore, $\hat\gamma_s(\mathcal{A},\beta)=\max_{Z,X}\{\langle Z,X\rangle-\beta\|\mathcal{A}X\|:Z\in P_s,\ \|X\|_*\le1\}$. Furthermore,

$$\hat\gamma_s(\mathcal{A})=\max_{Z,X}\{\langle Z,X\rangle:\ Z\in P_s,\ \|X\|_*\le1,\ \mathcal{A}X=0\}.\tag{59}$$

For $X\in\mathbb{R}^{m\times n}$ with $\mathcal{A}X=0$, let $X=U\,\mathrm{Diag}(\sigma(X))\,V^T$ be its SVD. Then the sum of the $s$ largest singular values of $X$ is

$$\|X\|_{s,*}=\max_{Z\in P_s}\langle Z,X\rangle.\tag{60}$$

From (59), we immediately obtain that $\hat\gamma_s(\mathcal{A})$ is the best upper bound on $\|X\|_{s,*}$ over matrices $X\in\mathrm{Null}(\mathcal{A})$ with $\|X\|_*\le1$. Therefore, $\hat\gamma_s(\mathcal{A})<1/2$ implies that every $X\in\mathrm{Null}(\mathcal{A})$ with $\|X\|_*=1$ has $\|X\|_{s,*}<1/2$; that is, $\sum_{i=1}^s\sigma_i(X)<\tfrac12\sum_{i=1}^r\sigma_i(X)$, or equivalently $\sum_{i=1}^s\sigma_i(X)<\sum_{i=s+1}^r\sigma_i(X)$, and hence $\mathcal{A}$ satisfies NSP. Conversely, by the same representation it is easy to see that if $\mathcal{A}$ satisfies NSP then $\hat\gamma_s(\mathcal{A})<1/2$.

Next, we consider the connection between restricted isometry constants and G-numbers of the linear transformation in LMR. It is well known that, for a nonsingular matrix (transformation) $T\in\mathbb{R}^{p\times p}$, the RIP constants of $\mathcal{A}$ and $T\mathcal{A}$ can be very different, as shown by Zhang [30] for the vector case. However, the s-goodness properties of $\mathcal{A}$ and $T\mathcal{A}$ are always the same for a nonsingular transformation $T\in\mathbb{R}^{p\times p}$ (i.e., s-goodness enjoys scale invariance in this sense). Recall that the s-restricted isometry constant $\delta_s$ of a linear transformation $\mathcal{A}$ is defined as the smallest constant such that the following holds for all s-rank matrices $X\in\mathbb{R}^{m\times n}$:

$$(1-\delta_s)\|X\|_F^2\le\|\mathcal{A}X\|_2^2\le(1+\delta_s)\|X\|_F^2.\tag{61}$$

In this case, we say $\mathcal{A}$ possesses the RI($\delta_s$)-property (RIP), as in the CS context. For details, see [4, 31–34] and the references therein.

Proposition 10. Let $\mathcal{A}:\mathbb{R}^{m\times n}\to\mathbb{R}^p$ be a linear transformation and $s\in\{0,1,2,\dots,r\}$. For any nonsingular transformation $T\in\mathbb{R}^{p\times p}$, $\hat\gamma_s(\mathcal{A})=\hat\gamma_s(T\mathcal{A})$.

Proof. It follows from the nonsingularity of $T$ that $\{X:\mathcal{A}X=0\}=\{X:T\mathcal{A}X=0\}$. Then, by the equivalent representation (59) of the G-number,

$$\hat\gamma_s(\mathcal{A})=\max_{Z,X}\{\langle Z,X\rangle:Z\in P_s,\|X\|_*\le1,\mathcal{A}X=0\}=\max_{Z,X}\{\langle Z,X\rangle:Z\in P_s,\|X\|_*\le1,T\mathcal{A}X=0\}=\hat\gamma_s(T\mathcal{A}).\tag{62}$$

For the RIP constant $\delta_{2s}$, Oymak et al. [21] gave the currently best bound $\delta_{2s}<0.472$, via a general technique for translating results from SSR to LMR. Together with the above arguments, we immediately obtain the following theorem.

Theorem 11. $\delta_{2s}<0.472\ \Rightarrow\ \mathcal{A}$ satisfies NSP $\Leftrightarrow\ \hat\gamma_s(\mathcal{A})<1/2\ \Leftrightarrow\ \gamma_s(\mathcal{A})<1\ \Leftrightarrow\ \mathcal{A}$ is s-good.

Proof. It follows from [21, Theorem 1], Proposition 9, and Theorems 7 and 8.

The above theorem says that s-goodness is a necessary and sufficient condition for recovering the low-rank solution exactly via nuclear norm minimization.
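Proposition 9 makes NSP checkable in principle through (55) and the Ky Fan norm (60). A sampled test over the null space — which can refute NSP but never certify it, since (59) is a maximization — can be sketched as follows (our illustration; SciPy assumed, sizes arbitrary):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
m, n, s, p = 6, 6, 1, 30
A = rng.standard_normal((p, m * n))          # A acts on vec(X)

N = null_space(A)                            # orthonormal basis of Null(A)
violations = 0
for _ in range(2000):
    X = (N @ rng.standard_normal(N.shape[1])).reshape(m, n)
    sv = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    if sv[:s].sum() >= sv[s:].sum():         # condition (55) fails for this X
        violations += 1
print("sampled NSP violations:", violations)
```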
## 5. Conclusion

In this paper, we have shown that s-goodness of the linear transformation in LMR is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization and that it is equivalent to the null space property. Our analysis is based on the two characteristic s-goodness constants, $\gamma_s$ and $\hat\gamma_s$, and on variational properties of matrix norms in convex optimization. This shows that s-goodness is an elegant concept for low-rank matrix recovery, although $\gamma_s$ and $\hat\gamma_s$ may not be easy to compute; the development of efficiently computable bounds on these quantities is left to future work. Even though we develop and use techniques based on optimization, convex analysis, and geometry, we do not provide explicit analogues to the results of Donoho [35], where necessary and sufficient conditions for the vector recovery special case were derived from the geometric notions of face preservation and neighborliness. The corresponding generalization to low-rank recovery is not known, the closest result currently being [22]. Moreover, it is also important to consider the semidefinite relaxation (SDR) for rank minimization with a positive semidefinite constraint, since the SDR convexifies nonconvex or discrete optimization problems by removing the rank-one constraint. Extending the main results and techniques of this paper to the SDR is another topic for future research.

---
*Source: 101974-2013-04-09.xml*
# Adsorption Properties of NF3 and N2O on Al- and Ga-Doped Graphene Surface: A Density Functional Theory Study

**Authors:** Qilin Yi; Gang Wei; Zhengqin Cao; Xiaoyu Wu; Yuanyuan Gao
**Journal:** Adsorption Science & Technology (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019746

---

## Abstract

The decomposition components of an SF6/N2 gas mixture reflect the operating status inside gas-insulated switchgear (GIS) and can be used for internal fault diagnosis and monitoring. NF3 and N2O are characteristic decomposition components of the SF6/N2 gas mixture. In order to find a potential gas-sensing material for the detection of NF3 and N2O, this paper investigates the adsorption properties of NF3 and N2O on Al- and Ga-doped graphene monolayers based on density functional theory. The analysis of adsorption distance, charge transfer, adsorption energy, energy band structure, and related quantities indicates that Al- and Ga-doped graphene adsorb NF3 and N2O well, so these nanomaterials are potential candidates for monitoring internal GIS faults.

---

## Body

## 1. Introduction

SF6 has been extensively adopted in high-voltage gas-insulated devices due to its superb arc-extinguishing and insulation abilities; however, it has a strong greenhouse effect [1–3]. The addition of N2 can greatly reduce the use of SF6 gas without significantly affecting the insulation performance, which is of great importance for energy conservation and emission reduction [4–7]. Under partial discharge (PD) or partial overheating conditions, however, the SF6/N2 gas mixture produces gases such as NF3, SO2, CO2, and N2O [8]. Monitoring this characteristic decomposition information is a feasible technical route to fault diagnosis of SF6/N2 gas-insulated equipment.

As a research hotspot among gas-sensing materials, graphene has become indispensable thanks to its ultrahigh electron mobility, large specific surface area (SSA), and superb mechanical characteristics [9–12]. Several related studies on other sensing materials are available. He et al. analyzed the sensing properties and electronic characteristics of C2H2 gas on boron nitride nanotubes modified with various transition metal oxide (Fe2O3, TiO2, and NiO) nanoparticles [13]; the conductivity responses on the three modified nanotubes differed, especially on Fe2O3 and TiO2. In studies of the gas-sensing properties of gold-doped single-walled carbon nanotubes toward SO2 and H2S, Chen et al. found that both gases adsorb well on the gold-doped nanotubes [14]. Syaahiran et al. used the DFT method to analyze the gas-sensing performance of Cr-doped (WO3)n (n = 2−4) clusters toward CO, H2S, and H2 [15]; the results indicate that, compared with the undoped clusters, the energy gap of the chromium-doped tungsten oxide clusters (CrWn−1O3n, n = 2−4) decreases, the reactivity increases, and the stability improves. Syaahiran et al. also studied the interaction of CO gas with chromium-doped tungsten oxide/graphene composites [16].
From the energy gap, surface activity, and binding energy, it was found that the chromium-doped tungsten oxide/graphene composite has a strong adsorption effect on CO.

Many scholars have studied gas sensors based on atom-doped graphene (e.g., Mn, Pd, and Pt) to better understand the interaction between graphene materials and gas molecules and to explore the gas-sensing characteristics of doped and intrinsic graphene surfaces. Through DFT calculations, Gui et al. studied the adsorption properties of typical oil-soluble gases in transformers (C2H2, CH4, and CO) on graphene doped with Mn atoms at bridge sites [17]. The gas-sensing mechanism was analyzed using the density of states (DOS) and molecular orbital theory; manganese-doped graphene emerged as a potential gas-sensing substance for the detection of CO and C2H2. According to the literature, gas adsorption is markedly enhanced by the doping of transition metals [18]. It is confirmed that doped graphene shows better gas-sensing performance than intrinsic graphene, and that metal doping remarkably enhances the chemical activity and adsorption performance of graphene.

Accordingly, this paper studies the gas sensitivity of Al-doped and Ga-doped graphene to NF3 and N2O gas molecules based on DFT. This work provides basic gas-sensitivity information to guide the manufacturing of gas sensors and identifies aluminum-doped and gallium-doped graphene as potential candidates for resistive chemical sensors for GIS internal fault diagnosis.

## 2. Computational Details

The present work conducted first-principles calculations with the DMol3 quantum chemistry module of Materials Studio [19–21]. The Perdew–Burke–Ernzerhof (PBE) functional of the generalized gradient approximation (GGA) was used to treat the electron exchange relation [22], and double numerical plus polarization (DNP) was selected as the atomic orbital basis set. The maximum atomic displacement, energy convergence accuracy, orbital smearing, and maximum force were set to 5 × 10⁻³ Å, 1.0 × 10⁻⁵ Ha, 0.005 Ha, and 0.05 eV/Å, respectively [23, 24]. To ensure precision in the total energy, the global orbital cutoff radius and the self-consistent field tolerance were set to 4.5 Å and 1.0 × 10⁻⁶ Ha, respectively, and a 2 × 2 × 1 Brillouin-zone k-point grid was used [25, 26]. Dispersion forces were treated with the DFT-D (Grimme) approach, and the charge transfer amount (Qd) in the adsorption process was determined with the Hirshfeld approach [16, 27]. A total charge Qd > 0 represents the transfer of electrons from gas molecules to the doped graphene surface, while a negative value stands for the opposite electron transfer path. Further, the adsorption energy (Ead) is defined as in [28]:

(1) Ead = E(X-Graphene/gas) − E(X-Graphene) − E(gas),

where E(X-Graphene/gas), E(X-Graphene), and E(gas) represent the energy of the gas adsorbed on the metal-doped graphene, the energy of the Al- or Ga-doped graphene surface, and the energy of the isolated gas molecule, respectively. Generally, Ead > 0 means that the adsorption process does not occur spontaneously, while Ead < 0 means that the adsorption process is spontaneous [29].
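Equation (1) is a plain energy difference; the helper below (our sketch — the input numbers are placeholders, not DFT outputs from the paper) makes the sign convention explicit:

```python
def adsorption_energy(e_surface_gas: float, e_surface: float, e_gas: float) -> float:
    """Equation (1): Ead = E(X-Graphene/gas) - E(X-Graphene) - E(gas).

    Negative Ead: adsorption is spontaneous; positive: non-spontaneous.
    All energies must be in the same unit (eV here).
    """
    return e_surface_gas - e_surface - e_gas

# Placeholder energies, chosen only so the result matches the NF3 value
# reported later in Table 2 (-1.476 eV):
print(adsorption_energy(-1001.476, -900.0, -100.0))
```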
## 3. Results and Discussion

### 3.1. The Optimization of Al-Doped Graphene, Ga-Doped Graphene, NF3, and N2O

First, the adsorption of the aluminum and gallium atoms onto the graphene surface is assessed through the formation energy (Eform). The Eform of an aluminum or gallium atom moving onto the graphene surface is defined as in [30]:

(2) Eform = E(X-Graphene) − E(X) − E(Graphene),

where E(X-Graphene) is the energy of graphene after doping with aluminum or gallium, and E(X) and E(Graphene) are the initial energies of the dopant atom (Al or Ga) and the graphene substrate, respectively. We considered doping configurations at the top, bridge, heart, and replacement sites. Comparing the formation energies of the different doping structures in Table 1, we found the replacement-site configuration to be the most stable.

Table 1: Formation energy of Al- and Ga-doped graphene at T, B, H, and R sites.

| Configuration | Top (T) | Bridge (B) | Heart (H) | Replacement (R) |
| --- | --- | --- | --- | --- |
| E_Al-form (eV) | -1.058 | -0.835 | -1.093 | -1.253 |
| E_Ga-form (eV) | -1.236 | -1.034 | -1.025 | -1.076 |

Figure 1 presents the optimized geometries of aluminum-doped graphene, gallium-doped graphene, NF3, and N2O, with bond angles and bond lengths expressed in ° and Å, respectively. As shown in Figure 1(a), the aluminum-doped graphene surface is a (4 × 4 × 1) supercell with a 20 Å (1 Å = 10⁻¹⁰ m) vacuum layer to reduce the interaction of neighboring clusters and to prevent interactions between planes arising from the periodic boundary conditions [31]. The optimized top view of aluminum-doped graphene is shown in Figure 1(b). The bond length between Al and the three surrounding carbon atoms is 1.746 Å, an increase of 0.321 Å over the carbon-carbon bond before doping, since aluminum has a larger atomic orbital radius than carbon. The ∠C1-Al-C2 angle is 120°, unchanged from before doping. The optimized top view of gallium-doped graphene is shown in Figure 1(c), and the optimized N2O and NF3 gas molecule structures are shown in Figures 1(d) and 1(e).

Figure 1: Optimized geometries of Al- and Ga-doped graphene, N2O, and NF3 ((a) side view of Al-doped graphene; (b) top view of Al-doped graphene; (c) top view of Ga-doped graphene; (d) N2O; (e) NF3).

Figure 2 shows the TDOS of graphene and of Al- and Ga-doped graphene, through which the structural properties of the doped graphene are further analyzed. Compared with undoped graphene, the TDOS charge distribution increases remarkably near the Fermi level after doping with aluminum and gallium, suggesting that aluminum and gallium doping enhances the conductivity of the graphene structure.

Figure 2: The TDOS configuration of graphene and of Al- and Ga-doped graphene.

The band structure of intrinsic graphene is shown in Figure 3(a). The valence band and conduction band are almost tangent at the Fermi level, and the band gap is 0.005 eV, approaching 0 eV. After doping with aluminum atoms, as shown in Figure 3(b), the valence band moves up and intersects the Fermi level (Ef); the band gap of graphene increases markedly, to 0.227 eV, and a new energy level is introduced near Ef, indicating that the electrical and physical properties of graphene change significantly due to the aluminum doping. Doped gallium atoms have similar properties, as shown in Figure 3(c). The influence of aluminum-doped and gallium-doped graphene on gas adsorption characteristics is studied further below.

Figure 3: The band structure of undoped graphene and of Al- and Ga-doped graphene ((a) undoped graphene; (b) Al-graphene; (c) Ga-graphene).
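To make the site comparison behind Table 1 concrete, a small script (ours; Table 1's Al row hard-coded) selects the most stable doping site as the one with the lowest formation energy from equation (2):

```python
def formation_energy(e_doped: float, e_atom: float, e_graphene: float) -> float:
    """Equation (2): Eform = E(X-Graphene) - E(X) - E(Graphene)."""
    return e_doped - e_atom - e_graphene

# Formation energies of Al doping from Table 1 (eV), by site.
e_al = {"Top": -1.058, "Bridge": -0.835, "Heart": -1.093, "Replacement": -1.253}
best_site = min(e_al, key=e_al.get)   # most negative = most stable
print(best_site, e_al[best_site])     # -> Replacement -1.253
```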
### 3.2. The Adsorption Properties of NF3 on Aluminum-Doped Graphene

Several initial approach sites for NF3 over the aluminum-doped site were calculated to obtain the most stable adsorption structure and to further analyze the adsorption of the target gas on aluminum-doped graphene. A characteristic adsorption structure was obtained after optimization; Figure 4 shows its top and side views, and Table 2 lists the adsorption energy, charge transfer, and structural parameters.

Figure 4: Adsorption configuration of NF3 adsorbed on Al-graphene: (a) top view; (b) side view.

Table 2: The Ead, Qt, and structural parameters of NF3 adsorbed on Al-graphene.

| Configuration | Ead (eV) | Qt (e) | d_F1-Al (Å) | d_F1-N (Å) | ∠F1-N-F2 (°) | ∠F2-N-F3 (°) |
| --- | --- | --- | --- | --- | --- | --- |
| F1-Al-Gra | -1.476 | 0.291 | 1.689 | 3.072 | 75.591 | 103.457 |

As observed from the adsorption structure in Figures 4(a) and 4(b), the Al-F1 bond length is 1.689 Å, and the number of electrons transferred from the aluminum-doped graphene surface to NF3 reaches 0.291 e. Notably, the NF3 structure is altered following adsorption: the F1-N bond length increases to 3.072 Å, and the F1-N-F2 angle becomes 75.591°. The Ead of NF3 on the aluminum-doped graphene surface reaches -1.476 eV. From the perspective of electron transfer and adsorption energy, the reaction of NF3 gas is strongest when an F atom approaches the aluminum-doped graphene surface.

Figure 5(a) shows the TDOS of NF3 on the aluminum-doped graphene surface. Compared with the nonadsorbed case, the TDOS shows significant changes at -14 eV, -9.2 eV, -7.3 eV, -5.4 eV, -3.4 eV, and other positions. Because the outermost electrons of the atoms contribute the most during adsorption, only the PDOS of Al-3p, F-2p, and N-2p is discussed. From the PDOS in Figure 5(b), overlapping peaks of these orbitals can be seen at approximately -9.3 eV, -7.3 eV, -3.9 eV, the Fermi level, and 5.2 eV. The PDOS and TDOS analyses imply a strong interaction between NF3 and aluminum-doped graphene.

Figure 5: TDOS before and after NF3 adsorption and PDOS of the main interacting atoms ((a) TDOS; (b) PDOS).

Figure 6 displays the electron density difference of NF3 adsorbed on the aluminum-doped graphene surface from different sides, in which the red and blue areas indicate elevated and reduced electron density, respectively. The charge distribution after gas adsorption can be analyzed intuitively from this density difference. In Figure 6, the red area around the F atoms shows that electrons are received, while the blue areas around the N, Al, and C atoms indicate that electrons are lost owing to the reduction in electron density. The gas molecule thus plays the role of electron acceptor, while the Al-graphene is the electron donor. Therefore, NF3 molecules bring drastic changes in electron density to the surface of aluminum-doped graphene.

Figure 6: The charge difference density of NF3 adsorbed on Al-graphene ((a) F1 and F2 atom section; (b) F3 atom section).

Collectively, based on the DOS, adsorption energy, charge transfer, and structural parameters, together with the electron density difference of NF3 adsorbed on aluminum-doped graphene, the interaction between NF3 and aluminum-doped graphene is evidently very strong.

The band structure of aluminum-doped graphene without adsorbed gas is shown in Figure 7(a), and Figure 7(b) shows the band structure with NF3 molecules adsorbed on the aluminum-doped graphene surface.
From the above, the band structure of aluminum-doped graphene develops an obvious band gap near Ef upon adsorption, a new energy level is introduced at Ef, and many new energy levels are added in the valence band. It can be concluded that these new energy levels arise from the adsorption of the gas on the surface.

Figure 7: The band structure of Al-graphene and of NF3 adsorbed on Al-graphene ((a) Al-graphene; (b) NF3-F-Al-graphene).

### 3.3. The Adsorption Properties of N2O on Aluminum-Doped Graphene

For the adsorption of N2O, the gas molecule approaches the aluminum-doped graphene surface with different atoms. Three characteristic adsorption structures were obtained after geometric optimization, as shown in Figure 8; the parameters of these configurations are listed in Table 3. Figures 8(a) and 8(b) show the top and side views of the M1 configuration. N2O approaches the doped surface with the N1 atom, and the d_ads is 2.035 Å. The N1-N2 bond length of the adsorbed N2O is 1.142 Å, and the N2-O bond length is 1.180 Å; the bond angle does not change from that of the free N2O molecule. Therefore, the N2O structure changes little upon adsorption. In the M1 configuration, the adsorption energy is -1.376 eV, with 0.238 e transferred from the gas molecule onto the aluminum-doped graphene surface, indicating a strong interaction between N2O and the aluminum-doped graphene surface.

Figure 8: Adsorption configurations of N2O adsorbed on Al-graphene ((a) M1 top view; (b) M1 side view; (c) M2 top view; (d) M2 side view; (e) M3 top view; (f) M3 side view).

Table 3: The Ead, Qt, and structural parameters of N2O adsorbed on Al-graphene.

| Configuration | Ead (eV) | Qt (e) | d_ads (Å) | d_N1-N2 (Å) | d_N2-O (Å) |
| --- | --- | --- | --- | --- | --- |
| M1 | -1.376 | 0.238 | 2.035 | 1.143 | 1.180 |
| M2 | -1.407 | 0.212 | 2.080 | 1.148 | 1.180 |
| M3 | -1.240 | 0.204 | 2.157 | 1.214 | 1.133 |

Figures 8(c) and 8(d) show the top and side views of the M2 configuration. N2O approaches the doped surface with the N2 atom, and the d_ads is 2.080 Å. Based on the structural parameters in Table 3, the N2O structure changes little before and after adsorption. The M2 configuration has an adsorption energy of -1.407 eV, indicating that it is more stable than M1. In addition, 0.212 e is transferred from the gas molecule onto the aluminum-doped graphene surface in the M2 configuration, with the charges transferred from the N1, N2, and O atoms being 0.247 e, -0.004 e, and 0.031 e, respectively.

The top and side views of the M3 configuration are displayed in Figures 8(e) and 8(f). N2O approaches the doped surface with the O atom, and the d_ads is 2.157 Å. The N1-N2 bond length of the adsorbed N2O reaches 1.214 Å, slightly longer than the N1-N2 bond (1.141 Å) of the free N2O molecule. The charge transfer in the M3 configuration is 0.204 e, somewhat smaller than in the M1 and M2 configurations. According to the parameters in Table 3, the bond angle and bond lengths change little after adsorption, and the adsorption energy is -1.240 eV, smaller in magnitude than that of the M2 configuration. In conclusion, since the adsorption energy between N2O and the surface is largest in magnitude for the M2 configuration, the M2 system likely exhibits the highest stability. For further verification, this work also analyzed the electron density difference and the DOS.

The TDOS of the M2 configuration is shown in Figure 9(a).
When N2O molecules are adsorbed on the aluminum-doped graphene surface, obvious changes appear around -12.07 eV, -11.01 eV, -5.71 eV, and 0.89 eV. Because the outermost electrons of the atoms contribute the most to adsorption, only the PDOS of Al-3p, N-2p, and O-2p is analyzed. From the PDOS in Figure 9(b), the N-2p and O-2p orbital peaks coincide around -12.15 eV, -10.97 eV, -10.25 eV, -5.62 eV, and 0.86 eV. The PDOS thus indicates potent chemisorption of N2O on aluminum-doped graphene. At the same time, considering that the N-2p orbital contributes the most to the adsorption process, the N2O adsorption structure is most stable in the M2 configuration.

Figure 9: TDOS before and after N2O adsorption and PDOS of the main interacting atoms ((a) TDOS; (b) PDOS).

The electron density difference of the M2 configuration is shown in Figure 10; red and blue areas stand for elevated and reduced electron densities, respectively. The O and N1 atoms receive charge in the adsorption process, while the charge close to the N2 atom declines. There is also an increase in electron density on the aluminum-doped graphene surface. According to the electron density distribution, N2O gains electrons.

Figure 10: The charge difference density of N2O adsorbed on Al-graphene.

The band structure of aluminum-doped graphene without adsorbed gas is shown in Figure 11(a), and Figure 11(b) shows the band structure with N2O molecules adsorbed on the aluminum-doped graphene surface. After N2O adsorption, the energy gap of aluminum-doped graphene decreases, the conduction band near the Fermi level becomes smoother, and new energy levels are introduced; accordingly, the density of states near Ef increases greatly.

Figure 11: The band structure of Al-graphene and of N2O adsorbed on Al-graphene ((a) Al-graphene; (b) N2O-N-Al-graphene).

### 3.4. Prediction of Desorption from Aluminum-Doped Graphene

The decomposition components of the SF6/N2 gas mixture can be desorbed from the sensing material surface under heating. The Ead and d_ads of NF3 and N2O on the aluminum-doped graphene surface are given in Tables 2 and 3, and 375 K, 575 K, and 775 K are taken as the temperature gradient for the desorption time of the sensing material. The desorption time is related to the adsorption energy and the temperature, as shown in [32, 33]:

(3) ε = A⁻¹ exp(−Ead / (R·T)),

in which A is the attempt frequency of the system, generally 10¹² s⁻¹ [34], Ead is the adsorption energy of NF3 or N2O on the aluminum-doped graphene surface, R is the Boltzmann constant (in eV/K), and T is the temperature (in K).

The desorption times of NF3 and N2O gas molecules at 375 K, 575 K, and 775 K are shown in Figure 12. The shortest desorption time of the NF3 molecule, 3.94 × 10⁻³ s, occurs at 775 K; the desorption time increases as the temperature decreases, with a maximum recovery time of 6.77 × 10⁷ s. The desorption time of the N2O molecule is shorter than that of the NF3 molecule: at 375 K and 775 K, the recovery times are 8.01 × 10⁶ s and 1.40 × 10⁻³ s, respectively.

Figure 12: Desorption time at different temperatures.
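With A = 10¹² s⁻¹ and R taken as the Boltzmann constant in eV/K (8.617 × 10⁻⁵ eV/K, our assumption), equation (3) reproduces the desorption times quoted above to within rounding; the following sketch makes the arithmetic explicit:

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K (the paper's "R"; our assumption)
A = 1e12         # attempt frequency in s^-1

def desorption_time(e_ad: float, temperature: float) -> float:
    """Equation (3): epsilon = A^-1 * exp(-Ead / (R*T)); Ead < 0 for binding."""
    return math.exp(-e_ad / (K_B * temperature)) / A

for gas, e_ad in (("NF3", -1.476), ("N2O", -1.407)):
    for t in (375, 575, 775):
        print(f"{gas} at {t} K: {desorption_time(e_ad, t):.2e} s")
```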
### 3.5. Adsorption of NF3 and N2O Gases on Gallium-Doped Graphene

Since the Ga atom has the same number of outermost electrons as the Al atom, Ga doping is also considered. Its adsorption modes mirror those on Al-doped graphene, so only the configurations in which NF3 approaches Ga-doped graphene with an F atom and N2O approaches with an N atom are considered. As shown in Figures 13(a) and 13(b), NF3 adsorbs on the Ga-doped graphene surface with a d_ads of 1.801 Å and a charge transfer of 0.287 e. As shown in Figures 13(c) and 13(d), N2O adsorbs on the Ga-doped graphene surface with a d_ads of 2.285 Å and a charge transfer of 0.227 e. As seen in Table 4, the adsorption energies of both gases are smaller in magnitude than those on the Al-doped graphene surface. The adsorption parameters nevertheless show a strong reaction of both NF3 and N2O molecules on the Ga-doped graphene surface.

Figure 13: The adsorption configurations of NF3 and N2O adsorbed on Ga-graphene ((a) X1 top view; (b) X1 side view; (c) X2 top view; (d) X2 side view).

Table 4: The Ead, Qt, and structural parameters of NF3 and N2O adsorbed on Ga-graphene.

| Configuration | Ead (eV) | Qt (e) | d_ads (Å) |
| --- | --- | --- | --- |
| X1 | -1.356 | 0.287 | 1.801 |
| X2 | -1.280 | 0.227 | 2.285 |

Figures 14(a)–14(d) show the TDOS and PDOS of NF3 and N2O on the Ga-doped graphene surface. For the NF3 molecule, a new peak contributed by NF3 appears at Ef in the PDOS, where the 2p orbital of the N atom, the 2p orbitals of the F atoms, and the 3p orbital of the Ga atom are coupled. For the N2O molecule, the PDOS shows no new peak at Ef, but a new peak occurs in the conduction band region and hybridizes with the 3p orbital of the Ga atom, indicating that the N2O molecule interacts strongly with Ga-doped graphene.

Figure 14: The density of states of NF3 and N2O adsorbed on Ga-graphene ((a) NF3 TDOS; (b) NF3 PDOS; (c) N2O TDOS; (d) N2O PDOS).

Figures 15(a) and 15(b) show the charge difference density of NF3 and of N2O on Ga-doped graphene, respectively. After the adsorption of NF3, the region near the F atoms turns red, indicating that the charge increases, while the region near the N atom turns blue, indicating that the charge decreases. For N2O, the charge near the N1 and O atoms increases, and the electron concentration near the N2 atom and the graphene surface decreases, indicating that the gas gains charge.

Figure 15: The charge difference density of NF3 and N2O adsorbed on Ga-graphene ((a) NF3 on Ga-graphene; (b) N2O on Ga-graphene).
## 4. Conclusions

The adsorption characteristics of NF3 and N2O molecules on Al- and Ga-doped graphene surfaces were studied in this paper, with the aim of finding a resistive chemical-sensor material for GIS internal fault diagnosis. The most likely adsorption mode of the NF3 molecule is with an F atom close to Al-doped graphene; this mode exhibits large charge transfer and adsorption energy, a short adsorption distance, and a strong interaction. The most likely adsorption mode of N2O molecules on Al-doped graphene is the M2 mode, approaching with the N atom; the adsorption parameters likewise show a very strong interaction between N2O molecules and Al-doped graphene. As the Ga atom belongs to the same group of elements as the Al atom, simulations for Ga doping were also carried out. The results show that the adsorption parameters of Ga-doped graphene are similar to those of Al-doped graphene, so Ga-doped graphene may also serve as a sensing material.

The present work sheds light on the association between the characteristic decomposition components of the SF6/N2 gas mixture and Al- and Ga-doped graphene, and provides a theoretical foundation for the adsorption of these characteristic decomposition components on Al- and Ga-doped graphene.

---
*Source: 1019746-2022-11-15.xml*
--- ## Abstract SF6/N2 gas mixture decomposition components can reflect the operation status inside GIS, and be used for fault diagnosis and monitoring inside GIS. NF3 and N2O are the characteristic decomposition components of SF6/N2 mixed gas. In order to find a potential gas sensitivity material for the detection of NF3 and N2O. This paper investigated the adsorption properties of NF3 and N2O on Al- and Ga- doped graphene monolayers based on density functional theory. Through the analysis of adsorption distance, charge transfer, adsorption energy, energy band structure, etc., the results indicated that the adsorption effect of Al- and Ga-doped graphene to NF3 and N2O are probably good, and these nanomaterials are potential to apply for the monitoring of GIS internal faults. --- ## Body ## 1. Introduction SF6 has been extensively adopted for high-voltage gas insulation devices due to its superb arc extinguishing and insulation abilities. However, it has a strong greenhouse effect [1–3]. Concurrently, the addition of N2 can greatly reduce the use of SF6 gas without significantly affecting the insulation performance, which is of great importance in achieving energy conservation and emission reduction [4–7]. However, SF6/N2 mixed gas under partial discharge (PD) or partial overheating conditions will produce gases such as NF3, SO2, CO2, and N2O [8]. It is a feasible technical method to comprehend the fault diagnosis of SF6/N2 mixed gas insulation equipment by monitoring this characteristic component decomposition information.As the research hotspot of gas sensing materials in the sensor field, graphene has become an irreplaceable material with its ultrahigh electron mobility and specific surface area (SSA), along with superb mechanical characteristics [9–12]. Compared to other sensing materials, the following documents are available. According to He et al. C2H2 gas was analyzed for its sensing properties as well as electronic characteristics on diverse boron nitride nanotubes-modified transition metal oxide (Fe2O3, TiO2, and NiO) nanoparticles [13]. It was found that the conductivity of C2H2 gas on the three transition metal oxides and modified boron nitride nanotubes was different, especially on Fe2O3 and TiO2. In his studies of the gas sensing properties of single-walled carbon nanotubes doped with gold atoms for SO2 and H2S. Chen et al. found that SO2 and H2S have good adsorption properties on gold doped single-walled carbon nanotubes [14]. Syaahiran et al. implemented the DFT method to analyze CO, H2S, and H2 for their gas sensing performances on (WO3) n (n=2−4) doped Cr [15]. The results indicate that when compared with undoped clusters (WO3) n (n=2−4), the energy gap of chromium-doped tungsten oxide clusters decreases (CrWn−1O3n) (n=2−4), the reactivity increases and the stability is improved. Syaahiran et al. studied the interaction of CO gas on chromium doped-tungsten oxide/graphene composites [16]. From the energy gap, surface activity, and binding energy, it is found that chromium-doped tungsten oxide/graphene composite has a strong adsorption effect on CO.Many scholars have studied the gas sensors of atom doped graphene, such as Mn, Pd, and Pt, to better study the interaction between graphene materials and gas molecules and to explore the gas-sensing characteristics of gases onto doped and intrinsic graphene surfaces. Gui et al. 
studied, via DFT calculations, the adsorption properties of typical oil-soluble gases in transformers (C2H2, CH4, and CO) on graphene doped with Mn atoms at bridge sites [17]. The gas-sensing mechanism was analyzed using the density of states (DOS) and molecular orbital theory, and manganese-doped graphene emerged as a potential gas-sensing material for detecting CO and C2H2. According to the literature, gas adsorption becomes more pronounced with transition metal doping [18]. It is confirmed that doped graphene shows better gas-sensing performance than intrinsic graphene, and that metal doping remarkably enhances the chemical activity and adsorption performance of graphene.

Motivated by the above, this paper studies the gas sensitivity of Al-doped and Ga-doped graphene to NF3 and N2O molecules based on DFT. This work provides basic gas-sensitivity information to guide the manufacture of gas sensors and proposes aluminum- or gallium-doped graphene as a potential candidate for resistive chemical sensors for GIS internal fault diagnosis.

## 2. Computational Details

The first-principles calculations in this work were performed with the Dmol3 quantum chemistry module of Materials Studio [19–21]. The Perdew-Burke-Ernzerhof (PBE) functional of the generalized gradient approximation (GGA) was used to treat electron exchange and correlation [22], and the double numerical plus polarization (DNP) basis set was selected as the atomic orbital basis. The maximum atomic displacement, energy convergence accuracy, orbital occupation smearing, and maximum force were set to 5 × 10⁻³ Å, 1.0 × 10⁻⁵ Ha, 0.005 Ha, and 0.05 eV/Å, respectively [23, 24]. To ensure precision in the total energy, the global orbital cutoff radius and the self-consistent field tolerance were set to 4.5 Å and 1.0 × 10⁻⁶ Ha, respectively. In addition, a 2 × 2 × 1 Brillouin-zone k-point grid was used [25, 26]. Dispersion forces were treated with the DFT-D (Grimme) approach, and the charge transfer (Qd) during adsorption was determined with the Hirshfeld method [16, 27]. A total charge Qd > 0 represents electron transfer from the gas molecule to the doped graphene surface, while a negative value indicates the opposite transfer direction. The adsorption energy (Ead) is defined as [28]:

(1) E_ad = E_(X-graphene/gas) − E_(X-graphene) − E_gas,

where E_(X-graphene/gas), E_(X-graphene), and E_gas are the total energy of the gas adsorbed on the metal-doped graphene, the energy of the clean Al- or Ga-doped graphene surface, and the energy of the isolated gas molecule, respectively. Generally, Ead > 0 means that adsorption does not occur spontaneously, while Ead < 0 means that adsorption is spontaneous [29].
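To make the sign convention in equation (1) concrete, the following minimal Python sketch evaluates the adsorption energy from three total energies. The numeric inputs are placeholders chosen purely for illustration, not results from this paper.

```python
def adsorption_energy(e_surface_gas: float, e_surface: float, e_gas: float) -> float:
    """Equation (1): E_ad = E(doped graphene + gas) - E(doped graphene) - E(gas), in eV."""
    return e_surface_gas - e_surface - e_gas

# Hypothetical DFT total energies (eV), purely for illustration:
e_ad = adsorption_energy(-31250.276, -31100.500, -148.300)
print(f"E_ad = {e_ad:.3f} eV")  # -1.476 eV with these made-up inputs; negative => spontaneous
```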
## 3. Results and Discussion

### 3.1. The Optimization of Al-Doped Graphene, Ga-Doped Graphene, NF3, and N2O

First, the adsorption behavior of aluminum and gallium atoms on the graphene surface is assessed through the formation energy (Eform) of the dopant on the surface, defined as [30]:

(2) E_form = E_(X-graphene) − E_X − E_graphene,

where E_(X-graphene) is the energy of graphene after doping with aluminum or gallium, and E_X and E_graphene are the initial energies of the isolated dopant atom and the graphene substrate, respectively. We considered doping configurations at the top, bridge, heart, and replacement sites. Comparing the formation energies of the different doping structures in Table 1, the replacement-site configuration is the most stable.

Table 1: Formation energy of Al- and Ga-doped graphene at the T, B, H, and R sites.

| Configuration | Top (T) | Bridge (B) | Heart (H) | Replacement (R) |
| --- | --- | --- | --- | --- |
| E_Al-form (eV) | -1.058 | -0.835 | -1.093 | -1.253 |
| E_Ga-form (eV) | -1.236 | -1.034 | -1.025 | -1.076 |

Figure 1 presents the optimized geometries of aluminum-doped graphene, gallium-doped graphene, NF3, and N2O, with bond angles in degrees and bond lengths in Å. As shown in Figure 1(a), the aluminum-doped graphene surface is a (4 × 4 × 1) supercell with a 20 Å (1 Å = 10⁻¹⁰ m) vacuum layer to reduce the interaction of neighboring clusters and to prevent interactions between periodic images [31]. The optimized top view of aluminum-doped graphene is shown in Figure 1(b). The bond length between Al and the three surrounding carbon atoms is 1.746 Å, an increase of 0.321 Å over the carbon-carbon bond before doping, since aluminum has a larger atomic radius than carbon. The ∠C1-Al-C2 angle is 120°, unchanged from before doping. The optimized top view of gallium-doped graphene is shown in Figure 1(c), and the optimized NF3 and N2O gas molecules are shown in Figures 1(d) and 1(e).

Figure 1: Optimized geometries of Al- and Ga-doped graphene, N2O, and NF3 ((a) side view of Al-doped graphene; (b) top view of Al-doped graphene; (c) top view of Ga-doped graphene; (d) N2O; (e) NF3).

Figure 2 shows the total density of states (TDOS) of pristine, Al-doped, and Ga-doped graphene, used to further analyze the structural properties of the doped sheets. Compared with undoped graphene, the TDOS increases remarkably near the Fermi level after doping with aluminum or gallium, suggesting that the doping enhances the conductivity of the graphene structure.

Figure 2: The TDOS of graphene and of Al- and Ga-doped graphene.

The band structure of intrinsic graphene is shown in Figure 3(a): the valence and conduction bands are almost tangent at the Fermi level, with a band gap of 0.005 eV, approaching 0 eV. After doping with aluminum atoms (Figure 3(b)), the valence band moves up and intersects the Fermi level (Ef), and the band gap increases markedly to 0.227 eV. At the same time, a new energy level is introduced near Ef, indicating that the electrical and physical properties of graphene change significantly upon aluminum doping. Gallium doping behaves similarly (Figure 3(c)). The influence of aluminum and gallium doping on the gas adsorption characteristics is studied further below.

Figure 3: Band structures of undoped, Al-doped, and Ga-doped graphene ((a) undoped graphene; (b) Al-graphene; (c) Ga-graphene).
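The geometry setup described above can also be reproduced outside Materials Studio. As a hedged illustration, the following sketch uses the ASE library (our choice of tool, not the authors') to build a 4 × 4 graphene supercell with a replacement-site dopant; the file name is arbitrary.

```python
from ase.build import graphene

# 4x4 graphene supercell; vacuum=10.0 pads ~10 A on each side of the sheet,
# giving roughly the 20 A separation between periodic images described above.
sheet = graphene(size=(4, 4, 1), vacuum=10.0)

# Replacement-site doping: substitute one carbon atom with Al (use 'Ga' analogously).
sheet[0].symbol = 'Al'

sheet.write('al_doped_graphene.xyz')  # export for relaxation in a DFT code of choice
```

Any periodic DFT code could then relax this structure; the Dmol3 settings from Section 2 would apply at that stage.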
### 3.2. The Adsorption Properties of NF3 on Aluminum-Doped Graphene

To analyze the adsorption of the target gas on aluminum-doped graphene, different initial approach sites of NF3 toward the doped site were calculated to obtain the most stable adsorption structure. A characteristic adsorption structure was obtained after optimization; Figure 4 shows its top and side views, and Table 2 lists the adsorption energy, charge transfer, and structural parameters.

Figure 4: Adsorption configuration of NF3 adsorbed on Al-graphene: (a) top view; (b) side view.

Table 2: The Ead, Qt, and structural parameters of NF3 adsorbed on Al-graphene.

| Configuration | Ead (eV) | Qt (e) | d_F1-Al (Å) | d_F1-N (Å) | ∠F1-N-F2 (°) | ∠F2-N-F3 (°) |
| --- | --- | --- | --- | --- | --- | --- |
| F1-Al-Gra | -1.476 | 0.291 | 1.689 | 3.072 | 75.591 | 103.457 |

As observed from the adsorption structure in Figures 4(a) and 4(b), the Al-F1 bond length is 1.689 Å, and 0.291 e is transferred from the aluminum-doped graphene surface to NF3. Notably, the NF3 structure changes upon adsorption: the F1-N bond length increases to 3.072 Å, and the F1-N-F2 angle becomes 75.591°. The Ead of NF3 on the aluminum-doped graphene surface reaches -1.476 eV. From the perspective of electron transfer and adsorption energy, the reaction of NF3 is strongest when an F atom approaches the aluminum-doped graphene surface.

Figure 5(a) shows the TDOS for NF3 on the aluminum-doped graphene surface. Compared with the system before gas adsorption, the TDOS changes significantly at -14 eV, -9.2 eV, -7.3 eV, -5.4 eV, -3.4 eV, and other positions. Because the outermost electrons of the atoms contribute most during adsorption, only the PDOS of Al-3p, F-2p, and N-2p is discussed. From the PDOS in Figure 5(b), these orbitals show overlapping peaks at approximately -9.3 eV, -7.3 eV, -3.9 eV, the Fermi level, and 5.2 eV. The PDOS and TDOS analyses imply a strong interaction between NF3 and aluminum-doped graphene.

Figure 5: TDOS before and after NF3 adsorption and PDOS of the main interacting atoms ((a) TDOS; (b) PDOS).

Figure 6 displays the electron density difference of NF3 adsorbed on aluminum-doped graphene from different cross sections, in which red and blue areas indicate increased and decreased electron density, respectively. The charge redistribution after gas adsorption can be read off intuitively: the red area around the F atoms shows that they gain electrons, while the blue areas around the N, Al, and C atoms indicate electron loss. The gas molecule thus acts as an electron acceptor, and Al-graphene as the electron donor. NF3 adsorption therefore brings drastic changes in electron density to the aluminum-doped graphene surface.

Figure 6: The charge density difference of NF3 adsorbed on Al-graphene ((a) F1 and F2 atom section; (b) F3 atom section).

Collectively, based on the DOS, adsorption energy, charge transfer, and the electron density difference of adsorbed NF3, the interaction between NF3 and aluminum-doped graphene is evidently very strong.

The band structure of aluminum-doped graphene without adsorbed gas is shown in Figure 7(a), and Figure 7(b) shows the band structure with NF3 adsorbed on the surface. As discussed above, aluminum-doped graphene has an obvious band gap near Ef, with a new energy level introduced at Ef. After adsorption, many new energy levels appear in the valence band.
It can be concluded that these new energy levels arise from the adsorption of the gas on the surface.

Figure 7: Band structures of Al-graphene and of NF3 adsorbed on Al-graphene ((a) Al-graphene; (b) NF3-F-Al-graphene).

### 3.3. The Adsorption Properties of N2O on Aluminum-Doped Graphene

For N2O adsorption, the gas molecule approaches the aluminum-doped graphene surface with different atoms first. Three characteristic adsorption structures were obtained after geometric optimization, as shown in Figure 8; their parameters are listed in Table 3. Figures 8(a) and 8(b) show the top and side views of the M1 configuration, in which N2O approaches the doped surface with the N1 atom and the adsorption distance (dads) is 2.035 Å. The N1-N2 bond length of the adsorbed N2O is 1.142 Å, and the N2-O bond length is 1.180 Å. The bond angle does not change and is indistinguishable from that of the free N2O molecule, so the N2O structure is only slightly altered by adsorption. In the M1 configuration the adsorption energy is -1.376 eV, with 0.238 e transferred from the gas molecule to the aluminum-doped graphene surface, indicating a strong interaction between N2O and the surface.

Figure 8: Adsorption configurations of N2O adsorbed on Al-graphene ((a) M1 top view; (b) M1 side view; (c) M2 top view; (d) M2 side view; (e) M3 top view; (f) M3 side view).

Table 3: The Ead, Qt, and structural parameters of N2O adsorbed on Al-graphene.

| Configuration | Ead (eV) | Qt (e) | d_ads (Å) | d_N1-N2 (Å) | d_N2-O (Å) |
| --- | --- | --- | --- | --- | --- |
| M1 | -1.376 | 0.238 | 2.035 | 1.143 | 1.180 |
| M2 | -1.407 | 0.212 | 2.080 | 1.148 | 1.180 |
| M3 | -1.240 | 0.204 | 2.157 | 1.214 | 1.133 |

Figures 8(c) and 8(d) show the top and side views of the M2 configuration, in which N2O approaches the doped surface with the N2 atom and dads is 2.080 Å. Based on the structural parameters in Table 3, the N2O structure changes little upon adsorption. The M2 configuration has an adsorption energy of -1.407 eV, indicating that it is more stable than M1. In addition, 0.212 e is transferred from the gas molecule to the aluminum-doped graphene surface in the M2 configuration, with the charges transferred from the N1, N2, and O atoms being 0.247 e, -0.004 e, and 0.031 e, respectively.

The top and side views of the M3 configuration are displayed in Figures 8(e) and 8(f). Here N2O approaches the doped surface with the O atom, and dads is 2.157 Å. The N1-N2 bond length of the adsorbed N2O reaches 1.214 Å, slightly longer than the N1-N2 bond (1.141 Å) of the free molecule. The charge transfer in the M3 configuration is 0.204 e, slightly lower than in the M1 and M2 configurations. According to the parameters in Table 3, the bond angles and bond lengths change little after adsorption, and the adsorption energy is -1.240 eV, slightly weaker than that of the M2 configuration. Given that the adsorption energy is largest for M2, the M2 system likely exhibits the highest stability. For verification, the electron density difference and the DOS were also analyzed.

The TDOS of the M2 configuration is shown in Figure 9(a). When N2O is adsorbed on the aluminum-doped graphene surface, obvious changes appear around -12.07 eV, -11.01 eV, -5.71 eV, and 0.89 eV. Because the outermost electrons of the atoms contribute most to adsorption, only the PDOS of Al-3p, N-2p, and O-2p is analyzed.
From the PDOS in Figure 9(b), the N-2p and O-2p orbital peaks coincide around -12.15 eV, -10.97 eV, -10.25 eV, -5.62 eV, and 0.86 eV, indicating potent chemisorption of N2O on aluminum-doped graphene. Moreover, considering that the N-2p orbital contributes most to the adsorption process, the N2O adsorption structure shows very high stability in the M2 configuration.

Figure 9: TDOS before and after N2O adsorption and PDOS of the main interacting atoms ((a) TDOS; (b) PDOS).

The electron density difference of the M2 configuration is shown in Figure 10; red and blue areas stand for increased and decreased electron density, respectively. The O and N1 atoms gain charge during adsorption, while the charge close to the N2 atom declines. The electron density on the aluminum-doped graphene surface also increases. According to this distribution, N2O gains electrons.

Figure 10: The charge density difference of N2O adsorbed on Al-graphene.

The band structure of aluminum-doped graphene without adsorbed gas is shown in Figure 11(a), and Figure 11(b) shows the band structure with N2O adsorbed on the surface. After adsorbing N2O, the energy gap of aluminum-doped graphene decreases, the conduction band near the Fermi level becomes flatter, and new energy levels are introduced; accordingly, the density of states near Ef increases greatly.

Figure 11: Band structures of Al-graphene and of N2O adsorbed on Al-graphene ((a) Al-graphene; (b) N2O-N-Al-graphene).

### 3.4. Prediction of Desorption from Aluminum-Doped Graphene

The decomposition components of the SF6/N2 gas mixture can be desorbed from the sensing material surface by heating. The Ead and dads values of NF3 and N2O on the aluminum-doped graphene surface are listed in Tables 2 and 3, and 375 K, 575 K, and 775 K are taken as the temperature gradient for estimating the desorption time of the sensing material. The desorption time depends on the adsorption energy and temperature, as given by [32, 33]:

(3) ε = A⁻¹ · exp(−E_ad / (R·T)),

where A is the attempt frequency of the system, generally 10¹² s⁻¹ [34]; Ead is the adsorption energy of NF3 or N2O on the aluminum-doped graphene surface; R is the Boltzmann constant expressed in eV/K; and T is the temperature in K.

The desorption times of NF3 and N2O at 375 K, 575 K, and 775 K are shown in Figure 12. The shortest desorption time of the NF3 molecule, at 775 K, is 3.94 × 10⁻³ s; the desorption time increases as the temperature decreases, with a maximum recovery time of 6.77 × 10⁷ s. The desorption time of the N2O molecule is shorter than that of NF3: at 375 K and 775 K the recovery times are 8.01 × 10⁶ s and 1.40 × 10⁻³ s, respectively.

Figure 12: Desorption time at different temperatures.
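As a quick plausibility check of equation (3), the following Python sketch evaluates the desorption time for the computed adsorption energies, taking R as the Boltzmann constant in eV/K; with these inputs it reproduces the desorption times quoted above to within rounding.

```python
import math

K_B = 8.617e-5  # Boltzmann constant (eV/K)
A = 1e12        # attempt frequency (s^-1)

def desorption_time(e_ad_ev: float, temp_k: float) -> float:
    """Equation (3): eps = A^-1 * exp(-E_ad / (R*T)), with R = k_B in eV/K."""
    return (1.0 / A) * math.exp(-e_ad_ev / (K_B * temp_k))

for gas, e_ad in [("NF3", -1.476), ("N2O", -1.407)]:
    for temp in (375, 575, 775):
        print(f"{gas} at {temp} K: {desorption_time(e_ad, temp):.2e} s")
# NF3 gives ~3.9e-3 s at 775 K and ~6.8e7 s at 375 K; N2O gives ~1.4e-3 s and ~8.0e6 s,
# consistent with the values reported in Section 3.4.
```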
### 3.5. Adsorption of NF3 and N2O Gases on Gallium-Doped Graphene

Since a Ga atom has the same number of outermost electrons as an Al atom, Ga doping is also considered. Its adsorption modes mirror those of Al-doped graphene, so only the cases in which NF3 approaches Ga-doped graphene with an F atom and N2O approaches with an N atom are considered. As shown in Figures 13(a) and 13(b), when NF3 is adsorbed on the Ga-doped graphene surface, dads is 1.801 Å and the electron transfer is 0.287 e. As shown in Figures 13(c) and 13(d), when N2O is adsorbed on the Ga-doped graphene surface, dads is 2.285 Å and the electron transfer is 0.227 e. Table 4 shows that the adsorption energies of both gases are smaller in magnitude than on the Al-doped graphene surface. The adsorption parameters nevertheless indicate a good reaction of both NF3 and N2O molecules on the Ga-doped graphene surface.

Figure 13: Adsorption configurations of NF3 and N2O adsorbed on Ga-graphene ((a) X1 top view; (b) X1 side view; (c) X2 top view; (d) X2 side view).

Table 4: The Ead, Qt, and structural parameters of NF3 and N2O adsorbed on Ga-graphene.

| Configuration | Ead (eV) | Qt (e) | d_ads (Å) |
| --- | --- | --- | --- |
| X1 | -1.356 | 0.287 | 1.801 |
| X2 | -1.280 | 0.227 | 2.285 |

Figures 14(a)–14(d) show the TDOS and PDOS of NF3 and N2O on the Ga-doped graphene surface. For the NF3 molecule, a new peak contributed by NF3 appears at Ef in the PDOS, where the N-2p, F-2p, and Ga-3p orbitals couple. For the N2O molecule, the PDOS shows no new peak at Ef, but a fresh peak appears in the conduction band region and hybridizes with the Ga-3p orbital, indicating that N2O also interacts strongly with Ga-doped graphene.

Figure 14: Density of states of NF3 and N2O adsorbed on Ga-graphene ((a) NF3 TDOS; (b) NF3 PDOS; (c) N2O TDOS; (d) N2O PDOS).

Figures 15(a) and 15(b) show the charge density differences of NF3 and N2O on Ga-doped graphene, respectively. After the adsorption of NF3, the region near the F atoms turns red, indicating a charge increase, while the region near the N atom turns blue, indicating a charge decrease. In the N2O molecule, the charge near the N1 and O atoms increases, and the electron density near the N2 atom and the graphene surface decreases, indicating that the gas gains charge.

Figure 15: The charge density differences of NF3 and N2O adsorbed on Ga-graphene ((a) NF3 on Ga-graphene; (b) N2O on Ga-graphene).
## 4. Conclusions

The adsorption characteristics of NF3 and N2O molecules on Al- and Ga-doped graphene surfaces were studied in this paper with the aim of finding a resistive chemical sensor material for GIS internal fault diagnosis. The most likely adsorption mode of the NF3 molecule is with an F atom close to Al-doped graphene; this configuration exhibits large charge transfer and adsorption energy, a short adsorption distance, and a strong interaction. The most likely adsorption mode of N2O on Al-doped graphene is the M2 mode, in which the N atom approaches the surface; the adsorption parameters likewise show a very strong interaction between N2O and Al-doped graphene. Since Ga and Al belong to the same group of elements, Ga doping was simulated as well. The results show that the adsorption parameters of Ga-doped graphene are similar to those of Al-doped graphene, so Ga-doped graphene may also serve as a sensing material.

The present work sheds more light on the association between the characteristic decomposition components of the SF6/N2 gas mixture and Al- and Ga-doped graphene, and provides a theoretical foundation for adsorbing these characteristic decomposition components on Al- and Ga-doped graphene.

---

*Source: 1019746-2022-11-15.xml*
# Combining Users’ Cognition Noise with Interactive Genetic Algorithms and Trapezoidal Fuzzy Numbers for Product Color Design

**Authors:** Yan-pu Yang; Xing Tian
**Journal:** Computational Intelligence and Neuroscience (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1019749

---

## Abstract

Product color plays a vital role in shaping brand style and affecting users’ purchase decisions. However, users’ preferences for product color design schemes may vary because of differences in their cognition. Although industrial designers routinely try to account for users’ perception of product color, this activity is not effectively supported. To provide users with plentiful product color solutions and to embed users’ preferences in the product design process, involving users in interactive genetic algorithms (IGAs) is an effective way to find optimal solutions. Nevertheless, cognition differences and uncertainty among users can lead to divergent understandings as the IGA progresses. To address this issue, this study presents an advanced IGA that incorporates users’ cognition noise across three phases: a cognition phase, an intermediate phase, and a fatigue phase. Trapezoidal fuzzy numbers are employed to represent the uncertainty of users’ evaluations. An algorithm is designed to find the key parameters through similarity calculations between the RGB values and area proportions of two individuals and users’ judgments. The interactive product color design process is demonstrated on an instance and compared with an ordinary IGA. The results show that (1) knowledge background significantly affects users’ cognition of product colors and (2) the proposed method helps improve convergence speed and evolution efficiency, with convergence increasing from 67.5% to 82.5% and the overall average number of evolutionary generations decreasing from 18.15 to 15.825. The proposed method thus promises to reduce users’ cognition noise, promote convergence, and improve the evolution efficiency of interactive product color design.

---

## Body

## 1. Introduction

As an essential component of a vision system, color can trigger complex aesthetic sensations and psychological reactions and affects people’s cognition and emotions [1]. As an important marketing communication tool, color carries abundant visual, symbolic, and associative information about products [2]. Since color is of great importance in visualizing the appearance of products, manipulating product color has become an important way to trigger consumers’ emotional experience, attract consumers’ attention, and convince them to buy a product [2, 3]. A proper selection of product color can not only encode visual information and communicate a brand’s positioning but also help build product style and induce different feelings [4, 5]. Consumers’ feelings about a product reflect their psychological preferences and are determined by their inner perceptions [6]. In this light, how to integrate consumers’ perceptions into the product color design process effectively becomes a critical issue for successful product development.

Product color design refers to selecting appropriate colors in order to convey the desired emotion to consumers. Because of shortened product life cycles and diversified product demand, fast innovation is becoming crucial for enterprises, which has led to the rise and wide use of computer-aided techniques and intelligent algorithms to adapt to consumers’ expectations.
There have been several approaches to assist and support intelligent product color design, including using a genetic algorithm (GA) for near-optimal color combination design for multicolored products [7], employing particle swarm optimization (PSO) to find product color solutions that fit consumers’ multiple emotions [8], integrating factor analysis, the fuzzy analytic hierarchy process, and image compositing techniques to analyze consumers’ subjective perceptions for customized product color design [9], combining grey theory and GA to search for color combinations that match specified product color emotions with a high degree of color harmony [10], and developing a computer-aided color planning system to obtain optimized natural colors from an image and transfer them into product color design [11]. To reflect the designer’s subjective experience, the interactive genetic algorithm (IGA) was created to establish a creative and interactive evolutionary system in which a designer can participate and interact to explore novel design schemes [12]. The relevant research results have been successfully applied in various fields of color design, such as web pages [13], arm-type aerial work platforms [14], clothes [15], automotive exteriors [16], and electronic door locks [17].

However, cognitive dissonance often occurs because consumers lack the systematic training in color that designers receive, which leads to inconsistent perceptions between consumers and designers [9]. Cognitive dissonance arises when there are discrepancies and inconsistencies between cognitions [18], and people are more comfortable with consistency than inconsistency [19]. As design is often interdisciplinary in practice and members of a team may have different knowledge and expertise, reducing cognitive dissonance in the design process becomes a challenge. An effective way, presented by Goel and Wiltgen [20], is to employ analogies as a mechanism for reducing cognitive dissonance in interdisciplinary design teams. As a reasoning process in creative design, analogy can help reduce individual differences, similar to knowledge sharing methods [21]. In the industrial design field, a powerful approach is Kansei engineering, which helps designers link consumers’ emotional responses to the design properties of a product [22–24]. For computational and intelligent solving, an effective method is the IGA, which involves humans as evaluators who make evaluations and selections, obtaining fitness values in the evolutionary process instead of defining a fitness function as in classical genetic algorithms [25, 26]. IGA is conducive to capturing consumers’ aesthetic intentions and perceiving users’ emotions or preferences [27], and it has been widely used in the color design of various products, such as motorcycles [12], car consoles [28, 29], and software robots [30]. Nevertheless, users’ fatigue and cognitive dissonance are ubiquitous and gradually intensify as an IGA evolves; this can be modeled as fitness noise, which degrades the performance of interactive evolutionary computation (IEC) [31]. The former can be caused by repetitive work, tedious operation, and visual weariness, and the latter may be attributed to discrepancies in users’ knowledge and experience. Together they constitute obstacles to the application of IGAs.
To solve these problems, many researchers have put forward practical methods, such as using multistage IGAs that divide the population into several stages to lessen users’ cognitive burden [28], adopting a fuzzy number described by a Gaussian membership function to express an individual’s fitness [32], and employing preference surrogate models to achieve fitness estimation and information extraction in the IEC process [33–35].

Because this intelligent evolutionary method is based on users’ preferences and selections, the fitness of each individual in the evolution process is obtained from users’ subjective evaluations, which may be affected by users’ experience, cognitive disparities, or fatigue, leading to unobjective evaluation [28]. In other words, the fitness given by users is always mixed with noise and is imprecise during evolution, so the evolutionary outcome cannot accurately reflect consumers’ preferences, ultimately affecting the accuracy and validity of final design decisions. Although several studies have proposed methods to address these issues, their effects cannot be completely eliminated, and the problem remains worthy of further research.

This study presents an IGA method that considers the influence of cognition noise, including consumers’ cognition familiarity and fatigue, and employs trapezoidal fuzzy numbers to represent the uncertainty of users’ judgments instead of precise values. To do so, a cognition noise model is proposed with three phases: a cognition phase, an intermediate phase, and a fatigue phase. By considering cognition noise and introducing trapezoidal fuzzy numbers, the IGA method reduces the influence of the subjectivity of consumers’ evaluations. To validate the proposed method, it is applied to the design of a handheld detector.

The remainder of the paper is organized as follows. Section 2 introduces the methods for interactive product color design, including a cognition noise model combining users’ cognition familiarity and fatigue, a solving algorithm based on similarity measurement of individuals’ colors and of evaluations expressed as trapezoidal fuzzy numbers, and an interactive product color design process. A numerical example then illustrates the detailed implementation of the proposed method in Section 3. Finally, we summarize and highlight the contributions of this paper.

## 2. Methods

IGA is an optimization method that connects a computer system and a human being to jointly accomplish a task [36]. It provides a framework for interaction between humans and computers in which the computer uses a GA to explore possible solutions and converge toward the objectives and constraints, while humans evaluate and provide feedback on individuals during the search. Because the fitness values of individuals are computed from users’ assigned preference ranks rather than by numerical calculation, IGA is effective for problems whose implicit performance index cannot be directly calculated by a function [37]. Since users participate in the IGA process, it is inevitable that users’ cognition of individuals will change and that fatigue will emerge as the population evolves.
Several approaches have been explored to reduce users’ cognitive burden and alleviate fatigue, including dividing the interactive design process into several stages to lower population complexity in the initial stage [28], incorporating a case-based machine learning system to learn and predict the user’s assessments [38], and training an artificial neural network to automatically define an iterative fitness function [39]. These studies provide feasible methods to decrease human fatigue to a certain extent. Since human fatigue cannot be completely eliminated, it is necessary to remove, rather than merely reduce, its impact on the evolution of the product color design process through appropriate algorithm design. To do so, we build a users’ cognition noise model and develop a solving algorithm for interactive product color design.

### 2.1. Users’ Cognition Noise Model

In an IGA process for product color design, the fitness values of individuals are likely to change with the users’ cognitive level, which manifests in two respects.

(1) In the initial stage of an IGA, users may not be familiar with the product color schemes, and it is not easy to obtain precise cognition about individuals from them, so the evaluation results carry greater randomness. As the interactive evolution progresses, users acquire a much clearer cognition of the individuals, which can serve as a stable evaluation standard; although some randomness remains at this stage, the random noise is relatively small. Accordingly, we describe the problem as follows. Set the cognition threshold as Nc; that is, users become completely familiar with the individuals after evaluating Nc product color schemes. When the number of evaluated individuals exceeds Nc, the users’ evaluation can be treated as noise-free. The simpler the product color schemes, the smaller Nc will be, and vice versa.

(2) Once the user’s cognition has matured, and the number of evaluated product color schemes reaches a further threshold, the user may become fatigued. At that point, the given fitness values no longer accurately reflect the user’s preference or the quality of the color schemes. Set the fatigue threshold as Nf; that is, users begin to fatigue after evaluating Nf product color schemes.

Assume the evaluation process has reached generation t and i + 1 product color schemes have been assessed in it; then the number of evaluated individuals is Ne = (t − 1) · N + i, where N is the number of individuals per generation. When Ne < Nc, the users’ cognition of product color schemes is proportional to Ne: as the number of evaluated individuals increases, the users’ familiarity improves substantially. When Ne ≥ Nf ≥ Nc, even though users are already familiar with the individuals, they become fatigued, so fatigue noise affects the evolution process. For ease of processing, this study assumes Nf ≥ Nc.

Based on the above analysis, the users’ cognition noise model is constructed as follows:

(1) δ(Ne) = σ + k1 · ((Nc − Ne)/Nc) · e^(−Ne/Nc) if Ne < Nc; σ · N(0, 1) if Nc ≤ Ne ≤ Nf; k2 · e^(−Nf/Ne) · N(0, 1) if Ne > Nf,

where k1 and k2 are regulatory factors for the different evaluation phases, σ ∈ (0, 1) represents the noise intensity, and N(0, 1) is standard normal noise. It is clear from equation (1) that the value of δ(Ne) can be limited to 0-1 by choosing the parameters reasonably.
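For concreteness, here is a minimal Python sketch of the piecewise noise model in equation (1); the parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def cognition_noise(n_e: int, n_c: int, n_f: int,
                    sigma: float, k1: float, k2: float,
                    rng: np.random.Generator) -> float:
    """Equation (1): noise intensity after n_e evaluated individuals."""
    if n_e < n_c:    # cognition phase: systematic bias decaying with familiarity
        return sigma + k1 * (n_c - n_e) / n_c * np.exp(-n_e / n_c)
    if n_e <= n_f:   # intermediate phase: small zero-mean random noise
        return sigma * rng.standard_normal()
    # fatigue phase: noise amplitude grows back toward k2 as n_e increases
    return k2 * np.exp(-n_f / n_e) * rng.standard_normal()

rng = np.random.default_rng(0)
for n_e in (5, 25, 120):  # one sample from each phase, with n_c=20, n_f=100
    print(n_e, cognition_noise(n_e, 20, 100, sigma=0.1, k1=0.5, k2=0.3, rng=rng))
```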
The cognition noise intensity as a function of the number of evaluated individuals is plotted in Figure 1.

Figure 1: Cognition noise intensity as a function of the number of evaluated individuals.

Depending on the usage scenario, the composition of the phases of the cognition noise model differs. When users are familiar with the target products and color images, they may skip the cognition phase; if the number of product color schemes is small and optimized schemes can be obtained without too many evaluations, the fatigue phase never arrives.

### 2.2. An Algorithm for Solving Users’ Cognition Noise Model

The key to solving the users’ cognition noise model is determining the cognition threshold Nc and the fatigue threshold Nf. The basis for this is the consistency between a user’s cognition and preference, which means that similar product colors will be given similar judgments. If this condition is met, users are in the intermediate phase; otherwise, they are in the cognition phase or the fatigue phase.

An individual X can be coded as follows:

(2) X = {(x1, (r1, g1, b1)), (x2, (r2, g2, b2)), …, (xm, (rn, gn, bn))},

where m is the number of product partitions for color design, n is the number of colors, and ri, gi, and bi are the RGB parameters of a color in the range 0–255. Generally, the number of colors of a product is at most 3, while the number of product form components is more than 3; therefore, in the color design process, no more than 3 colors are randomly selected and assigned to the product form components.

Considering the area proportion of each color, the similarity of two individuals Xi and Xj can be computed as

(3) D_ij = 1 − Σ_{k=1}^{m} (SM_k / TA) · sqrt( ((r_ik − r_jk)/s_r)² + ((g_ik − g_jk)/s_g)² + ((b_ik − b_jk)/s_b)² ),

where s_r, s_g, and s_b are the standard deviations of r, g, and b, respectively, with s_r = sqrt( (1/m) Σ_{i=1}^{m} (r_i − r̄)² ) and s_g, s_b defined analogously; SM_k is the color area of product component k; and TA is the total area of the product color scheme.

Because users’ perceptions of product color design schemes are emotional and cannot be represented by precise values, fuzzy numbers must be used in place of exact values. Triangular and trapezoidal fuzzy numbers, with the bounded interval [0, 1], are the most widely used representations of uncertainty, and the trapezoidal form is more general than the triangular one [40]. Since they provide an intuitive way to capture the vagueness of users’ evaluations, we choose trapezoidal fuzzy numbers to denote users’ preferences for individuals.

For a trapezoidal fuzzy number Ã = (a, c, d, b), the membership function is

(4) A(x) = (x − a)/(c − a) for a ≤ x ≤ c; 1 for c ≤ x ≤ d; (b − x)/(b − d) for d ≤ x ≤ b; 0 otherwise,

where 0 ≤ a ≤ c ≤ d ≤ b ≤ 1, [a, b] is the support of the fuzzy number, and [c, d] is the modal interval. For ranking design schemes, the defuzzified value of the trapezoidal fuzzy number is computed as (a + b + c + d)/4 [41].
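The following Python sketch illustrates equations (3) and (4). It assumes the square-root (normalized Euclidean distance) reading of the flattened formula (3); the class and function names are our own convention, not the paper's.

```python
import math
from dataclasses import dataclass

def color_similarity(xi, xj, areas, total_area, stds):
    """Equation (3): area-weighted, std-normalized RGB distance turned into similarity.
    xi, xj: lists of (r, g, b) tuples, one per product component."""
    s_r, s_g, s_b = stds
    dist = 0.0
    for (ri, gi, bi), (rj, gj, bj), area in zip(xi, xj, areas):
        dist += (area / total_area) * math.sqrt(
            ((ri - rj) / s_r) ** 2 + ((gi - gj) / s_g) ** 2 + ((bi - bj) / s_b) ** 2)
    return 1.0 - dist

@dataclass
class TrapezoidalFuzzyNumber:
    a: float
    c: float
    d: float
    b: float  # support [a, b], modal interval [c, d]

    def membership(self, x: float) -> float:
        """Equation (4): piecewise-linear membership function."""
        if self.a <= x < self.c:
            return (x - self.a) / (self.c - self.a)
        if self.c <= x <= self.d:
            return 1.0
        if self.d < x <= self.b:
            return (self.b - x) / (self.b - self.d)
        return 0.0

    def defuzzify(self) -> float:
        """Ranking score (a + b + c + d) / 4."""
        return (self.a + self.b + self.c + self.d) / 4.0

sim = color_similarity([(200, 30, 40)], [(190, 35, 38)],
                       areas=[1.0], total_area=1.0, stds=(60.0, 60.0, 60.0))
mh = TrapezoidalFuzzyNumber(0.5, 0.6, 0.7, 0.8)  # a "moderately high" rating
print(round(sim, 3), mh.membership(0.55), mh.defuzzify())  # 0.811 0.5 0.65
```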
Using a 7-point labeled scale, commonly used to gather respondents’ ratings of perceptual items [6], users’ preference for color images in the IGA can be described by a 4-tuple. Each Kansei attribute comprises 7 sets of semantic terms, with the corresponding fuzzy numbers given in Table 1.

Table 1: Semantic terms and the corresponding fuzzy numbers for a Kansei attribute.

| Semantic label | Semantic terms (perceived preference) | Trapezoidal fuzzy number |
| --- | --- | --- |
| VL | Very low Kansei preference | (0, 0, 0.1, 0.2) |
| L | Low Kansei preference | (0.1, 0.2, 0.2, 0.3) |
| ML | Moderately low Kansei preference | (0.2, 0.3, 0.4, 0.5) |
| M | Medium Kansei preference | (0.4, 0.5, 0.5, 0.6) |
| MH | Moderately high Kansei preference | (0.5, 0.6, 0.7, 0.8) |
| H | High Kansei preference | (0.7, 0.8, 0.8, 0.9) |
| VH | Very high Kansei preference | (0.8, 0.9, 1, 1) |

Assume there are q color image indicators for evaluation. The weight of each indicator, denoted wk (k = 1, 2, …, q), is calculated with the AHP method [42]. Let the preference set of Xi given by users be vi = (vi1, vi2, …, viq) with vik = (aik, cik, dik, bik); then the synthetic evaluation of product color scheme xxi in generation t is

(5) f(xxi, t) = (1/4) Σ_{k=1}^{q} wk · (aik + cik + dik + bik).

Accordingly, the similarity between two evaluations can be computed as

(6) CD_ij = 1 − (1/q) Σ_{k=1}^{q} [ (1/4) Σ_{o ∈ {a,c,d,b}} (wk·oik − wk·ojk)² ]^{1/2}.

For two similar product color schemes, if the user’s two assessments are highly similar, the cognition noise is considered small and the user is in the intermediate phase; if the two evaluations have low similarity, the cognition noise is considered large and the user is in the cognition phase or the fatigue phase. For the previous K (1 ≤ K ≤ j − 1) products before individual Xj, if |D_ij − CD_ij| ≤ δ (δ is the cognition difference threshold, i = j − K, j − K + 1, …, j − 1) holds, then the user is in the intermediate phase and Nc = j − Kmax (Kmax is the maximum K satisfying |D_ij − CD_ij| ≤ δ). Once the user has been confirmed to be in the intermediate phase, if the previous K products of individual Xj cease to satisfy |D_ij − CD_ij| ≤ δ (i = j − K, j − K + 1, …, j − 1), then the user is in the fatigue phase and Nf = j − Kmax. Otherwise, the user is in the cognition phase.

### 2.3. Interactive Product Color Design Process

The aim of interactive product color design is to involve users in interacting with and assessing the fitness of individuals for design evolution by means of interactive genetic algorithms, so as to satisfy the objectives desired by users. The process includes three parts: designing the fitness function, establishing the crossover and mutation mechanism, and planning the implementation of the proposed algorithm.

#### 2.3.1. Fitness Function

In light of the synthetic evaluation computed with formula (5) and considering users’ cognition noise, the fitness function is

(7) F(xxi, t) = f(xxi, t) if Nc ≤ i ≤ Nf; (1 − δ(xxi, t)) · f(xxi, t) otherwise.

Formula (7) indicates that when users are in the intermediate phase, their evaluation of product design schemes accurately reflects their cognition, so the fitness value equals the users’ evaluation value. Otherwise, the fitness value of an individual is the users’ evaluation value discounted by the cognition noise.
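A minimal sketch of the synthetic evaluation (5) and the noise-adjusted fitness (7), reusing the noise model and fuzzy-number helpers sketched above; the variable names are ours.

```python
def synthetic_evaluation(weights, fuzzy_ratings):
    """Equation (5): weighted defuzzified score of the q indicator ratings."""
    return sum(w * (a + c + d + b) / 4.0
               for w, (a, c, d, b) in zip(weights, fuzzy_ratings))

def fitness(i, f_value, n_c, n_f, noise):
    """Equation (7): the evaluation is trusted in the intermediate phase,
    otherwise discounted by the cognition noise delta."""
    return f_value if n_c <= i <= n_f else (1.0 - noise) * f_value

# Two indicators weighted 0.6/0.4, rated MH and H (tuples from Table 1):
f_val = synthetic_evaluation([0.6, 0.4], [(0.5, 0.6, 0.7, 0.8), (0.7, 0.8, 0.8, 0.9)])
print(f_val)                                           # 0.6*0.65 + 0.4*0.8 = 0.71
print(fitness(12, f_val, n_c=10, n_f=40, noise=0.2))   # intermediate phase: 0.71
```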
#### 2.3.2. Crossover and Mutation

Since trapezoidal fuzzy numbers are employed to describe users' preferences, the selection criteria should be consistent with them and stated in terms of the semantic labels. Here, individuals whose synthetical evaluation equals or exceeds high Kansei preference (H) are selected and passed to the next generation, while the individuals below this level are eliminated. Parent individuals are then selected from the eliminated individuals according to their evaluation level to produce offspring populations through crossover and mutation.

Let there be $n$ eliminated individuals, with $n(\mathrm{MH})$, $n(\mathrm{M})$, $n(\mathrm{ML})$, $n(\mathrm{L})$, and $n(\mathrm{VL})$ individuals at each preference level, so that $n(\mathrm{MH}) + n(\mathrm{M}) + n(\mathrm{ML}) + n(\mathrm{L}) + n(\mathrm{VL}) = n$. Let $n_1 > n_2 > n_3 > n_4 > n_5$ be these counts in descending order. Then the probability of each individual being selected as a parent within its preference level is, respectively, $n_1 / (n \cdot n(\mathrm{MH}))$, $n_2 / (n \cdot n(\mathrm{M}))$, $n_3 / (n \cdot n(\mathrm{ML}))$, $n_4 / (n \cdot n(\mathrm{L}))$, and $n_5 / (n \cdot n(\mathrm{VL}))$.

For the crossover operation, colors are randomly chosen from the parent individuals to make up the required colors of the target product. Mutation is realized by perturbing the R, G, and B values of an individual by up to 20% in each dimension of the RGB color space, at a set mutation rate. Mutated color values that fall outside the range 0–255 are discarded. Figure 2 illustrates how crossover and mutation are implemented.

Figure 2: Illustration of crossover and mutation.
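A minimal Python sketch of this parent selection, crossover, and mutation scheme follows; the data layout, the reading of the selection probabilities as $n_k / (n \cdot n(\text{level}))$, and the interpretation of the 20% perturbation are illustrative assumptions, not the paper's code.

```python
import random

LEVEL_ORDER = ["MH", "M", "ML", "L", "VL"]

def parent_weights(eliminated):
    """Per-individual selection probability: the largest level count n1 is
    paired with MH, n2 with M, ..., read as n_k / (n * n(level))."""
    n = len(eliminated)
    counts = {lv: sum(1 for ind in eliminated if ind["level"] == lv)
              for lv in LEVEL_ORDER}
    ranked = sorted(counts.values(), reverse=True)   # n1 >= n2 >= ... >= n5
    prob = {lv: (ranked[i] / (n * counts[lv]) if counts[lv] else 0.0)
            for i, lv in enumerate(LEVEL_ORDER)}
    return [prob[ind["level"]] for ind in eliminated]

def crossover(pa, pb):
    """Child takes each component color at random from one of the parents."""
    return {"colors": [random.choice(pair)
                       for pair in zip(pa["colors"], pb["colors"])]}

def mutate(ind, rate=0.08):
    """Perturb each RGB channel by up to +-20% at the given rate;
    out-of-range results are discarded (the old value is kept)."""
    new_colors = []
    for color in ind["colors"]:
        channels = []
        for v in color:
            if random.random() < rate:
                cand = round(v * (1.0 + random.uniform(-0.2, 0.2)))
                v = cand if 0 <= cand <= 255 else v
            channels.append(v)
        new_colors.append(tuple(channels))
    return {"colors": new_colors}

eliminated = [{"level": random.choice(LEVEL_ORDER),
               "colors": [tuple(random.randrange(256) for _ in range(3))
                          for _ in range(3)]}
              for _ in range(12)]
pa, pb = random.choices(eliminated, weights=parent_weights(eliminated), k=2)
print(mutate(crossover(pa, pb)))
```

Note that under this reading each preference level receives total selection mass $n_k / n$, so the masses over all levels sum to 1.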
#### 2.3.3. Interactive Product Color Design Process

The detailed process of interactive product color design is as follows (a compact code sketch of this loop is given after the list):

(1) Specify the values of $k_1$, $k_2$, $\sigma$, $\delta$, and the genetic operation parameters, and generate the original population of product color schemes.

(2) Users state their preference for each individual according to the evaluation indicators of product color images.

(3) Calculate the color similarity and the evaluation similarity of each individual using formulas (3) and (6), respectively.

(4) Compute the fitness value of each individual according to formula (7), and save the individuals whose evaluation value equals or exceeds the specified satisfaction threshold.

(5) Judge whether the evolutionary generation exceeds the set limit. If so, finish; otherwise, go to the next step.

(6) Judge whether the number of satisfactory individuals reaches the set value. If so, finish; otherwise, go to the next step.

(7) Execute crossover and mutation to produce the population of the next generation, and then go to step (2).

The overall framework of the proposed method is shown in Figure 3.

Figure 3: Framework of the proposed method.
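As a hedged, self-contained sketch (not the authors' code), the loop of steps (1)–(7) can be organized as below; `init_population`, `evaluate_by_user`, `fitness`, and `breed` stand for the population generator, the user rating step, formula (7), and the operators of Section 2.3.2.

```python
def run_niga(init_population, evaluate_by_user, fitness, breed,
             max_generations=20, satisfied_needed=6, threshold=0.8):
    """Steps (1)-(7): evolve until enough satisfactory individuals are found
    or the generation limit is hit. Nc/Nf detection (Section 2.2) is assumed
    to live inside `fitness`, which implements formula (7)."""
    population = init_population()                    # step (1)
    satisfied = []
    for t in range(1, max_generations + 1):           # step (5) limit
        for i, ind in enumerate(population):
            Ne = (t - 1) * len(population) + i        # individuals seen so far
            ratings = evaluate_by_user(ind)           # step (2)
            ind["fitness"] = fitness(ratings, Ne)     # steps (3)-(4)
            if ind["fitness"] >= threshold:           # satisfaction threshold
                satisfied.append(ind)
        if len(satisfied) >= satisfied_needed:        # step (6)
            break
        population = breed(population)                # step (7)
    return satisfied
```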
## 3. Case Study

The color design of a handheld detector is taken as an example to verify the validity of the proposed method. Using the VBA macro editor of the CorelDRAW software, an interactive product color design module was developed that combines users' cognition noise with an IGA, as shown in Figure 4.
There are 6 product color schemes per generation, and a 7-point labeled scale is used to rate the two indicators, "fashionable" and "technical." In each generation, 3 colors are randomly generated and assigned to 5 product components. To better analyze users' perception of the product color schemes, two computers were used in the experiment: one runs the IGA module to quickly generate product color schemes, and the other is a workstation running KeyShot, a real-time 3D rendering package, which gives users better visual perception by creating 3D renderings in no more than 30 seconds. The 3D configuration and rendering are shown in Figure 5. With the AHP method, by comparing the two indicators, their weights are set to 0.4 and 0.6. The required total number of satisfactory solutions is 6. Based on user surveys, individuals whose evaluation equals or exceeds high Kansei preference (H) are saved as satisfactory solutions; by formula (5), this means a fitness of at least 0.8. The suggested ranges for the crossover and mutation probabilities are 0.5–0.9 and 0.01–0.1; here they are set to 0.7 and 0.08, respectively. The maximum number of evolutionary generations is 20, with $k_1 = k_2 = 0.5$ and $\sigma = 0.05$. Twenty students majoring in industrial design (half male, half female, denoted DM and DF) and twenty students from other majors (half male, half female, denoted CM and CF) were randomly gathered as participants. The proposed method (denoted NIGA) is compared with a traditional IGA whose population size, crossover probability, mutation probability, and terminal generation number are likewise set to 6, 0.7, 0.08, and 20; the results are shown in Tables 2 and 3.

Figure 4: Interactive product color design module.

Figure 5: 3D configuration and rendering of product color schemes.

Table 2: Experimental results comparing NIGA with IGA.

| Method | User ID | Total generations | $N_c$ | $N_f$ | Total number of satisfactory individuals |
| --- | --- | --- | --- | --- | --- |
| NIGA | DM1 | 16 | 1 | 91 | 6 |
| NIGA | DM2 | 14 | 1 | 80 | 6 |
| NIGA | DM3 | 14 | 1 | 82 | 6 |
| NIGA | DM4 | 15 | 1 | 87 | 6 |
| NIGA | DM5 | 16 | 5 | None | 6 |
| NIGA | DM6 | 15 | 1 | 85 | 6 |
| NIGA | DM7 | 20 | 1 | 115 | 5 |
| NIGA | DM8 | 14 | 1 | 79 | 6 |
| NIGA | DM9 | 17 | 5 | 100 | 6 |
| NIGA | DM10 | 15 | 1 | 88 | 6 |
| NIGA | DF1 | 16 | 7 | 91 | 6 |
| NIGA | DF2 | 16 | 5 | None | 6 |
| NIGA | DF3 | 15 | 1 | 89 | 6 |
| NIGA | DF4 | 12 | 1 | 68 | 6 |
| NIGA | DF5 | 20 | 8 | 103 | 3 |
| NIGA | DF6 | 15 | 6 | 88 | 6 |
| NIGA | DF7 | 14 | 1 | 79 | 6 |
| NIGA | DF8 | 17 | 1 | 100 | 6 |
| NIGA | DF9 | 14 | 1 | 80 | 6 |
| NIGA | DF10 | 16 | 1 | 92 | 6 |
| IGA | DM1 | 18 | — | — | 6 |
| IGA | DM2 | 16 | — | — | 6 |
| IGA | DM3 | 15 | — | — | 6 |
| IGA | DM4 | 17 | — | — | 6 |
| IGA | DM5 | 20 | — | — | 4 |
| IGA | DM6 | 18 | — | — | 6 |
| IGA | DM7 | 17 | — | — | 6 |
| IGA | DM8 | 15 | — | — | 6 |
| IGA | DM9 | 18 | — | — | 6 |
| IGA | DM10 | 17 | — | — | 6 |
| IGA | DF1 | 16 | — | — | 6 |
| IGA | DF2 | 20 | — | — | 5 |
| IGA | DF3 | 20 | — | — | 5 |
| IGA | DF4 | 20 | — | — | 4 |
| IGA | DF5 | 13 | — | — | 6 |
| IGA | DF6 | 14 | — | — | 6 |
| IGA | DF7 | 15 | — | — | 6 |
| IGA | DF8 | 15 | — | — | 6 |
| IGA | DF9 | 20 | — | — | 3 |
| IGA | DF10 | 20 | — | — | 5 |
| NIGA | CM1 | 13 | 5 | 75 | 6 |
| NIGA | CM2 | 14 | 4 | 81 | 6 |
| NIGA | CM3 | 15 | 5 | 88 | 6 |
| NIGA | CM4 | 20 | 3 | 120 | 3 |
| NIGA | CM5 | 15 | 1 | 85 | 6 |
| NIGA | CM6 | 20 | 1 | 105 | 5 |
| NIGA | CM7 | 14 | 5 | 81 | 6 |
| NIGA | CM8 | 15 | 5 | 86 | 6 |
| NIGA | CM9 | 15 | 6 | 87 | 6 |
| NIGA | CM10 | 20 | 4 | 115 | 5 |
| NIGA | CF1 | 15 | 6 | 85 | 6 |
| NIGA | CF2 | 12 | 5 | 70 | 6 |
| NIGA | CF3 | 14 | 5 | 80 | 6 |
| NIGA | CF4 | 20 | 8 | 115 | 6 |
| NIGA | CF5 | 15 | 6 | 86 | 6 |
| NIGA | CF6 | 16 | 8 | 91 | 6 |
| NIGA | CF7 | 14 | 6 | 82 | 6 |
| NIGA | CF8 | 20 | 9 | 104 | 4 |
| NIGA | CF9 | 15 | 4 | 87 | 6 |
| NIGA | CF10 | 20 | 8 | 99 | 5 |
| IGA | CM1 | 19 | — | — | 6 |
| IGA | CM2 | 20 | — | — | 6 |
| IGA | CM3 | 19 | — | — | 6 |
| IGA | CM4 | 20 | — | — | 6 |
| IGA | CM5 | 20 | — | — | 4 |
| IGA | CM6 | 18 | — | — | 6 |
| IGA | CM7 | 17 | — | — | 6 |
| IGA | CM8 | 20 | — | — | 5 |
| IGA | CM9 | 20 | — | — | 5 |
| IGA | CM10 | 18 | — | — | 6 |
| IGA | CF1 | 18 | — | — | 6 |
| IGA | CF2 | 19 | — | — | 6 |
| IGA | CF3 | 20 | — | — | 5 |
| IGA | CF4 | 20 | — | — | 5 |
| IGA | CF5 | 18 | — | — | 6 |
| IGA | CF6 | 20 | — | — | 4 |
| IGA | CF7 | 19 | — | — | 6 |
| IGA | CF8 | 20 | — | — | 4 |
| IGA | CF9 | 18 | — | — | 6 |
| IGA | CF10 | 19 | — | — | 6 |
Table 3: Comparison of average generations.

| Method | User type | Average generations | Average number of satisfactory individuals |
| --- | --- | --- | --- |
| NIGA | DM | 15.6 | 5.9 |
| NIGA | DF | 15.5 | 5.7 |
| NIGA | CM | 16.1 | 5.5 |
| NIGA | CF | 16.1 | 5.7 |
| IGA | DM | 17.1 | 5.8 |
| IGA | DF | 17.3 | 5.2 |
| IGA | CM | 19.1 | 5.6 |
| IGA | CF | 19.1 | 5.4 |

From the experimental results in Table 2, the evaluations of 82.5% of the users of the proposed method converged (they found all 6 required satisfactory solutions), whereas in the ordinary IGA process, 30% of the industrial design participants and 35% of the participants from other majors failed to find the required 6 satisfactory solutions. The convergence rate thus rises from 67.5% to 82.5%, indicating that the proposed method improves the convergence of interactive product color design. As Table 2 also shows, the cognition threshold indicates that students majoring in industrial design are more familiar with the product color image indicators (their mean $N_c$ is 2.5): they quickly enter the evaluation process and establish the mapping between product color schemes and image indicators. Students from other majors need more time to digest the product color images, and their cognition noise is comparatively larger (their mean $N_c$ is 5.2). This indicates that knowledge background has a significant impact on users' perception of product color images. Regarding the fatigue threshold, only 2 industrial design participants did not enter the fatigue phase (for ease of calculation, the number of individuals evaluated by the time the evolutionary process converged, 96 in both cases, was taken as their fatigue threshold); all other participants became fatigued, and the average fatigue threshold is 90.275. For further verification, participants were asked after the experiment whether they really felt fatigued; 92.1% reported feeling confused and unable to judge the image indicators of the color schemes precisely.

From the comparison of average generations in Table 3, relative to the ordinary IGA, the average number of generations under NIGA decreases slightly for industrial design students (by 1.5 for males and 1.8 for females). The effect is more marked for students from other majors, with both males and females saving 3 generations, which implies that the proposed method is especially helpful for users with little or no knowledge of product color design. Overall, the average number of evolutionary generations decreases from 18.15 to 15.825.

In conclusion, in an interactive process that involves users in product color design, it is inevitable that users' perception of product color schemes is influenced by their background knowledge, and cognition noise in the different phases directly affects the validity of the product color design. Meanwhile, the precise values used in a traditional IGA cannot represent the uncertainty of users' preferences. Therefore, integrating users' cognition into the product color design process through an IGA with trapezoidal fuzzy numbers helps simulate users' real-world perception of product colors in an objective and scientific way, and further improves the convergence speed and evolution efficiency over an ordinary IGA.
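The headline figures quoted above follow directly from Tables 2 and 3; as a quick Python check (the counts are read off Table 2, the averages off Table 3):

```python
# Users who found all 6 required satisfactory solutions (Table 2):
niga_converged, iga_converged, total = 33, 27, 40
print(f"NIGA convergence: {niga_converged / total:.1%}")  # 82.5%
print(f"IGA convergence:  {iga_converged / total:.1%}")   # 67.5%

# Average generations per user type (Table 3):
niga = [15.6, 15.5, 16.1, 16.1]
iga = [17.1, 17.3, 19.1, 19.1]
print(sum(niga) / 4, sum(iga) / 4)                        # 15.825 and 18.15
```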
## 4. Conclusions

Providing users with multiple product color schemes helps identify users' preferences and reduces the risk of product development. Owing to the cognition differences and uncertainty among users, it is not easy to determine which product colors users prefer. To assist industrial designers in product color design more effectively and to capture users' perception of product colors more accurately, interactive genetic algorithms are combined with a proposed cognition noise model consisting of three phases: a cognition phase, an intermediate phase, and a fatigue phase. With trapezoidal fuzzy numbers, an algorithm is designed to find the key parameters through similarity calculations between the RGB values of two individuals and between the users' corresponding evaluations. The interactive product color design process is demonstrated on an example and compared with a traditional IGA. With 40 users participating in the experiment, the results show that (1) knowledge background significantly affects users' cognition of product colors, and (2) the proposed method helps improve convergence speed and evolution efficiency, with convergence increasing from 67.5% to 82.5% and the overall average number of evolutionary generations decreasing from 18.15 to 15.825.

This study makes the following contributions: (1) using trapezoidal fuzzy numbers to describe users' preferences makes the application of an IGA more practical and easier to operate; (2) incorporating users' subjective cognitive differences into the IGA process helps improve the convergence speed and evolution efficiency of a traditional IGA; and (3) the proposed method can effectively assist industrial designers in product color design.

---

*Source: 1019749-2019-08-01.xml*
**Title:** Combining Users' Cognition Noise with Interactive Genetic Algorithms and Trapezoidal Fuzzy Numbers for Product Color Design

**Authors:** Yan-pu Yang; Xing Tian

**Journal:** Computational Intelligence and Neuroscience (2019)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2019/1019749
--- ## Abstract Product color plays a vital role in shaping brand style and affecting users’ purchase decision. However, users’ preferences about product color design schemes may vary due to their cognition differences. Although considering users’ perception of product color has been widely performed by industrial designers, it is not effective to support this activity. In order to provide users with plentiful product color solutions as well as embody users’ preference into product design process, involving users in interactive genetic algorithms (IGAs) is an effectual way to find optimum solutions. Nevertheless, cognition difference and uncertainty among users may lead to various understanding in line with IGA progressing. To address this issue, this study presents an advanced IGA by combining users’ cognition noise which includes cognition phase, intermediate phase, and fatigue phase. Trapezoidal fuzzy numbers are employed to represent uncertainty of users’ evaluations. An algorithm is designed to find key parameters through similarity calculation between RGB value and their area proportion of two individuals and users’ judgment. The interactive product color design process is put forward with an instance by comparing with an ordinary IGA. Results show that (1) knowledge background will significantly affect users’ cognition about product colors and (2) the proposed method is helpful to improve convergence speed and evolution efficiency with convergence increasing from 67.5% to 82.5% and overall average evolutionary generations decreasing from 18.15 to 15.825. It is promising that the proposed method can help reduce users’ cognition noise, promote convergence, and improve evolution efficiency of interactive product color design. --- ## Body ## 1. Introduction As an essential component of a vision system, color can trigger complex aesthetic sensations and psychological reactions and impact on the cognition and emotions of people [1]. As an important marketing communication tool, color carries abundant visual, symbolic, and associative information about products [2]. Since color is of great importance in visualizing appearance of products, manipulating product color has become an important way to touch off consumers’ emotional experience, attract consumers’ attention, and convince them to buy a product [2, 3]. A proper selection of product color can not only code visual information and communicate a brand’s positioning but also help build product style and induce different feelings [4, 5]. Consumers’ feelings about a product reflect their psychological preferences and are determined by their inner perceptions [6]. In this light, how to integrate consumers’ perceptions into product color design process effectively becomes a critical issue for successful product development.Product color design refers to selecting appropriate colors in order to convey the desired emotion to consumers. Due to the shortened product life cycle and the diversified product demand, it is becoming crucial for enterprises to realize fast innovation, which makes computer-aided techniques and intelligent algorithms rise and be widely used to adapt to consumers’ expectations. 
There have been several approaches to assist and support intelligent product color design, including using genetic algorithm (GA) for a near-optimal color combination design for multicolored products [7], employing particle swarm optimization (PSO) to find product color solutions that fit with consumers’ multiemotions [8], integrating factor analysis, fuzzy analytic hierarchy process, and image compositing technique to analyze consumers’ subjective perceptions for customized product color design [9], combining grey theory and GA to search for color combinations that meet the specified product color emotions and achieved a high degree of color harmony [10] and developing computer-aided color planning system to obtain optimized natural colors of an image and transfer them into product color design [11]. To reflect the designer’s subjective experience, the interactive genetic algorithm (IGA) is created to establish a creative and interactive evolutionary system that a designer can participate and interact to explore novel design schemes [12]. The relevant research results have been successfully applied in various fields of color design, such as web page [13], arm-type aerial work platform [14], clothes [15], automotive exterior [16], and electronic door lock [17].However, cognitive dissonance often occurs because consumers do not have the systematic training as designers do to colors, which will lead to inconsistent perceptions between consumers and designers [9]. Cognitive dissonance arises when there are discrepancy and inconsistency between cognitions [18], and people are more comfortable with consistency than inconsistency [19]. As design often is interdisciplinary in practice and members of a team may have different knowledge and expertise, how to reduce cognitive dissonance in design process becomes a challenge. An effective way presented by Goel and Wiltgen [20] is to employ analogies as a mechanism for reducing cognitive dissonance in interdisciplinary design teams. As a reasoning process in creative design, analogies can conduce to reduction of individual differences, similar to the knowledge sharing method [21]. In the industrial design field, a powerful approach is Kansei engineering, which can help designer link consumers’ emotional response to design properties of a product [22–24]. For computational and intelligent solving, the effective method is IGA, which involves human as evaluators to make evaluations and selections, and get the fitness value in an evolutionary process instead of making the fitness function in classical genetic algorithms [25, 26]. IGA is conducive to capture consumers’ aesthetic intention and percept users’ emotion or preference [27], and has been widely used in color design of various products, such as motorcycle [12], car console [28, 29], software robot [30], etc. Nevertheless, users’ fatigue and cognitive dissonance are ubiquitous and will gradually arise with the evolution process of IGAs, which can be defined as fitness noise and will influence the performance of interactive evolutionary computation (IEC) [31]. The former can be caused by a lot of repetitive work, tedious operation and visual weariness, and the latter may be attributed to the user’s knowledge and experience discrepancy. They have constituted the issues and obstacles to the application of IGAs. 
To solve these problems, many researchers have studied and put forward several practical methods, such as using multistage IGAs to divide population into several stages for lessening users’ population cognition burden [28], adopting a fuzzy number described with a Gaussian membership function to express an individual’s fitness [32], employing preference surrogate model to achieve fitness estimation and information extraction in the process of IEC [33–35], etc.As the intelligent evolutionary method is based on users’ preference and selection, the fitness of each individual in evolution process is gained by users’ subjective evaluation, which may be affected by users’ experiences, cognitive disparities, or fatigue, leading to unobjective evaluation [28]. In other words, the fitness given by users always mixes with noise and is imprecise in the evolution process, and the evolutionary consequences cannot reflect consumers’ preferences accurately and ultimately affect the accuracy and validity of final design decisions. Although several researches have proposed methods to address these issues, the effects of these problems cannot be completely eliminated. It is still interesting and worthy of further research.This study presents an IGA method that considers the influence of cognition noise, including consumers’ cognition familiarity, and fatigue and employs trapezoidal fuzzy numbers to represent the uncertainty of users’ judgment instead of precise values. To do so, a cognition noise model is proposed by considering three phases: cognitive phase, intermediate phase, and fatigue phase. By considering cognitive noise and introducing trapezoidal fuzzy numbers, the IGA method can reduce the influence of subjectivity of consumers’ evaluation. To validate the proposed method, the IGA method is adopted for designing a handheld detector.The remainder of the paper is organized as follows: Section2 introduces the methods for interactive product color design, including cognition noise model by combining users’ cognition familiarity and fatigue, a solving algorithm through similarity measurement of the individual’s color and evaluation with trapezoidal fuzzy numbers, and an interactive product color design process. Then, a numerical example is provided to illustrate the detailed implementation of the proposed method in Section 3. Finally, we summarize and highlight the contribution of this paper. ## 2. Methods IGA is an optimization method that connects a computer system and human being to jointly accomplish a task [36]. It provides a framework for interaction between humans and computers where computers use GAs to explore possible solutions and converge them to adapt objectives and constraints, and humans evaluate and provide feedback on individuals in the search process. Because of the characteristics that the fitness values of individuals are computed by users’ assigned preference rank rather than numerical calculation, IGA is effective to solve problems that indexing optimization of the implicit performance cannot be directly calculated by a function [37]. Since users participate in the IGA process, it is inevitable that users’ cognition about individuals will change and users’ fatigue will emerge as the population involves. 
Several approaches have been explored to help reduce users’ cognition burden and alleviate their fatigue, including dividing interactive design process into several stages to lower population complex in the initial stage [28], incorporating a case-based machine learning system to learn and predict user’s assessment [38], and training artificial neural network to automatically define an iterative fitness function [39]. These researches provide feasible methods to decrease human fatigue to a certain extent. As human fatigue cannot be completely eliminated, it is necessary to solve this dilemma by removing instead of only reducing the impact on evolution in product color design process through appropriate algorithm design. To do so, we build a users’ cognition noise model and develop a solving algorithm for interactive product color design. ### 2.1. Users’ Cognition Noise Model In an IGA process for product color design, the fitness values of individuals are likely to change with users’ cognitive level, which embodies in two aspects.(1) In the initial stage of an IGA, the users may not be familiar with product color schemes, and it is not easy to obtain precise cognition about individuals from users, leading to the evaluation results carrying a greater randomness. As interactive evolution progresses, the users can get much clearer cognition about individuals, and the users’ cognition is advancing towards more comprehension, which can be used as a stable evaluation standard. Although there is a certain degree of randomness at this time, the random noise is relatively small. According to the above analysis, we might describe the problem as follows: Set the cognition threshold asNc, that is, the users are completely familiar with the individuals after evaluating Nc product color schemes. When the number of individuals exceeds Nc, the users’ evaluation can be identified with no noise. The simpler the product color schemes are, the smaller the Nc will be, and vice versa.(2) After the user’s cognition getting more comprehension, when the number of product color schemes that have been evaluated reaches a certain threshold, the user might get fatigued. At this time, the given fitness value cannot accurately reflect the user’s preference and the quality of color schemes. Set the fatigue threshold asNf, that is, users begin to get fatigue after evaluating Nf product color schemes.Assume that the evaluation process has reached generationt and i + 1 product color schemes have been assessed, then the number of evaluated individuals can be depicted as Ne = (t − 1) · N + i, where N represents the number of individuals per generation. When Ne < Nc, the users’ cognition about product color schemes is proportional to Ne. That means as the numbers of evaluated individuals increase, the users’ familiarity will largely improve. When Ne ≥ Nf ≥ Nc, even if the users are already familiar with the individuals, they would get fatigued, and as a result, fatigue noise will affect the evolution process. For ease of processing, this study assumes Nf ≥ Nc.Based on the above analysis, the users’ cognition noise model is constructed as follows:(1)δNe=σ+k1⋅Nc−NeNc⋅e−Ne/Nc,Ne<Nc,σ⋅N0,1,Nc≤Ne≤Nf,k2⋅e−Nf/Ne⋅N0,1,Ne>Nf,where k1 and k2 are regulatory factors in different evaluation process; σ represents the noise intensity and σ∈0,1; and N0,1 is the standard normal distribution noise. It is clear from equation (1) that the value of δNe can be limited to 0-1 by reasonably choosing the parameters. 
The curve of cognition noise intensity varies with the number of evaluated individuals (Figure 1).Figure 1 The curve of cognition noise intensity varies with the number of evaluated individuals.According to different usage scenarios, the composition of each phase of the cognition noise model is also different. When users are familiar with target products and color images, they may skip the cognition phase; if the number of product color schemes is small and optimized schemes can be gained without too many evaluations, then the fatigue phase will not come. ### 2.2. An Algorithm for Solving Users’ Cognition Noise Model The key to solve users’ cognition noise model is to determine the recognition thresholdNc and the fatigue threshold Nf. The basis for this is the consistency between user’s cognition and preference, which means that similar product colors will be given similar judgment. If the condition is met, users are in the intermediate phase; otherwise, they are in the cognition phase or fatigue phase.An individualX can be coded as follows:(2)X=x1,r1,g1,b1,x2,r2,g2,b2,…,xm,rn,gn,bn,where m represents partition number of a product for color design; n is the number of colors; and ri, gi, and bi represent RGB parameters of a color within the range of 0–255. Generally, the color number of a product is less than 3, while the number of product form components is more than 3. Therefore, in the color design process, no more than 3 colors are randomly selected and assigned to the product form components.Considering the area proportion of each color, the similarity of two individualsXi and Xj can be computed with(3)Dij=1−∑k=1mSMkTA⋅rik−rjksr2+gik−gjksg2+bik−bjksb2,where sr, sg, and sb represent standard deviations of r, g, and b, respectively; sr=1/m∑i=1mri−r¯ and similarly, we have the value of sg and sb; SMk is the color area of product component k; and TA is the total area of a product color scheme.Due to that users’ perception about product color design schemes are emotional and cannot be represented with the precise value, it is necessary to utilize fuzzy numbers to substitute exact values. Triangular and trapezoidal-shaped fuzzy numbers, with bounded interval of [0, 1], are the most widely used to represent uncertainty, and trapezoidal is more general than triangular [40]. Since they provide an intuitive way to capture the vagueness of users’ evaluation, we choose trapezoidal fuzzy numbers to denote users’ preference of individuals.For a trapezoidal fuzzy numberA˜=a,c,d,b, the membership function can be written as follows:(4)Ax=x−ac−a,a≤x≤c,1,c≤x≤d,b−xb−d,d≤x≤b,0,otherwise,where 0≤a≤c≤d≤b≤1 and a,b is the support of the fuzzy number and c,d is the modal interval. For ranking design schemes, the defuzzification value of the trapezoidal fuzzy number is needed by using a+b+c+d/4 [41].Using a 7-point labeled scale, which is commonly used to gather respondents’ ratings for perceptual items [6], users’ preference of color images in the IGA can be described by a 4-tuple. Each Kansei attribute comprises 7 sets of semantic terms, and the corresponding fuzzy numbers are indicated in Table 1.Table 1 Semantic terms and the corresponding fuzzy numbers for Kansei attribute. 
Semantic label Semantic terms (perceived preference) Trapezoidal fuzzy number VL Very low Kansei preference (0, 0, 0.1, 0.2) L Low Kansei preference (0.1, 0.2, 0.2, 0.3) ML Moderately low Kansei preference (0.2, 0.3, 0.4, 0.5) M Medium Kansei preference (0.4, 0.5, 0.5, 0.6) MH Moderately high Kansei preference (0.5, 0.6, 0.7, 0.8) H High Kansei preference (0.7, 0.8, 0.8, 0.9) VH Very high Kansei preference (0.8, 0.9, 1, 1)Assume that there areq color image indicators for evaluation. The weight of each indicator, represented by wk (k=1,2,…,q), is calculated with the AHP method [42]. Let the preference set of Xi given by users be vi=vi1,vi2,…,viq and vik=aik,cik,dik,bik, then the synthetical evaluation of product color scheme xxi in generation t can be calculated as follows:(5)fxxi,t=14∑∑k=1qwkaik,cik,dik,bik.Accordingly, the similarity between two evaluations can be computed as follows:(6)CDij=1−1q∑k=1q14∑o=a,c,d,bwkoik−wkojk21/2.For two similar product color schemes, if the users’ two assessments are of high similarity, the cognition noise is considered to be small and users are in the intermediate phase; if the users’ two evaluations are of low similarity, the cognition noise is considered to be large and users are in the cognition phase or fatigue phase. For previousK (1≤K≤j−1) products of individual Xj, if Dij−CDij≤δ (δ represents the cognition difference threshold, i=j−K,j−K+1,&,j−1) can be met, then users are in the intermediate phase and Nc=j−Kmax (Kmax is the maximum K that met the formula Dij−CDij≤δ). When it is confirmed that users are in the intermediate phase, if the previous K products of individual Xj meet the formula Dij−CDij≤δ (i=j−K,j−K+1,…,j−1), then users are in the fatigue phase and Nf=j−Kmax (Kmax is the maximum K that met the formula Dij−CDij≤δ). Otherwise, users are in the cognition phase. ### 2.3. Interactive Product Color Design Process The aim of interactive product color design is to involve users to interact and assess the fitness of individuals for design evolution by means of interactive genetic algorithms to satisfy the objective desired by users. This process includes three parts: designing fitness function, establishing genetic and mutation mechanism, and planning the implementation process of the proposed algorithm. #### 2.3.1. Fitness Function In the light of synthetical evaluation computed with formula (5) and considering users’ cognition noise, the fitness function can be depicted as follows:(7)Fxxi,t=fxxi,t,Nc≤i≤Nf,1−δxxi,t⋅fxxi,t,otherwise.Formula (7) indicates that when users are in the intermediate phase, their evaluation about product design schemes can accurately reflect their cognition, and the fitness value equals users’ evaluation value. Otherwise, the fitness value of individuals should be subtracted by users’ evaluation value from the cognition noise. #### 2.3.2. Crossover and Mutation Since trapezoidal fuzzy numbers are employed to depict users’ preference, the selection criteria should be in accordance with them and set with semantic terms. Here, we assume that the individuals whose synthetic evaluation is equal to or exceeds high Kansei preference will be selected and enter into the next generation, while the individuals below will be eliminated. 
Parent individuals are selected in the eliminated individuals according to the level of evaluation and to produce offspring populations through crossover and mutation.Let there ben eliminated individuals and numbers of each preference level be n(MH), n(M), n(ML), n(L), and n(VL), respectively, nMH+nM+nML+nL+nVL=n. Let the descending order of them be n1 > n2 > n3 > n4 > n5, and the probability for each individual to be selected as the parent individual in its preference level can be computed as follows: n1/n⋅nMH, n2/n⋅nM, n3/n⋅nML, n4/n⋅nL, and n5/n⋅nVL.For crossover operating, randomly choose the color from parent individuals to make up the required colors of the target product. Mutation is realized by extending R, G, and B values of product color design individuals to 20% with a set mutation rate in each dimension of RGB color space. The changed color values which exceed beyond 0–255 will be ignored. Figure2 shows how the crossover and mutation implement.Figure 2 Illustration of crossover and mutation. #### 2.3.3. Interactive Product Color Design Process The detailed process of the interactive product color design is presented as follows:(1) By specifying the value ofk1, k2, σ, δ, and genetic manipulation parameters, the original population of product color schemes is generated(2) The preference of individuals is given by users according to the evaluation indicators of product color images(3) Calculate color value similarity and evaluation similarity of each individual, respectively, in line with formulas (3) and (6)(4) Compute the fitness value of each individual according to formula (7) and save individuals that their evaluation value equals or exceeds specified satisfaction threshold(5) Judge whether the evolutional generation moves outside the set limits. If true, then finish; otherwise, go to next step(6) Judge whether the amount of satisfactory individuals exceeds the set value. If true, then finish; otherwise go to next step(7) Execute crossover and mutation for producing populations of the next generation. And then, go to step 2The overall framework of the proposed method is shown in Figure3.Figure 3 Framework of the proposed method. ## 2.1. Users’ Cognition Noise Model In an IGA process for product color design, the fitness values of individuals are likely to change with users’ cognitive level, which embodies in two aspects.(1) In the initial stage of an IGA, the users may not be familiar with product color schemes, and it is not easy to obtain precise cognition about individuals from users, leading to the evaluation results carrying a greater randomness. As interactive evolution progresses, the users can get much clearer cognition about individuals, and the users’ cognition is advancing towards more comprehension, which can be used as a stable evaluation standard. Although there is a certain degree of randomness at this time, the random noise is relatively small. According to the above analysis, we might describe the problem as follows: Set the cognition threshold asNc, that is, the users are completely familiar with the individuals after evaluating Nc product color schemes. When the number of individuals exceeds Nc, the users’ evaluation can be identified with no noise. The simpler the product color schemes are, the smaller the Nc will be, and vice versa.(2) After the user’s cognition getting more comprehension, when the number of product color schemes that have been evaluated reaches a certain threshold, the user might get fatigued. 
At this time, the given fitness value cannot accurately reflect the user’s preference and the quality of color schemes. Set the fatigue threshold asNf, that is, users begin to get fatigue after evaluating Nf product color schemes.Assume that the evaluation process has reached generationt and i + 1 product color schemes have been assessed, then the number of evaluated individuals can be depicted as Ne = (t − 1) · N + i, where N represents the number of individuals per generation. When Ne < Nc, the users’ cognition about product color schemes is proportional to Ne. That means as the numbers of evaluated individuals increase, the users’ familiarity will largely improve. When Ne ≥ Nf ≥ Nc, even if the users are already familiar with the individuals, they would get fatigued, and as a result, fatigue noise will affect the evolution process. For ease of processing, this study assumes Nf ≥ Nc.Based on the above analysis, the users’ cognition noise model is constructed as follows:(1)δNe=σ+k1⋅Nc−NeNc⋅e−Ne/Nc,Ne<Nc,σ⋅N0,1,Nc≤Ne≤Nf,k2⋅e−Nf/Ne⋅N0,1,Ne>Nf,where k1 and k2 are regulatory factors in different evaluation process; σ represents the noise intensity and σ∈0,1; and N0,1 is the standard normal distribution noise. It is clear from equation (1) that the value of δNe can be limited to 0-1 by reasonably choosing the parameters. The curve of cognition noise intensity varies with the number of evaluated individuals (Figure 1).Figure 1 The curve of cognition noise intensity varies with the number of evaluated individuals.According to different usage scenarios, the composition of each phase of the cognition noise model is also different. When users are familiar with target products and color images, they may skip the cognition phase; if the number of product color schemes is small and optimized schemes can be gained without too many evaluations, then the fatigue phase will not come. ## 2.2. An Algorithm for Solving Users’ Cognition Noise Model The key to solve users’ cognition noise model is to determine the recognition thresholdNc and the fatigue threshold Nf. The basis for this is the consistency between user’s cognition and preference, which means that similar product colors will be given similar judgment. If the condition is met, users are in the intermediate phase; otherwise, they are in the cognition phase or fatigue phase.An individualX can be coded as follows:(2)X=x1,r1,g1,b1,x2,r2,g2,b2,…,xm,rn,gn,bn,where m represents partition number of a product for color design; n is the number of colors; and ri, gi, and bi represent RGB parameters of a color within the range of 0–255. Generally, the color number of a product is less than 3, while the number of product form components is more than 3. Therefore, in the color design process, no more than 3 colors are randomly selected and assigned to the product form components.Considering the area proportion of each color, the similarity of two individualsXi and Xj can be computed with(3)Dij=1−∑k=1mSMkTA⋅rik−rjksr2+gik−gjksg2+bik−bjksb2,where sr, sg, and sb represent standard deviations of r, g, and b, respectively; sr=1/m∑i=1mri−r¯ and similarly, we have the value of sg and sb; SMk is the color area of product component k; and TA is the total area of a product color scheme.Due to that users’ perception about product color design schemes are emotional and cannot be represented with the precise value, it is necessary to utilize fuzzy numbers to substitute exact values. 
Triangular and trapezoidal-shaped fuzzy numbers, with bounded interval of [0, 1], are the most widely used to represent uncertainty, and trapezoidal is more general than triangular [40]. Since they provide an intuitive way to capture the vagueness of users’ evaluation, we choose trapezoidal fuzzy numbers to denote users’ preference of individuals.For a trapezoidal fuzzy numberA˜=a,c,d,b, the membership function can be written as follows:(4)Ax=x−ac−a,a≤x≤c,1,c≤x≤d,b−xb−d,d≤x≤b,0,otherwise,where 0≤a≤c≤d≤b≤1 and a,b is the support of the fuzzy number and c,d is the modal interval. For ranking design schemes, the defuzzification value of the trapezoidal fuzzy number is needed by using a+b+c+d/4 [41].Using a 7-point labeled scale, which is commonly used to gather respondents’ ratings for perceptual items [6], users’ preference of color images in the IGA can be described by a 4-tuple. Each Kansei attribute comprises 7 sets of semantic terms, and the corresponding fuzzy numbers are indicated in Table 1.Table 1 Semantic terms and the corresponding fuzzy numbers for Kansei attribute. Semantic label Semantic terms (perceived preference) Trapezoidal fuzzy number VL Very low Kansei preference (0, 0, 0.1, 0.2) L Low Kansei preference (0.1, 0.2, 0.2, 0.3) ML Moderately low Kansei preference (0.2, 0.3, 0.4, 0.5) M Medium Kansei preference (0.4, 0.5, 0.5, 0.6) MH Moderately high Kansei preference (0.5, 0.6, 0.7, 0.8) H High Kansei preference (0.7, 0.8, 0.8, 0.9) VH Very high Kansei preference (0.8, 0.9, 1, 1)Assume that there areq color image indicators for evaluation. The weight of each indicator, represented by wk (k=1,2,…,q), is calculated with the AHP method [42]. Let the preference set of Xi given by users be vi=vi1,vi2,…,viq and vik=aik,cik,dik,bik, then the synthetical evaluation of product color scheme xxi in generation t can be calculated as follows:(5)fxxi,t=14∑∑k=1qwkaik,cik,dik,bik.Accordingly, the similarity between two evaluations can be computed as follows:(6)CDij=1−1q∑k=1q14∑o=a,c,d,bwkoik−wkojk21/2.For two similar product color schemes, if the users’ two assessments are of high similarity, the cognition noise is considered to be small and users are in the intermediate phase; if the users’ two evaluations are of low similarity, the cognition noise is considered to be large and users are in the cognition phase or fatigue phase. For previousK (1≤K≤j−1) products of individual Xj, if Dij−CDij≤δ (δ represents the cognition difference threshold, i=j−K,j−K+1,&,j−1) can be met, then users are in the intermediate phase and Nc=j−Kmax (Kmax is the maximum K that met the formula Dij−CDij≤δ). When it is confirmed that users are in the intermediate phase, if the previous K products of individual Xj meet the formula Dij−CDij≤δ (i=j−K,j−K+1,…,j−1), then users are in the fatigue phase and Nf=j−Kmax (Kmax is the maximum K that met the formula Dij−CDij≤δ). Otherwise, users are in the cognition phase. ## 2.3. Interactive Product Color Design Process The aim of interactive product color design is to involve users to interact and assess the fitness of individuals for design evolution by means of interactive genetic algorithms to satisfy the objective desired by users. This process includes three parts: designing fitness function, establishing genetic and mutation mechanism, and planning the implementation process of the proposed algorithm. ### 2.3.1. 
Fitness Function In the light of synthetical evaluation computed with formula (5) and considering users’ cognition noise, the fitness function can be depicted as follows:(7)Fxxi,t=fxxi,t,Nc≤i≤Nf,1−δxxi,t⋅fxxi,t,otherwise.Formula (7) indicates that when users are in the intermediate phase, their evaluation about product design schemes can accurately reflect their cognition, and the fitness value equals users’ evaluation value. Otherwise, the fitness value of individuals should be subtracted by users’ evaluation value from the cognition noise. ### 2.3.2. Crossover and Mutation Since trapezoidal fuzzy numbers are employed to depict users’ preference, the selection criteria should be in accordance with them and set with semantic terms. Here, we assume that the individuals whose synthetic evaluation is equal to or exceeds high Kansei preference will be selected and enter into the next generation, while the individuals below will be eliminated. Parent individuals are selected in the eliminated individuals according to the level of evaluation and to produce offspring populations through crossover and mutation.Let there ben eliminated individuals and numbers of each preference level be n(MH), n(M), n(ML), n(L), and n(VL), respectively, nMH+nM+nML+nL+nVL=n. Let the descending order of them be n1 > n2 > n3 > n4 > n5, and the probability for each individual to be selected as the parent individual in its preference level can be computed as follows: n1/n⋅nMH, n2/n⋅nM, n3/n⋅nML, n4/n⋅nL, and n5/n⋅nVL.For crossover operating, randomly choose the color from parent individuals to make up the required colors of the target product. Mutation is realized by extending R, G, and B values of product color design individuals to 20% with a set mutation rate in each dimension of RGB color space. The changed color values which exceed beyond 0–255 will be ignored. Figure2 shows how the crossover and mutation implement.Figure 2 Illustration of crossover and mutation. ### 2.3.3. Interactive Product Color Design Process The detailed process of the interactive product color design is presented as follows:(1) By specifying the value ofk1, k2, σ, δ, and genetic manipulation parameters, the original population of product color schemes is generated(2) The preference of individuals is given by users according to the evaluation indicators of product color images(3) Calculate color value similarity and evaluation similarity of each individual, respectively, in line with formulas (3) and (6)(4) Compute the fitness value of each individual according to formula (7) and save individuals that their evaluation value equals or exceeds specified satisfaction threshold(5) Judge whether the evolutional generation moves outside the set limits. If true, then finish; otherwise, go to next step(6) Judge whether the amount of satisfactory individuals exceeds the set value. If true, then finish; otherwise go to next step(7) Execute crossover and mutation for producing populations of the next generation. And then, go to step 2The overall framework of the proposed method is shown in Figure3.Figure 3 Framework of the proposed method. ## 2.3.1. 
## 3. Case Study

The color design of a handheld detector is taken as an example to verify the validity of the proposed method. Using the VBA macro editor of the CorelDRAW software, an interactive product color design module is developed by combining users' cognition noise with an IGA, as shown in Figure 4.
There are 6 product color schemes in each generation, and a 7-point labeled scale is deployed to rate the two indicators, fashionable and technical. In each generation of evolutionary operations, 3 colors are randomly generated and assigned to the 5 product components. To better analyze users' perception of the product color schemes, we use two computers for the experiment. One runs the IGA module to quickly generate product color schemes. The other is a workstation running KeyShot, a real-time 3D rendering package, which gives users better visual perception by creating 3D renderings in no more than 30 seconds; the 3D configuration and rendering are shown in Figure 5. With the AHP method, by comparing the two indicators, their weights are set to 0.4 and 0.6. The total number of satisfactory solutions required is 6. According to the user surveys, individuals whose evaluation equals or exceeds high Kansei preference are saved as satisfactory solutions; that is, the fitness should be greater than or equal to 0.8 according to formula (5). The suggested ranges for the crossover and mutation probabilities are 0.5–0.9 and 0.01–0.1; here we set them to 0.7 and 0.08, respectively. The maximum number of evolutionary generations is set to 20, with $k_1 = k_2 = 0.5$ and $\sigma = 0.05$. 20 students majoring in industrial design (half male and half female, denoted DM and DF) and 20 students of other majors (half male and half female, denoted CM and CF) were recruited randomly as participants. Comparing the proposed method (denoted NIGA) with a traditional IGA, whose population size, crossover probability, mutation probability, and terminating generation number are set to 6, 0.7, 0.08, and 20, respectively, the calculation results are shown in Tables 2 and 3.

Figure 4: Interactive product color design module.

Figure 5: 3D configuration and rendering of product color schemes.

Table 2: Experimental results comparing NIGA with IGA.

| Name | User ID | Total generations | $N_c$ | $N_f$ | Total number of satisfactory individuals |
| --- | --- | --- | --- | --- | --- |
| NIGA | DM1 | 16 | 1 | 91 | 6 |
| NIGA | DM2 | 14 | 1 | 80 | 6 |
| NIGA | DM3 | 14 | 1 | 82 | 6 |
| NIGA | DM4 | 15 | 1 | 87 | 6 |
| NIGA | DM5 | 16 | 5 | None | 6 |
| NIGA | DM6 | 15 | 1 | 85 | 6 |
| NIGA | DM7 | 20 | 1 | 115 | 5 |
| NIGA | DM8 | 14 | 1 | 79 | 6 |
| NIGA | DM9 | 17 | 5 | 100 | 6 |
| NIGA | DM10 | 15 | 1 | 88 | 6 |
| NIGA | DF1 | 16 | 7 | 91 | 6 |
| NIGA | DF2 | 16 | 5 | None | 6 |
| NIGA | DF3 | 15 | 1 | 89 | 6 |
| NIGA | DF4 | 12 | 1 | 68 | 6 |
| NIGA | DF5 | 20 | 8 | 103 | 3 |
| NIGA | DF6 | 15 | 6 | 88 | 6 |
| NIGA | DF7 | 14 | 1 | 79 | 6 |
| NIGA | DF8 | 17 | 1 | 100 | 6 |
| NIGA | DF9 | 14 | 1 | 80 | 6 |
| NIGA | DF10 | 16 | 1 | 92 | 6 |
| IGA | DM1 | 18 | — | — | 6 |
| IGA | DM2 | 16 | — | — | 6 |
| IGA | DM3 | 15 | — | — | 6 |
| IGA | DM4 | 17 | — | — | 6 |
| IGA | DM5 | 20 | — | — | 4 |
| IGA | DM6 | 18 | — | — | 6 |
| IGA | DM7 | 17 | — | — | 6 |
| IGA | DM8 | 15 | — | — | 6 |
| IGA | DM9 | 18 | — | — | 6 |
| IGA | DM10 | 17 | — | — | 6 |
| IGA | DF1 | 16 | — | — | 6 |
| IGA | DF2 | 20 | — | — | 5 |
| IGA | DF3 | 20 | — | — | 5 |
| IGA | DF4 | 20 | — | — | 4 |
| IGA | DF5 | 13 | — | — | 6 |
| IGA | DF6 | 14 | — | — | 6 |
| IGA | DF7 | 15 | — | — | 6 |
| IGA | DF8 | 15 | — | — | 6 |
| IGA | DF9 | 20 | — | — | 3 |
| IGA | DF10 | 20 | — | — | 5 |
| NIGA | CM1 | 13 | 5 | 75 | 6 |
| NIGA | CM2 | 14 | 4 | 81 | 6 |
| NIGA | CM3 | 15 | 5 | 88 | 6 |
| NIGA | CM4 | 20 | 3 | 120 | 3 |
| NIGA | CM5 | 15 | 1 | 85 | 6 |
| NIGA | CM6 | 20 | 1 | 105 | 5 |
| NIGA | CM7 | 14 | 5 | 81 | 6 |
| NIGA | CM8 | 15 | 5 | 86 | 6 |
| NIGA | CM9 | 15 | 6 | 87 | 6 |
| NIGA | CM10 | 20 | 4 | 115 | 5 |
| NIGA | CF1 | 15 | 6 | 85 | 6 |
| NIGA | CF2 | 12 | 5 | 70 | 6 |
| NIGA | CF3 | 14 | 5 | 80 | 6 |
| NIGA | CF4 | 20 | 8 | 115 | 6 |
| NIGA | CF5 | 15 | 6 | 86 | 6 |
| NIGA | CF6 | 16 | 8 | 91 | 6 |
| NIGA | CF7 | 14 | 6 | 82 | 6 |
| NIGA | CF8 | 20 | 9 | 104 | 4 |
| NIGA | CF9 | 15 | 4 | 87 | 6 |
| NIGA | CF10 | 20 | 8 | 99 | 5 |
| IGA | CM1 | 19 | — | — | 6 |
| IGA | CM2 | 20 | — | — | 6 |
| IGA | CM3 | 19 | — | — | 6 |
| IGA | CM4 | 20 | — | — | 6 |
| IGA | CM5 | 20 | — | — | 4 |
| IGA | CM6 | 18 | — | — | 6 |
| IGA | CM7 | 17 | — | — | 6 |
| IGA | CM8 | 20 | — | — | 5 |
| IGA | CM9 | 20 | — | — | 5 |
| IGA | CM10 | 18 | — | — | 6 |
| IGA | CF1 | 18 | — | — | 6 |
| IGA | CF2 | 19 | — | — | 6 |
| IGA | CF3 | 20 | — | — | 5 |
| IGA | CF4 | 20 | — | — | 5 |
| IGA | CF5 | 18 | — | — | 6 |
| IGA | CF6 | 20 | — | — | 4 |
| IGA | CF7 | 19 | — | — | 6 |
| IGA | CF8 | 20 | — | — | 4 |
| IGA | CF9 | 18 | — | — | 6 |
| IGA | CF10 | 19 | — | — | 6 |
Table 3: Comparison of average generations.

| Name | User type | Average generations | Average number of satisfactory individuals |
| --- | --- | --- | --- |
| NIGA | DM | 15.6 | 5.9 |
| NIGA | DF | 15.5 | 5.7 |
| NIGA | CM | 16.1 | 5.5 |
| NIGA | CF | 16.1 | 5.7 |
| IGA | DM | 17.1 | 5.8 |
| IGA | DF | 17.3 | 5.2 |
| IGA | CM | 19.1 | 5.6 |
| IGA | CF | 19.1 | 5.4 |

From the experimental results in Table 2, the evaluations of 82.5% of the users of the proposed method converged, while in the ordinary IGA process 30% of the industrial design participants and 35% of the participants from other majors did not find the required 6 satisfactory solutions. The convergence rate thus increases from 67.5% to 82.5%, which indicates that the proposed method improves the convergence of interactive product color design. As shown in Table 2, the cognition threshold indicates that students majoring in industrial design are more familiar with product color image indicators (their mean $N_c$ is 2.5): they quickly enter the evaluation process and establish the mapping between product color schemes and image indicators. Students from other majors need a longer process to digest product color images, and their cognition noise is relatively larger (mean $N_c$ of 5.2). This indicates that knowledge background has a significant impact on users' perception of product color images. In terms of the fatigue threshold, only 2 industrial design participants did not enter the fatigue phase (for ease of calculation, the number of individuals evaluated up to the convergence of the evolutionary process was taken as their fatigue threshold, which is 96 for both), and all other participants became fatigued. The average fatigue threshold is 90.275. For further verification, participants were asked after the experiment whether they really felt fatigued. The survey shows that 92.1% of them felt confused and could not judge the image indicators of the color schemes precisely.

From the comparison of average generations in Table 3, in contrast with an ordinary IGA, the average generations of NIGA for industrial design students decrease slightly (by 1.5 for males and 1.8 for females). The effect is much stronger for students from other majors, with both males and females cutting 3 generations, which implies that the proposed method plays an active role for users with little or no knowledge of product color design. With this improvement in evolutionary efficiency, the overall average number of evolutionary generations decreases from 18.15 to 15.825 (both overall averages are reproduced in the sketch below).
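The headline numbers above follow directly from Tables 2 and 3; the short Python check below reproduces them (each user group contains 10 participants, so the four group means weigh equally).

```python
# Reproduce the summary statistics quoted above from Tables 2 and 3.
niga_means = [15.6, 15.5, 16.1, 16.1]   # DM, DF, CM, CF average generations
iga_means  = [17.1, 17.3, 19.1, 19.1]

print(sum(niga_means) / 4)   # 15.825  overall NIGA average generations
print(sum(iga_means) / 4)    # 18.15   overall IGA average generations

# Convergence: users who collected all 6 satisfactory schemes (Table 2)
print(33 / 40)               # 0.825  NIGA  (7 of 40 users fell short)
print(27 / 40)               # 0.675  IGA   (13 of 40 users fell short)
```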
In conclusion, in an interactive process that involves users in product color design, it is inevitable that users' perception of product color schemes is influenced by their background knowledge, and the cognition noise in different phases directly affects the validity of the design. Meanwhile, the precise values used in a traditional IGA cannot represent the uncertainty of users' preference. Therefore, integrating users' cognition into the product color design process with an IGA and trapezoidal fuzzy numbers helps simulate users' real-world perception of product colors in an objective and scientific way, and further improves the convergence speed and evolutionary efficiency of an ordinary IGA.

## 4. Conclusions

Providing users with multiple product color schemes helps identify users' preferences and reduces the risk of product development. Owing to the cognition differences and uncertainty among users, it is not easy to determine which product colors users prefer. To assist industrial designers in product color design more effectively and to reflect users' perception of product colors more accurately, interactive genetic algorithms are deployed in combination with a proposed cognition noise model consisting of three phases: the cognition phase, the intermediate phase, and the fatigue phase. With trapezoidal fuzzy numbers, an algorithm is designed to find the key parameters through similarity calculations between the RGB values of two individuals and between users' evaluations. The interactive product color design process is demonstrated on an instance and compared with a traditional IGA. With 40 users recruited for the experiment, the results show that (1) knowledge background significantly affects users' cognition of product colors; (2) the proposed method helps improve convergence speed and evolutionary efficiency, with the convergence rate increasing from 67.5% to 82.5% and the overall average number of evolutionary generations decreasing from 18.15 to 15.825.

This study makes the following contributions: (1) Using trapezoidal fuzzy numbers to describe users' preferences makes the application of an IGA more practical and easier to operate. (2) Incorporating users' subjective cognitive differences into the IGA process helps improve the convergence speed and evolutionary efficiency of a traditional IGA. (3) The proposed method can effectively assist industrial designers in product color design.

---
*Source: 1019749-2019-08-01.xml*
2019
# A Lightweight Fine-Grained Searchable Encryption Scheme in Fog-Based Healthcare IoT Networks

**Authors:** Hui Li; Tao Jing

**Journal:** Wireless Communications and Mobile Computing (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1019767

---

## Abstract

For a smart healthcare system, a cloud-based paradigm with numerous user terminals supports more reliable, convenient, and intelligent services. Considering the resource limitations of terminals and the communication overhead of the cloud paradigm, we propose a hybrid IoT-Fog-Cloud framework. In this framework, we deploy a geo-distributed fog layer at the edge of the network. The fogs provide local storage, sufficient processing power, and appropriate network functions. For the fog-based healthcare system, data confidentiality, access control, and secure searching over ciphertexts are the key issues for sensitive data. Furthermore, fitting the storage and computing requirements to the limited resources is also a great challenge for data management. To address these issues, we design a lightweight keyword searchable encryption scheme with fine-grained access control for our proposed healthcare-related IoT-Fog-Cloud framework. Through our design, users obtain fast and efficient service by delegating the majority of the workload and storage requirements to the fogs and the cloud without extra privacy leakage. We prove that our scheme satisfies the security requirements and demonstrate its efficiency through experimental evaluation.

---

## Body

## 1. Introduction

Since Ashton [1] and Brock [2] first proposed the concept of IoT, it has been widely used in real life, in combination with technologies from sensor networks, embedded systems, object identification, and wireless networks, to tag, sense, and control things over the Internet [3–6]. With its ubiquitous nature, IoT makes a great contribution to improving the equality of medical care by enabling remote monitoring and reducing time costs through implanted sensors or wearable mobile devices. According to the insight from [7], the healthcare system will evolve from the current hospital-centered paradigm into a home-centered one by 2030. As more sensors are deployed in the healthcare system, a continuous stream of data needs to be stored, processed, and transmitted. This poses a great challenge to the traditional IoT-cloud infrastructure in terms of reliability, immediate response, and security [8]. It creates demand for a "mediator" between IoT devices and the cloud server that supports geo-distribution, storage, and computing capability, acting as an extension of the cloud; this layer is officially called fog, from the concept of fog computing proposed by Cisco [9].

When sensitive data such as personal health records are stored on cloud servers, their security and privacy remain challenges in the fog computing paradigm [10–12]. To solve this problem, applying an access control mechanism is an essential way to protect the sensitive data from unauthorized users. Attribute-based encryption (ABE), introduced in [13] as a new type of IBE, plays a great role in access control; it is classified into key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE).
KP-ABE associates users' private keys with designated policies and tags ciphertexts with attributes, whereas CP-ABE embeds designated policies in ciphertexts and associates users' private keys with attributes [14, 15]. Obviously, CP-ABE is the better choice for access control in our model, since it gives the user the ability to designate an access structure and perform the encryption under that structure.

However, most existing ABE schemes are time consuming in the key generation phase and carry a large computational load in the decryption phase, which leads to a bad experience for users. Maintaining effective search over encrypted ciphertexts is also a great challenge. Searchable encryption, especially public-key searchable encryption, is an effective approach to these problems, and it is important to reduce complex operations, e.g., pairings and exponentiations, on the user side.

### 1.1. Motivation and Contribution

The IoT infrastructure, such as the monitoring devices in a traditional hospital or the wearable health management devices in a smart home, continuously synchronizes data to the remote cloud. The massive amount of sensitive data poses a great challenge to the current healthcare-related IoT-to-cloud system because of IoT's limited storage, low power, and poor computing capability. In this paper, we attempt to solve this problem as follows.

(i) We propose a fog-supported hybrid infrastructure, as shown in Figure 1. The distributed fogs are deployed between the IoT devices and the clouds, providing temporary data storage, data computation and analysis, and network services [16], so as to reduce transmission delay. They also help manage users and attributes under the control of the trusted authority.

Figure 1: A fog-based healthcare system.

With the proposed infrastructure, we design a new scheme implementing specific network functions to meet real-world needs, which we illustrate with the following example. A person named Wealth rarely cares about his physical condition. One day he learns that his friend Bob suffers from hyperglycemia and wants to know more about it. When he searches "Hyperglycemia" with cloud service providers such as "BodyMedia", "Google Health", "CiscoHealthPresence", or "IBM Bluemix", the clouds learn that he, or someone he knows, may have hyperglycemia. Obviously, his personal health privacy is exposed to the clouds. To prevent this privacy disclosure, we construct indexes for "Hyperglycemia" in the file encryption phase through secure methods. To search for such a keyword, the corresponding trapdoors are generated with the help of the fog. Upon receiving the trapdoor, the clouds return all encrypted files associated with "Hyperglycemia" if the trapdoor matches the index. In this way, we protect Wealth's search privacy.

Further, suppose Wealth receives all files returned for "Hyperglycemia" under our design. After realizing the importance of keeping healthy, he decides to start his own fitness program and monitor health indicators such as the glycemic index through wearable sensors. Due to the limited storage of his own devices, he has to store his data in the cloud and share it with designated people who hold specific attributes.
If someone without sufficient attributes attempts to search a keyword, it is impossible for him or her to generate a valid trapdoor matching the keyword's index, let alone obtain Wealth's sensitive data. We help Wealth accomplish this goal through the designs below. In summary, Wealth can enjoy an efficient, fast, high-quality, and secure service by adopting our system.

The main contributions of this article are as follows:

(i) We design a keyword searchable encryption scheme for the healthcare-related IoT-fog-cloud infrastructure. The proposed scheme ensures that both data and keywords are protected from the cloud and the fog, which is essential for users in a health-related environment.

(ii) Given their constrained resources, IoT devices are not capable of carrying out complicated encryption and decryption processes. To overcome this issue, we transfer most of the heavy computation to the fog and the cloud in our scheme, while only a small part is reserved for users.

(iii) On the basis of ciphertext-policy attribute-based encryption, we design a fine-grained access control framework. A user obtains his query capability authorization from the trusted authority and the fog after his attributes are checked. Messages are encrypted under an access policy such that only users with the designated attributes can access them.

(iv) We provide a formal security analysis demonstrating that our scheme is secure under the IND-CK-CCA attack and satisfies trapdoor indistinguishability. We also make experimental comparisons with previous research, revealing that our scheme achieves good efficiency.

The rest of the paper is organized as follows. In Section 2, we briefly introduce the preliminaries used in our paper. In Section 3, we present two adversary models, the security requirements, and the system functions of our lightweight fine-grained searchable encryption (LFSE) system. Our proposed system is described in Section 4. A thorough security analysis of the proposed system appears in Section 5, and the efficiency is analysed in Section 6. We conclude our paper in Section 8.
When he searches “Hyperglycemia" in cloud service providers such as “BodyMedia", “Google Health", “CiscoHealthPresence", or “IBM Bluemix", clouds know that he or someone he knows may get hyperglycemia. Obviously, his personal health privacy is exposed to the clouds. In order to prevent privacy disclosure, we construct indexes for “Hyperglycemia" in the file encryption phase through some secure methods. To search such a keyword, we need to generate the corresponding trapdoors with the help of the fog. Upon receiving the trapdoor, clouds return all the encrypted files associated with the specific “Hyperglycemia" if the trapdoor matches with the index. We can protect Wealth’s searching privacy by this way as follows.Further, we consider Wealth receives all files through searching “Hyperglycemia" by performing our designs. After realizing the importance of keeping healthy, he decides to start his own fitness program to monitor his health indicators such as Glycemic index through wearable sensors. Also, due to the limited storage of his own devices, he has to store his data to the cloud and shares it to some designated ones which have specific attributes. If someone without sufficient attributes attempts to search the keyword, he/she is impossible to generate a valid trapdoor matching with a keyword’s index, not to mention to get Wealth’s sensitive data. We help Wealth to accomplish this goal through the following designs.In summary, Wealth could enjoy an efficient, fast, high-quality, and secure service through adopting our system.The main contributions of this article are exhibited as follows:(i) We design a keyword searchable encryption scheme in the healthcare related IoT-fog-cloud infrastructure. The proposed scheme ensures a security requirement that both data and keywords are protected from the cloud and the fog, which is very essential to users in the health related environment.(ii) With the restriction of constrained resource, IoT devices are not capable of doing complicated encryption and decryption process. In order to overcome this issue,we transfer most of heavy computation to the fog and the cloud in our scheme, while only a small part is reserved for users.(iii) On the basis of ciphertext-policy attribute-based encryption, we design a fine-grained access control framework. A user should obtain his query capability authorization from a trusted authority and the fog through checking his attributes. The messages are encrypted with an access policy such thatonly users with the designated attributes can access them.(iv) We provide formal security analysis which demonstrates that our scheme issecure under IND-CK-CCA attack and satisfies trapdoor indistinguishability secure. Also we make experiment comparisons with some previous research revealing that our scheme has a good efficiency.The rest of the paper is organized as follows. In Section2, we briefly introduce preliminaries which will be utilized in our paper. Next, in Section 3, we present two adversary models, security requirements and system functions of our lightweight fine-grained searchable encryption (LFSE) system. Our proposed system is described in Section 4. The thorough security analysis of the proposed system appears in Section 5 and the efficiency is analysed in Section 6. We conclude our paper in Section 8. ## 2. Preliminaries In this section we provide a detailed description of some fundamentals of cryptography that will be used throughout this paper. ### 2.1. 
We first give the notation used throughout the paper. For a prime number $p$, we denote by $Z_p^*$ the set $\{1, 2, \dots, p-1\}$, with multiplication and addition defined modulo $p$. We write $a \leftarrow_r S$ to denote that $a$ is chosen uniformly at random from the elements of $S$, and we let $\lambda$ be the security parameter of our system.

### 2.2. Bilinear Map

Let $G_1$ and $G_2$ be two multiplicative cyclic groups of prime order $p$, let $g$ be a generator of $G_1$, and let $e: G_1 \times G_1 \to G_2$ be a bilinear map. The map $e$ has the following properties:

(i) Bilinearity: $e(P^a, Q^b) = e(P, Q)^{ab}$ for all $P, Q \in G_1$ and all $a, b \in Z_p^*$.
(ii) Nondegeneracy: $e(g, g) \ne 1$.
(iii) Computability: there is an efficient algorithm that computes $e(P, Q)$ for all $P, Q \in G_1$.

### 2.3. Access Policy

An access policy defines the attribute sets required to access private messages.

Definition 1 (monotonicity). Let $AT$ be the attribute universe; an access policy $\mathbb{A} \subseteq 2^{AT}$ is a collection of nonempty subsets of $AT$. The access policy $\mathbb{A}$ is monotone if for all $\Omega_1, \Omega_2 \subseteq AT$,

$$\Omega_1 \subseteq \Omega_2,\ \Omega_1 \in \mathbb{A} \ \Longrightarrow\ \Omega_2 \in \mathbb{A}. \tag{1}$$

By monotonicity, an authorized user cannot lose his privileges by holding more attributes than required.
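As a quick illustration of Definition 1, a monotone policy can be stored by its basis of minimal authorized sets, and checking authorization reduces to a superset test. The following Python sketch shows this; the attribute names are hypothetical.

```python
# Monotone access policy represented by its basis A0 of minimal authorized sets.
def satisfies(user_attrs, basis):
    """True iff the user's attribute set contains at least one minimal set;
    by monotonicity, any superset of an authorized set is also authorized."""
    u = set(user_attrs)
    return any(u >= set(omega) for omega in basis)

# Basis A0 = (Omega_1, Omega_2) with hypothetical healthcare attributes
A0 = [{"doctor", "cardiology"}, {"nurse", "ward3", "senior"}]
print(satisfies({"doctor", "cardiology", "ward3"}, A0))  # True (extra attribute is harmless)
print(satisfies({"doctor"}, A0))                         # False (no minimal set covered)
```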
## 3. System Model

### 3.1. Architecture of System

The architecture of the proposed fog-based healthcare system is shown in Figure 2. It is composed of four parts: a trusted authority, cloud service providers, fog nodes, and data users (including data owners and other users).

Figure 2: System model.

Trusted Authority (TA). The trusted authority, such as the national health center or an entity authorized by it, is the authority that verifies users' attributes. It takes charge of generating the system parameters for all entities and is responsible for issuing, revoking, and updating users' attribute private keys.

Cloud (Short for Cloud Service Providers). The cloud, such as Amazon, provides data storage, computational resources, and data analysis. Apart from these content services, it also handles access by outside users to the encrypted files. We assume that the public cloud executes the search algorithm honestly. The cloud in our system is responsible for performing the test algorithm and accomplishing part of the decryption task without knowing any information about the users' keys or attributes.

Fog (Short for Fog Nodes). The fog, which provides computing, storage, and mobility capabilities, is deployed at the edge of the network. Because of the limited computing resources and restricted capacity of the devices carried by data owners and users, a half-trusted fog is deployed as the interface between a user and the cloud server, which matters especially in situations involving sensitive medical information. The fog in our system takes charge of managing the users within its coverage and of revoking users and attributes, without having any information about their private keys. Further, it helps control users' query actions by generating one part of the trapdoor without knowing the queried keyword.

Data Owner. The data owner is an entity who intends to share his files with designated receivers. The receivers' attributes should satisfy the access policy embedded in the corresponding ciphertext. The owner is in charge of encrypting files under a specific access policy, generating indexes for all keywords, and uploading them to the cloud.

Data User. The data user is an entity who intends to obtain encrypted files by sending query requests to the cloud servers and the fog. If he has enough attributes to satisfy the required access policy, he can download ciphertexts and decrypt them with the help of the cloud. The user takes charge of selecting keywords to generate trapdoors and of the final ciphertext decryption.

Assumptions. We assume that the cloud and the fog are always online and have sufficient storage capacity and computing resources. We also assume that there exists a secure channel between the data owner/user and the fog node, e.g., a secure Wi-Fi network.

We assume that the cloud and the fogs are all "honest but curious" [21]. Specifically, they do not delete or modify users' data and return computing results honestly, but they attempt to learn as much private information as possible. All entities execute our proposed protocol, while users may try to access data either within or beyond their privileges. It is also assumed that the cloud and the fog do not collude with each other.

Different from most existing work, which uses only a public cloud, ours is a novel cloud-fog architecture. In this work, we assume that files and keywords are sensitive and should be protected from both the cloud and the fog, while attributes are semisensitive, meaning that attributes may be known only by the fog.

### 3.2. Definition of Basic Algorithms

We now give a general definition of our lightweight fine-grained searchable encryption scheme, which consists of several polynomial-time algorithms.

Setup. This phase, containing three subalgorithms, is implemented by TA.

System.Setup($1^\lambda$): On input the security parameter $\lambda$, the algorithm outputs the master key $Mk$, the public key $Pk$, and the other system parameters.

Fog.Setup($Pk$): On input the system parameter $Pk$, the algorithm outputs the fog's public and private key pair $(Pk_{F_k}, Sk_{F_k})$ and the corresponding verification key $vk_j$ for each attribute in the attribute universe.

User.Setup($Pk$): For each user requesting to join the system, TA verifies the user's identity and attributes.
KeyGeneration. This phase is executed by TA and contains two subalgorithms.

KeyGen($Pk, Mk, User_i, \Omega_{u_i}$): On input the system keys $(Pk, Mk)$, the user's identity, and the user's attributes, the algorithm outputs the user's public and private keys $(Pk_{u_i}, Sk_{u_i})$. Next, on input the generated private key and the user's attributes $\Omega_{u_i}$, it outputs a secret verification key $svk_j$ for each attribute $at_j \in \Omega_{u_i}$.

SearchKeyGen($Mk, User_i$): On input the system's master key $Mk$ and the user's identity $User_i$, the algorithm returns the search key $S_i$ for the user.

FogSupport. This phase is executed by the fog and the users under its management. Three algorithms are included in this phase.

Adduser($User_i$): On input the public parameters and the user's identity $User_i$, the algorithm outputs a table $T_{user}$ in which the fog $F_k$ stores the users' information.

ReKeyGen($Sk_{u_i}, svk_j$): On input the user's private key $Sk_{u_i}$ and the secret verification key $svk_j$ for $at_j \in \Omega_{u_i}$, the algorithm outputs a converted secret key $csk_{u_i}$.

ReEnc($Pk_{u_i}, vk_j$): On input the user's public key $Pk_{u_i}$ and the verification key $vk_j$ for $at_j \in \Omega_{u_i}$, the algorithm outputs a ciphertext $cvk_{u_i}$.

FileEncryption. This phase is performed by the user.

Enc($F, \mathbb{A}, vk_j$): On input a file $F$, an access policy $\mathbb{A}$, and the verification keys $vk_j$, the algorithm outputs the ciphertext $C$ embedded with the access policy.

IndexGeneration. This phase is implemented by the user through the algorithm Index.

Index($W, S_i, Pk_{u_i}$): On input the user's search key $S_i$ and the keyword $W$, the algorithm outputs an index $I_W$ for the keyword.

TrapdoorGeneration. This phase is executed by the fog and the user and includes two subalgorithms.

Trapdoor($Pk_{F_k}, csk_{u_i}$): This algorithm is performed by the fog. On input the fog's public key $Pk_{F_k}$ and the user's converted key $csk_{u_i}$, it outputs $T_f$, one part of the trapdoor $T$.

Trapdoor2($W, S_i$): This algorithm is executed by the user. On input the user's search key $S_i$ and a keyword $W$, it outputs $T_W$, the other part of the trapdoor $T$.

Test. This phase is implemented by the cloud server through the algorithm Test.

Test($I_W, T$): On input the keyword index $I_W$ and the trapdoor $T$, the algorithm outputs 0 if they do not match and 1 otherwise.

FileDecryption. The decryption phase is implemented by the cloud server and the user and consists of two subalgorithms.

Dec($C, cvk_{u_i}$): On input the file ciphertext $C$, the trapdoor $T$, and the attribute-related ciphertext $cvk_{u_i}$, the algorithm outputs $C_{pd}$, a partly decrypted version of the ciphertext.

Dec2($C_{pd}, Sk_{u_i}$): On input the user's private key $Sk_{u_i}$ and the partly decrypted ciphertext $C_{pd}$, the algorithm outputs the file $F$.
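To summarize who runs what, the following non-cryptographic Python skeleton lays out the algorithm suite above and the direction of its outputs. Every class and method name here is an illustrative placeholder, not the paper's implementation.

```python
# Structural skeleton of the LFSE algorithm suite in Section 3.2.

class TrustedAuthority:
    def system_setup(self, security_param): ...   # -> (Mk, Pk) and system parameters
    def fog_setup(self, pk): ...                  # -> (PkFk, SkFk) and {vk_j}
    def key_gen(self, pk, mk, user, attrs): ...   # -> (Pk_ui, Sk_ui) and {svk_j}
    def search_key_gen(self, mk, user): ...       # -> search key S_i

class Fog:
    def add_user(self, user): ...                 # record the user in table T_user
    def re_key_gen(self, sk_ui, svk_j): ...       # -> converted key csk_ui
    def re_enc(self, pk_ui, vk_j): ...            # -> cvk_ui (forwarded to the cloud)
    def trapdoor(self, pk_fk, csk_ui): ...        # -> fog part Tf of the trapdoor T

class DataUser:
    def enc(self, file, policy, vk_j): ...        # -> ciphertext C (run by the owner)
    def index(self, keyword, s_i, pk_ui): ...     # -> keyword index I_W
    def trapdoor2(self, keyword, s_i): ...        # -> user part TW of the trapdoor T
    def dec2(self, c_pd, sk_ui): ...              # -> recovered file F

class Cloud:
    def test(self, i_w, trapdoor): ...            # -> 1 if index and trapdoor match, else 0
    def dec(self, c, cvk_ui): ...                 # -> part-decrypted ciphertext C_pd
```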
### 3.3. Security Requirements

(1) Data confidentiality: The cloud and the fog are not allowed to learn the encrypted data files. Unauthorized users whose attributes do not match the policy embedded in a ciphertext should learn nothing about the underlying plaintext.

(2) Keyword privacy: The keywords should be protected from both the cloud and the fog in a secure way, for example by using a one-way hash function. The cloud server is able to perform the test operation over the indexes, but it leaks no information about the keywords to any unauthorized attacker.

(3) Trapdoor privacy: One part of the trapdoor is generated by the data user using the search key, the secret verification keys of his attributes, and the keyword. The other part is generated with the help of the fog from the user's re-encrypted key. The trapdoor reveals no information about the corresponding keyword or the user's attributes to the attacker.

### 3.4. Adversary Model

To capture the security requirements, we design two security models for our scheme. We first introduce a fundamental assumption in Definition 2.

Definition 2 (DBDH assumption). We say that the DBDH assumption holds if no polynomial-time algorithm has a nonnegligible advantage in solving the DBDH problem.

Given the security parameter, let a group $G_1$ of prime order $p$ have a generator $g$, and let $a, b, c \leftarrow_r Z_p^*$ be chosen at random. The DBDH problem asks the adversary to distinguish $e(g,g)^{abc} \in G_2$ from a random element $V \in G_2$ when given $g, g^a, g^b, g^c \in G_1$.

Definition 3. Our LFSE scheme is trapdoor indistinguishable if no polynomial-time attacker has a nonnegligible advantage in the following game.

The security model is defined as Game 1, played between an adversary $A$ and an algorithm $B$.

Game 1 (trapdoor privacy).

Setup: With a security parameter $\lambda$, the algorithm $B$ outputs the system parameters and generates the public key $Pk_{u_i}$, the private key $Sk_{u_i}$, and the search key $S_i$ for the data user.

Query phase 1: The adversary $A$ adaptively makes the following queries.

O.Trapdoor1: The adversary $A$ may query one part $(T_{f0}, T_{f1}, T_{f2})$ of the trapdoor for any keyword.

O.Trapdoor2: The adversary $A$ may query the other part $(T_{W1}, T_{W2})$ of the trapdoor.

Challenge phase: The adversary $A$ submits two keywords $W_1^*$ and $W_0^*$ of equal length. Then $B$ randomly selects $x \in \{0,1\}$, constructs the trapdoor $T_{f\{W_x^*\}}$ for the keyword $W_x^*$, and sends it to the adversary $A$.

Query phase 2: The adversary $A$ queries as in phase 1, with the restriction that the queried keyword satisfies $W \notin \{W_0^*, W_1^*\}$.

Guess: The adversary $A$ outputs a guess $x' \in \{0,1\}$. If $x = x'$, $A$ wins the game and the algorithm $B$ outputs 1; otherwise $A$ fails and $B$ outputs 0.

Definition 4. Our LFSE scheme is IND-CK-CCA secure if no polynomial-time attacker has a nonnegligible advantage in the following game.

We define indistinguishability against the chosen-keyword chosen-ciphertext attack in our system. The security model is defined through Game 2, played between an adversary $A$ and a challenger $C$ as follows.

Game 2 (ciphertext and keyword privacy).

Initial phase: The adversary $A$ commits to challenging $C$.

Setup: The challenger $C$ selects a large security parameter $\lambda$ and runs the setup algorithm to obtain the system master key and public key $(Mk, Pk)$. $C$ gives $Pk$ to $A$ and keeps $Mk$.

Phase 1: The adversary $A$ makes the following queries, polynomially many times.

(i) O.KeyGen: This oracle contains several key generation oracles executed by the challenger $C$ to generate a series of keys for $A$.

(ii) O.Trapdoor: This oracle contains the two trapdoor generation oracles executed by the challenger $C$ to generate the trapdoor $T = (T_f, T_W)$ for $A$, using the keys generated in the steps above.

Challenge: After finishing phase 1, the adversary $A$ outputs two messages $m_0^*, m_1^*$ and two keywords $W_0^*, W_1^*$, each pair of equal length, to be challenged. The challenger $C$ flips coins to choose $b_1, b_2 \in \{0,1\}$ and then constructs the ciphertext for $m_{b_1}^*$ and the index for $W_{b_2}^*$.
Finally, the challenger $C$ sends them to the adversary $A$.

Phase 2: The adversary $A$ adaptively makes queries as in phase 1, except for the restrictions that $W \notin \{W_0^*, W_1^*\}$ and that the user's private key cannot be queried.

Guess: The adversary $A$ outputs guesses $b_1', b_2' \in \{0,1\}$. If $b_1' = b_1$ and $b_2' = b_2$, $A$ wins the game.

The adversary $A$ has an advantage of $\epsilon = Adv_A^{LFSE}(\lambda) = \left|\Pr[b_1' = b_1, b_2' = b_2] - 1/2\right|$ in breaking the DBDH assumption.

### 3.5. System Functions

Considering the performance-related issues, our scheme is designed to achieve the following functions.

(1) Fine-grained access control: A data owner embeds an access policy into each file transmitted to the cloud. This guarantees that the data is accessed only by users with the appropriate attributes and stays hidden from the cloud server.

(2) Authorization: Each data user authorized by the trusted attribute authority is assigned his individual private keys. These private keys can be used to search and decrypt files in our system.

(3) Search on keywords: An authorized user can generate a query request for some keywords using his individual private key. After the cloud server receives the query and performs the Test on the encrypted files, the user obtains the matched files.

(4) Revocability: The trusted authority should be able to revoke users and attributes. If an authorized user is revoked, the user is no longer able to search and read files in our system. If an attribute of the user is revoked, the user is no longer able to access files embedded with an access policy containing that attribute.
## 4. LFSE Scheme

### 4.1. Construction of LFSE Scheme

We now specify the proposed LFSE scheme for the fog-based healthcare system in detail. In the real world, we consider that all the sensors carried by the owner continually collect and report data, and the owner decides whether and when the data is transmitted to the cloud.

(1) System setup: Let $\lambda$ be the security parameter; TA performs the following steps. First, it chooses two cyclic groups $(G, \cdot)$ and $(G_T, \cdot)$ of prime order $p$ and defines a bilinear pairing $e: G \times G \to G_T$. Let $g$ be a generator of $G$, and let $g_1, g_2$ and $s, \upsilon$ be chosen at random from $G$ and $Z_p^*$, respectively. Then TA computes $g' = g^s$ and $V = e(g,g)^{\upsilon}$ and selects two hash functions $H: \{0,1\}^* \to Z_p^*$ and $H_1: Z_p^* \times \{0,1\}^* \to Z_p^*$. Ultimately, TA keeps $(s, \upsilon)$ secret as the master key $Mk$ and publishes the system parameters $Pk = \{\lambda, G, G_T, e, g, g_1, g_2, g', V\}$. Afterwards, TA initializes the attribute universe $AT = \{at_1, at_2, \dots, at_m\}$ and the monotone access structure $\mathbb{A}$. Let $\mathbb{A}_0 = (\Omega_1, \Omega_2, \dots, \Omega_n)$ be a basis for $\mathbb{A}$, where each $\Omega_i$ is a minimal authorized attribute set in $\mathbb{A}$.

(2) Setup and key generation for fogs: For each fog, TA generates its public and private keys $(Pk_{Fog_k}, Sk_{Fog_k})$ by running Fog.Setup. The algorithm picks $\varsigma_k \leftarrow_r Z_p^*$ at random and outputs $(Pk_{Fog_k}, Sk_{Fog_k}) = (\varsigma_k, g^{\varsigma_k})$. The fog keeps the private key sent by TA and initializes a table $T_{user}$ to manage all authorized users within its coverage. Further, to authorize the fogs to manage attributes, TA selects, for each fog $Fog_k$, a $\sigma_k \leftarrow_r Z_p^*$ and then computes $\theta_j = H_1(\sigma_k, at_j)$, $d_{1j} = g^{\theta_j}$, and $d_{2j} = V^{\theta_j}$ for each attribute $at_j \in AT$, defining $vk_j = (d_{1j}, d_{2j})$ as a verification key. TA then sends the verification keys $\{vk_j\}_{at_j \in AT}$ to the corresponding fogs as attribute information. The fogs exchange their user and attribute verification key information so that an authorized user can still connect to our system when he moves into another fog's management area.

(3) Key generation for the user: Assume that a new user $User_i$ with the attribute list $\Omega_{u_i} = \{at_j : j \le m\}$ requests to join the system. First, TA authenticates the user's identity and attributes. Then it returns the public and private keys $(Pk_{User_i}, Sk_{User_i}) = (g^{\alpha_i}, (t_0, t_1, t_2))$ to the user, where $t_0 = g^{\upsilon} g_1^{\alpha_i}$, $t_1 = g^{\beta_i}$, $t_2 = \delta$, and $\alpha_i, \beta_i, \delta \leftarrow_r Z_p^*$. Simultaneously, TA computes $svk_j = (Pk_{User_i})^{\theta_j} = g^{\alpha_i \theta_j}$ for each $at_j \in \Omega_{u_i}$ and returns it to the user as the secret verification key of each attribute he holds. Once this phase is finished, the fog adds $User_i$'s information to the table $T_{user}$ as a newly authenticated user.

(4) Search key generation: After receiving the public and private keys from TA, the user $User_i$ also requests a private key for searching on keywords. The user picks $\eta \leftarrow_r Z_p^*$ at random and sends $g_1^{1/\eta}$ to TA.
Then TA computes the search key $S_i = (g_1^{1/\eta})^{s}\, g_2^{\beta_i \delta}$ and sends $S_i$ to the user.

(5) Preparation for fog support: Owing to the limited processing power and low computing efficiency of the user, we would like to transfer most of the computational load to the fog and the cloud without leaking additional information. In our system, the user delegates part of the auxiliary computation to the fog by transferring a converted secret key $csk_{u_i}$. The user computes $T_0 = t_0^{t_2} = g^{\upsilon\delta} g_1^{\alpha_i\delta}$, $T_1 = t_1^{t_2} = g^{\beta_i\delta}$, and $T_{2j} = (svk_j)^{t_2} = g^{\alpha_i\theta_j\delta}$ and then sends $csk_{u_i} = (T_0, T_1, \{T_{2j}\}_{at_j \in \Omega_{u_i}})$ to the fog. With $csk_{u_i}$, the fog can help the user accomplish part of the computation tasks without knowing the user's private key. To facilitate further calculations, the user can compute $V_1 = e(g,g)$ and $V_2 = e(g_1, g')$ in advance and store them. Simultaneously, to let the cloud carry out part of the computation while preventing it from learning the user's attributes, the fog selects $s' \leftarrow_r Z_p^*$ at random, computes $D_1 = g_1^{s'}$ and $D_{2j} = (d_{1j})^{s'} = g^{\theta_j s'}$, and sends $cvk_{u_i} = (D_1, \{D_{2j}\}_{at_j \in \Omega_{u_i}})$ to the cloud and the secret $s'$ to the user through a secure channel.

(6) Encrypt: Suppose the data owner decides to share his file $F$. This file can be searched and acquired by users whose attributes satisfy an access policy $\mathbb{A}$; in this way, the owner can designate different types of data to be accessed by different kinds of people. For the monotone access policy $\mathbb{A}$, there exists a basis $\mathbb{A}_0 = (\Omega_1, \Omega_2, \dots, \Omega_n)$, where each minimal set $\Omega_l$ is composed of authorized attributes. To encrypt the file, the owner picks $s_l \leftarrow_r Z_p^*$ for each $1 \le l \le n$ and computes

$$C_l = (C_{1l}, C_{2l}) = \left(F \cdot \prod_{at_j \in \Omega_l} d_{2j}^{\,s_l},\ \ \frac{s_l}{s'}\right). \tag{2}$$

The owner keeps the ciphertext as $C = (\mathbb{A}, \{C_l\}_{1 \le l \le n})$, embedded with the access policy $\mathbb{A}$.

(7) Index: In a continuous health monitoring system, data are constantly processed and transferred to the cloud from various kinds of sensors. To enable quick access to useful files in the very large data center, we attach keywords to the files. We assume the file $F$ contains a set of keywords $\mathcal{W}$ extracted from the original health file. For each keyword $W \in \mathcal{W}$, the owner picks $u \leftarrow_r Z_p^*$ at random and computes

$$I_W = (C_{W1}, C_{W2}, C_{W3}) = \left(\left(e(g,g)^{H(W)}\, e(g_1, g')\right)^{u},\ g^{u},\ g_2^{u}\right).$$

Subsequently, the owner sends the ciphertext $C$ together with the index $I_W$ to the cloud, which stores them.

(8) Trapdoor: Generally speaking, the Trapdoor algorithm is used to generate a trapdoor for a certain keyword by a user who wants to search files containing that keyword. In our system, to reduce the user's computing burden, we delegate part of the trapdoor generation to the fog without leaking any information about the queried keywords; this design gives our IoT system keyword confidentiality. Specifically, upon receiving a query request from the user $User_i$, the fog first looks up the user's identity in the table $T_{user}$. If the fog does not find it in the table, meaning the user has not joined the system, the fog refuses to generate its part of the trapdoor and returns a warning message. This check, completed by the fog, ensures that an external, unauthenticated user cannot search any keyword, and guarantees no leakage of information about keywords or encrypted files.
(8) Trapdoor: Generally speaking, the Trapdoor algorithm is used by a user who wants to search for files containing a certain keyword to generate a trapdoor for that keyword. In our system, to reduce the user's computing burden, we delegate part of the trapdoor-generation work to the fog without leaking any information about the queried keywords. This design has an advantage in our IoT system: confidentiality of keywords. Specifically, upon receiving a query request from the user $User_i$, the fog first looks up the user's identity in the table $T_{user}$. If the fog does not find it in the table, which means the user has not joined the system, the fog refuses to generate its part of the trapdoor for the user and returns a warning message. This check by the fog ensures that an external, unauthenticated user cannot search for any keyword, and it guarantees that no information about keywords or encrypted files is leaked. If the fog finds the user in $T_{user}$, it randomly chooses $\rho \leftarrow_r \mathbb{Z}_p^*$, sends it to the user through a secure channel, and then computes

$$T_{f0} = T_0^{\rho} = (t_0^{t_2})^{\rho} = g^{\upsilon\delta\rho} g_1^{\alpha_i\delta\rho}, \qquad T_{f1} = T_1^{\rho} = (t_1^{t_2})^{\rho} = g^{\beta_i\delta\rho}, \qquad T_{f2j} = T_{2j}^{\rho} = (svk_j^{t_2})^{\rho} = g^{\alpha_i\theta_j\delta\rho}. \tag{3}$$

After finishing these steps, the fog uploads $T_f = (T_{f0}, T_{f1}, \{T_{f2j}\}_{at_j\in\Omega_{u_i}})$ to the cloud as one part of the trapdoor. To search for files with a keyword $W'$, the user, reusing the secret $\eta$ chosen during search key generation, computes

$$T_{W1} = g^{H(W')} S_i^{\eta} = g^{H(W')} g_1^{s} g_2^{\beta_i\delta\eta}, \qquad T_{W2} = \frac{\eta}{\rho} \tag{4}$$

with his own search key. The user then sends the other part of the trapdoor, $T_W = (T_{W1}, T_{W2})$, to the cloud, so that the cloud holds the full trapdoor $T = (T_f, T_W)$. In this phase, once the fog has verified a user's identity and the verification keys for his attributes, the fog can perform its part of the trapdoor generation ahead of time, since this part does not depend on the queried keyword. As a result, both the fog's computing burden and the interaction time between the user and the fog are reduced.

(9) Test: Upon receiving the search request for a keyword $W'$ from the fog and the user, the cloud runs the Test algorithm on every stored item, that is, on the encrypted indexes of all keywords, by computing

$$\frac{e(C_{W2}, T_{W1})}{e(C_{W3}, T_{f1})^{T_{W2}}}. \tag{5}$$

The cloud compares the result with $C_{W1}$; if they are equal, the cloud outputs 1 and proceeds to the next step. Otherwise, the cloud outputs 0, returns a warning message, and exits.

(10) Decryption: If the Test algorithm finds no index matching the uploaded trapdoor, the cloud does not run the Dec1 algorithm and returns $\perp$. Otherwise, the cloud computes

$$C_{pd} = \left( \frac{e\bigl(D_1, \prod_{at_j'\in\Omega_{u_i}'} T_{f2j}\bigr)}{e\bigl(\prod_{at_j'\in\Omega_{u_i}'} D_{2j},\, T_{f0}\bigr)} \right)^{C_{2l} T_{W2}}. \tag{6}$$

Upon receiving the part-decrypted ciphertext $C_{pd}$ from the cloud, the user recovers the file $F$ with his own private key by computing

$$F = C_{1l}\, C_{pd}^{1/(t_2\eta)}, \tag{7}$$

where $t_2 = \delta$. Thus the user needs only a single exponentiation for decryption, which greatly improves efficiency. (The end-to-end algebra is checked in the toy sketch below.)
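To check that the Test equation (5) and the recovery formula (7) really cancel as claimed, here is a self-contained roundtrip in the same toy exponent model (division in $G_T$ becomes subtraction of exponents). For simplicity the encrypting set $\Omega_l$ and the user's satisfying set $\Omega_{u_i}'$ are the same two illustrative attributes, which is exactly the condition that equation (10) below requires; all names and values are assumptions made for illustration.

```python
import hashlib
import secrets

P = 2**61 - 1  # toy exponent model: g^a is stored as a; e(g^a, g^b) -> a*b mod P

h = lambda x: int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % P
rnd = lambda: secrets.randbelow(P - 1) + 1
e = lambda a, b: a * b % P          # toy pairing on exponents

# Setup (step 1) and fog attribute keys (step 2)
s, ups, x1, x2 = rnd(), rnd(), rnd(), rnd()          # master (s, v); dlogs of g1, g2
attrs = ["doctor", "cardiology"]                     # Omega_l = Omega'_ui here
theta = {a: h(("sigma_k", a)) for a in attrs}
d2 = {a: ups * theta[a] % P for a in attrs}          # d2j = V^{theta_j}

# User keys (steps 3-4)
alpha, beta, delta, eta = rnd(), rnd(), rnd(), rnd()
svk = {a: alpha * theta[a] % P for a in attrs}
S_i = (x1 * pow(eta, -1, P) % P * s + x2 * beta * delta) % P  # (g1^{1/eta})^s g2^{beta*delta}

# Fog support (step 5)
T0 = (ups + x1 * alpha) * delta % P
T1 = beta * delta % P
T2 = {a: svk[a] * delta % P for a in attrs}
s_prime = rnd()
D1 = x1 * s_prime % P
D2 = {a: theta[a] * s_prime % P for a in attrs}

# Encrypt + index (steps 6-7)
f = h("health-record-F")
s_l, u = rnd(), rnd()
C1 = (f + s_l * sum(d2.values())) % P
C2 = s_l * pow(s_prime, -1, P) % P
W = "hypertension"
CW1, CW2, CW3 = u * (h(W) + x1 * s) % P, u, x2 * u % P

# Trapdoor (step 8), reusing the eta from search-key generation
rho = rnd()
Tf0, Tf1 = T0 * rho % P, T1 * rho % P
Tf2 = {a: T2[a] * rho % P for a in attrs}
TW1 = (h(W) + S_i * eta) % P                         # g^{H(W')} * S_i^{eta}
TW2 = eta * pow(rho, -1, P) % P

# Test (step 9): e(CW2, TW1) / e(CW3, Tf1)^{TW2} == CW1 ?
assert (e(CW2, TW1) - e(CW3, Tf1) * TW2) % P == CW1

# Decrypt (step 10, equations (6)-(7))
num = e(D1, sum(Tf2.values()) % P)
den = e(sum(D2.values()) % P, Tf0)
Cpd = (num - den) * (C2 * TW2 % P) % P               # (num/den)^{C2*TW2} on exponents
F = (C1 + Cpd * pow(delta * eta % P, -1, P)) % P     # C1l * Cpd^{1/(t2*eta)}
assert F == f
print("test and decryption roundtrip OK")
```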
### 4.2. Consistency

First, we show that trapdoor matching is valid in our system:

$$
\begin{aligned}
\frac{e(C_{W2}, T_{W1})}{e(C_{W3}, T_{f1})^{T_{W2}}}
&= \frac{e\bigl(g^{u},\; g^{H(W')} \bigl((g_1^{1/\eta})^{s} g_2^{\beta_i\delta}\bigr)^{\eta}\bigr)}{e\bigl(g_2^{u},\, g^{\beta_i\delta\rho}\bigr)^{\eta/\rho}}
 = \frac{e(g^{u}, g^{H(W')})\, e(g^{u}, g_1^{s})\, e(g^{u}, g_2^{\beta_i\delta\eta})}{e(g_2^{u}, g^{\beta_i\delta\eta})} \\
&= e(g,g)^{u H(W')}\, e(g,g_1)^{u s}
 = e(g,g)^{u H(W')}\, e(g_1, g')^{u}.
\end{aligned} \tag{8}
$$

If there exists a keyword $W \in \mathcal{W}$ matching the queried keyword, so that $H(W) = H(W')$, we conclude that $e(C_{W2}, T_{W1}) / e(C_{W3}, T_{f1})^{T_{W2}} = C_{W1}$.

File recovery then proceeds in two steps. If the test passes, the cloud partially decrypts all the related files by computing

$$
\begin{aligned}
C_{pd} &= \left( \frac{e\bigl(D_1, \prod_{at_j'\in\Omega_{u_i}'} T_{f2j}\bigr)}{e\bigl(\prod_{at_j'\in\Omega_{u_i}'} D_{2j},\, T_{f0}\bigr)} \right)^{C_{2l} T_{W2}}
= \left( \frac{e\bigl(g_1^{s'}, \prod_{at_j'\in\Omega_{u_i}'} g^{\alpha_i\theta_j\delta\rho}\bigr)}{e\bigl(\prod_{at_j'\in\Omega_{u_i}'} g^{\theta_j s'},\; g^{\upsilon\delta\rho} g_1^{\alpha_i\delta\rho}\bigr)} \right)^{(s_l/s')(\eta/\rho)} \\
&= \left( \frac{e(g_1, g)^{s'\alpha_i\delta\rho \sum_{at_j'\in\Omega_{u_i}'}\theta_j}}{e(g,g)^{s'\upsilon\delta\rho \sum_{at_j'\in\Omega_{u_i}'}\theta_j}\; e(g,g_1)^{s'\alpha_i\delta\rho \sum_{at_j'\in\Omega_{u_i}'}\theta_j}} \right)^{(s_l/s')(\eta/\rho)}
= \frac{1}{e(g,g)^{\upsilon\delta s_l \eta \sum_{at_j'\in\Omega_{u_i}'}\theta_j}}.
\end{aligned} \tag{9}
$$

If the user's attributes $\Omega_{u_i}'$ satisfy the access policy $\mathbb{A}$, there exists a basis $\mathbb{A}_0' = (\Omega_1', \Omega_2', \ldots, \Omega_n')$ such that

$$\forall\, at_j \in \Omega_{u_i}',\ \exists\, \Omega_l' \ \text{s.t.}\ at_j \in \Omega_l' \subseteq \Omega_{u_i}', \tag{10}$$

and we have $\sum_{at_j'\in\Omega_{u_i}'} \theta_j = \sum_{at_j'\in\Omega_l'} \theta_j = \sum_{at_j\in\Omega_l} \theta_j$. Accordingly, the user finally recovers the file by computing

$$
F = C_{1l}\, C_{pd}^{1/(t_2\eta)}
= F \prod_{at_j\in\Omega_l} d_{2j}^{s_l} \left( \frac{1}{e(g,g)^{\upsilon\delta s_l\eta \sum_{at_j'\in\Omega_{u_i}'}\theta_j}} \right)^{1/(\delta\eta)}
= F\, \frac{e(g,g)^{\upsilon s_l \sum_{at_j\in\Omega_l}\theta_j}}{e(g,g)^{\upsilon s_l \sum_{at_j'\in\Omega_{u_i}'}\theta_j}}
= F. \tag{11}
$$

### 4.3. User Revocation and Attribute Revocation

As mentioned above, the fog is the access interface between the cloud and the users, and the table $T_{user}$ serves as the certificate for verifying whether a user is in the system; revoking a user can therefore be realized by rejecting his query requests. Specifically, once a user submits a revocation request to the trusted authority, or the trusted authority decides to revoke a user, the trusted authority deletes all key and attribute information of that user. It then sends the user's revocation information to the fog, and all information about the user is deleted from $T_{user}$. As a result, the user can no longer submit requests to the cloud server. Furthermore, once the converted keys $csk_{u_i}$ and $cvk_{u_i}$ are revoked from the fog, the user cannot generate trapdoors for any keyword: the fog needs $csk_{u_i}$ and $cvk_{u_i}$ to perform its share of the trapdoor-generation computation, so their loss means the user can no longer search for any files. Such a user is effectively new to the system, and the fog will no longer respond to any of his requests.

In our system, attribute revocation is likewise achieved through the design of $csk_{u_i}$ and $cvk_{u_i}$. Once an attribute is revoked, the data owner can withhold the data from the group of users who held the revoked attribute. Specifically, upon deciding to revoke an attribute $at_j$, the fog destroys the attribute's verification key $vk_j$, deletes $csk_{u_i}$ and $cvk_{u_i}$ for the users holding that attribute, and sends those users a warning message asking them to update the related $csk_{u_i}$ and $cvk_{u_i}$. Until they do, the fog refuses to generate trapdoors for them, which directly prevents them from accessing files in the system. Although this causes some computational load and transmission cost, it is acceptable when extremely sensitive data is concerned. (A minimal sketch of this fog-side bookkeeping follows.)
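The revocation logic itself is plain bookkeeping at the fog. A minimal sketch, with entirely assumed data structures (the paper does not prescribe how $T_{user}$ is implemented), might look as follows.

```python
class FogGate:
    """Toy fog-side gatekeeper: trapdoor help stops once a user or attribute is revoked."""

    def __init__(self):
        self.t_user = {}  # Tuser: user id -> {"csk": ..., "cvk": ..., "attrs": set}

    def register(self, uid, csk, cvk, attrs):
        self.t_user[uid] = {"csk": csk, "cvk": cvk, "attrs": set(attrs)}

    def revoke_user(self, uid):
        # TA-initiated: drop the whole record, so no further trapdoor help is possible.
        self.t_user.pop(uid, None)

    def revoke_attribute(self, attr):
        # Invalidate csk/cvk of every user holding the attribute; they must update
        # these keys before the fog will assist with trapdoor generation again.
        for rec in self.t_user.values():
            if attr in rec["attrs"]:
                rec["csk"] = rec["cvk"] = None

    def help_trapdoor(self, uid):
        rec = self.t_user.get(uid)
        if rec is None or rec["csk"] is None:
            raise PermissionError("unauthorized or revoked: trapdoor request refused")
        return rec["csk"]  # the fog would now compute Tf from csk as in step (8)
```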
## 5. Security Analysis

Recall that our system is concerned with three security requirements: data confidentiality, keyword privacy, and trapdoor privacy. We present the security analysis for trapdoor privacy by proving Theorem 5; data confidentiality and keyword privacy are treated in Theorem 6. The security of our scheme rests on the complexity assumption in Definition 2.

Theorem 5 (trapdoor privacy). Under the DBDH assumption, the trapdoor generated in our LFSE scheme is indistinguishable against the chosen-keyword attack.

Proof. Assume that a malicious adversary $\mathcal{A}$ can break the trapdoor security of our LFSE scheme in polynomial time with a nonnegligible advantage $\epsilon$.
Without loss of generality, we construct an algorithm $\mathcal{B}$ that plays the following game with $\mathcal{A}$ and solves DBDH using the capability of $\mathcal{A}$.

(i) Setup: For a security parameter $\lambda$, the algorithm $\mathcal{B}$ takes $(g, g^a, g^b, g^c, Z)$ as input, where $a, b, c$ are chosen from $\mathbb{Z}_p^*$ by the challenger $\mathcal{C}$ and $Z$ is randomly selected from $G$. The challenger $\mathcal{C}$ flips a coin to set $x \in \{0,1\}$: if $x = 1$, it sets $Z = g^{abc}$; otherwise $Z$ is a random element of $G$. For the user $u_i$, the algorithm randomly chooses $s, \alpha_i, \nu$ from $\mathbb{Z}_p^*$ and $g_1, g_2$ from $G$. It then announces the user's public and private keys as $(g^{\alpha_i}, (g^{\nu} g^{\alpha_i}, g^b, c))$ and sets $g_2 = g$, $\eta = c$. Furthermore, it announces the search key for the user as $(g_1^{1/\eta})^{s} g_2^{ab}$.

(ii) Query Phase 1: The adversary $\mathcal{A}$ issues the following query. O.Query: upon receiving a query on a keyword $W$ from $\mathcal{A}$, the algorithm $\mathcal{B}$ selects $r, \rho, \theta_j$ randomly from $\mathbb{Z}_p^*$ and computes $T_{f0} = g^{rb\rho} g_1^{\alpha_i b\rho}$, $T_{f1} = g^{ab\rho}$, $T_{f2} = g^{\alpha_i\theta_j b\rho}$, $T_{W1} = g^{H(W)} ((g_1^{1/\eta})^{s} g_2^{ab})^{\eta}$, and $T_{W2} = c/\rho$, where all other parameters are chosen as above. Finally, $\mathcal{B}$ returns $T = (T_{f0}, T_{f1}, T_{f2}, T_{W1}, T_{W2})$ to $\mathcal{A}$ as the trapdoor for the keyword $W$.

(iii) Challenge: The adversary $\mathcal{A}$ selects two keywords $W_0^*$ and $W_1^*$ of equal length, neither of which has been queried before. The algorithm $\mathcal{B}$ flips a coin to choose a random bit $b$ and computes the trapdoor for $W_b^*$ as $T^* = (T_{f0}^*, T_{f1}^*, T_{f2}^*, T_{W1}^*, T_{W2}^*)$, where $T_{f0}^* = g^{rb\rho} g_1^{\alpha_i b\rho}$, $T_{f1}^* = g^{ab\rho}$, $T_{f2}^* = g^{\alpha_i\theta_j b\rho}$, $T_{W1}^* = g^{H(W_b^*)} g_1^{s} Z$, and $T_{W2}^* = c/\rho$.

(iv) Query Phase 2: $\mathcal{A}$ continues to query as in Query Phase 1 for polynomially many times, with the restriction that neither $W_0^*$ nor $W_1^*$ may be queried.

(v) Guess Phase: $\mathcal{A}$ returns a guess $b' \in \{0,1\}$ to $\mathcal{B}$. If $b' = b$, the adversary $\mathcal{A}$ wins the game and $\mathcal{B}$ outputs 1; otherwise $\mathcal{A}$ fails and $\mathcal{B}$ outputs 0.

(vi) Analysis: As shown above, $T_{W1} = g^{H(W)} ((g_1^{1/\eta})^{s} g_2^{ab})^{\eta} = g^{H(W)} ((g_1^{1/c})^{s} g^{ab})^{c} = g^{H(W)} g_1^{s} g^{abc}$. Comparing with $T_{W1}^*$, we see that the challenge trapdoor is properly formed exactly when $Z = g^{abc}$. Consequently, $\mathcal{A}$ wins the game with the same probability with which $\mathcal{B}$ solves DBDH; that is, $Adv_{\mathcal{B}}^{DBDH}(1^{\lambda}) = \Pr[b' = b] = \epsilon$, which is nonnegligible and contradicts the DBDH assumption. In summary, our scheme achieves trapdoor indistinguishability under the DBDH assumption.

Theorem 6 (ciphertext privacy and keyword privacy). The proposed scheme in Section 4 is IND-CK-CCA secure under the DBDH assumption.

Proof. Suppose there is a polynomial-time adversary $\mathcal{A}$ who can break our proposed scheme with a nonnegligible advantage $\epsilon$; then we can build an algorithm to solve the DBDH problem. The reduction is a game between a challenger $\mathcal{C}$ and the adversary $\mathcal{A}$.

Setup: The challenger $\mathcal{C}$ receives $(G, G_T, e, g, g^x, g^y, g^z, Z)$ from the DBDH assumption, where $Z$ is either a random element of $G_T$ or equals $e(g,g)^{xyz}$. $\mathcal{C}$ chooses $s, \upsilon \leftarrow_r \mathbb{Z}_p^*$ and computes $g_2 = g^s$ and $V = e(g,g)^{\upsilon}$; it also sets $g_1 = g^x$ and $g' = g^y$. The public parameters $(g, g_1, g_2, g', V)$ are sent to $\mathcal{A}$.

Phase 1: The adversary $\mathcal{A}$ makes the following queries:

(i) O.Fog.KeyGen: $\mathcal{A}$ queries keys for the fog; $\mathcal{C}$ picks $\sigma_F, \varsigma_F \leftarrow_r \mathbb{Z}_p^*$ at random and outputs $(Pk_F, Sk_F) = (\varsigma_F, g^{\varsigma_F})$.

(ii) O.KeyGen: $\mathcal{A}$ queries keys for the user $User_i$; $\mathcal{C}$ picks $\alpha_i, \beta_i, \delta \leftarrow_r \mathbb{Z}_p^*$ at random and returns $(Pk_{User_i}, Sk_{User_i}) = (g^{\alpha_i}, (t_0, t_1, t_2))$ to $\mathcal{A}$, where $t_0 = g^{\upsilon} g_1^{\alpha_i}$, $t_1 = g^{\beta_i}$, and $t_2 = \delta$.
For all the attributes owned by the user, $\mathcal{A}$ also queries the verification keys from O.Fog.KeyGen and the secret verification keys from O.KeyGen; $\mathcal{A}$ obtains $vk_j = (d_{1j}, d_{2j})$ and $svk_j = d_{3j} = g^{\alpha_i\theta_j}$, where $\theta_j = H_1(\sigma_F, at_j)$, $d_{1j} = g^{\theta_j}$, and $d_{2j} = V^{\theta_j}$ are computed by $\mathcal{C}$.

(iii) O.SearchKeyGen: After receiving a commitment $g_1^{1/\eta}$, $\mathcal{A}$ queries the search key, and $\mathcal{C}$ returns $S_i = (g_1^{1/\eta})^{s} g_2^{\beta_i\delta}$ to $\mathcal{A}$.

(iv) O.ReKey: $\mathcal{A}$ queries the converted key for the user; $\mathcal{C}$ computes $T_0 = t_0^{t_2}$, $T_1 = t_1^{t_2}$, and $T_{2j} = (d_{3j})^{t_2}$ and sends $csk_{u_i} = (T_0, T_1, \{T_{2j}\}_{at_j\in\Omega_{u_i}})$ to $\mathcal{A}$.

(v) O.Trapdoor: Upon a trapdoor query for a keyword $W$, $\mathcal{C}$ first chooses $\rho, \eta \leftarrow_r \mathbb{Z}_p^*$ and returns $T_{f0} = T_0^{\rho}$, $T_{f1} = T_1^{\rho}$, $T_{f2j} = T_{2j}^{\rho}$, $T_{W1} = g^{H(W)} S_i^{\eta}$, and $T_{W2} = \eta/\rho$ to $\mathcal{A}$.

Challenge: The adversary $\mathcal{A}$ gives an access policy $\mathbb{A}^*$, two equal-length plaintexts $m_0^*, m_1^*$, and two keywords $W_0^*, W_1^*$ to $\mathcal{C}$. Then $\mathcal{C}$ randomly picks $b_1, b_2 \in \{0,1\}$ and constructs the ciphertext as $\bigl(\{(m_{b_2}^* \cdot (\prod_{at_j\in\Omega_l} d_{2j})^{s_l},\; s_l/s')\}_{1\le l\le n}\bigr)$. It also constructs the index $C_{W1}^* = e(g, g^z)^{H(W_{b_1}^*)} \cdot Z$, $C_{W2}^* = g^z$, $C_{W3}^* = (g^z)^{s}$, and sends the index $I_W^* = (C_{W1}^*, C_{W2}^*, C_{W3}^*)$ for the keyword $W_{b_1}^*$ to $\mathcal{A}$.

Phase 2: $\mathcal{A}$ may adaptively issue a polynomially bounded number of queries as in Phase 1, except that the queried keyword $W \notin \{W_0^*, W_1^*\}$. $\mathcal{C}$ answers $\mathcal{A}$'s queries as in Phase 1.

Guess: $\mathcal{A}$ outputs guesses $b_1', b_2'$ of $b_1, b_2$. $\mathcal{C}$ outputs 0, guessing that $Z = e(g,g)^{xyz}$, if $b_1' = b_1$ and $b_2' = b_2$; otherwise it outputs 1, indicating that it believes $Z$ is a random element.

Analysis: Assume the adversary $\mathcal{A}$ has advantage $\epsilon$ in attacking the DBDH assumption and $\mathcal{C}$ has advantage $\epsilon'$ in winning the game. From the game above, $\epsilon' = \epsilon$ follows immediately.

## 6. Efficiency Analysis

In this section, we analyse the efficiency of our system from both theoretical and experimental aspects. Table 1 describes the notation used in the following comparisons.

Table 1: Description of parameters.

| Parameter | Description |
| --- | --- |
| \|S\| | the size of the user's attribute set |
| k | the number of attributes associated with the user's private key |
| \|U\| | the size of the attribute universe |
| t | the number of attributes associated with the ciphertext |
| N | the number of files to be encrypted |
| m | the number of keywords used to generate indexes |
| \|G\|, \|G_T\| | the bit length of elements of the groups G, G_T |
| \|Z_p\| | the bit length of elements of Z_p |
| C_p | the computational cost of a pairing operation |
| C_e, C_eT | the computational cost of an exponentiation in G, G_T |

### 6.1. Storage and Transmission Cost Analysis

We compare our scheme with the related schemes VSKE [17], LHL [19], SYL [18], and ZSQM [20] on several important features, illustrated in Tables 2 and 3. Although many parameters are generated, stored, and transmitted throughout the whole process, we consider only those that most affect system efficiency:

(i) PK: The size of the public key PK measures how much storage each user needs to hold the public keys of all entities for his computations. As shown in the second column of Table 2, it grows linearly with \|U\| in [18, 19], which imposes a large storage demand on the user and makes it difficult to adopt new attributes; hence [18, 19] cannot meet the demand for frequently updated attributes in rapidly changing IoT networks. Reference [20] is file-centered, so the size of PK is related to the number of files being encrypted, which also imposes a large storage requirement on each user.
Our scheme and [17], by contrast, obviously require only a small, constant amount of storage.

(ii) SK: The private key SK is always kept by the user himself, so the size of SK indicates the secure storage each user needs for his private key. The third column shows that in [17-20], \|SK\| grows with the number of attributes with coefficients k, 2k, \|S\|, \|U\|, respectively, where k < 2k < \|S\| is far less than \|U\|. Since the storage of users and devices in IoT networks is limited, it is desirable that only a small, constant amount of storage be needed for the keys. Our scheme achieves this goal: it is clearly better than the others, with a constant storage requirement of 2\|G\| + \|Z_p\| regardless of changes in the number of attributes.

(iii) CT: The size of the ciphertext CT measures the transmission cost for the user and the storage cost for the cloud server, because the ciphertext is computed by the user, transmitted to the cloud, and stored in the cloud data center. Reference [18] concentrates on user management, such as user updating and revocation, and does not detail its encryption and decryption processes, so its entry in the fourth column is empty. Since all five schemes store an access policy in the ciphertext, we ignore this part in the comparison. CT in our scheme and in [19] grows linearly with k, the number of attributes associated with the user's private key, which is consistent with real-world situations. CT in [17] is much larger than in our scheme and in [19], since \|U\| is much larger than k, which means much more transmission overhead for the user and more storage for the cloud server. The ciphertext size in [20] is (N+2)\|G\| + 2\|G_T\| because the scheme is file-centered: all files owned by a user are encrypted at once, which is inconvenient if only some of the files need to be updated or modified.

(iv) ID and TD: The size of the index ID indicates the transmission overhead for the user and the storage the cloud needs to hold the indexes for retrieving related files. The size of the trapdoor TD reflects the transmission cost for the data user, because the trapdoor must be transmitted to the cloud to carry out the test and search processes. We do not compute ID and TD for [19], as [19] concerns only attribute-based encryption and does not support keyword search. For simplicity, we consider generating the index and trapdoor for a single keyword. The scheme in [18] costs the most for transmitting both ID and TD between the user and the cloud server. The fifth column shows that [17, 20] have an ID size similar to ours, requiring only small, constant storage. As for the trapdoor size, the scheme in [17] grows linearly with the user's attribute number, while the schemes in [20] and ours are constant; furthermore, ours requires less transmission overhead than [20].
Table 2: Storage and transmission comparisons.

| Scheme | PK | SK | CT | ID | TD |
| --- | --- | --- | --- | --- | --- |
| VSKE [17] | 6\|G\|+\|G_T\| | 2(\|S\|+1)\|G\|+\|Z_p\| | (\|U\|+k)(\|G\|+\|G_T\|) | 2\|G\|+\|G_T\| | (2\|S\|+3)\|G\| |
| SYL [18] | 3\|U\|\|G\|+\|G_T\| | (2\|U\|+1)\|G\|+2\|Z_p\| | - | (\|U\|+1)\|G\|+\|G_T\|+\|Z_p\| | (2\|U\|+1)\|G\|+2\|Z_p\| |
| LHL [19] | (\|U\|+4)\|G\| | 2k\|G\|+\|Z_p\| | (k+2)\|G\|+\|G_T\| | - | - |
| ZSQM [20] | (N+4)\|G\| | (\|S\|+3)\|G\|+\|Z_p\| | (N+2)\|G\|+2\|G_T\| | \|G_T\| | 4\|G\| |
| Ours | 5\|G\|+\|G_T\| | 2\|G\|+\|Z_p\| | 2k\|G_T\| | 2\|G\|+\|G_T\| | \|G\|+\|Z_p\| |

Table 3: Computation cost comparisons.

| Scheme | Keygen | Encrypt | Index | Trapdoor | Test | Decrypt |
| --- | --- | --- | --- | --- | --- | --- |
| VSKE [17] | (2\|S\|+4)C_e | (2m+1)C_e | (2\|S\|+4)C_e | (2\|U\|+k)C_e+kC_p | (2\|S\|+1)C_p+\|S\|C_eT | C_p+C_eT |
| SYL [18] | (2\|U\|+1)C_e+2C_eT | - | (\|U\|+1)C_e+C_eT | (2\|U\|+1)C_e | (\|U\|+1)C_p+C_eT | - |
| LHL [19] | 2(k+1)C_e | (k+2)C_e+C_p | - | - | - | (2k+3)C_p+C_eT |
| ZSQM [20] | 5C_e+NC_p | (N+3)C_e+C_p+C_eT | C_p | (N+2)C_e | 2C_e+(k+1)C_p | (Nk+1)C_p+C_e |
| Ours | 4C_e | kC_eT | 2(C_e+C_eT) | 2(\|S\|+1)C_e | 2C_p+C_e | C_e |

According to the above analyses, our scheme outperforms the other existing schemes in storage and transmission requirements.

### 6.2. Computational Cost Simulation and Analysis

In this section we analyse the computational cost and compare it with the related works listed in Table 2. Since operations over $Z_p$ cost much less than group operations and pairings, we consider only the latter two fundamental cryptographic operations. The results are given in Table 3, which shows that our scheme is significantly more efficient than the other schemes.

Using the pairing-based cryptography (PBC, https://crypto.stanford.edu/pbc) library, we implemented our experiment in C on a computer with an Intel(R) Core(TM) i3-3220 CPU @ 3.30 GHz running Ubuntu 16.04.5 with 4.00 GB of memory. This environment is used for Keygen and Test, which are executed by the trusted authority and the cloud server, both of which have substantial computational capability. In contrast, the users and devices in our system mostly have low computational capability; to simulate Encrypt, Index, Trapdoor, and Decrypt as performed by them, we ran the experiment on a client machine with an Intel Core Duo CPU running Ubuntu MATE 16.04 with 2 GB of memory. To achieve a 1024-bit security requirement, we use the Type A curve $E(F_q): y^2 = x^3 + x$ with a 512-bit $q$, where the order $p$ of both $G$ and $G_T$ is 160 bits and $|G| = |G_T| = 1024$ bits. For simplicity, we assume the user generates an index for only one keyword in the simulation. The simulation results are shown in Figure 3.

Figure 3: Comparison of computational cost. (a) Time for key generation; (b) time for encryption; (c) time for index generation; (d) time for trapdoor generation; (e) time for test; (f) time for decryption.

Upon receiving a request from a user to join the system, TA generates public and private keys for the user with only four exponentiations in our scheme. As Figure 3(a) shows, the computation cost of our scheme is constant and the smallest among all the schemes. As the number of attributes increases, the cost of key generation in [17-19] grows; in [18] especially, it climbs to thousands of milliseconds.
The cost in [20] is also constant and similar to ours, because we assume the number of encrypted files N = 1 for simplicity in our experiment. In reality, the scheme in [20] is file-centered and its key-generation cost grows with the number of encrypted files, whereas in our scheme the cost of key generation is independent of the number of encrypted files.

After receiving the keys from TA, the user encrypts files with his keys before uploading them to the cloud server. Reference [18] focuses on attribute-based encryption for user management and does not detail its data encryption and decryption phases, so it is not included in our encryption simulation. Reference [20] is also excluded from the encryption phase because it encrypts all of a user's files at once, whereas the others encrypt one file at a time. As shown in Figure 3(b), the encryption cost of our scheme and of [19] increases linearly with the number of attributes, since a file is encrypted with respect to the attributes embedded in the access policy. With 50 attributes, our scheme needs 109.96 ms and [19] needs 133.758 ms, slightly more than ours. Reference [17] has the lowest cost because it uses symmetric encryption; the cost shown in Figure 3(b) is only for the access control part of its encryption phase.

Next we consider keyword queries, which involve three algorithms: Index, Trapdoor, and Test. Their computational costs are shown in Figures 3(c), 3(d), and 3(e), respectively. Because [19] does not support keyword search, it is excluded from these three comparisons. References [17, 18] show an obviously large increase in computational burden as the number of attributes grows: with 100 attributes, almost 15000 ms are required to complete the three algorithms for those two schemes, causing a long network delay. Our scheme has a computational cost similar to that of [20], which was proposed to speed up queries in industrial IoT networks and has been shown to support fast queries efficiently.

Finally, the computational cost of the decryption phase is shown in Figure 3(f). Decryption efficiency is very important because one keyword is often associated with many different files, and decrypting all the returned files quickly is a key issue in recent IoT research. As Figure 3(f) shows, our scheme meets this demand, requiring less than 13 ms regardless of the number of attributes, and the scheme in [17] costs only slightly more than ours. In contrast, the costs of the other two schemes grow enormously with the number of attributes, which leads to a very large computational burden given the large number of returned files and the user's limited computational capability.

In summary, our proposed scheme achieves good efficiency in storage, transmission, and computational cost, which indicates that it is suitable for healthcare-related IoT networks. (The short script below illustrates how the Table 3 formulas translate into concrete timings.)
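As an illustration of how the Table 3 formulas for our scheme translate into timings, the small Python script below evaluates them for assumed per-operation costs $C_p$, $C_e$, $C_{e_T}$ (in milliseconds). The numeric values are placeholders chosen for illustration, since real costs depend on the curve and hardware.

```python
# Assumed per-operation timings in milliseconds (placeholders, hardware-dependent).
C_p, C_e, C_eT = 1.5, 1.8, 0.2

def ours(k, S):
    """Cost formulas of our scheme from Table 3 (k attributes in the key, |S| in the set)."""
    return {
        "Keygen": 4 * C_e,
        "Encrypt": k * C_eT,
        "Index": 2 * (C_e + C_eT),
        "Trapdoor": 2 * (S + 1) * C_e,
        "Test": 2 * C_p + C_e,
        "Decrypt": C_e,
    }

for k in (10, 50, 100):
    print(k, ours(k=k, S=k))  # only Encrypt and Trapdoor grow with the attribute count
```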
## 7. Related Work

### 7.1. Healthcare-Related IoT Security

Security is one of the most important issues in healthcare-related IoT networks, not only because IoT devices themselves are vulnerable and can easily be attacked or physically destroyed, but also because the data collected and processed in IoT networks are highly sensitive and tightly tied to our lives. Johns Hopkins University developed a hospital-centered patient monitoring system called MEDiSN [22], but secure communication, especially data integrity and user authentication, is not implemented in this system [23]. Similar to MEDiSN, other systems such as CodeBlue [24] and MobiCare [25] are implemented at the infrastructure layer without considering real communication security.

To achieve real communication security, encryption operations are essential. However, most existing encryption schemes demand complex computation and impose a high processing load, and overcoming these limitations is an important issue. In [26, 27], the authors present a secure and efficient authentication and authorization framework for healthcare-related IoT networks, but it requires high processing power. In [28], the authors implement an IoT-based health prescription assistant with user authentication and access control, but data confidentiality during transmission is not considered [29]. Although they reduced some communication and computation latency in their small-scale experiments, this is still insufficient for real-world networks with very large amounts of data [30].

### 7.2. ABE in the Cloud Computing Paradigm

As an extension of identity-based encryption, attribute-based encryption was first introduced by Sahai and Waters [13]. It has been applied in many encryption schemes to achieve fine-grained access control over encrypted data. In particular, ABE was extended by Goyal et al. [31] into two complementary flavors: key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE). KP-ABE uses attributes to describe ciphertexts and associates policies over these attributes with users' keys, while CP-ABE reverses this. CP-ABE allows users to access and decrypt the encrypted data only if their attributes match the access structure.

As in the originally proposed ABE scheme [13], the classic architecture of ABE access control schemes employs a single central authority in charge of enrolling and updating all attributes and managing keys for all entities. In such centralized ABE frameworks, the most difficult yet important task is efficient revocation of users and attributes. In [32], the authors attach an expiration time to each attribute to support revocation, but this turns out to have backward- and forward-security issues. The authors of [33, 34] overcome these issues by adopting proxy-based re-encryption.
Lazy revocation [33, 35] and revocable-storage ABE [36] have also been designed to keep messages from revoked or unauthorized users.

As IoT networks expand, the centralized ABE paradigm with a single authority suffers a serious efficiency drawback due to the very large amount of data. Multiauthority ABE was therefore introduced in [37], in which the central authority assigns each user a global identifier (GID) as a unique ID, so that independent authorities can distinguish users without relying on attributes. Later works such as [38-40] improved this scheme by removing the user's consistent GID to avoid privacy leakage and to support collusion resistance; this paradigm is called decentralized ABE.

Whether in a centralized or a decentralized ABE paradigm, a user may succumb to financial temptation and share his attributes with other users. To prevent such decryption-privilege leakage, the works in [41, 42] provide access control schemes with traceability, in which a user who leaks his decryption key to someone else can be traced and revoked by the system. As people become more concerned about personal privacy, the access policy itself may be regarded as sensitive information that must be protected from unauthorized users. The work in [43] achieves anonymity by designing three protocols, combined with homomorphic encryption and scrambled circuit evaluation, to protect both the policies and the credentials.

### 7.3. Searchable Encryption with ABE in the Cloud Computing Paradigm

Searchable encryption was first proposed in [44] and has been widely studied and used; it opened a new direction for searching over ciphertexts in cloud computing [45]. Both symmetric encryption with keyword search (SESK) and public-key encryption with keyword search (PESK) have attracted considerable attention and have been extended to support various functions, as in [18, 46-51]. However, these schemes cannot achieve fine-grained access control over ciphertexts.

Attribute-based keyword search (ABKS) was proposed in [52], in which the cloud server checks, via a signature built from the user's attributes, whether the user is able to decrypt the requested ciphertext before searching it. This scheme, however, cannot protect the security of the keywords. Other works have proposed ABKS-based schemes supporting specific functions such as checkability [19], fuzzy keyword search [53], revocation [54], and verifiability [55]. But most of these works require users to perform complex computations such as pairings and exponentiations many times, which is impractical given the user's limited computational ability. Therefore, how to offload the heavy computational burden and reduce the number of complex operations without losing security is currently the most important challenge.
## 8. Conclusion

In this paper, we design a keyword searchable encryption scheme with fine-grained access control for our proposed healthcare-related IoT-Fog-Cloud framework. Through our design, users obtain fast and efficient service by offloading computation and storage to the fog and the cloud; in particular, the data user needs only a single exponentiation to recover a message. In our scheme, the fogs help the trusted authority manage users and their attributes by authorizing their query keys. In addition, our scheme is very efficient because only authorized users can download the keyword-matched portion of the ciphertexts, while unauthorized searches and unauthorized users are refused. Finally, our scheme is proven IND-CK-CCA secure and trapdoor indistinguishable, and we show through theoretical analysis and experimental evaluation that it requires less storage and transmission and much less computation.

In this paper we assume that fogs and the cloud do not collude with each other; achieving collusion resistance in the proposed IoT-Fog-Cloud system is our next step. In future research we are also interested in more efficient methods for user update and attribute replacement, as well as in improving search efficiency by designing better structures for the keyword indexes and trapdoors on the cloud server.

---

*Source: 1019767-2019-05-23.xml*
# A Lightweight Fine-Grained Searchable Encryption Scheme in Fog-Based Healthcare IoT Networks

**Authors:** Hui Li; Tao Jing

**Journal:** Wireless Communications and Mobile Computing (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1019767
---

## Abstract

For a smart healthcare system, a cloud-based paradigm with numerous user terminals supports more reliable, convenient, and intelligent services. Considering the resource limitations of terminals and the communication overhead of the cloud paradigm, we propose a hybrid IoT-Fog-Cloud framework. In this framework, we deploy a geo-distributed fog layer at the edge of the network. The fogs provide local storage, sufficient processing power, and appropriate network functions. For a fog-based healthcare system, data confidentiality, access control, and secure searching over ciphertext are the key issues for sensitive data. Furthermore, adjusting the storage and computing requirements to the limited resources is a great challenge for data management. To address these issues, we design a lightweight keyword searchable encryption scheme with fine-grained access control for our proposed healthcare-related IoT-Fog-Cloud framework. Through our design, users achieve fast and efficient service by delegating the majority of the workload and storage requirements to the fogs and the cloud without extra privacy leakage. We prove that our scheme satisfies the security requirements and demonstrate its efficiency through experimental evaluation.

---

## Body

## 1. Introduction

Since Ashton [1] and Brock [2] first proposed the concept of IoT, it has been widely used in real life in combination with technologies from sensor networks, embedded systems, object identification, and wireless networks in order to tag, sense, and control things over the Internet [3–6]. With its ubiquitous nature, IoT makes a great contribution to improving the quality of medical care by enabling remote monitoring and reducing time costs through implanted sensors or wearable mobile devices. According to the insight from [7], the healthcare system will evolve from the current hospital-centered paradigm into a home-centered one by 2030. As more sensors are deployed in the healthcare system, a continuous stream of data needs to be stored, processed, and transmitted. This poses a great challenge to the traditional IoT-cloud infrastructure in terms of reliability, immediate response, and security [8], and it creates demand for a "mediator" between IoT devices and the cloud server that supports geo-distribution, storage, and computing capability, acting as an extension of the cloud. This mediator is the fog, from the concept of fog computing proposed by Cisco [9].

When sensitive data such as personal health records are stored on cloud servers, their security and privacy remain challenges in the fog computing paradigm [10–12]. To solve this problem, access control is an essential mechanism for protecting sensitive data from unauthorized users. As a new type of identity-based encryption (IBE) proposed in [13], attribute-based encryption (ABE) plays a great role in access control; it is classified into key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE). KP-ABE associates users' private keys with the designated policies and tags ciphertexts with attributes, while CP-ABE embeds the designated policies in ciphertexts and associates users' private keys with attributes [14, 15].
Obviously, CP-ABE is the better choice for access control in our model, since it lets the user designate an access structure and perform encryption under that structure. However, most existing ABE schemes are time-consuming in the key generation phase and impose a large computational load in the decryption phase, which leads to a poor user experience. Maintaining effective search over encrypted ciphertext is another great challenge. Searchable encryption, especially public-key searchable encryption, is an effective approach to this problem, and it is important to reduce complex operations, e.g., pairings and exponentiations, for users.

### 1.1. Motivation and Contribution

The IoT infrastructure, such as the monitoring devices in a traditional hospital or the wearable health management devices in a smart home, continuously synchronizes data to the remote cloud. The massive amount of sensitive data poses a great challenge to the current healthcare-related IoT-to-cloud system because of IoT's limited storage, low power, and poor computational capability. In this paper, we attempt to solve this problem as follows:

(i) We propose a fog-supported hybrid infrastructure as shown in Figure 1. The distributed fogs are deployed between IoT devices and clouds, providing temporary data storage, data computation and analysis, and network services [16], so as to reduce transmission delay. They also help to manage users and attributes under the control of the trusted authority.

Figure 1: A fog-based healthcare system.

With the proposed infrastructure, we design a new scheme that implements specific network functions to meet real-world needs, which we illustrate with the following example. A person named Wealth rarely cares about his physical condition. One day he learns that his friend Bob is suffering from hyperglycemia, and he wants to learn about the disease. When he searches "Hyperglycemia" in cloud service providers such as "BodyMedia", "Google Health", "CiscoHealthPresence", or "IBM Bluemix", the clouds learn that he or someone he knows may have hyperglycemia; his personal health privacy is thus exposed to the clouds. To prevent this disclosure, we construct indexes for "Hyperglycemia" in the file encryption phase through secure methods. To search for such a keyword, the corresponding trapdoor is generated with the help of the fog. Upon receiving the trapdoor, the clouds return all the encrypted files associated with "Hyperglycemia" if the trapdoor matches the index. In this way we protect Wealth's search privacy.

Further, suppose Wealth receives all the files returned by searching "Hyperglycemia" under our design. After realizing the importance of staying healthy, he starts his own fitness program to monitor health indicators such as the glycemic index through wearable sensors. Due to the limited storage of his own devices, he stores his data in the cloud and shares it with designated users who hold specific attributes. If someone without sufficient attributes attempts to search the keyword, he/she cannot generate a valid trapdoor matching the keyword's index, let alone obtain Wealth's sensitive data.
We help Wealth accomplish this goal through the designs that follow. In summary, Wealth enjoys an efficient, fast, high-quality, and secure service by adopting our system.

The main contributions of this article are as follows:

(i) We design a keyword searchable encryption scheme for the healthcare-related IoT-fog-cloud infrastructure. The proposed scheme ensures that both data and keywords are protected from the cloud and the fog, which is essential for users in a health-related environment.

(ii) Given their constrained resources, IoT devices are not capable of performing complicated encryption and decryption. To overcome this issue, we transfer most of the heavy computation to the fog and the cloud in our scheme, while only a small part is reserved for users.

(iii) On the basis of ciphertext-policy attribute-based encryption, we design a fine-grained access control framework. A user obtains his query capability authorization from the trusted authority and the fog, which check his attributes. Messages are encrypted under an access policy such that only users with the designated attributes can access them.

(iv) We provide a formal security analysis demonstrating that our scheme is secure under IND-CK-CCA attacks and satisfies trapdoor indistinguishability. We also compare our scheme experimentally with previous research, revealing its good efficiency.

The rest of the paper is organized as follows. In Section 2, we briefly introduce the preliminaries used in this paper. In Section 3, we present the two adversary models, the security requirements, and the system functions of our lightweight fine-grained searchable encryption (LFSE) system. Our proposed system is described in Section 4. A thorough security analysis appears in Section 5, and efficiency is analyzed in Section 6. Section 7 reviews related work, and we conclude in Section 8.
## 2. Preliminaries

In this section we describe the cryptographic fundamentals used throughout this paper.

### 2.1. The Notations

For a prime number p, we denote the set {1, 2, …, p−1} by Zp∗, on which multiplication and addition modulo p are defined. We write a ←r S to denote that a is chosen uniformly at random from the elements of S, and we let λ be the security parameter of our system.

### 2.2. Bilinear Map

Let G1 and G2 be two multiplicative cyclic groups of prime order p, let g be a generator of G1, and let e: G1 × G1 → G2 be a bilinear map with the following properties:

(i) Bilinearity: e(aP, bQ) = e(P, Q)^ab for all P, Q ∈ G1 and all a, b ∈ Zp∗.

(ii) Nondegeneracy: e(g, g) ≠ 1.

(iii) Computability: there is an efficient algorithm to compute e(P, Q) for all P, Q ∈ G1.
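To make the bilinearity property concrete, the following toy Python sketch models a symmetric pairing in "exponent space": a group element g^x is represented by its exponent x modulo a prime p, so that e(g^a, g^b) = e(g, g)^ab becomes multiplication of exponents. This is only an algebra-checking aid under that simplifying assumption, not a secure pairing, and the prime p is an arbitrary placeholder.

```python
import random

p = 2**61 - 1  # a Mersenne prime standing in for the group order

def pair(x, y):
    """Toy pairing: with g^x and g^y stored as x and y, e(g^x, g^y) = e(g,g)^(xy)."""
    return (x * y) % p

P, Q = random.randrange(1, p), random.randrange(1, p)  # exponents of P, Q in G1
a, b = random.randrange(1, p), random.randrange(1, p)

# (i) Bilinearity: e(aP, bQ) = e(P, Q)^(ab)
assert pair(a * P % p, b * Q % p) == pair(P, Q) * a * b % p
# (ii) Nondegeneracy: e(g, g) corresponds to exponent 1*1 = 1, not the identity 0
assert pair(1, 1) != 0
```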
### 2.3. Access Policy

An access policy defines the attribute sets that are required to access private messages.

Definition 1 (monotonicity). Let AT be the attribute universe; an access policy A ⊆ 2^AT is a collection of non-empty subsets of AT. We call the access policy A monotone if, for all Ω1, Ω2 ⊆ AT,

(1) Ω1 ⊆ Ω2, Ω1 ∈ A ⇒ Ω2 ∈ A.

By monotonicity, an authorized user cannot lose his privileges by holding more attributes than required.
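As a quick illustration of Definition 1, the following sketch builds a policy as the upward closure of a basis of minimal authorized sets, mirroring the basis A0 used later in Section 4, and checks that it is monotone. The attribute names are invented for the example.

```python
from itertools import chain, combinations

AT = {"doctor", "cardiology", "hospital_A"}   # illustrative attribute universe

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_monotone(policy, universe):
    """Definition 1: every superset of an authorized set is also authorized."""
    return all(s2 in policy
               for s1 in policy
               for s2 in powerset(universe) if s1 <= s2)

# Upward closure of the minimal authorized sets A0 = (Omega_1, ..., Omega_n)
A0 = [frozenset({"doctor", "cardiology"})]
A = {s for s in powerset(AT) if any(m <= s for m in A0)}

assert is_monotone(A, AT)
assert frozenset(AT) in A   # holding extra attributes never removes privilege
```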
## 3. System Model

### 3.1. Architecture of System

The architecture of the proposed fog-based healthcare system is shown in Figure 2. It is composed of four parts: a trusted authority, cloud server providers, fog nodes, and data users (including data owners and other users).

Figure 2: System model.

Trusted Authority (TA). A trusted authority, such as the national health center or an entity authorized by it, verifies users' attributes. It takes charge of generating system parameters for all entities and is responsible for issuing, revoking, and updating attribute private keys for users.

Cloud (Short for Cloud Server Providers). The cloud, such as Amazon, provides data storage, computational resources, and data analysis. Apart from these content services, it also handles access by outside users to the encrypted files. We assume that the public cloud executes the search algorithm honestly. The cloud in our system is responsible for performing the test algorithm and accomplishing part of the decryption task without knowing any information about the user's keys or attributes.

Fog (Short for Fog Nodes). The fog, providing computing, storage, and mobility, is deployed at the edge of the network. Because of the limited computing resources and restricted capacity of the devices carried by data owners and users, a half-trusted fog is deployed as the interface between a user and the cloud server, especially where sensitive medical information is involved. The fog in our system manages the users within its coverage and revokes users and attributes without holding any information about their private keys. Further, it helps control users' query actions by generating one part of the trapdoor without knowing the queried keyword.

Data Owner. The data owner is an entity who intends to share his files with designated receivers, whose attributes must satisfy the access policy embedded in the corresponding ciphertext. He is in charge of encrypting files under a specific access policy, generating indexes for all keywords, and uploading to the cloud.

Data User. The data user is an entity who intends to obtain encrypted files by sending a query request to the cloud servers and the fog. If he has enough attributes to satisfy the required access policy, he can download ciphertexts and decrypt them with help from the cloud. He is in charge of selecting keywords to generate trapdoors and of the final ciphertext decryption.

Assumptions. We assume that the cloud and the fog are always online and have sufficient storage capacity and computing resources. We also assume that there exists a secure channel between the data owner/user and the fog node, e.g., a secure Wi-Fi network.

We assume that the cloud and the fogs are all "honest but curious" [21]. Specifically, they do not delete or modify user data and return computing results honestly, but they attempt to access as much private information as possible. All entities execute our proposed protocol, and users try to access data either within or beyond their privileges. It is also assumed that the cloud and the fog do not collude with each other.

Different from most existing work, which uses only a public cloud, ours is a cloud-fog architecture. In this work, we assume that files and keywords are sensitive and should be protected from both the cloud and the fog, while attributes are semisensitive, meaning that attributes may be known only by the fog.

### 3.2. Definition of Basic Algorithms

We give a general definition of our lightweight fine-grained searchable encryption scheme, consisting of several polynomial-time algorithms.

Setup. This phase, containing three subalgorithms, is implemented by TA.

System.Setup(1^λ): On input the security parameter λ, the algorithm outputs the master key Mk, the public key Pk, and other system parameters.

Fog.Setup(Pk): On input the system parameter Pk, the algorithm outputs the fog's public and private key pair (PkFk, SkFk) and the corresponding verification key vkj for each attribute in the attribute universe.

User.Setup(Pk): For each user requesting to join the system, TA verifies the user's identity and his attributes.

KeyGeneration. This phase, which contains two subalgorithms, is executed by TA.

KeyGen(Pk, Mk, Useri, Ωui): On input the system's keys (Pk, Mk), the user's identity, and the user's attributes, the algorithm outputs the user's public and private keys (Pkui, Skui).
Next, on input the private key and the user's attributes Ωui, the algorithm outputs the secret verification key svkj for each attribute atj ∈ Ωui.

SearchKeyGen(Mk, Useri): On input the system's master key Mk and the user's identity Useri, the algorithm returns the search key Si for the user.

FogSupport. This phase is executed by the fog and the users under its management; it includes three algorithms.

Adduser(Useri): On input the public parameters and the user's identity Useri, the algorithm outputs a table Tuser in which the fog Fk stores users' information.

ReKeyGen(Skui, svkj): On input the user's private key Skui and the private verification key svkj for atj ∈ Ωui, the algorithm outputs a converted secret key cskui.

ReEnc(Pkui, vkj): On input the user's public key Pkui and the verification key vkj for atj ∈ Ωui, the algorithm outputs a ciphertext cvkui.

FileEncryption. This phase is performed by the user.

Enc(F, A, vkj): On input a file F, an access policy A, and the verification key vkj, the algorithm outputs the ciphertext C embedded with the access policy.

IndexGeneration. This phase is implemented by the user through the algorithm Index.

Index(W, Si, Pkui): On input the user's search key Si and the keyword W, the algorithm outputs an index IW for the keyword.

TrapdoorGeneration. This phase is executed by the fog and the user, and includes two subalgorithms.

Trapdoor(PkFk, cskui): Performed by the fog. On input the fog's public key PkFk and the user's regenerated key cskui, the algorithm outputs Tf, one part of the trapdoor T.

Trapdoor2(W, Si): Executed by the user. On input the user's search key Si and a keyword W, the algorithm outputs TW, the other part of the trapdoor T.

Test. This phase is implemented by the cloud server through the algorithm Test.

Test(IW, T): On input the keyword index IW and the trapdoor T, the algorithm outputs 0 if they do not match and 1 otherwise.

FileDecryption. The decryption phase is implemented by the cloud server and the user, and consists of two subalgorithms.

Dec(C, cvkui): On input the file ciphertext C, the trapdoor T, and the ciphertext cvkui of the user's attribute verification keys, the algorithm outputs Cpd, a part-decrypted version of the ciphertext.

Dec2(Cpd, Skui): On input the user's private key Skui and the part-decrypted ciphertext Cpd, the algorithm outputs the file F.
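To make the data flow between TA, fog, cloud, and user explicit, the following Python stub mirrors the phases above as an interface. The method names and signatures are paraphrased from the text; the bodies are intentionally left out, so this is an illustrative skeleton rather than a reference implementation.

```python
class LFSE:
    """Stub interface mirroring the phases of Section 3.2."""

    # Setup (run by TA)
    def system_setup(self, lam): ...          # -> (Mk, Pk) and system parameters
    def fog_setup(self, Pk): ...              # -> (PkFk, SkFk), {vk_j} per attribute
    def user_setup(self, Pk, user): ...       # TA verifies identity and attributes

    # Key generation (run by TA)
    def key_gen(self, Pk, Mk, user, attrs): ...   # -> (Pk_ui, Sk_ui), {svk_j}
    def search_key_gen(self, Mk, user): ...       # -> search key S_i

    # Fog support (run by the fog and its users)
    def add_user(self, user): ...             # record user in the table T_user
    def re_key_gen(self, Sk_ui, svk_j): ...   # -> converted secret key csk_ui
    def re_enc(self, Pk_ui, vk_j): ...        # -> ciphertext cvk_ui for the cloud

    # Data path
    def enc(self, F, policy, vk_j): ...       # -> ciphertext C under the policy
    def index(self, W, S_i, Pk_ui): ...       # -> keyword index I_W
    def trapdoor(self, PkFk, csk_ui): ...     # fog's half T_f of the trapdoor
    def trapdoor2(self, W, S_i): ...          # user's half T_W of the trapdoor
    def test(self, I_W, T): ...               # cloud: 1 if index and trapdoor match
    def dec(self, C, cvk_ui): ...             # cloud: part-decrypted C_pd
    def dec2(self, C_pd, Sk_ui): ...          # user: recover the file F
```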
### 3.3. Security Requirements

(1) Data confidentiality: The cloud and the fog must learn nothing about the encrypted data files. Unauthorized users whose attributes do not match the policy embedded in the ciphertext should not learn the content of the underlying plaintext.

(2) Keyword privacy: The keywords should be protected from both the cloud and the fog in a secure way, for example by using a one-way hash function. The cloud server can perform the test operation over the indexes but leaks no information about keywords to any unauthorized attacker.

(3) Trapdoor privacy: One part of the trapdoor is generated by the data user from the search key, the secret verification keys for his attributes, and the keyword. The other part is generated with the help of the fog from the user's re-encrypted key. The trapdoor reveals no information about the corresponding keyword or the user's attributes to an attacker.

### 3.4. Adversary Model

To capture the security requirements, we define two security models for our scheme. First, we introduce the underlying hardness assumption in Definition 2.

Definition 2 (DBDH assumption). According to the security parameter, let G1 be a group of prime order p with generator g, and let a, b, c ←r Zp∗ be chosen at random. The DBDH problem is to distinguish e(g, g)^abc ∈ G2 from a random element V ∈ G2, given g, g^a, g^b, g^c ∈ G1. We say that the DBDH assumption holds if no polynomial-time algorithm has a nonnegligible advantage in solving the DBDH problem.

Definition 3. Our LFSE scheme is trapdoor indistinguishability secure if no polynomial-time attacker has a nonnegligible advantage in the following game, Game 1, played between an adversary A and an algorithm B.

Game 1 (trapdoor privacy).

Setup: With a security parameter λ, the algorithm B outputs the system parameters and generates the public key Pkui, the private key Skui, and the search key Si for the data user.

Query phase 1: The adversary A adaptively makes the following queries.

O.Trapdoor1: The adversary A may query one part (Tf0, Tf1, Tf2) of the trapdoor for any keyword.

O.Trapdoor2: The adversary A may query the other part (TW1, TW2) of the trapdoor for the keyword.

Challenge phase: The adversary A sends two keywords W0∗ and W1∗ of equal length. Then B randomly selects x ∈ {0,1}, constructs the trapdoor for the keyword Wx∗, and sends it to A.

Query phase 2: The adversary A queries as in phase 1, with the restriction that the queried keyword W ∉ {W0∗, W1∗}.

Guess: The adversary A outputs a guess x′ ∈ {0,1}. If x = x′, A wins the game and the algorithm B outputs 1; otherwise A fails and B outputs 0.

Definition 4. Our LFSE scheme is IND-CK-CCA secure if no polynomial-time attacker has a nonnegligible advantage in the following game, which defines indistinguishability against chosen keyword and chosen ciphertext attacks in our system. The security model is Game 2, played between an adversary A and a challenger C.

Game 2 (ciphertext and keyword privacy).

Initialization: The adversary A commits to the challenge.

Setup: The challenger C selects a large security parameter λ and runs the setup algorithm to obtain the system master key and public key (Mk, Pk). C gives Pk to A and keeps Mk.

Phase 1: The adversary A makes the following queries, polynomially many times.

(i) O.KeyGen: Key generation oracles executed by the challenger C to generate a series of keys for A.

(ii) O.Trapdoor: Two trapdoor generation oracles executed by the challenger C to generate the trapdoor T = (Tf, TW) for A, using the keys generated in the steps above.

Challenge: After phase 1, the adversary A outputs two messages m0∗, m1∗ and two keywords W0∗, W1∗, each pair of equal length, to be challenged on. The challenger C flips coins to choose b1, b2 ∈ {0,1}, constructs the ciphertext for mb1∗ and the index for Wb2∗, and sends them to A.

Phase 2: The adversary A adaptively makes queries as in phase 1, except for the restrictions that W ∉ {W0∗, W1∗} and that the user's private key cannot be queried.

Guess: The adversary A outputs guesses b1′, b2′ ∈ {0,1}. If b1′ = b1 and b2′ = b2, A wins the game.

The adversary A has advantage ϵ = Adv_A^LFSE(λ) = |Pr[b1′ = b1, b2′ = b2] - 1/2| in breaking the DBDH assumption.
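To make the DBDH experiment concrete, the following sketch generates a challenge tuple in the same exponent-space toy model as the earlier sketches. Note that in this toy model the discrete logs are exposed, so the instance is trivially distinguishable; the code only illustrates the shape of the experiment, not its hardness, and all parameters are placeholders.

```python
import random

p = 2**61 - 1
rand = lambda: random.randrange(1, p)

def dbdh_instance():
    """Return (g^a, g^b, g^c, Z) as exponents, plus the hidden bit x."""
    a, b, c = rand(), rand(), rand()
    x = random.randrange(2)
    Z = a * b * c % p if x == 1 else rand()   # e(g,g)^(abc) or random in G2
    return (a, b, c, Z), x

(ga, gb, gc, Z), x = dbdh_instance()
# A distinguisher outputs a guess x'; its advantage is |Pr[x' = x] - 1/2|.
```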
### 3.5. System Functions

Considering performance-related issues, our scheme is designed to achieve the following functions.

(1) Fine-grained access control: A data owner embeds an access policy into each file to be transmitted to the cloud. This guarantees that the data is accessed only by users with appropriate attributes and is kept from the cloud server.

(2) Authorization: Each data user authorized by the trusted attribute authority is assigned his individual private key, which can be used to search and decrypt files in our system.

(3) Search on keywords: An authorized user can generate a query request for keywords using his individual private key. After the cloud server receives the query and performs the "Test" on the encrypted files, the user obtains the matched files.

(4) Revocability: The trusted authority should be able to revoke users and attributes. If an authorized user is revoked, he is no longer able to search and read files in our system. If an attribute of a user is revoked, the user is no longer able to access files whose embedded access policy contains that attribute.
## 4. LFSE Scheme

### 4.1. Construction of LFSE Scheme

We now specify the proposed LFSE scheme for the fog-based healthcare system in detail. In the real world, we consider that all the sensors carried by the owner continually collect and report data, and the owner decides whether and when data is transmitted to the cloud.

(1) System setup: Let λ be the security parameter; TA performs the following steps. First, it chooses two cyclic groups (G, ·) and (GT, ·) of prime order p and defines a bilinear pairing e: G × G → GT. Let g be a generator of G; g1, g2 are randomly chosen from G and s, υ from Zp∗. TA then computes g′ = g^s and V = e(g, g)^υ, and selects two hash functions H: {0,1}∗ → Zp∗ and H1: Zp∗ × {0,1}∗ → Zp∗. Finally, TA keeps (s, υ) secret as the master key Sk and publishes the system parameters Pk = {λ, G, GT, e, g, g1, g2, g′, V}. Afterwards, TA initializes the attribute universe AT = {at1, at2, …, atm} and the monotone access structure A. Let A0 = (Ω1, Ω2, …, Ωn) be a basis for A, where each Ωi is a minimal authorized attribute set in A.

(2) Setup and key generation for fogs: For each fog, TA generates its public and private keys (PkFogk, SkFogk) = (g^ςk, ςk) by running Fog.Setup with ςk ←r Zp∗. The fog keeps the private key sent from TA and initializes a table Tuser to manage all authorized users within its coverage. Further, to authorize the fogs to manage attributes, TA selects, for each fog Fogk, a σk ←r Zp∗ and computes θj = H1(σk, atj), d1j = g^θj, and d2j = V^θj for each attribute atj ∈ AT, defining vkj = (d1j, d2j) as the verification key. TA then sends the verification keys {vkj} (atj ∈ AT) to the corresponding fogs as attribute information. The fogs exchange their user and attribute verification key information, so that an authorized user can still connect to our system when he moves into another fog's managing area.

(3) Key generation for the user: Assume a new user Useri with attribute list Ωui = {atj : j ≤ m} requests to join the system. First, TA authenticates the user's identity and his attributes. It then returns the public and private keys (PkUseri, SkUseri) = (g^αi, (t0, t1, t2)) to the user, where t0 = g^υ g1^αi, t1 = g^βi, t2 = δ, and αi, βi, δ ←r Zp∗. Simultaneously, TA computes svkj = (PkUseri)^θj = g^(αi θj) for each atj ∈ Ωui and returns it to the user as the secret verification key for each attribute the user holds. Once this phase is finished, the fog adds Useri to the table Tuser as a newly authenticated user.

(4) Search key generation: After receiving the public and private keys from TA, the user Useri also requests a private key for searching on keywords. The user picks η ←r Zp∗ and sends g1^(1/η) to TA. TA then computes the search key Si = (g1^(1/η))^s g2^(βi δ) and sends Si to the user.
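The key relations in steps (1)-(4) can be checked in a small "exponent-space" simulation: every group element g^x is stored as its exponent x modulo a prime p, products of group elements become sums of exponents, and exponentiation becomes multiplication. The prime and all sampled values are placeholders; this is an algebra-checking sketch, not a secure implementation.

```python
import random

p = 2**61 - 1                      # toy prime standing in for the group order
rand = lambda: random.randrange(1, p)

# (1) System setup: master key (s, v); g1 = g^gamma1, g2 = g^gamma2, g' = g^s
s, v = rand(), rand()
gamma1, gamma2 = rand(), rand()

# (2) Attribute key material for one attribute at_j: theta_j = H1(sigma_k, at_j)
theta = rand()                     # stand-in for theta_j
d1 = theta                         # d1j = g^theta_j
d2 = v * theta % p                 # d2j = V^theta_j = e(g,g)^(v*theta_j)

# (3) User keys
alpha, beta, delta = rand(), rand(), rand()
t0 = (v + gamma1 * alpha) % p      # t0 = g^v * g1^alpha_i
t1 = beta                          # t1 = g^beta_i
t2 = delta                         # t2 = delta
svk = alpha * theta % p            # svk_j = (Pk_ui)^theta_j = g^(alpha_i*theta_j)

# (4) Search key: the user picks eta and sends g1^(1/eta); TA returns
# S_i = (g1^(1/eta))^s * g2^(beta_i*delta)
eta = rand()
S_i = (gamma1 * pow(eta, -1, p) * s + gamma2 * beta * delta) % p

# Sanity check used later in the trapdoor: S_i^eta = g1^s * g2^(beta_i*delta*eta)
assert S_i * eta % p == (gamma1 * s + gamma2 * beta * delta * eta) % p
```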
(5) Prepare for fog support: Because of the user's limited processing power and computing efficiency, we transfer most of the computational load to the fog and the cloud without leaking additional information. The user delegates part of the auxiliary computation to the fog by transferring a converted secret key cskui: he computes T0 = t0^t2 = g^(υδ) g1^(αi δ), T1 = t1^t2 = g^(βi δ), and T2j = (svkj)^t2 = g^(αi θj δ), and sends cskui = (T0, T1, {T2j} atj∈Ωui) to the fog. With cskui, the fog can help the user accomplish part of the computation without knowing the user's private key. To facilitate further calculations, the user can compute V1 = e(g, g) and V2 = e(g1, g′) in advance and store them. Simultaneously, to let the cloud carry part of the computation while keeping the user's attributes from it, the fog selects s′ ←r Zp∗, computes D1 = g1^s′ and D2j = (d1j)^s′ = g^(θj s′), sends cvkui = (D1, {D2j} atj∈Ωui) to the cloud, and sends the secret s′ to the user through a secure channel.

(6) Encrypt: Suppose the data owner decides to share his file F, which can be searched and acquired by users whose attributes satisfy an access policy A. Under this assumption, the owner can designate different types of data to be accessed by different kinds of people. For the monotone access policy A there exists a basis A0 = (Ω1, Ω2, …, Ωn), where each Ωl is a minimal set of authorized attributes. To encrypt the file, the owner picks sl ←r Zp∗ for each 1 ≤ l ≤ n and computes

(2) Cl = (C1l, C2l) = ( F · (∏ atj∈Ωl d2j)^sl , sl/s′ ).

The owner forms the ciphertext C = (A, {Cl} 1≤l≤n) embedded with the access policy A.

(7) Index: In a continuous health monitoring system, data from various kinds of sensors are constantly processed and transferred to the cloud. To enable quick access to useful files in the very large data center, we attach keywords to files. We assume the file F contains a set of keywords W extracted from the original health file. For each keyword W ∈ W, the owner picks u ←r Zp∗ and computes IW = (CW1, CW2, CW3) = ((e(g, g)^H(W) e(g1, g′))^u, g^u, g2^u). Subsequently, the owner sends the ciphertext C together with the index IW to the cloud, which stores them.
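Continuing in the exponent-space toy model (elements stored as exponents mod p), the following standalone sketch computes the keyword index of step (7). The hash H and all sampled parameters here are illustrative stand-ins for the scheme's actual choices.

```python
import hashlib
import random

p = 2**61 - 1
rand = lambda: random.randrange(1, p)

def H(keyword: str) -> int:
    """Toy hash {0,1}* -> Zp*, standing in for the scheme's H."""
    return int.from_bytes(hashlib.sha256(keyword.encode()).digest(), "big") % (p - 1) + 1

gamma1, gamma2 = rand(), rand()      # exponents of g1, g2
s = rand()                           # master secret, so g' = g^s
u = rand()                           # per-index randomness

W = "Hyperglycemia"
CW1 = u * (H(W) + gamma1 * s) % p    # (e(g,g)^H(W) * e(g1,g'))^u
CW2 = u                              # g^u
CW3 = gamma2 * u % p                 # g2^u
I_W = (CW1, CW2, CW3)
```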
(8) Trapdoor: The Trapdoor algorithm generates a trapdoor for a keyword on behalf of the user who wants to search for files containing that keyword. In our system, to reduce the user's computing burden, we delegate part of the trapdoor generation to the fog without leaking any information about the queried keywords; this design preserves the confidentiality of keywords in our IoT system. Specifically, upon receiving a query request from the user Useri, the fog first looks up the user's identity in the table Tuser. If the fog does not find it, meaning the user has not joined the system, the fog refuses to generate its part of the trapdoor and returns a warning message. This check ensures that an unauthenticated external user cannot search any keyword and guarantees no leakage of information about keywords or encrypted files. If the fog finds the user in Tuser, it randomly chooses ρ ←r Zp∗, sends it to the user through a secure channel, and computes

(3) Tf0 = T0^ρ = (t0^t2)^ρ = g^(υδρ) g1^(αi δρ), Tf1 = T1^ρ = (t1^t2)^ρ = g^(βi δρ), Tf2j = T2j^ρ = (svkj^t2)^ρ = g^(αi θj δρ).

The fog then uploads Tf = (Tf0, Tf1, {Tf2j} atj∈Ωui) to the cloud as one part of the trapdoor. To search for files with a keyword W′, the user computes, with the value η chosen in the search key generation phase,

(4) TW1 = g^H(W′) Si^η = g^H(W′) g1^s g2^(βi δ η), TW2 = η/ρ,

and sends the other part of the trapdoor TW = (TW1, TW2) to the cloud. Ultimately, the cloud holds the full trapdoor T = (Tf, TW). In this phase, once the fog has verified a user's identity and the verification keys for his attributes, it needs to perform its trapdoor generation only occasionally, since this part does not depend on the queried keyword. As a result, the computing burden on the fog and the interaction time between the user and the fog are both reduced.

(9) Test: Upon receiving the search request for keyword W′ from the fog and the user, the cloud runs the Test algorithm against every stored keyword index by computing

(5) e(CW2, TW1) / e(CW3, Tf1)^TW2.

The cloud compares the result with CW1: if they are equal, the cloud outputs 1 and performs the next step; otherwise it outputs 0, returns a warning message, and exits.

(10) Decryption: If Test finds no index matching the uploaded trapdoor, the cloud does not run the Dec algorithm and returns ⊥. Otherwise, the cloud computes

(6) Cpd = ( e(D1, ∏ atj∈Ωui′ Tf2j) / e(∏ atj∈Ωui′ D2j, Tf0) )^(C2l · TW2).

Upon receiving the part-decrypted ciphertext Cpd from the cloud, the user recovers the file F with his own private key by computing

(7) F = C1l · Cpd^(1/(δη)).

The user thus needs only one exponentiation in decryption, which is a great step toward improving efficiency.
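The algebra behind the Test equation (5) can be checked numerically in the same exponent-space toy model used earlier (group elements stored as exponents mod p, pairing = multiplication of exponents, GT-division = subtraction of exponents). The prime and all sampled values are placeholders; this only verifies the matching identity, nothing more.

```python
import random

p = 2**61 - 1
rand = lambda: random.randrange(1, p)
pair = lambda x, y: (x * y) % p

gamma1, gamma2 = rand(), rand()          # exponents of g1, g2
s, beta, delta = rand(), rand(), rand()  # master s; user beta_i; t2 = delta
eta, rho, u = rand(), rand(), rand()     # search-key, fog, and index randomness
H_W = rand()                             # H(W) for the indexed keyword

# Index I_W = (CW1, CW2, CW3) from step (7)
CW1 = u * (H_W + gamma1 * s) % p         # (e(g,g)^H(W) * e(g1,g'))^u
CW2 = u                                  # g^u
CW3 = gamma2 * u % p                     # g2^u

# Trapdoor from step (8), querying the same keyword, so H(W') = H(W)
Tf1 = beta * delta * rho % p             # T1^rho = g^(beta*delta*rho)
TW1 = (H_W + gamma1 * s + gamma2 * beta * delta * eta) % p  # g^H(W') * S_i^eta
TW2 = eta * pow(rho, -1, p) % p          # eta / rho

# Test: e(CW2, TW1) / e(CW3, Tf1)^TW2 == CW1 ?
lhs = (pair(CW2, TW1) - pair(CW3, Tf1) * TW2) % p
assert lhs == CW1   # matches exactly when the keyword hashes are equal
```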
### 4.2. Consistency

First, we show that trapdoor matching is valid in our system:

(8) e(CW2, TW1) / e(CW3, Tf1)^TW2
= e(g^u, g^H(W′) g1^s g2^(βi δ η)) / e(g2^u, g^(βi δ ρ))^(η/ρ)
= e(g, g)^(u H(W′)) · e(g, g1)^(us) · e(g, g2)^(u βi δ η) / e(g2, g)^(u βi δ η)
= e(g, g)^(u H(W′)) · e(g, g1)^(us)
= e(g, g)^(u H(W′)) · e(g1, g′)^u.

If there is a keyword W ∈ W matching the queried keyword, so that H(W) = H(W′), we conclude e(CW2, TW1) / e(CW3, Tf1)^TW2 = CW1.

File recovery then proceeds in two steps. If the test passes, the cloud part-decrypts the related files by computing

(9) Cpd = ( e(D1, ∏ atj∈Ωui′ Tf2j) / e(∏ atj∈Ωui′ D2j, Tf0) )^(C2l · TW2)
= ( e(g1^s′, g^(αi δ ρ Σ atj∈Ωui′ θj)) / e(g^(s′ Σ atj∈Ωui′ θj), g^(υδρ) g1^(αi δρ)) )^((sl/s′)(η/ρ))
= ( 1 / e(g, g)^(υ δ ρ s′ Σ atj∈Ωui′ θj) )^((sl/s′)(η/ρ))
= 1 / e(g, g)^(υ δ sl η Σ atj∈Ωui′ θj).

If the user's attributes Ωui′ satisfy the access policy A, there exists a basis A0′ = (Ω1′, Ω2′, …, Ωn′) such that

(10) ∀ atj ∈ Ωui′, ∃ Ωl′ s.t. atj ∈ Ωl′ ⊆ Ωui′,

and we have Σ atj∈Ωui′ θj = Σ atj∈Ωl′ θj = Σ atj∈Ωl θj.

Accordingly, the user finally recovers the file by computing

(11) F = C1l · Cpd^(1/(δη)) = F · e(g, g)^(υ sl Σ atj∈Ωl θj) · ( 1 / e(g, g)^(υ δ sl η Σ atj∈Ωui′ θj) )^(1/(δη)) = F.
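The cancellation in equations (9)-(11) can also be verified numerically in the exponent-space toy model (all values are placeholders, and exposing exponents like this is only legitimate for algebra checking):

```python
import random

p = 2**61 - 1
rand = lambda: random.randrange(1, p)
pair = lambda x, y: (x * y) % p

gamma1 = rand()                              # exponent of g1
v, alpha, delta = rand(), rand(), rand()     # master v; user alpha_i; t2 = delta
eta, rho, sl, s_p = rand(), rand(), rand(), rand()  # eta, rho, s_l, s'
thetas = [rand() for _ in range(3)]          # theta_j for the attributes in Omega_l
Theta = sum(thetas) % p
f = rand()                                   # the file F as a GT exponent

# Ciphertext component (2): C1l = F * (prod d2j)^sl, C2l = sl / s'
C1l = (f + v * sl * Theta) % p
C2l = sl * pow(s_p, -1, p) % p

# Fog-side values: D1 = g1^s', D2j = g^(theta_j * s'); trapdoor parts from (3)
D1 = gamma1 * s_p % p
prod_D2 = s_p * Theta % p                    # exponent of prod_j D2j
Tf0 = (v * delta * rho + gamma1 * alpha * delta * rho) % p
prod_Tf2 = alpha * delta * rho * Theta % p   # exponent of prod_j Tf2j
TW2 = eta * pow(rho, -1, p) % p

# Equation (6)/(9): C_pd = (e(D1, prod Tf2j) / e(prod D2j, Tf0))^(C2l * TW2)
C_pd = ((pair(D1, prod_Tf2) - pair(prod_D2, Tf0)) * C2l * TW2) % p

# Equation (7)/(11): F = C1l * C_pd^(1/(delta*eta))
recovered = (C1l + C_pd * pow(delta * eta % p, -1, p)) % p
assert recovered == f
```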
### 4.3. User Revocation and Attribute Revocation

As mentioned above, the fog is the access interface between the cloud and users, and the table Tuser certifies whether a user is in the system. Revoking a user is realized by rejecting his query requests. Specifically, once a user submits a revocation request to the trusted authority, or the trusted authority decides to revoke a user, the trusted authority deletes all of the user's keys and attribute information. It then sends the revocation information to the fog, and all information about the user is deleted from Tuser. As a result, the user can no longer submit requests to the cloud server. Furthermore, once the re-encrypted keys cskui and cvkui are revoked from the fog, the user cannot generate trapdoors for any keyword: because the fog needs cskui and cvkui to carry out its part of the trapdoor generation, their loss means the user can no longer search for any files. Such a user is effectively new to the system, and the fog no longer responds to any of his requests.

In our system, attribute revocation is achieved through the design of cskui and cvkui. Once an attribute is revoked, the data owner can keep his data from the group of users who hold the revoked attribute. Specifically, upon deciding to revoke an attribute atj, the fog destroys the attribute's verification key vkj and deletes cskui and cvkui for every user holding the attribute, then sends these users a warning message to update the related cskui and cvkui. Until they do so, the fog refuses to generate trapdoors for them, which directly prevents them from accessing files in the system. Although this causes some computational load and transmission cost, it is acceptable when extremely sensitive data is concerned.
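The revocation flow above is essentially bookkeeping on the fog's table Tuser. The following sketch captures it with an in-memory dictionary; the class and field names are illustrative, and the trapdoor computation itself is stubbed out.

```python
class FogNode:
    """Sketch of the fog's user table T_user and the revocation flow of Section 4.3."""

    def __init__(self):
        self.t_user = {}   # user_id -> {"csk": ..., "cvk": ..., "attrs": set}

    def add_user(self, user_id, csk, cvk, attrs):
        self.t_user[user_id] = {"csk": csk, "cvk": cvk, "attrs": set(attrs)}

    def revoke_user(self, user_id):
        # TA deletes the user's keys; the fog drops the T_user entry, so any
        # later trapdoor request from this user is rejected.
        self.t_user.pop(user_id, None)

    def revoke_attribute(self, attr):
        # Destroy vk_attr and invalidate csk/cvk of every holder; those users
        # must update before the fog will generate trapdoors for them again.
        for rec in self.t_user.values():
            if attr in rec["attrs"]:
                rec["csk"] = rec["cvk"] = None

    def trapdoor_part(self, user_id):
        rec = self.t_user.get(user_id)
        if rec is None or rec["csk"] is None:
            return None   # refuse: unknown, revoked, or pending key update
        return "Tf"       # placeholder for the real computation of step (8)
```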
(8) Trapdoor: Generally speaking, the Trapdoor algorithm is used to generate a trapdoor for a certain keyword on behalf of a user who wants to search for files containing this keyword. In our system, to reduce the user's computing burden, we delegate part of the trapdoor generation to the fog without leaking any information about the queried keywords; this design gives our IoT system an important advantage, namely the confidentiality of keywords. Specifically, upon receiving a query request from the user $User_i$, the fog first searches for the user's identity in the table $T_{user}$. If the fog does not find it in the table, meaning that the user has not joined the system, the fog refuses to generate the partial trapdoor for the user and returns a warning message. This check by the fog ensures that an external, unauthenticated user cannot search for any keyword and guarantees that no information about keywords or encrypted files is leaked. If the fog finds the user in the table $T_{user}$, the fog randomly chooses $\rho\leftarrow_r Z_p^*$, sends it to the user through a secure channel, and then computes
$$T_{f0}=T_0^{\rho}=(t_0^{t_2})^{\rho}=g^{\upsilon\delta\rho}g_1^{\alpha_i\delta\rho},\quad
T_{f1}=T_1^{\rho}=(t_1^{t_2})^{\rho}=g^{\beta_i\delta\rho},\quad
T_{f2j}=T_{2j}^{\rho}=(svk_j^{t_2})^{\rho}=g^{\alpha_i\theta_j\delta\rho}.\tag{3}$$
After finishing all the above steps, the fog uploads $T_f=(T_{f0},T_{f1},\{T_{f2j}\}_{at_j\in\Omega_{u_i}})$ to the cloud as one part of the trapdoor. To search for files with a keyword $W'$, the user computes, with his own search key and the secret $\eta$ chosen in step (4),
$$T_{W1}=g^{H(W')}S_i^{\eta}=g^{H(W')}g_1^{s}g_2^{\beta_i\delta\eta},\qquad T_{W2}=\frac{\eta}{\rho},\tag{4}$$
and then sends the other part of the trapdoor, $T_W=(T_{W1},T_{W2})$, to the cloud. Ultimately, the cloud obtains the full trapdoor $T=(T_f,T_W)$. In this phase, once the fog has verified a user's identity and the verification keys for his attributes, it can perform its share of the trapdoor generation ahead of time, since this share does not depend on the queried keyword. As a result, the computing burden of the fog and the interaction time of both the user and the fog can be reduced.

(9) Test: Upon receiving the search request for a keyword $W'$ from the fog and the user, the cloud runs the Test algorithm on all stored items, i.e., on the encrypted indexes of all keywords, by computing
$$\frac{e(C_{W2},T_{W1})}{e\big(C_{W3},T_{f1}^{\,T_{W2}}\big)}.\tag{5}$$
The cloud compares the result with $C_{W1}$; if they are equal, the cloud outputs 1 and performs the next step. Otherwise, the cloud outputs 0, returns a warning message, and exits.

(10) Decryption: If the Test algorithm cannot find an index matching the uploaded trapdoor, the cloud does not run the Dec1 algorithm and returns $\perp$. Otherwise, the cloud computes
$$C_{pd}=\left(\frac{e\big(D_1,\prod_{at_j\in\Omega_{u_i}'}T_{f2j}\big)}{e\big(\prod_{at_j\in\Omega_{u_i}'}D_{2j},\;T_{f0}\big)}\right)^{C_{2l}\cdot T_{W2}}.\tag{6}$$
Upon receiving the partly decrypted ciphertext $C_{pd}$ from the cloud, the user recovers the file $F$ with his own private key by computing
$$F=C_{1l}\cdot C_{pd}^{1/(\delta\eta)}\tag{7}$$
(recall that $t_2=\delta$). Clearly, the user only needs to perform a single exponentiation in the decryption, which is a great step toward improving efficiency.

## 4.2. Consistency

First, we show that the trapdoor matching is valid in our system:
$$\begin{aligned}
\frac{e(C_{W2},T_{W1})}{e\big(C_{W3},T_{f1}^{\,T_{W2}}\big)}
&=\frac{e\big(g^{u},\,g^{H(W')}\big((g_1^{1/\eta})^{s}g_2^{\beta_i\delta}\big)^{\eta}\big)}{e\big(g_2^{u},\,(g^{\beta_i\delta\rho})^{\eta/\rho}\big)}
=\frac{e(g^{u},g^{H(W')})\cdot e(g^{u},g_1^{s})\cdot e(g^{u},g_2^{\beta_i\delta\eta})}{e(g_2^{u},g^{\beta_i\delta\eta})}\\
&=\frac{e(g,g)^{uH(W')}\cdot e(g,g_1)^{us}\cdot e(g,g_2)^{u\beta_i\delta\eta}}{e(g_2,g)^{u\beta_i\delta\eta}}
=e(g,g)^{uH(W')}\cdot e(g,g_1)^{us}
=e(g,g)^{uH(W')}\cdot e(g_1,g')^{u}.
\end{aligned}\tag{8}$$
If there exists a keyword $W\in\mathcal{W}$ matching the queried keyword, i.e., $H(W)=H(W')$, we conclude that $e(C_{W2},T_{W1})/e(C_{W3},T_{f1}^{\,T_{W2}})=C_{W1}$.

The file recovery then proceeds in two steps. If the test passes, the cloud partly decrypts the related files by computing
$$\begin{aligned}
C_{pd}&=\left(\frac{e\big(D_1,\prod_{at_j\in\Omega_{u_i}'}T_{f2j}\big)}{e\big(\prod_{at_j\in\Omega_{u_i}'}D_{2j},\;T_{f0}\big)}\right)^{C_{2l}\cdot T_{W2}}
=\left(\frac{e\big(g_1^{s'},\,g^{\alpha_i\delta\rho\sum_{at_j\in\Omega_{u_i}'}\theta_j}\big)}{e\big(g^{s'\sum_{at_j\in\Omega_{u_i}'}\theta_j},\,g^{\upsilon\delta\rho}g_1^{\alpha_i\delta\rho}\big)}\right)^{(s_l/s')(\eta/\rho)}\\
&=\left(\frac{e(g_1,g)^{s'\alpha_i\delta\rho\sum_{at_j}\theta_j}}{e(g,g)^{s'\upsilon\delta\rho\sum_{at_j}\theta_j}\cdot e(g,g_1)^{s'\alpha_i\delta\rho\sum_{at_j}\theta_j}}\right)^{(s_l/s')(\eta/\rho)}
=\frac{1}{e(g,g)^{\upsilon\delta s_l\eta\sum_{at_j\in\Omega_{u_i}'}\theta_j}}.
\end{aligned}\tag{9}$$
If the user's attribute set $\Omega_{u_i}'$ satisfies the access policy $\mathbb{A}$, then there exists a basis $\mathbb{A}_0'=(\Omega_1',\Omega_2',\dots,\Omega_n')$ such that
$$\forall\, at_j\in\Omega_{u_i}',\ \exists\,\Omega_l'\ \text{s.t.}\ at_j\in\Omega_l'\subseteq\Omega_{u_i}',\tag{10}$$
and we have $\sum_{at_j\in\Omega_{u_i}'}\theta_j=\sum_{at_j\in\Omega_l'}\theta_j=\sum_{at_j\in\Omega_l}\theta_j$. Accordingly, the user finally recovers the file by computing
$$F=C_{1l}\cdot C_{pd}^{1/(\delta\eta)}
=F\cdot\prod_{at_j\in\Omega_l}d_{2j}^{\,s_l}\cdot\left(\frac{1}{e(g,g)^{\upsilon\delta s_l\eta\sum_{at_j\in\Omega_{u_i}'}\theta_j}}\right)^{1/(\delta\eta)}
=F\cdot\frac{e(g,g)^{\upsilon s_l\sum_{at_j\in\Omega_l}\theta_j}}{e(g,g)^{\upsilon s_l\sum_{at_j\in\Omega_{u_i}'}\theta_j}}=F.\tag{11}$$
This algebra can also be replayed numerically, as sketched below.
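Assuming the same toy exponent model as above ($g^a$ stored as $a$, the pairing as exponent multiplication, and division in $G_T$ as subtraction of exponents), the chain of equalities in (8) can be replayed numerically. The keywords are made up; the script prints "match" exactly when $H(W)=H(W')$.

```python
# Numerical replay of equation (8) in the toy exponent model. Insecure and
# hypothetical; it only checks the algebra of the Test phase.
import secrets

p = 2**127 - 1                                  # a Mersenne prime as toy group order
rand = lambda: secrets.randbelow(p - 1) + 1
H = lambda W: hash(W) % p                       # toy stand-in for H:{0,1}* -> Zp*
pair = lambda a, b: (a * b) % p                 # e(g^a, g^b) -> exponent a*b of e(g,g)

x1, x2, s, ups = rand(), rand(), rand(), rand() # g1=g^x1, g2=g^x2, g'=g^s, V=e(g,g)^ups
alpha, beta, delta, eta, rho = (rand() for _ in range(5))

# Index for the stored keyword W (step 7):
u, W = rand(), "heart-rate"
CW1 = u * (H(W) + x1 * s) % p                   # (e(g,g)^{H(W)} e(g1,g'))^u
CW2, CW3 = u, x2 * u % p                        # g^u, g2^u

S_i = (x1 * pow(eta, -1, p) * s + x2 * beta * delta) % p   # search key exponent
Tf1 = beta * delta * rho % p                    # T1^rho = g^{beta*delta*rho}
TW2 = eta * pow(rho, -1, p) % p                 # eta / rho in Zp

for Wq in ("heart-rate", "blood-pressure"):     # trapdoors for queried keywords W'
    TW1 = (H(Wq) + eta * S_i) % p               # g^{H(W')} * S_i^eta
    # Test (step 9): division in GT is subtraction of exponents in this model
    lhs = (pair(CW2, TW1) - pair(CW3, Tf1 * TW2 % p)) % p
    print(Wq, "->", "match" if lhs == CW1 else "no match")
```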
## 4.3. User Revocation and Attribute Revocation

As mentioned above, the fog is the access interface between the cloud and the users, and the table $T_{user}$ is the certificate used to verify whether a user is in the system. The revocation of a user can therefore be realized by rejecting his query requests. To be specific, once a user submits a revocation request to the trusted authority, or the trusted authority decides to revoke a user, the trusted authority deletes all keys and attribute information of the user. It then sends the user's revocation information to the fog, and all information about the user is deleted from $T_{user}$. As a result, the user can no longer upload requests to the cloud server. Furthermore, once the re-encrypted keys $csk_{u_i}$ and $cvk_{u_i}$ are revoked from the fog, the user cannot generate trapdoors for any keyword: the fog needs $csk_{u_i}$ and $cvk_{u_i}$ to carry out its share of the trapdoor generation phase, so their loss means the user fails to search for any file. In effect, such a user is new to the system, and the fog no longer responds to any of his requests.

In our system, we can also achieve attribute revocation through the design of $csk_{u_i}$ and $cvk_{u_i}$. Once an attribute is revoked, the data owner can keep the data from the group of users who hold the revoked attribute. To be specific, upon deciding to revoke an attribute $at_j$, the fog destroys the attribute's verification key $vk_j$, deletes $csk_{u_i}$ and $cvk_{u_i}$ for the users holding the attribute, and then sends a warning message to these users to update the related $csk_{u_i}$ and $cvk_{u_i}$. Until the users update $csk_{u_i}$ and $cvk_{u_i}$, the fog refuses to generate trapdoors for them, which directly leads to the failure of any file access in the system. Although this may cause some computational load and transmission cost, it is acceptable where extremely sensitive data are concerned.

## 5. Security Analysis

Recall that our system is concerned with three security requirements: data confidentiality, keyword privacy, and trapdoor privacy. We present the security analysis for trapdoor privacy by proving Theorem 5, while data confidentiality and keyword privacy are established in Theorem 6. The security of our scheme is based on the complexity assumption of Definition 2.

Theorem 5 (trapdoor privacy). Under the DBDH assumption, the trapdoor generated in our LFSE scheme is indistinguishable against the chosen keyword attack.

Proof. Assume that a malicious adversary $\mathcal{A}$ is able to break the trapdoor security of our LFSE scheme in polynomial time with a nonnegligible advantage $\epsilon$. Without loss of generality, we construct an algorithm $\mathcal{B}$ that plays the following game with $\mathcal{A}$ and solves DBDH using the capability of $\mathcal{A}$.

(i) Setup: For a security parameter $\lambda$, the algorithm $\mathcal{B}$ takes $(g,g^{a},g^{b},g^{c},Z)$ as input, where $a,b,c$ are chosen from $Z_p^*$ by the challenger $\mathcal{C}$ and $Z$ is also selected from $G$. The challenger $\mathcal{C}$ flips a coin to determine $x\in\{0,1\}$: if $x=1$, it sets $Z=g^{abc}$; otherwise, $Z$ is a random element of $G$. For the user $u_i$, the algorithm randomly chooses $s,\alpha_i,\nu$ from $Z_p^*$ and $g_1,g_2$ from the group $G$. It then announces the user's public and private key as $(g^{\alpha_i},(g^{\nu}g^{\alpha_i},g^{b},c))$ and sets $g_2=g$, $\eta=c$. Furthermore, it announces the search key for the user as $(g_1^{1/\eta})^{s}g_2^{ab}$.

(ii) Query Phase 1: The adversary $\mathcal{A}$ issues the following query. O.Query: upon receiving a query request on a keyword $W$ from the adversary $\mathcal{A}$, the algorithm $\mathcal{B}$ selects $r,\rho,\theta_j$ randomly from $Z_p^*$ and then computes $T_{f0}=g^{rb\rho}g_1^{\alpha_i b\rho}$, $T_{f1}=g^{ab\rho}$, $T_{f2}=g^{\alpha_i\theta_j b\rho}$, $T_{W1}=g^{H(W)}\big((g_1^{1/\eta})^{s}g_2^{ab}\big)^{\eta}$, and $T_{W2}=c/\rho$, where all other parameters are randomly chosen in a similar way as in the setup above.
At last, the algorithm $\mathcal{B}$ returns $T_f=(T_{f0},T_{f1},T_{f2},T_{W1},T_{W2})$ as the trapdoor for the keyword $W$ to $\mathcal{A}$.

(iii) Challenge: The adversary $\mathcal{A}$ selects two keywords $W_0^*$ and $W_1^*$ of equal length, both queried for the first time. The algorithm $\mathcal{B}$ then flips a coin to choose a random bit $x$ and computes the trapdoor for the keyword $W_x^*$ as $T_f^*=(T_{f0}^*,T_{f1}^*,T_{f2}^*,T_{W1}^*,T_{W2}^*)$, where $T_{f0}^*=g^{rb\rho}g_1^{\alpha_i b\rho}$, $T_{f1}^*=g^{ab\rho}$, $T_{f2}^*=g^{\alpha_i\theta_j b\rho}$, $T_{W1}^*=g^{H(W)}g_1^{s}Z$, and $T_{W2}^*=c/\rho$.

(iv) Query Phase 2: The adversary $\mathcal{A}$ continues querying polynomially many times as in Query Phase 1, with the restriction that neither $W_0^*$ nor $W_1^*$ may be queried again.

(v) Guess Phase: The adversary $\mathcal{A}$ returns a guess $x'\in\{0,1\}$ to $\mathcal{B}$. If $x'=x$, the adversary $\mathcal{A}$ wins the game and the algorithm $\mathcal{B}$ outputs 1; otherwise, $\mathcal{A}$ fails and $\mathcal{B}$ outputs 0.

(vi) Analysis: As shown above, we have $T_{W1}=g^{H(W)}\big((g_1^{1/\eta})^{s}g_2^{ab}\big)^{\eta}=g^{H(W)}\big((g_1^{1/c})^{s}g^{ab}\big)^{c}=g^{H(W)}g_1^{s}g^{abc}$. Comparing this with $T_{W1}^{*}$, we see that distinguishing the challenge trapdoor amounts to recognizing whether $Z=g^{abc}$. As a result, the adversary $\mathcal{A}$ can win the game with the same probability as winning the DBDH game; that is, $\mathrm{Adv}_{\mathcal{B}}^{\mathrm{DBDH}}(1^{\lambda})=\Pr[x'=x]=\epsilon$, which contradicts the DBDH assumption. In summary, our scheme is trapdoor-indistinguishable under the DBDH assumption.

Theorem 6 (ciphertext privacy and keyword privacy). The proposed scheme of Section 4 is IND-CK-CCA secure under the DBDH assumption.

Proof. Suppose there is a polynomial-time adversary $\mathcal{A}$ who can break our proposed scheme with a nonnegligible advantage $\epsilon$; then we can build an algorithm to solve the DBDH problem. It can be described as a game between a challenger $\mathcal{C}$ and an adversary $\mathcal{A}$.

Setup: The challenger $\mathcal{C}$ receives $(G,G_T,e,g,g^{x},g^{y},g^{z},Z)$ from the DBDH assumption, where $Z$ is either a randomly chosen element of $G_T$ or equals $e(g,g)^{xyz}$. The challenger $\mathcal{C}$ chooses $s,\upsilon\leftarrow_r Z_p^*$ and computes $g_2=g^{s}$ and $V=e(g,g)^{\upsilon}$, and it also sets $g_1=g^{x}$, $g'=g^{y}$. Then $(g,g_1,g_2,g',V)$ is sent to the adversary $\mathcal{A}$ as the public parameters.

Phase 1: The adversary $\mathcal{A}$ makes the following queries:

(i) O.Fog.KeyGen: $\mathcal{A}$ queries keys for the fog; the challenger $\mathcal{C}$ picks $\sigma_F,\varsigma_F\leftarrow_r Z_p^*$ at random and outputs $(Pk_F,Sk_F)=(\varsigma_F,g^{\varsigma_F})$.

(ii) O.KeyGen: $\mathcal{A}$ queries keys for the user $User_i$; the challenger $\mathcal{C}$ picks $\alpha_i,\beta_i,\delta\leftarrow_r Z_p^*$ at random and returns $(Pk_{User_i},Sk_{User_i})=(g^{\alpha_i},(t_0,t_1,t_2))$ to $\mathcal{A}$, where $t_0=g^{\upsilon}g_1^{\alpha_i}$, $t_1=g^{\beta_i}$, and $t_2=\delta$. For all the attributes owned by the user, the adversary $\mathcal{A}$ also queries the verification keys from O.Fog.KeyGen and the secret verification keys from O.KeyGen; $\mathcal{A}$ then obtains $vk_j=(d_{1j},d_{2j})$ and $svk_j=d_{3j}=g^{\alpha_i\theta_j}$, where $\theta_j=H_1(\sigma_F,at_j)$, $d_{1j}=g^{\theta_j}$, and $d_{2j}=V^{\theta_j}$ are computed by $\mathcal{C}$.

(iii) O.SearchKeyGen: After receiving a commitment $g_1^{1/\eta}$, the adversary $\mathcal{A}$ queries the search key, and the challenger $\mathcal{C}$ returns $S_i=(g_1^{1/\eta})^{s}g_2^{\beta_i\delta}$ to $\mathcal{A}$.

(iv) O.ReKey: $\mathcal{A}$ queries the transformed key for the user, and the challenger $\mathcal{C}$ computes $T_0=t_0^{t_2}$, $T_1=t_1^{t_2}$, and $T_{2j}=(d_{3j})^{t_2}$ and sends $csk_{u_i}=(T_0,T_1,\{T_{2j}\}_{at_j\in\Omega_{u_i}})$ to $\mathcal{A}$.

(v) O.Trapdoor: Upon a trapdoor query for a keyword $W$, $\mathcal{C}$ first randomly chooses $\rho,\eta\leftarrow_r Z_p^*$ and returns $T_{f0}=T_0^{\rho}$, $T_{f1}=T_1^{\rho}$, $T_{f2j}=T_{2j}^{\rho}$, $T_{W1}=g^{H(W)}S_i^{\eta}$, and $T_{W2}=\eta/\rho$ to $\mathcal{A}$.

Challenge: The adversary $\mathcal{A}$ gives an access policy $\mathbb{A}^*$, two equal-length plaintexts $m_0^*,m_1^*$, and two keywords $W_0^*,W_1^*$ to $\mathcal{C}$. Then $\mathcal{C}$ randomly picks $b_1,b_2\in\{0,1\}$ and constructs the ciphertext as $\big((m_{b_2}^{*}\cdot(\prod_{at_j\in\Omega_l}d_{2j})^{s_l},\,s_l/s'),\,1\le l\le n\big)$. It also constructs the index $C_{W1}^*=e(g,g^{z})^{H(W_{b_1}^{*})}\cdot Z$, $C_{W2}^*=g^{z}$, $C_{W3}^*=(g^{z})^{s}$. $\mathcal{C}$ sends the index $I_W^*=(C_{W1}^*,C_{W2}^*,C_{W3}^*)$ for the challenge keyword.
Phase 2: $\mathcal{A}$ can adaptively ask a polynomially bounded number of queries again as in Phase 1, with the restriction that the queried keyword $W\notin\{W_0^*,W_1^*\}$. $\mathcal{C}$ answers $\mathcal{A}$'s queries as in Phase 1.

Guess: $\mathcal{A}$ outputs guesses $b_1',b_2'$ of $b_1,b_2$. $\mathcal{C}$ outputs 0, guessing that $Z=e(g,g)^{xyz}$, if $b_1'=b_1$ and $b_2'=b_2$; otherwise, it outputs 1 to indicate that it believes $Z$ is a random element.

Analysis: Assume the adversary $\mathcal{A}$ has an advantage $\epsilon$ in attacking the scheme and $\mathcal{C}$ has an advantage $\epsilon'$ in the DBDH game. From the game shown above, $\epsilon'=\epsilon$ follows directly. The skeleton of the underlying DBDH experiment is sketched below.
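Both proofs are reductions to the standard DBDH experiment. The skeleton below sketches that experiment in the same toy exponent model used in Section 4; because the toy model exposes the exponents themselves, the illustrative adversary wins trivially, which a real pairing group of course prevents. All names here are ours, not part of the scheme.

```python
# Skeleton of the DBDH distinguishing experiment in the toy exponent model
# (g^a stored as a; e(g,g)^t stored as t). Hypothetical illustration only.
import secrets

p = 2**127 - 1
rand = lambda: secrets.randbelow(p - 1) + 1

def dbdh_experiment(adversary):
    # Challenger samples a, b, c and a hidden coin x; Z is e(g,g)^{abc} if x = 1,
    # otherwise a random GT element (all represented here by their exponents).
    a, b, c = rand(), rand(), rand()
    x = secrets.randbelow(2)
    Z = (a * b * c) % p if x == 1 else rand()
    return adversary(a, b, c, Z) == x

# Illustrative adversary: the toy model leaks the exponents a, b, c themselves
# (a real group only reveals g^a, g^b, g^c), so it distinguishes perfectly.
toy_adv = lambda a, b, c, Z: 1 if Z == (a * b * c) % p else 0

wins = sum(dbdh_experiment(toy_adv) for _ in range(100))
print(f"toy adversary wins {wins}/100 DBDH experiments")
```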
## 6. Efficiency Analysis

In this section, we analyse the efficiency of our system from both theoretical and experimental aspects. Table 1 describes the notations used in the following comparisons.

Table 1: Description of parameters.

| Parameter | Description |
| --- | --- |
| \|S\| | the size of the user's attribute set |
| k | the number of attributes associated with the user's private key |
| \|U\| | the size of the attribute universe |
| t | the number of attributes associated with the ciphertext |
| N | the number of files to be encrypted |
| m | the number of keywords used to generate indexes |
| \|G\|, \|G_T\| | the bit length of elements of the groups G and G_T |
| \|Z_p\| | the bit length of elements of Z_p |
| C_p | the computational cost of one pairing operation |
| C_e, C_eT | the computational cost of one exponentiation in G and G_T, respectively |

### 6.1. Storage and Transmission Cost Analysis

We compare our scheme with the related schemes VSKE [17], LHL [19], SYL [18], and ZSQM [20] over some important features, which are illustrated in Tables 2 and 3. Although many parameters are generated, stored, and transmitted throughout the whole process, we consider only the following parameters, which affect system efficiency the most:

(i) PK: The size of the public key PK measures how much storage each user needs in order to store the public keys of all entities and accomplish his computation. As shown in the second column of Table 2, it increases linearly with $|U|$ in [18, 19], which leads to a large storage demand for the user. This also makes it difficult to adopt new attributes in [18, 19], so these schemes cannot meet the demand for frequently updated attributes in rapidly changing IoT networks. Reference [20] is file-centered, so the size of PK is related to the number of all files being encrypted, which also causes a large storage requirement for each user. It is obvious that only our scheme and [17] have a small and constant storage requirement.

(ii) SK: The private key SK is always kept by the user himself, so the size of SK indicates the secure storage each user needs for his private key. The third column reveals that, in [17-20], $|SK|$ grows with the number of attributes with coefficients $k$, $2k$, $|S|$, and $|U|$, respectively, where $k<2k<|S|\ll|U|$. Since the storage of users or devices in IoT networks is limited, it is desirable that only a small and constant storage be needed for the keys. Our scheme achieves this goal: it is clearly better than the others, with a constant storage requirement of $2|G|+|Z_p|$ regardless of changes in the number of attributes.

(iii) CT: The size of the ciphertext CT measures the transmission cost for the user and the storage cost for the cloud server, because the ciphertext is computed by the user, transmitted to the cloud, and stored in the cloud data center. Reference [18] concentrates on user management, such as user updating and revocation, and its encryption and decryption processes are not described in detail, so the corresponding entry in the fourth column is empty. Considering that all five schemes store an access policy in the ciphertext, we ignore this part in the ciphertext size comparison. The CT sizes of our scheme and [19] both increase linearly with the number $k$ of attributes associated with the user's private key, which is consistent with real-world situations. Obviously, CT in [17] is much larger than in our scheme and [19], since $|U|$ is much larger than $k$; this means much more transmission overhead for the user and more storage for the cloud server. The ciphertext size in [20] is $(N+2)|G|+2|G_T|$ because that scheme is file-centered: all files owned by a user are encrypted at one time, which is inconvenient if only some of the files need to be updated or modified.

(iv) ID and TD: The size of the index ID indicates the transmission overhead for the user and the storage the cloud requires for the indexes used to retrieve related files. The size of the trapdoor TD shows the transmission cost for the data user, because the trapdoor has to be transmitted to the cloud to accomplish the test and search processes. We do not compute ID and TD for [19], as [19] concerns only attribute-based encryption and does not support searching on keywords. For simplicity, we consider generating the index and trapdoor for only one keyword here. The scheme in [18] costs the most for transmitting both ID and TD between the user and the cloud server. The fifth column shows that [17, 20] have an ID size similar to ours, which means only small and constant storage is required. For the trapdoor size, the scheme in [17] grows linearly with the user's number of attributes, whereas [20] and our scheme are constant; furthermore, ours requires less transmission overhead than [20].

Table 2: Storage and transmission comparisons.

| Scheme | PK | SK | CT | ID | TD |
| --- | --- | --- | --- | --- | --- |
| VSKE [17] | 6\|G\|+\|G_T\| | 2(\|S\|+1)\|G\|+\|Z_p\| | (\|U\|+k)(\|G\|+\|G_T\|) | 2\|G\|+\|G_T\| | (2\|S\|+3)\|G\| |
| SYL [18] | 3\|U\|\|G\|+\|G_T\| | (2\|U\|+1)\|G\|+2\|Z_p\| | - | (\|U\|+1)\|G\|+\|G_T\|+\|Z_p\| | (2\|U\|+1)\|G\|+2\|Z_p\| |
| LHL [19] | (\|U\|+4)\|G\| | 2k\|G\|+\|Z_p\| | (k+2)\|G\|+\|G_T\| | - | - |
| ZSQM [20] | (N+4)\|G\| | (\|S\|+3)\|G\|+\|Z_p\| | (N+2)\|G\|+2\|G_T\| | \|G_T\| | 4\|G\| |
| Ours | 5\|G\|+\|G_T\| | 2\|G\|+\|Z_p\| | 2k\|G_T\| | 2\|G\|+\|G_T\| | \|G\|+\|Z_p\| |

Table 3: Computation cost comparisons.

| Scheme | Keygen | Encrypt | Index | Trapdoor | Test | Decrypt |
| --- | --- | --- | --- | --- | --- | --- |
| VSKE [17] | (2\|S\|+4)C_e | (2m+1)C_e | (2\|S\|+4)C_e | (2\|U\|+k)C_e+kC_p | (2\|S\|+1)C_p+\|S\|C_eT | C_p+C_eT |
| SYL [18] | (2\|U\|+1)C_e+2C_eT | - | (\|U\|+1)C_e+C_eT | (2\|U\|+1)C_e | (\|U\|+1)C_p+C_eT | - |
| LHL [19] | 2(k+1)C_e | (k+2)C_e+C_p | - | - | - | (2k+3)C_p+C_eT |
| ZSQM [20] | 5C_e+NC_p | (N+3)C_e+C_p+C_eT | C_p | (N+2)C_e | 2C_e+(k+1)C_p | (Nk+1)C_p+C_e |
| Ours | 4C_e | kC_eT | 2(C_e+C_eT) | 2(\|S\|+1)C_e | 2C_p+C_e | C_e |

According to the above analyses, our scheme performs better in storage and transmission requirements compared with the other existing schemes.

### 6.2. Computational Cost Simulation and Analysis

In this section, we analyze the computational cost and compare it with that of the related works listed in Table 2. Since operations over $Z_p$ cost much less computational time than group operations and pairing operations, we consider only the latter two fundamental cryptographic operations; the sketch below turns the per-operation counts of Table 3 into rough time estimates.
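To make the counts concrete, the following sketch evaluates the Table 3 rows of our scheme and VSKE [17] as rough per-query time estimates. The per-operation timings are hypothetical placeholders, not measurements from this paper.

```python
# Rough time model derived from the operation counts in Table 3.
# The timings C_p, C_e, C_eT are hypothetical placeholders; replace them with
# benchmarks from the target device (e.g., measured with PBC).
C_p, C_e, C_eT = 12.0, 4.0, 1.5   # ms per pairing / exp. in G / exp. in GT (assumed)

def ours(S):
    # Ours: Keygen 4Ce, Index 2(Ce+CeT), Trapdoor 2(|S|+1)Ce, Test 2Cp+Ce, Decrypt Ce
    return 4*C_e + 2*(C_e + C_eT) + 2*(S + 1)*C_e + (2*C_p + C_e) + C_e

def vske(S, U, k):
    # VSKE [17]: Keygen (2|S|+4)Ce, Index (2|S|+4)Ce, Trapdoor (2|U|+k)Ce + kCp,
    # Test (2|S|+1)Cp + |S|CeT, Decrypt Cp + CeT
    return ((2*S + 4)*C_e + (2*S + 4)*C_e + (2*U + k)*C_e + k*C_p
            + (2*S + 1)*C_p + S*C_eT + C_p + C_eT)

for S in (10, 50, 100):   # growing attribute-set sizes (hypothetical |U| = 2|S|)
    print(f"|S|={S:3d}: ours ~{ours(S):7.1f} ms, VSKE ~{vske(S, U=2*S, k=S):8.1f} ms")
```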
The results are given in Table 3; it is obvious from the table that our scheme is significantly more efficient than the other schemes.

Using the pairing-based cryptography (PBC, https://crypto.stanford.edu/pbc) library, we perform our experiment in C on a computer with an Intel(R) Core(TM) i3-3220 CPU @ 3.30 GHz running Ubuntu 16.04.5 with 4.00 GB of system memory. This environment is used to perform Keygen and Test, which are executed by the trusted authority and the cloud server, both of which have great computational capability. In contrast, the users and devices in our system mostly have low computational capability; to simulate Encrypt, Index, Trapdoor, and Decrypt as performed by them, we execute the experiment on a client machine with an Intel Core Duo CPU running Ubuntu MATE 16.04 with 2 GB of system memory. To realize a 1024-bit security requirement, we use the Type A curve $E(F_q): y^2=x^3+x$ with a 512-bit parameter $q$, where the order $p$ of both $G$ and $G_T$ is 160 bits and $|G|=|G_T|=1024$ bits. For simplicity, we assume that the user generates an index for only one keyword in our simulation. The simulation results are exhibited in Figure 3.

Figure 3: Comparison of computational cost. (a) Time for key generation; (b) time for encryption; (c) time for index generation; (d) time for trapdoor generation; (e) time for test; (f) time for decryption.

Upon receiving a request from a user to join the system, TA generates the public and private keys for the user with only four exponentiations in our scheme. As Figure 3(a) clearly shows, the computational cost of our scheme is constant and the smallest among all the schemes. As the number of attributes increases, the key-generation cost grows in all of [17-19]; in [18] in particular, it climbs to thousands of milliseconds. The cost in [20] is also constant and similar to ours, because we assume the number of encrypted files $N=1$ for simplicity in our experiment. In reality, however, the scheme in [20] is file-centered, so its key-generation cost grows as the number of encrypted files increases, whereas in our scheme the key-generation cost is independent of the number of encrypted files.

After receiving the keys from TA, the user encrypts the files with his/her keys before uploading them to the cloud server. Reference [18] focuses on attribute-based encryption to manage users, and its data encryption and decryption phases are not described in detail; it is therefore not considered in our encryption simulation. Reference [20] is not considered in the encryption phase either, because that scheme encrypts all files of a user at once, while the others encrypt one file at a time. As shown in Figure 3(b), the encryption costs of our scheme and [19] increase linearly with the number of attributes, because the file is encrypted under the attributes embedded in the access policy. When the number of attributes is 50, our scheme needs 109.96 milliseconds and [19] needs 133.758 milliseconds, slightly more than ours. Reference [17] has the lowest computational cost because it uses a symmetric encryption method, and the cost shown in Figure 3(b) is only for the access control part of the encryption phase.

Next is querying on keywords, which involves the three algorithms Index, Trapdoor, and Test. Their computational costs are exhibited in Figures 3(c), 3(d), and 3(e), respectively. Because [19] has no capability for keyword-based search, it is not considered in our comparison for these three phases.
References [17, 18] show an obviously large increase in computational burden as the number of attributes grows. When the number of attributes reaches 100, almost 15,000 milliseconds are required to complete these three algorithms for a keyword query in the two schemes, which causes a long network delay. Our scheme has a computational cost similar to that of the scheme in [20], which was proposed to speed up queries in industrial IoT networks and has been shown to be efficient for fast queries.

Last, the computational cost of the decryption phase is shown in Figure 3(f). The efficiency of the decryption algorithm is very important because one keyword is usually associated with many different files, and decrypting all the returned files in a short time is a key issue in recent research on IoT networks. As shown in Figure 3(f), our scheme satisfies this demand, requiring less than 13 milliseconds regardless of the number of attributes. The scheme in [17] has a slightly higher cost than ours. In contrast, the cost of the other two schemes grows enormously with the number of attributes, which causes a very large computational burden given the large number of returned files and the user's limited computation capability.

In summary, our proposed scheme is efficient in storage, transmission, and computational cost, which indicates that it is suitable for healthcare-related IoT networks.
## 7. Related Work

### 7.1. Healthcare Related IoT Security

Security is one of the most important issues in healthcare-related IoT networks. This is not only because IoT devices themselves are vulnerable and can easily be attacked or physically destroyed, but also because the data collected and processed in IoT networks are highly sensitive and tightly related to our lives. Johns Hopkins University developed a hospital-centralized patient monitoring system called MEDiSN [22], but this system does not implement secure communication, in particular data integrity and user authentication [23]. Similar to MEDiSN, other systems such as CodeBlue [24] and MobiCare [25] are implemented in the infrastructure layer without considering real communication security.

To achieve real communication security, encryption operations are essential. However, most existing encryption schemes demand complex computation and incur high processing overhead; how to overcome these limitations is an important issue. In [26, 27], the authors present a secure and efficient authentication and authorization framework for healthcare-related IoT networks, but it requires high processing power. In [28], the authors implement an IoT-based health prescription assistant and achieve user authentication and access control in their system; however, data confidentiality during the transmission process is not considered [29]. Although they reduced some communication and computation latency in their small-scale data experiment, this is still not sufficient for real-world networks with very large amounts of data [30].

### 7.2. ABE in Cloud Computing Paradigm

As an extension of identity-based encryption, attribute-based encryption was first introduced by Sahai and Waters [13].
It has been applied in many encryption schemes to achieve fine-grained access control over encrypted data. In particular, ABE was extended by Goyal et al. [31] into two complementary flavors: key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE). KP-ABE uses attributes to describe the ciphertexts, with policies over these attributes associated with users' keys, while in CP-ABE the roles are reversed. CP-ABE makes it possible for users to access and decrypt the encrypted data only if their attributes match the access structure.

Like the originally proposed ABE scheme in [13], the classic architecture of ABE access control schemes employs a single central authority that takes charge of enrolling and updating all attributes and managing keys for all entities. In such centralized ABE frameworks, the most difficult but important task is achieving efficient revocation of users and attributes. In [32], the authors attach an expiration time to each attribute to support revocation, but this turns out to have issues with backward and forward security. The authors of [33, 34] overcome these issues by adopting the concept of proxy-based re-encryption. In addition, lazy revocation [33, 35] and revocable-storage ABE [36] have been designed to achieve revocation and keep messages from unauthorized users.

As IoT networks expand, the centralized ABE paradigm with only a single authority suffers a serious efficiency drawback due to the very large amount of data. Therefore, multiauthority ABE was introduced in [37], in which the central authority assigns each user a global identifier as a unique ID, aiming to distinguish users whose attributes are managed by independent authorities. Further works such as [38-40] improve this scheme by removing the user's consistent GID to avoid privacy leakage and support collusion resistance; this paradigm is called decentralized ABE.

Whether in a centralized or decentralized ABE paradigm, a user may not withstand financial temptation and may share his attribute keys with other users. To avoid such leakage of decryption privileges, the works in [41, 42] provide access control schemes with traceability, where a user who leaks his decryption key to someone else can be traced and revoked by the system. As people become more concerned about personal privacy, the access policy itself can be regarded as sensitive information that needs to be protected from unauthorized users. The work in [43] achieves anonymity by designing three protocols, together with homomorphic encryption and scrambled circuit evaluation, to protect both the policies and the credentials.

### 7.3. Searchable Encryption with ABE in Cloud Computing Paradigm

Searchable encryption was first proposed in [44] and has been widely researched and used; it indicated a new direction for searching over ciphertexts in cloud computing [45]. Both symmetric encryption with keyword search (SESK) and public key encryption with keyword search (PESK) have gained a lot of attention and have been developed to support various functions, as in [18, 46-51]. However, these schemes cannot achieve fine-grained access control over ciphertexts.

Attribute-based keyword search (ABKS) was proposed in [52], in which the cloud server checks, via a signature built from the user's attributes, whether the user is able to decrypt the requested ciphertext before searching it. However, this scheme cannot guarantee the security of keywords.
Some other works have proposed further ABKS-based schemes supporting specific functions such as checkability [19], fuzzy keyword search [53], revocation [54], and verifiability [55]. However, most of these works require the users to perform complex computations, such as pairings and exponentiations, many times, which is impractical given the user's limited computation ability. Therefore, how to offload the heavy computational burden and reduce the number of complex operations without weakening the security guarantees is currently the most important challenge.
## 8. Conclusion

In this paper, we design a keyword searchable encryption scheme with fine-grained access control for our proposed healthcare-related IoT-fog-cloud framework. Through our design, users achieve fast and efficient service by offloading computation and storage to the fog and the cloud; in particular, the data user needs only one exponentiation to retrieve the message. In our scheme, the fogs are capable of helping the trusted authority manage the users and their attributes by authorizing their query keys.
In addition, our scheme is very efficient because only authorized users can download the keyword-matched parts of the ciphertexts, since unauthorized searches and unauthorized users are rejected. Finally, our scheme is proved IND-CK-CCA secure and trapdoor-indistinguishable. We also show, through theoretical analysis and experimental evaluation, that our scheme requires less storage and transmission and much less computation. We assume in this paper that the fogs and the cloud do not collude with each other; in future work, we will consider achieving collusion resistance in our proposed IoT-Fog-Cloud system. We are also interested in more efficient methods for user updates and attribute replacement. How to improve the efficiency of the search process by designing better structures for the indexes and trapdoors of the keywords on the cloud server is also part of our future work.

---
*Source: 1019767-2019-05-23.xml*
2019
# Evaluating the Performance of Oil and Gas Companies by an Extended Balanced Scorecard and the Hesitant Fuzzy Best-Worst Method **Authors:** Amir Karbassi Yazdi; Amir Mehdiabadi; Thomas Hanne; Amir Homayoun Sarfaraz; Fatemeh Tabatabaei Yazdian **Journal:** Mathematical Problems in Engineering (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1019779 --- ## Abstract The aim of this research is to find and prioritize a multicriteria performance measurement based on the balanced scorecard (BSC) for oil and gas (O & G) companies in an uncertain environment using the hesitant fuzzy best-worst method (HFBWM). The O & G industry has a key role in the economies of many countries. Hence, the evaluation of the performance of the O & G industry plays an important role. We utilize BSC for this purpose, which usually considers the financial, customer-oriented, internal, learning-oriented, and growth perspectives. In our research, the social responsibility perspective will be added. After finding multiple performance measurements, many companies cannot implement all of them because of limited resources. Therefore, multicriteria decision-making (MCDM) methods can be applied for prioritizing and selecting the most important measurement criteria. One of the MCDM methods is the best-worst method (BWM). This approach has several advantages compared to other MCDM methods. Due to uncertainties in decision-making, a suitable method for decision-making in an uncertain environment is necessary. Hesitant fuzzy approaches are applied as one such uncertainty-based method in this research. Our results indicate that among the five perspectives of BSC that we considered, the customer and internal process perspectives are the most important ones, and the cost of the R & D indicator is the most important subcriterion among these. --- ## Body ## 1. Introduction One of the main concerns of organizations is achieving a comprehensive, reliable, and flexible performance appraisal method to help them obtain accurate and sufficient information about their position and learn from past mistakes by looking to the future [1]. New approaches to organization and management (customer orientation, quality orientation, virtualization, etc.) also emphasize the double necessity of the concept and subject of evaluation. Accurate, comprehensive, and purposeful monitoring and evaluation are considered one of the most important facilitators of growth, dynamism, and excellence in the field of management [2]. Evaluating the performance of organizations has always been one of the main concerns of managers and their officials. Performance calculation helps organizations become more transparent [3]. In fact, a performance measurement system includes a diverse set of performance appraisal indicators that relate to organizational strategies and provide information about all components of the supply chain ([4], 398; [5], 5). One of the most popular and effective performance appraisal systems is the balanced scorecard (BSC). BSC is a comprehensive, complete, and accurate performance appraisal system for planning and monitoring an organization’s progress toward achieving its goals ([6], 138; [7], 360; [8], 73).Over the years of research on performance appraisal, researchers have presented numerous papers on BSC methods and hybrid models. Hegazy et al. [9] provide a detailed framework for supporting audit firms with BSC. 
The results show that the development and application of the proposed BSC measures improve the performance of audit firms: auditing firms gain a better understanding of the various performance factors and strategies and thus create a competitive advantage. Aujrrapongpan et al. [10] evaluated the performance of social hospitals in Thailand with the BSC approach; the results of this study are presented as a five-year comparison of performance evaluation indicators. Laury et al. [11] analyze the strategic planning and strategic performance of companies with BSC in a review article. Nazari-Shirkouhi et al. [12] evaluated the performance of an educational institution with an integrated IPA-BSC approach. Tuan [13] addressed the impact of BSC on performance in Vietnamese bank branches. Akbarei et al. [14] used a combined AHP-TOPSIS-BSC approach to evaluate the performance of bank branches and provide ways to improve it. Karbassi Yazdi et al. [15] developed performance criteria for export agencies with the DEA approach. Karbassi Yazdi et al. [16] also developed an analytical vision of performance for a company using a combination of fuzzy clustering and DEA.

In the past, performance measurement was mostly based on financial indicators [17, 18]. However, Kaplan and Norton [19] pointed out that these indicators are not solely responsible for performance and that various further factors have an influence on it. Consequently, the BSC was suggested. This model consists of four perspectives: finance, customers, internal processes, and learning and growth. In order to get to know the situation of their company better and to find out its strengths and weaknesses, managers may use BSC, which introduces a comprehensive model for evaluating the company according to the mentioned four perspectives and relevant indicators [20, 21]. As mentioned above, traditional BSC has four perspectives, but Kaplan and Norton [22, 23] and Kaplan et al. [24] suggested that companies could add other perspectives to BSC or remove some of the suggested ones. To create new perspectives, the most crucial performance measurements should be considered. One of the most important obligations of oil and gas companies is to pay more attention to social responsibility performance measurement, such as the protection of the environment.

After having extracted performance measurement indicators, companies should implement measures for improving these indicators. However, companies frequently do not have sufficient budget, time, or staff to implement all of these measures. Therefore, the performance measurement indicators should be prioritized in order to focus on the most important ones. There are many methods for prioritizing items that are characterized by multiple criteria, especially multiple criteria decision-making (MCDM) methods. These methods can be classified into different categories. Methods based on a finite set of alternatives (or a decision matrix) are usually denoted as multiple attribute decision-making (MADM) [25]. Suitable methods usually involve either a direct evaluation of alternatives (for instance, based on assessing a utility function or some other scalarizing function) or a pairwise comparison of alternatives (such as the analytical hierarchy process (AHP) and the family of outranking methods [26]). In this paper, the best-worst method (BWM) is applied, which belongs to the pairwise comparison methods. This method has several benefits compared to other methods; a minimal sketch of the crisp linear BWM weight computation is given below.
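For illustration, the sketch below computes criterion weights with the crisp linear BWM; the best-to-others and others-to-worst comparison vectors are hypothetical example data, and the HFBWM used in this paper replaces these crisp comparisons with hesitant fuzzy numbers.

```python
# Crisp linear BWM: min xi s.t. |w_B - a_Bj * w_j| <= xi and |w_j - a_jW * w_W| <= xi
# for all j, sum(w) = 1, w >= 0. The vectors a_B, a_W are hypothetical example data.
import numpy as np
from scipy.optimize import linprog

a_B = np.array([2.0, 1.0, 4.0, 3.0, 8.0])   # best criterion over the others (1..9)
a_W = np.array([4.0, 8.0, 2.0, 3.0, 1.0])   # the others over the worst criterion
n = len(a_B)
best, worst = int(np.argmin(a_B)), int(np.argmin(a_W))

def abs_le_xi(coeffs):
    """Encode |coeffs . w| <= xi as two rows of A_ub @ (w, xi) <= 0."""
    return [np.append(coeffs, -1.0), np.append(-coeffs, -1.0)]

rows = []
for j in range(n):
    if j != best:                            # |w_best - a_Bj * w_j| <= xi
        e = np.zeros(n); e[best] = 1.0; e[j] -= a_B[j]
        rows += abs_le_xi(e)
    if j != worst:                           # |w_j - a_jW * w_worst| <= xi
        e = np.zeros(n); e[j] = 1.0; e[worst] -= a_W[j]
        rows += abs_le_xi(e)

c = np.append(np.zeros(n), 1.0)              # objective: minimize xi
res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
              A_eq=np.append(np.ones(n), 0.0).reshape(1, -1), b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method="highs")
print("weights:", np.round(res.x[:n], 4), " consistency xi* =", round(res.x[-1], 4))
```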
In particular, it needs less comparison data, and its results are more reliable than those of other methods [27, 28]. In our permanently changing world, decision-makers (DMs) cannot always judge accurately. For this reason, DMs need a tool that helps them identify their preferences. Fuzzy sets are an approach for dealing with uncertainty, and methods using fuzzy sets can better support decisions in an uncertain environment. Hesitant fuzzy sets (HFS) (and hesitant fuzzy numbers, HFN) are one of the respective approaches. In this paper, we suggest a modification of BWM based on HFS that can be used for MADM problems under uncertainty and helps make a decision. The respective approach is denoted as the hesitant fuzzy best-worst method (HFBWM).

The oil and gas industry is the most important industry in Iran, and most of the budget of Iran depends on the revenues from the sale of oil. By increasing oil sales, a country can create more job opportunities, decrease the Gini coefficient (moving toward a more equal income or wealth distribution in society), increase financial investments in infrastructure, and so on; therefore, this industry plays an important role. For these reasons, evaluating the performance of the oil and gas industry helps managers make better decisions for improving the performance indicators.

The research questions of this study concern the indicators of BSC in the oil and gas industry and which perspectives and indicators have the highest priority. The contribution of this paper is to prioritize the performance measurements of oil and gas companies by HFBWM. As this method is rather new, only a few papers have been published about it so far in connection with BSC and extended versions of BSC. Another contribution of this study is applying a combination of BWM and BSC to the oil and gas industry. Also, a social responsibility perspective is added to the other aspects considered in this model. The final contribution is the use of real data, gathered through questionnaires filled in by experts in this industry.

Performance management is one of the crucial issues among companies, especially in the O & G industry, due to its strong impact on various fields such as economics, healthcare, education, and infrastructure. Therefore, performance management is essential for O & G companies to design road maps to realize their vision.

This paper consists of the following sections: after the introductory section, a literature review of BSC is presented in Section 2. In Section 3, the best-worst method is described. Section 4 deals with the research methodology. Data analysis and results are illustrated in Section 5. The final section presents the conclusions.

## 2. Literature Review

### 2.1. Balanced Scorecard (BSC) and Multiattribute Decision-Making (MADM)

BSC is a tool for translating the strategy of organizations into a common language, which can be understood by the staff of a company. This model helps managers and staff to find out where their company stands, how far it deviates from the predetermined indicators (benchmark values), why they do not achieve them, and how to improve them. This method can be used to evaluate all aspects of companies and uses cause-and-effect relationships for the considered performance measurements. After the introduction of this model, various directions of research were investigated. In the following, we discuss research based on a combination of BSC and MADM methods.

Yazdi et al.
[25] evaluated the performance of Colombian bank branches using a combined approach of BSC, SWARA, and WASPAS. Heydariyeh et al. [29] combined the BSC model and fuzzy DEMATEL (DEcision MAking Trial and Evaluation Laboratory) to present a new approach to integrated strategy map analysis. Ajripour et al. [30] developed a model for managing the performance of organizations using the BSC, PROMETHEE, ELECTRE, and TOPSIS methods. Ozdogan et al. [31] provide a model for evaluating the performance of municipal services with a combined approach of multiple decision methods. Varmazyar et al. [32] developed a novel hybrid MCDM model for the performance evaluation of research and technology organizations based on the BSC approach.

Dinçer et al. [33] illustrated a model of BSC in the European energy industry using a combination of fuzzy MCDM methods. They combined the quality function deployment (QFD) technique with fuzzy DEMATEL, fuzzy AHP, and the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The result indicates which policy should be selected and which policies should be changed accordingly. Deng et al. [34] studied the combination of DEMATEL, the analytic network process (ANP), a modified VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) approach, and BSC for evaluating Taiwanese companies. The result pointed out that the customer perspective is the most important one and that these companies should focus on customer-oriented performance measurement. Lu et al. [35] combined DEMATEL, ANP, and modified VIKOR in the context of a sustainability-oriented BSC for evaluating international airports. First, the relationships between key performance indicators are determined by DEMATEL. Then, DEMATEL-ANP is used for finding weights. Finally, the gap between the current situation and the ideal situation is found by the modified VIKOR method. The result demonstrated that the airport's image is the most important factor; the best airport was also identified. Dinçer et al. [36] presented a combination of the BSC model with fuzzy DEMATEL, fuzzy ANP, and the MOORA (Multiobjective Optimization Method by Ratio Analysis) method. Fuzzy DEMATEL and fuzzy ANP are used to find weights, and then nine airline companies are prioritized by the MOORA method. The result showed which airline companies perform well. Zhao and Li [37] implemented BSC, fuzzy Delphi, ANP, and fuzzy TOPSIS for thermal power enterprises. First, the performance measurements of BSC are determined by fuzzy Delphi. Then, weights for the model are obtained by ANP, and companies are ranked by fuzzy TOPSIS using these performance measurements. The result is a model for the evaluation of companies using hybrid methods. Meena and Thakkar [38] illustrated a model based on the combination of ISM and ANP for improving the BSC method for finding performance measurements: the ISM method reveals the relationships among the performance measurements, and ANP is applied to find their priorities. Quezada and López-Ospina [39] depicted a method for drawing a strategy map of BSC by using an MCDM method (AHP) and linear programming (LP). The aim of using the AHP and LP methods is to minimize the number of selected relationships while maximizing their total importance. The result shows the trade-off between these aspects when designing a strategy map of BSC. Rabbani et al.
They considered five perspectives: economic, environmental, social, internal process, and learning and growth. In this research, the criteria and subcriteria of BSC are defined first. Then, weights for them are obtained by the ANP method. Fuzzy COPRAS (COmplex PRoportional ASsessment of alternatives) is used for finding the best strategy. Shafiee et al. [41] designed a model for evaluating SCM performance by data envelopment analysis (DEA), DEMATEL, and BSC. First, based on the four BSC perspectives, performance indicators were created. Then, the relationships among the performance measurements were determined by DEMATEL. Finally, Iranian food companies were evaluated in a case study using the network DEA and BSC methods.

Rezaei et al. [42] measured port performance using the best-worst method. They stated that the costs and times of transportation in the supply chain are the most important factors. Galankashi et al. [43] discussed a hybrid model of BSC and fuzzy AHP for supplier selection in the automobile industry. First, a BSC model was designed for each supplier, and after that, the performance measurements of each supplier were ranked by fuzzy AHP to find the best supplier. Lin [44] studied the implementation of BSC and a closed-loop ANP together with a fuzzy Delphi method in the higher education sector, using fuzzy Delphi and ANP to find relationships in the closed-loop structure. Abo-Hamad and Arisha [45] illustrated a model of BSC with the analytical hierarchy process (AHP) and a simulation for measuring the performance of emergency departments. The result indicated how one could improve the efficiency of the processes by using these methods. Bhattacharya et al. [46] measured the performance of a green supply chain by using fuzzy ANP and a balanced scorecard. Fuzzy ANP was used to rank the BSC perspectives and performance measurements. The result showed how a supplier’s performance can be aligned with industry standards. Khairalla et al. [47] depicted a model for an outsourcing strategy based on ANP and BSC. In this research, after finding performance measurements, these were ranked by ANP for identifying the best strategies. After that, sensitivity analysis was used to increase the robustness of the model. Hsu et al. [48] implemented fuzzy Delphi, ANP, and a sustainable BSC for the semiconductor industry. They used a revised BSC with sustainability, stakeholder, internal business process, and learning and growth perspectives. Then, performance measurements were extracted by fuzzy Delphi. Finally, perspectives and performance measurements were ranked by ANP. The result indicated that the sustainability perspective and some performance measurements had high priority.

Bazrkar et al. [49] depicted a model for customer satisfaction with a combination of BSC and Lean Six Sigma (LSS). First, BSC perspectives and indicators are extracted. Then, data envelopment analysis (DEA) is implemented for selecting indicators. Finally, the Define, Measure, Analyze, Improve, and Control (DMAIC) cycle is applied for improving the quality of the process. The results pointed out that sigma levels increased and process times decreased. Wang and Chien [50] illustrated a hybrid model of BSC and DEA for Taiwanese companies. First, the performance measurements of the BSC model are set as the inputs and outputs of the model.
Then, companies’ performances are determined by DEA. Wu and Liao [51] used BSC and DEA for evaluating airline companies. They extracted inputs and outputs from the model based on BSC, and then 38 airline companies were evaluated by the DEA method.

Tizroo et al. [52] designed a model of BSC and Interpretive Structural Modeling (ISM) in the steel industry. They found relationships between the criteria and subcriteria of BSC. The results indicated how strategies for this industry can be formulated based on ISM and BSC and how this approach helps stakeholders make better decisions. Lin et al. [53] used a hierarchical BSC with fuzzy linguistics in hospitals. After determining the performance measurements of BSC, fuzzy linguistics is applied for developing the model. The result indicated how management might use a new approach for the design and implementation of a new strategy in their organizations.

Kaviani et al. [54] used gray numbers while considering hybrid MADM methods for ranking suppliers in the O & G industry. Yazdi et al. [55] used hybrid MADM methods based on Z-numbers for evaluating suppliers in the O & G industry.

### 2.2. Hesitant Fuzzy Sets (HFSs) and MADM

Various studies show the importance and reliability of HFS for decision-making under uncertainty and considering the complexity of organizations. Alcantud et al. [56] introduced hesitant fuzzy sets as a new method. Tüysüz and Şimşek [57] used an AHP method based on hesitant fuzzy sets to evaluate the performance of a shipping company in Turkey. Divsalar et al. [58] developed the DANP technique using interval-valued hesitant fuzzy elements (IVHFEs). Zhai [59] proposed the hesitant fuzzy linguistic preference relations (HLPRs) method for the performance evaluation of wireless sensor networks. The findings shed new light on the selection, performance evaluation, and promotion of wireless sensor networks. Pérez-Domínguez et al. [60] focused on performance appraisal in a manufacturing company using a combination of TOPSIS and hesitant fuzzy linguistic term set (HFLTS) models. Using this method, they presented a model for lean manufacturing (LM). Liao et al. [61] used the hesitant fuzzy linguistic BWM method to evaluate the performance of hospitals. They state that the proposed method is more effective than the hesitant fuzzy AHP method. Liu et al. [62] used a combination of probabilistic hesitant fuzzy elements (PHFEs) and MADM methods for the selection of venture capital investment projects. Candan [63] focused on the efficiency and performance of economic research in 15 OECD countries using bibliographic elements for the period 2010–2017, considering seven criteria that are thought to affect the efficiency and performance of economic research, and applied the hesitant fuzzy AHP and the OCRA method. Gong et al. [64] presented a new integrated approach using LHF-TODIM and BWM for e-learning website evaluation and selection. The results show that the proposed method is more effective. Meng et al. [65] introduced a new model using a combination of dual hesitant fuzzy preference relations (DHFPRs) and provided a new group decision-making method. Lin et al. [66] used a combination of the probabilistic hesitant fuzzy best-worst method (PHFBW) and MULTIMOORA for prioritizing distributed stream processing frameworks for IoT applications.

### 2.3. Our Proposed Method
Based on a categorization of the previous research, methods for this subject can be divided into hybrid MADM methods, pairwise comparison methods, DEA methods, and soft computing methods. Some of these methods are based on fuzzy numbers. One of the previous studies [42] used BWM for BSC, and several others used fuzzy numbers. In this paper, however, the traditional BSC is first transferred into a revised BSC, and BWM is then combined with hesitant fuzzy sets. Such a combination has not appeared in previous research. Table 1 summarizes the methods used in previous studies.

Table 1: Previous studies on BSC and other methods. The table cross-tabulates the reviewed studies (Dinçer et al. [33], Rezaei et al. [42], Deng et al. [34], Lu et al. [35], Dinçer et al. [36], Bazrkar et al. [49], Tizroo et al. [52], Galankashi et al. [43], Lin [44], Wang and Chien [50], Zhao and Li [37], Abo-Hamad and Arisha [45], Bhattacharya et al. [46], Meena and Thakkar [38], Quezada and López-Ospina [39], Rabbani et al. [40], Khairalla et al. [47], Wu and Liao [51], Shafiee et al. [41], Lin et al. [53], and Hsu et al. [48]) against the methods they combine with BSC: fuzzy linguistics, fuzzy COPRAS, LP, AHP, fuzzy Delphi, ISM, DEA, LSS, MOORA, fuzzy ANP, VIKOR, ANP, DEMATEL, BWM, fuzzy TOPSIS, fuzzy AHP, fuzzy DEMATEL, and QFD.

According to Table 1 and the above review, many papers have been published about BSC, and it is a very popular topic among researchers. In this research, we suggest a new MADM method based on HFS in combination with BSC, which helps to design a road map for supporting decision-makers and addresses some weaknesses of previous studies.
## 3. Multicriteria Decision-Making in an Uncertain Environment

### 3.1. The Best-Worst Method

Many MCDM methods help decision-makers make better decisions. One of the newer approaches in this area is the best-worst method (BWM), introduced by Rezaei [27]. This model belongs to the methods based on a finite set of alternatives (also denoted as multiple attribute decision-making, or MADM) and uses pairwise comparisons for finding the weights of the criteria. The method compares the best and the worst criterion with all other criteria and is based on a nonlinear minimax model that minimizes the maximum absolute difference between the weight ratios and the stated preferences. For finding the weights by BWM, the following steps are needed:

Step 1. The criteria and alternatives of the model are assumed to be specified. The criteria are denoted as $C = \{c_1, c_2, \ldots, c_n\}$.

Step 2. The best and the worst criteria are determined (e.g., criteria to be maximized or minimized).

Step 3. Determine the relative preferences of the best criterion (denoted as $B$) over all other criteria on a 1–9 scale. The preferences of the best criterion are written as $A_B = (a_{B1}, a_{B2}, \ldots, a_{Bn})$; obviously, $a_{BB} = 1$.

Step 4. Determine the relative preferences of all other criteria over the worst criterion (denoted as $W$) on a 1–9 scale. These preferences are written as $A_W = (a_{W1}, a_{W2}, \ldots, a_{Wn})$; obviously, $a_{WW} = 1$.

Step 5. The final weights $w_1^*, w_2^*, \ldots, w_n^*$ are obtained from the following optimization problem: the maximum of the absolute differences $|w_B / w_j - a_{Bj}|$ and $|w_j / w_W - a_{Wj}|$ is minimized over all $j$, so that the weight ratios correspond as closely as possible to the relative preferences:

$$
\min_{w} \max_{j} \left\{ \left| \frac{w_B}{w_j} - a_{Bj} \right|, \left| \frac{w_j}{w_W} - a_{Wj} \right| \right\}
\quad \text{s.t.} \quad \sum_{j} w_j = 1, \quad w_j \ge 0 \text{ for all } j. \tag{1}
$$

This model can be rewritten as follows:

$$
\begin{aligned}
\min \; & \xi \\
\text{s.t.} \; & \left| \frac{w_B}{w_j} - a_{Bj} \right| \le \xi \quad \text{for all } j, \\
& \left| \frac{w_j}{w_W} - a_{Wj} \right| \le \xi \quad \text{for all } j, \\
& \sum_{j} w_j = 1, \quad w_j \ge 0 \quad \text{for all } j.
\end{aligned} \tag{2}
$$
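To make the computation concrete, the following is a minimal Python sketch of model (2). It is not the authors' implementation (the paper solves the model with LINGO; see Section 4.4): the function name `bwm_weights`, the starting point, and the small positive lower bound on the weights are our own illustrative choices. The absolute-value constraints are split into pairs of smooth inequalities for the SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

def bwm_weights(a_b, a_w, best, worst):
    """Solve BWM model (2): min xi s.t. |w_B/w_j - a_Bj| <= xi,
    |w_j/w_W - a_Wj| <= xi, sum(w) = 1, w >= 0."""
    n = len(a_b)
    x0 = np.r_[np.full(n, 1.0 / n), 1.0]  # variables: n weights, then xi
    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
    for j in range(n):
        # xi - (w_best/w_j - a_b[j]) >= 0 and xi + (w_best/w_j - a_b[j]) >= 0
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - (x[best] / x[j] - a_b[j])})
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] + (x[best] / x[j] - a_b[j])})
        # the same pair for the others-to-worst ratios
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - (x[j] / x[worst] - a_w[j])})
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] + (x[j] / x[worst] - a_w[j])})
    bounds = [(1e-6, 1.0)] * n + [(0.0, None)]  # strictly positive weights avoid division by zero
    res = minimize(lambda x: x[n], x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]  # optimal weights and the consistency indicator xi

# Hypothetical example with four criteria, best = criterion 0, worst = criterion 3:
w, xi = bwm_weights(a_b=[1, 2, 4, 8], a_w=[8, 4, 2, 1], best=0, worst=3)
```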
### 3.2. Hesitant Fuzzy Sets

Let $X$ be a reference set. A hesitant fuzzy set on $X$ is defined by a function $h$ on $X$ that returns a subset of $[0, 1]$. We usually consider cases where $h(x)$ is a finite set [67]. Special cases of $h(x)$ are as follows:

$$
\begin{aligned}
\text{empty set:} \quad & h(x) = \{0\} \quad \text{for all } x \in X, \\
\text{full set:} \quad & h(x) = \{1\} \quad \text{for all } x \in X, \\
\text{complete ignorance:} \quad & h(x) = [0, 1] \quad \text{for all } x \in X, \\
\text{nonsense set:} \quad & h(x) = \emptyset.
\end{aligned} \tag{3}
$$

The memberships $\mu(x) = 0$ and $\mu(x) = 1$ correspond to the empty and full sets, which should not be confused with complete ignorance or the nonsense set.

For a fuzzy set with membership function $\mu$ on the reference set $[0, 1]$, hesitant fuzzy sets can be used to represent the inverse of $\mu$, that is, the hesitant fuzzy set $h(x)$ defined by $h(x) := \mu^{-1}(x)$ or, equivalently,

$$
h(x) := \{ \alpha \mid \alpha \in X, \; \mu(\alpha) = x \}. \tag{4}
$$

Hesitant fuzzy sets can also be constructed from several fuzzy sets. Consider a set of $N$ membership functions $M = \{\mu_1, \ldots, \mu_N\}$. The hesitant fuzzy set associated with $M$, denoted $h_M$, is defined as

$$
h_M(x) = \bigcup_{\mu \in M} \{ \mu(x) \}. \tag{5}
$$

This concept can be used in group decision-making when experts or DMs evaluate a set of alternatives. In this case, $M$ represents the preferences or opinions of the individual DMs for each alternative, and $h_M$ collects the opinions of all of them.

Assume that $h$, $h_1$, and $h_2$ are HFSs. Typical operations on HFSs are as follows:

$$
\begin{aligned}
\text{lower bound:} \quad & h^-(x) = \min h(x), \\
\text{upper bound:} \quad & h^+(x) = \max h(x), \\
\alpha\text{-upper bound:} \quad & h^+_\alpha(x) = \{ t \in h(x) \mid t \ge \alpha \}, \\
\alpha\text{-lower bound:} \quad & h^-_\alpha(x) = \{ t \in h(x) \mid t \le \alpha \}, \\
\text{complement:} \quad & h^c(x) = \bigcup_{\gamma \in h(x)} \{ 1 - \gamma \}, \\
\text{union:} \quad & (h_1 \cup h_2)(x) = \{ t \in h_1(x) \cup h_2(x) \mid t \ge \max(h_1^-(x), h_2^-(x)) \}, \\
& \text{or, equivalently, } (h_1 \cup h_2)(x) = (h_1(x) \cup h_2(x))^+_\alpha \text{ for } \alpha = \max(h_1^-(x), h_2^-(x)), \\
\text{intersection:} \quad & (h_1 \cap h_2)(x) = \{ t \in h_1(x) \cup h_2(x) \mid t \le \min(h_1^+(x), h_2^+(x)) \}, \\
& \text{or, equivalently, } (h_1 \cap h_2)(x) = (h_1(x) \cup h_2(x))^-_\alpha \text{ for } \alpha = \min(h_1^+(x), h_2^+(x)).
\end{aligned} \tag{6}
$$

The idea behind this definition is as follows: for all $x$, the lower bound of $h_1 \cup h_2$ is the larger of the two lower bounds $h_1^-(x)$ and $h_2^-(x)$. The definition of the intersection follows a similar consideration.

According to Torra and Narukawa [67], the complement is involutive:

$$
(h^c)^c = h. \tag{7}
$$

An HFS is a kind of type-2 fuzzy set [67]. For an HFS $h$, a corresponding type-2 fuzzy set can be defined as follows (note the typo in Torra and Narukawa [67]):

$$
\mu_2(x)(y) =
\begin{cases}
1, & \text{if } y \in h(x), \\
0, & \text{if } y \notin h(x).
\end{cases} \tag{8}
$$

There are various methods for transforming a hesitant fuzzy number into a crisp number. In this paper, equations (9) and (10) are used, where $a$ denotes the lower bound, $b$ the middle value, and $c$ the upper bound. When all three numbers are given,

$$
\text{crisp number} = \frac{a + b + c}{3}, \tag{9}
$$

and when only $a$ and $c$ are given,

$$
\text{crisp number} = \frac{a + c}{2}. \tag{10}
$$
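As an illustration of the operations in (6) and the defuzzification rules (9) and (10), here is a small Python sketch under the finite-$h(x)$ assumption made above; all function names are our own, and hesitant fuzzy elements are stored as Python sets of membership values.

```python
def hfs_union(h1, h2):
    # Union per (6): elements of h1 ∪ h2 not below the larger of the two lower bounds
    alpha = max(min(h1), min(h2))
    return {t for t in h1 | h2 if t >= alpha}

def hfs_intersection(h1, h2):
    # Intersection per (6): elements of h1 ∪ h2 not above the smaller of the two upper bounds
    beta = min(max(h1), max(h2))
    return {t for t in h1 | h2 if t <= beta}

def hfs_complement(h):
    # Complement per (6): 1 - gamma for every membership value gamma
    return {1 - g for g in h}

def crisp(hfn):
    # Defuzzification per (9) for (a, b, c) triples and per (10) for (a, c) pairs
    return sum(hfn) / len(hfn)

h1, h2 = {0.2, 0.4, 0.6}, {0.3, 0.5}
print(hfs_union(h1, h2))         # values >= 0.3: {0.3, 0.4, 0.5, 0.6}
print(hfs_intersection(h1, h2))  # values <= 0.5: {0.2, 0.3, 0.4, 0.5}
print(crisp((0.3, 0.4, 0.5)))    # (0.3 + 0.4 + 0.5) / 3 ≈ 0.4
```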
## 4. Research Methodology

### 4.1. Social Responsibility Perspective

After the introduction of BSC, many papers were published about it based on the four traditional perspectives. However, rapid changes in the environment led to changes in this model. One of the new perspectives of BSC is social responsibility [68–70]. The oil and gas industry in Iran should focus on general aspects of society, e.g., preventing environmental pollution. Therefore, social responsibility plays a crucial role in society.

### 4.2. Research Procedure

The research procedure is shown in Figure 1.

Figure 1: Research methodology.

Figure 1 shows that this research starts with the design of the strategic plan. After that, based on the strategic planning, performance measurements are extracted. In the next step, questionnaires based on these performance measurements and BWM are created.
Then, the questionnaires are distributed among experts in the oil and gas industry in Iran. After the questionnaires are collected, the responses are prioritized by HFBWM using appropriate software, which ranks the performance measurements. The results indicate which of these performance measurements and perspectives have high and which have low priority. An important point is that the present study does not analyze cause-and-effect relationships through a statistical process that would require formulating hypotheses. Rather, the research approach is to use a combination of multicriteria decision-making methods and a new concept for prioritizing performance criteria in the oil and gas industry, which can be well adopted in the strategic planning approach.

### 4.3. The Reasons for Choosing HFBWM

As mentioned, there are many MCDM methods, and numerous related papers are being published [71]. Each of these methods has its strengths and weaknesses. BWM has some strengths in comparison to other methods, especially other pairwise comparison methods. First, it requires fewer comparisons than other methods. Second, it provides more consistent comparisons [27, 28].

Hesitant fuzzy sets, with their ability to model imprecise information, can be used widely and efficiently in decision-making. In general, in decision-making situations, there are several alternatives, and the goal is to evaluate these alternatives by considering different criteria and then finally select and use the best alternative for the desired purpose. Therefore, the evaluation of these alternatives and the information collected about them are important. Basically, several criteria are determined, and a number of experts are asked to comment on each of the alternatives regarding the chosen criteria. Any expert may hesitate to determine the extent to which an alternative satisfies each of the criteria. Instead of a single membership value as in a traditional fuzzy set, experts may prefer to specify nonmembership (intuitionistic fuzzy sets) or a set of membership values. This may be due to the expert’s skepticism about the collected information and about selecting the most appropriate alternative based on that information.

Thus, today there are many uncertainty-based methods that help managers make accurate decisions. One of them that is suitable for MADM is HFS. The difference between HFS and other fuzzy approaches is that DMs can express the degree of their hesitation, which helps them describe their uncertainty and ultimately leads to a better-founded decision. The combination of HFS and BWM keeps the computational effort reasonable while increasing accuracy. In addition, HFBWM better supports dealing with uncertain data, which is a common situation when working with real decision-makers.

### 4.4. Software of BWM

For finding the weights with BWM, the LINGO software is used. This software solves the optimization model related to BWM.

### 4.5. Data Gathering

After defining performance measurements for these companies (e.g., based on suggestions in the literature and company-specific suggestions), questionnaires based on these performance measurements and BWM were designed. Then, these questionnaires were distributed among twelve top and middle managers of the respective companies. After gathering the data, the number that has the highest frequency among the DMs’ preferences (the mode value) is selected as the final response.
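A one-line sketch of this aggregation step (the responses shown are hypothetical, not the actual questionnaire data):

```python
from statistics import mode

# Hypothetical 1-9 scale responses of the 12 DMs for one pairwise comparison
responses = [7, 7, 8, 7, 6, 7, 7, 9, 7, 6, 7, 8]
final_response = mode(responses)  # most frequent value: 7
```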
Table 2 shows the information about the DMs.

Table 2: Information about the DMs.

| DM | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Experience (years) | 26 | 28 | 29 | 31 | 28 | 29 | 30 | 34 | 33 | 26 | 27 | 29 |
| Education | PhD | MSc | BSc | BSc | MSc | PhD | MSc | BSc | PhD | MSc | PhD | PhD |

### 4.6. Hesitant Fuzzy Numbers

In this section, the DMs’ preferences based on hesitant fuzzy numbers are considered. Table 3 shows how the crisp preferences of the DMs are transferred into hesitant fuzzy numbers.

Table 3: Linguistic variables [72].

| Crisp number | HF number | Linguistic variable |
|---|---|---|
| 1 | (0.1, 0.1, 0.2) | Very very low |
| 2 | (0.1, 0.2, 0.3) | Very low |
| 3 | (0.2, 0.3, 0.4) | Low |
| 4 | (0.3, 0.4, 0.5) | Moderate |
| 5 | (0.4, 0.5, 0.6) | Fair-moderate |
| 6 | (0.5, 0.6, 0.7) | Fair good |
| 7 | (0.6, 0.7, 0.8) | Good |
| 8 | (0.7, 0.8, 0.9) | Very good |
| 9 | (0.9, 0.9, 1) | Very very good |
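To connect Table 3 with the defuzzification rules (9) and (10), the following sketch (with our own naming) maps a crisp 1–9 preference to its hesitant fuzzy number and back to the crisp value used in the optimization models:

```python
# Linguistic scale of Table 3: crisp 1-9 value -> hesitant fuzzy number (a, b, c)
HF_SCALE = {
    1: (0.1, 0.1, 0.2), 2: (0.1, 0.2, 0.3), 3: (0.2, 0.3, 0.4),
    4: (0.3, 0.4, 0.5), 5: (0.4, 0.5, 0.6), 6: (0.5, 0.6, 0.7),
    7: (0.6, 0.7, 0.8), 8: (0.7, 0.8, 0.9), 9: (0.9, 0.9, 1.0),
}

def defuzzify(hfn):
    # Equation (9) for (a, b, c) triples; equation (10) for (a, c) pairs
    return sum(hfn) / len(hfn)

print(defuzzify(HF_SCALE[7]))  # (0.6 + 0.7 + 0.8) / 3 ≈ 0.7
```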
### 4.7. Information about the Sample Population

The companies considered in this research are large companies, i.e., companies with more than 1500 employees. There are quite a few companies that can be categorized as large, but unfortunately, only seven companies provided the information to us. Most O & G sites are located in the south of Iran; however, the companies’ headquarters are in Tehran. In addition, the companies required that their names be kept confidential.

## 5. Data Analysis and Results

First, performance measurement indicators for all five perspectives are extracted; Table 4 shows these performance measurements. Note that actual measured values of these indicators are not required at this point of our analysis.

Table 4: Perspectives and performance measurements of the model.
| Perspective | Performance measurements |
|---|---|
| Finance (C1) | Total assets (C11), total costs (C12), income (C13), debt (C14) |
| Customer (C2) | Responsibility rate (C21), customer satisfaction rate (C22), sales volume (C23), number of participations in trade fairs (C24) |
| Internal process (C3) | Cost of R & D (C31), number of improvement processes (C32), improvement of the supply chain (C33) |
| Learning and growth (C4) | Motivation (C41), rate of absence (C42), training hours (C43), number of staff suggestions (C44) |
| Social responsibility (C5) | Number of accepted international standards (C51), budget allocated for the protection of the environment (C52), budget for improving social aspects in society (C53) |

The weights, the preferences of the DMs for all perspectives of the model, and the results are shown in Tables 5–7. First, according to Table 3, the DMs specified their preferences related to the variables. Based on equation (2), the optimization model of BWM is then created, with the HFS numbers transferred to crisp numbers according to equations (9) and (10). The respective optimization model is shown in Appendix (A.1).

Table 5: The preferences of the DMs for the best perspective.

| Criteria | C1 | C2 | C3 | C4 |
|---|---|---|---|---|
| C5 | 4 | 6 | 7 | 5 |
| HFN | (0.3, 0.4, 0.5) | (0.5, 0.6, 0.7) | (0.6, 0.7, 0.8) | (0.4, 0.5, 0.6) |

Table 6: The preferences of the DMs for the worst perspective.

| | C2 | HFN |
|---|---|---|
| C1 | 7 | (0.6, 0.7, 0.8) |
| C3 | 3 | (0.2, 0.3, 0.4) |
| C4 | 5 | (0.4, 0.5, 0.6) |
| C5 | 8 | (0.7, 0.8, 0.9) |

Table 7: Final weights and ranks of the perspectives.

| | C1 | C2 | C3 | C4 | C5 |
|---|---|---|---|---|---|
| Weights | 0.224 | 0.276 | 0.276 | 0.196 | 0.148 |
| Rank | 2 | 1 | 1 | 3 | 4 |

Table 5 presents the best perspective of the BSC model compared with the other perspectives based on the DMs’ preferences. In Table 6, the worst perspective of the BSC model is compared with the other perspectives based on the DMs’ preferences. In Table 7, the final weights and the ranks of the perspectives are shown. Among the five perspectives, the internal process and the customer perspectives have the highest priority. Thus, by improving processes and focusing on supply chain aspects, the performance of the organization will be enhanced. Besides, customers have a high impact on the performance of oil and gas companies. The second most important perspective is the financial perspective, which shows that financial issues have strong effects on oil and gas companies. The learning and growth perspective is the third most important perspective; this result indicates that oil and gas companies may focus somewhat less on this issue. The least important perspective is social responsibility, but companies still adhere to environmental and social principles.

The weights and the preferences of the DMs for the financial performance measurements of the model, as well as the results, are shown in Tables 8–10. The respective optimization model is shown in Appendix (A.2).

Table 8: The preferences of the DMs for the best financial performance measurement.

| Criteria | C11 | C13 | C14 |
|---|---|---|---|
| C12 | 8 | 4 | 5 |
| HFN | (0.7, 0.8, 0.9) | (0.3, 0.4, 0.5) | (0.4, 0.5, 0.6) |

Table 9: The preferences of the DMs for the worst financial performance measurement.

| | C11 | HFN |
|---|---|---|
| C12 | 8 | (0.7, 0.8, 0.9) |
| C13 | 7 | (0.6, 0.7, 0.8) |
| C14 | 7 | (0.6, 0.7, 0.8) |

Table 10: Final weights and ranks of the financial performance measurements.

| | C11 | C12 | C13 | C14 |
|---|---|---|---|---|
| Weights | 0.3 | 0.177 | 0.284 | 0.244 |
| Rank | 1 | 4 | 2 | 3 |

In Table 8, the best performance measurement from the financial perspective is compared with the other performance measurements based on the DMs’ preferences. In Table 9, the worst performance measurement from the financial perspective is compared with the other performance measurements based on the DMs’ preferences.
Table 10 shows the final weights and the ranks of the performance measurements C11–C14. In the financial perspective, total assets are the most important performance measurement. The second most important performance measurement is income; thus, oil and gas companies look forward to increasing it. The next performance measurement is debt, which means companies are attempting to decrease it. The least important performance measurement is total costs, which indicates that oil and gas companies are not very concerned with them.

The weights and the preferences of the DMs for the customer performance measurements of the model, as well as the results, are shown in Tables 11–13. The respective optimization model is presented in Appendix (A.3).

Table 11: The preferences of the DMs for the best customer performance measurement.

| Criteria | C22 | C23 | C24 |
|---|---|---|---|
| C21 | 5 | 7 | 9 |
| HFN | (0.4, 0.5, 0.6) | (0.6, 0.7, 0.8) | (0.9, 0.9, 1) |

Table 12: The preferences of the DMs for the worst customer performance measurement.

| | C24 | HFN |
|---|---|---|
| C21 | 9 | (0.9, 0.9, 1) |
| C22 | 8 | (0.7, 0.8, 0.9) |
| C23 | 7 | (0.6, 0.7, 0.8) |

Table 13: Final weights and ranks of the customer performance measurements.

| | C21 | C22 | C23 | C24 |
|---|---|---|---|---|
| Weights | 0.21 | 0.261 | 0.209 | 0.32 |
| Rank | 3 | 4 | 2 | 1 |

In Table 11, the best performance measurement from the customer perspective is compared with the other performance measurements based on the DMs’ preferences. In Table 12, the worst performance measurement from the customer perspective is compared with the others based on the DMs’ preferences. Table 13 shows the final weights and the ranks of the performance measurements C21–C24.

Participation in trade fairs is the most important factor, meaning that attending these fairs is particularly important for oil and gas companies. The second most important performance measurement is sales volume. If oil and gas companies want to increase the export rates of their products, smaller amounts of their products should be sold in their home country. Therefore, decreasing home sales not only helps to increase the export of products but also supports the protection of the environment. The third performance measurement is the responsibility indicator, which shows that oil and gas companies should address customer needs better and faster. The least important performance measurement is customer satisfaction, meaning that customer satisfaction with oil and gas companies does not need to be significantly improved.

The weights and the preferences of the DMs for the internal process performance measurements of the model, as well as the results, are shown in Tables 14–16. The respective optimization model is shown in Appendix (A.4).

Table 14: The preferences of the DMs for the best internal process performance measurement.

| Criteria | C31 | C33 |
|---|---|---|
| C32 | 2 | 7 |
| HFN | (0.1, 0.2, 0.3) | (0.6, 0.7, 0.8) |

Table 15: The preferences of the DMs for the worst internal process performance measurement.

| | C33 | HFN |
|---|---|---|
| C31 | 8 | (0.7, 0.8, 0.9) |
| C32 | 7 | (0.6, 0.7, 0.8) |

Table 16: Final weights and ranks of the internal process performance measurements.

| | C31 | C32 | C33 |
|---|---|---|---|
| Weights | 0.414 | 0.184 | 0.402 |
| Rank | 1 | 3 | 2 |

In Table 14, the best performance measurement from the internal process perspective is compared with the other performance measurements based on the DMs’ preferences. In Table 15, the worst performance measurement from the internal process perspective is compared with the others based on the DMs’ preferences. Table 16 shows the final weights and the ranks of the performance measurements C31–C33. The most important performance measurement of the internal process perspective is the cost of R & D.
This priority points out that these companies need to spend more money on R & D in order to create new products, services, etc. The improvement of the supply chain is the second most important performance measurement. It is very important because any interruption in the supply chain may cause various problems, such as interruptions in public or private transportation, problems in production, and subsequently problems in social conditions; it is also of predominant importance for the economic success of a company. The least important performance measurement is the number of improvement processes.

The weights and the preferences of the DMs for the learning and growth performance measurements of the model, as well as the results, are shown in Tables 17–19. Appendix (A.5) presents the respective optimization model.

Table 17: The preferences of the DMs for the best learning and growth performance measurement.

| Criteria | C41 | C42 | C44 |
|---|---|---|---|
| C43 | 5 | 8 | 3 |
| HFN | (0.4, 0.5, 0.6) | (0.7, 0.8, 0.9) | (0.2, 0.3, 0.4) |

Table 18: The preferences of the DMs for the worst learning and growth performance measurement.

| | C44 | HFN |
|---|---|---|
| C41 | 9 | (0.9, 0.9, 1) |
| C42 | 8 | (0.7, 0.8, 0.9) |
| C43 | 8 | (0.7, 0.8, 0.9) |

Table 19: Final weights and ranks of the learning and growth performance measurements.

| | C41 | C42 | C43 | C44 |
|---|---|---|---|---|
| Weights | 0.287 | 0.229 | 0.172 | 0.312 |
| Rank | 2 | 3 | 4 | 1 |

In Table 17, the best performance measurement from the learning and growth perspective is compared with the remaining criteria based on the DMs’ preferences. Table 18 shows the comparison of the worst learning and growth performance measurement with the other performance indicators based on the DMs’ preferences. The final weights and ranks of the performance measurements C41–C44 are presented in Table 19.

The number of staff suggestions is the most important performance measurement of the learning and growth perspective. It shows that these companies should pay sufficient attention to the suggestions of their employees. Motivation is the second most important performance measurement, indicating that the focus of the oil and gas companies should also be on motivation. The rate of absence is the third performance measurement; oil and gas companies should analyze the reasons and factors for their employees’ absences and then develop appropriate improvement programs. Training hours are the least important performance measurement, which means that, for these companies, it is not important to pay more attention to the training and further education of their employees.

The weights, the preferences of the DMs for the social responsibility performance measurements of the model, and the results are shown in Tables 20–22. The respective optimization model is shown in Appendix (A.6).

Table 20: The preferences of the DMs for the best social responsibility performance measurement.

| Criteria | C51 | C53 |
|---|---|---|
| C52 | 8 | 7 |
| HFN | (0.1, 0.2, 0.3) | (0.6, 0.7, 0.8) |

Table 21: The preferences of the DMs for the worst social responsibility performance measurement.

| | C51 | HFN |
|---|---|---|
| C52 | 8 | (0.7, 0.8, 0.9) |
| C53 | 5 | (0.4, 0.5, 0.6) |

Table 22: Final weights and ranks of the social responsibility performance measurements.

| | C51 | C52 | C53 |
|---|---|---|---|
| Weights | 0.4 | 0.271 | 0.329 |
| Rank | 1 | 3 | 2 |

In Table 20, the best performance measurement from the social responsibility perspective is compared with the other performance measurements based on the DMs’ preferences.
In Table 21, the worst performance measurement from the social responsibility perspective is compared with the others based on the DMs’ preferences. In Table 22, the final weights and ranks of the social responsibility performance measurements are shown.

The number of accepted international standards is the most important performance measurement. It shows that standards relating to management, quality, and environmental aspects are very important for these companies and that they must focus on them. The second performance measurement is the budget for the protection of the environment. It shows that it is of great importance for oil and gas companies to allocate budgets for environmental protection and that these should possibly be increased. The budget for improving social aspects of society is the final and least important performance measurement. It illustrates that oil and gas companies do not need to care much about the respective budgets (e.g., the support of football teams). The final weights of the model are shown in Table 23.

Table 23: Final weights of the performance measurements.

| Perspective | Perspective weight | Performance measurement | Relative weight | Final weight | Ranking |
|---|---|---|---|---|---|
| C1 | 0.224 | C11 | 0.300 | 0.067 | 5 |
| | | C12 | 0.177 | 0.040 | 16 |
| | | C13 | 0.284 | 0.064 | 6 |
| | | C14 | 0.244 | 0.055 | 12 |
| C2 | 0.276 | C21 | 0.210 | 0.058 | 9 |
| | | C22 | 0.261 | 0.072 | 4 |
| | | C23 | 0.209 | 0.058 | 10 |
| | | C24 | 0.320 | 0.088 | 3 |
| C3 | 0.276 | C31 | 0.414 | 0.114 | 1 |
| | | C32 | 0.184 | 0.051 | 13 |
| | | C33 | 0.402 | 0.111 | 2 |
| C4 | 0.196 | C41 | 0.287 | 0.056 | 11 |
| | | C42 | 0.229 | 0.045 | 15 |
| | | C43 | 0.172 | 0.034 | 18 |
| | | C44 | 0.312 | 0.061 | 7 |
| C5 | 0.148 | C51 | 0.400 | 0.059 | 8 |
| | | C52 | 0.271 | 0.040 | 17 |
| | | C53 | 0.329 | 0.049 | 14 |
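The final weights in Table 23 are the products of the perspective weights (Table 7) and the within-perspective relative weights. A short sketch (with our own naming) reproduces the table and its ranking:

```python
perspective_w = {"C1": 0.224, "C2": 0.276, "C3": 0.276, "C4": 0.196, "C5": 0.148}
relative_w = {
    "C11": 0.300, "C12": 0.177, "C13": 0.284, "C14": 0.244,
    "C21": 0.210, "C22": 0.261, "C23": 0.209, "C24": 0.320,
    "C31": 0.414, "C32": 0.184, "C33": 0.402,
    "C41": 0.287, "C42": 0.229, "C43": 0.172, "C44": 0.312,
    "C51": 0.400, "C52": 0.271, "C53": 0.329,
}
# Global weight of a measurement = weight of its perspective x its relative weight
final_w = {m: perspective_w[m[:2]] * w for m, w in relative_w.items()}
ranking = sorted(final_w, key=final_w.get, reverse=True)
print(ranking[0], round(final_w[ranking[0]], 3))  # C31 0.114 (cost of R & D)
```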
## 6. Conclusions

Today, many companies need to measure their performance because of the increasing importance of efficiency due to strong competition. For managers, it would be ideal to supervise performance through a dashboard, like in an airplane, which shows all relevant aspects such as speed, altitude, fuel, and so on. Performance measurement should transparently show where a company stands and what its strengths and weaknesses are. There are many methods for measuring the performance of companies. One of the well-known methods is the balanced scorecard. This method covers all relevant aspects for the evaluation of an organization according to performance criteria related to, e.g., the finance, customer, internal process, and learning and growth perspectives. As companies develop further, new concepts are added to the BSC’s perspectives; one of them is social responsibility. Since the introduction of BSC, many companies have attempted to implement it in their organizations. However, in many cases, they fail in the implementation. One reason is that they need to prioritize the perspectives and performance measurements of BSC. Many methods are available for prioritization. One of them is the best-worst method (BWM), which belongs to the family of pairwise comparison methods. This model has some advantages compared to other pairwise comparison methods, such as fewer comparisons and better consistency. In our paper, we suggested using this method in a fuzzy form, the hesitant fuzzy best-worst method (HFBWM), to account for uncertainties.

The oil and gas industry is the first and most important industry of Iran. Most of the budget of Iran depends on oil and gas, and many people use the products of these companies. Also, the oil and gas industry plays a key role in the production of electricity for both households and factories. For these reasons, it is particularly important for these companies to focus on their performance.

The results of this research are based on the DMs’ judgment. As pointed out in Section 4.5, we need to elicit the preferences of the DMs as required by BWM. The research and its results are completely based on the viewpoints of the DMs, which were obtained from questionnaires. For filling in these questionnaires, 12 DMs were selected. DMs in this research were required to have deep knowledge of the oil and gas industry, to hold top positions in this field, and to have more than 25 years of experience. This selection helps ensure that the DMs can reliably specify their preferences based on their experience; as a consequence, the results of BWM can also be considered reliable.

In this paper, a combination of BWM and BSC is applied to design a new and accurate model for BSC. First, performance measurements are extracted from the strategic planning of oil and gas companies. These are then prioritized based on BWM. Finally, the ranking shows which performance indicators are important and which are less important. The result indicates that, among the five perspectives, the customer and internal process perspectives are the most important ones. Customers of the oil and gas industry are divided into internal and external customers. When sufficient focus and facilities are provided for them, the number of external customers may increase significantly, along with the respective revenues, which will help the progress of the country. Although this industry is governmental and does not have many competitors, insufficient attention to internal customers leads to problems with transfers, the provision of food, and other basic human needs. Another highly important perspective is internal processes: whenever companies focus on improving their processes, costs decrease, while the speed of serving customers and the revenues increase dramatically. In the prioritization of the BSC performance measurements, the cost of R & D is the most important among the 18 performance measurements. It shows that oil and gas companies must focus on increasing their R & D spending for the development and implementation of new customer services. The least important performance measurement is the number of training hours. This suggests that oil and gas companies already pay sufficient attention to this aspect and understand its importance; nevertheless, investing in training may decrease their costs and lead to increased effectiveness.

Let us note that the results of Varmazyar et al. [32] contradict our research. In Varmazyar et al. [32], the financial perspective is the most important, but in our research, it has the second highest priority. In addition, internal processes are the least important in Varmazyar et al. [32], whereas this perspective is the most important in our findings. Singh et al. [73] demonstrate that the customer perspective is the most important in their research, which directly agrees with the outcome of our research; besides, in their study, the learning and growth perspective has the lowest priority. Lu et al. [35] found that social responsibility has the highest priority, whereas it is the least important in our research. Internal processes are the least important in Lu et al. [35], but they are the most important perspective in our study.
A limitation of this study is the number of indicators that can be analyzed. An increased number would require a higher effort for ranking them by BWM. In particular, the effort required from the DMs to respond to the questionnaires would lead to difficulties. As the DMs work in different cities across the country, accessing them has proven difficult. In addition, repeated discussions were needed to familiarize them with the concepts used and to fill in the questionnaires. For future research, other methods based on uncertainty should be used for prioritizing perspectives and indicators of BSC and for designing a road map for allocating limited resources to high-priority perspectives and indicators. Moreover, for the implementation of this method in an uncertain environment, some other fuzzy numbers, such as Z-numbers or D-numbers, could be used.

---
# Evaluating the Performance of Oil and Gas Companies by an Extended Balanced Scorecard and the Hesitant Fuzzy Best-Worst Method

**Authors:** Amir Karbassi Yazdi; Amir Mehdiabadi; Thomas Hanne; Amir Homayoun Sarfaraz; Fatemeh Tabatabaei Yazdian

**Journal:** Mathematical Problems in Engineering (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1019779
---

## Abstract

The aim of this research is to find and prioritize a multicriteria performance measurement based on the balanced scorecard (BSC) for oil and gas (O & G) companies in an uncertain environment using the hesitant fuzzy best-worst method (HFBWM). The O & G industry has a key role in the economies of many countries. Hence, the evaluation of the performance of the O & G industry plays an important role. We utilize BSC for this purpose, which usually considers the financial, customer, internal process, and learning and growth perspectives. In our research, the social responsibility perspective will be added. After finding multiple performance measurements, many companies cannot implement all of them because of limited resources. Therefore, multicriteria decision-making (MCDM) methods can be applied for prioritizing and selecting the most important measurement criteria. One of the MCDM methods is the best-worst method (BWM). This approach has several advantages compared to other MCDM methods. Due to uncertainties in decision-making, a suitable method for decision-making in an uncertain environment is necessary. Hesitant fuzzy approaches are applied as one such uncertainty-based method in this research. Our results indicate that among the five perspectives of BSC that we considered, the customer and internal process perspectives are the most important ones, and the cost of the R & D indicator is the most important subcriterion among these.

---

## Body

## 1. Introduction

One of the main concerns of organizations is achieving a comprehensive, reliable, and flexible performance appraisal method to help them obtain accurate and sufficient information about their position and learn from past mistakes by looking to the future [1]. New approaches to organization and management (customer orientation, quality orientation, virtualization, etc.) also emphasize the double necessity of the concept and subject of evaluation. Accurate, comprehensive, and purposeful monitoring and evaluation are considered one of the most important facilitators of growth, dynamism, and excellence in the field of management [2]. Evaluating the performance of organizations has always been one of the main concerns of managers and their officials. Performance calculation helps organizations become more transparent [3]. In fact, a performance measurement system includes a diverse set of performance appraisal indicators that relate to organizational strategies and provide information about all components of the supply chain ([4], 398; [5], 5). One of the most popular and effective performance appraisal systems is the balanced scorecard (BSC). BSC is a comprehensive, complete, and accurate performance appraisal system for planning and monitoring an organization’s progress toward achieving its goals ([6], 138; [7], 360; [8], 73).

Over the years of research on performance appraisal, researchers have presented numerous papers on BSC methods and hybrid models. Hegazy et al. [9] provide a detailed framework for supporting audit firms with BSC. The results show that the development and application of the proposed BSC measures improve the performance of audit firms. Auditing firms have a better understanding of the various performance factors and strategies and thus create a competitive advantage. Aujrrapongpan et al. [10] evaluated the performance of social hospitals in Thailand with the BSC approach. The results of this study are presented as a five-year comparison of performance evaluation indicators. Laury et al.
[11] analyze the strategic planning and strategic performance of companies with BSC in a review article. Nazari-Shirkouhi et al. [12] evaluated the performance of an educational institution with an integrated IPA-BSC approach. Tuan [13] addressed the impact of BSC on performance in Vietnamese bank branches. Akbarei et al. [14] used a combined AHP-TOPSIS-BSC approach to evaluate the performance of bank branches and provide ways to improve it. Karbassi Yazdi et al. [15] have developed performance criteria for export agencies with the DEA approach. Karbassi Yazdi et al. [16] also developed an analytical vision of performance for the company using a combination of fuzzy clustering and DEA.

In the past, the most crucial performance measurement was based on financial indicators [17, 18]. However, Kaplan and Norton [19] pointed out that these indicators alone did not account for performance and that various further factors had an influence on it. Consequently, the BSC was suggested. This model consists of four perspectives: finance, customers, internal processes, and learning and growth. In order to get to know the situation of their company better and to find out its strengths and weaknesses, managers may use BSC, which introduces a comprehensive model for evaluating the company according to the mentioned four perspectives and relevant indicators [20, 21]. As mentioned above, traditional BSC has four perspectives, but Kaplan and Norton [22, 23] and Kaplan et al. [24] suggested that companies could add other perspectives to BSC or remove some of the suggested ones. To create new perspectives, the most crucial performance measurements should be considered. One of the most important obligations of oil and gas companies is to pay more attention to social responsibility performance measurement, such as the protection of the environment.

After having extracted performance measurement indicators, companies should implement measures for improving these indicators. However, frequently, companies do not have a sufficient budget, time, or staff to implement these measures. Therefore, these performance measurement indicators should be prioritized in order to focus on the most important ones. There are many methods for prioritizing items that are characterized by multiple criteria, especially multiple criteria decision-making (MCDM) methods. These methods can be classified into different categories. Methods based on a finite set of alternatives (or a decision matrix) are usually denoted as multiple attribute decision-making (MADM) [25]. Suitable methods usually involve either a direct evaluation of alternatives (for instance, based on assessing a utility function or some other scalarizing function) or making use of a pairwise comparison of alternatives (such as the analytical hierarchy process (AHP) and the family of outranking methods [26]).

In this paper, the best-worst method (BWM) is applied, which belongs to the pairwise comparison methods. This method has some benefits compared to other methods. In particular, it needs less data for comparison, and the result is more reliable than that of other methods [27, 28]. In our permanently changing world, decision-makers (DMs) cannot always judge accurately. Based on this fact, DMs need a tool that helps them identify their preferences. Fuzzy sets are an approach for considering uncertainty. Methods using fuzzy sets can better support decisions in an uncertain environment. Hesitant fuzzy sets (HFS) (and hesitant fuzzy numbers, HFN) are one of the respective approaches.
In this paper, we suggest using a modification of BWM based on HFS that can be used for MADM problems under uncertainty and helps to make a decision. The respective approach is denoted as the hesitant fuzzy best-worst method (HFBWM). The oil and gas industry is the most important industry in Iran, and most of the budget of Iran depends on the revenues from the sale of oil. By increasing oil sales, a country can create more job opportunities, decrease the Gini coefficient (for obtaining a more equal income or wealth distribution in society), increase financial investments in the infrastructure, and so on; therefore, this industry plays an important role. For these reasons, evaluating the performance of the oil and gas industry helps managers make better decisions for improving the performance indicators.

The research questions of this research concern the indicators of BSC in the oil and gas industry and which perspectives and indicators have the highest priority. The contribution of this paper is to prioritize the performance measurement of oil and gas companies by HFBWM. As this method is rather new, only a few papers have been published about it so far, in terms of BSC and extended versions of BSC. Another contribution of this study is applying a combination of BWM and BSC to the oil and gas industry. Also, a social responsibility perspective is added to the other aspects considered in this model. The final contribution is using real data for this research and data gathering based on questionnaires filled in by experts in this industry.

Performance management is one of the crucial issues among companies, especially in the O & G industry, due to its strong impact on various fields such as economics, healthcare, education, or infrastructure. Therefore, performance management is essential for O & G companies to design road maps to realize their vision.

This paper consists of the following sections: After the introductory section, a literature review of BSC is presented in Section 2. In Section 3, the best-worst method is pointed out. Section 4 deals with the research methodology. Data analysis and results are illustrated in Section 5. The final section reveals the conclusions.

## 2. Literature Review

### 2.1. Balanced Scorecard (BSC) and Multiattribute Decision-Making (MADM)

BSC is a tool for translating the strategy of organizations into a common language, which can be understood by the staff of a company. This model helps managers and staff to find out where their company stands, how far it deviates from the predetermined indicators (benchmark values), why they do not achieve them, and how to improve them. This method can be used to evaluate all aspects of companies and uses cause-and-effect relationships for the considered performance measurement. After the introduction of this model, various directions of research were investigated. In the following, we discuss research based on a combination of BSC and MADM methods.

Yazdi et al. [25] evaluated the performance of Colombian bank branches using a combined approach of BSC, SWARA, and WASPAS. Heydariyeh et al. [29] combined the BSC model and the fuzzy DEMATEL (DEcision MAking Trial and Evaluation Laboratory) to present a new approach to integrated strategy map analysis. Ajripour et al. [30] developed a model for managing the performance of organizations using BSC, PROMETHEE, ELECTRE, and TOPSIS methods. Ozdogan et al.
[31] provide a model for evaluating the performance of municipal services with a combined approach of multiple decision methods. Varmazyar et al. [32] developed a novel hybrid MCDM model for the performance evaluation of research and technology organizations based on the BSC approach.

Dinçer et al. [33] illustrated a model of BSC in the European energy industry using a combination of fuzzy MCDM methods. They combined the quality function deployment (QFD) technique with fuzzy DEMATEL, fuzzy AHP, and the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The result indicates which policy should be selected and which policies should be changed accordingly. Deng et al. [34] studied the combination of DEMATEL, the analytic network process (ANP), a modified VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) approach, and BSC for evaluating Taiwanese companies. The result pointed out that the customer perspective is the most important one and that these companies should focus on customer-oriented performance measurement. Lu et al. [35] combined DEMATEL, ANP, and modified VIKOR in the context of a sustainability-oriented BSC for evaluating international airports. First, the relationship between key performance indicators is determined by DEMATEL. Then, DEMATEL-ANP is used for finding weights. Finally, the gap between the current situation and the ideal situation is found by the modified VIKOR method. The result demonstrated that the airport’s image is the most important factor among others. Also, the best airport is found. Dinçer et al. [36] revealed a combination of the BSC model with fuzzy DEMATEL, fuzzy ANP, and the MOORA (Multiobjective Optimization Method by Ratio Analysis) method. Fuzzy DEMATEL and fuzzy ANP are used to find weights, and then nine airline companies are prioritized by the MOORA method. The result showed which airline company has a proper performance. Zhao and Li [37] implemented BSC, fuzzy Delphi, ANP, and fuzzy TOPSIS for thermal power enterprises. First, performance measurements of BSC are determined by fuzzy Delphi. Then, weights for the model are obtained by ANP, and companies are ranked by fuzzy TOPSIS and performance measurements. The result depicted a model for the evaluation of companies using hybrid methods. Meena and Thakkar [38] illustrated a model based on the combination of ISM and ANP for improving the BSC method in finding performance measurements. Then, by using the ISM method, relationships among them are shown. ANP was applied for the purpose of finding priorities for these performance measurements. Quezada and López-Ospina [39] depicted a method for drawing a strategy map of BSC by using an MCDM method (AHP) and linear programming (LP). The aim of using the AHP and LP methods is to minimize the number of selected relationships while maximizing their total importance. The result shows the trade-off between these aspects in designing a strategy map of BSC. Rabbani et al. [40] investigated sustainability using BSC and MCDM methods, considering linguistic variables in the oil and gas industry. They considered five perspectives: the economic, environmental, social, internal process, and growth and learning perspectives. In this research, the criteria and subcriteria of BSC are defined first. Then, weights for them were obtained by the ANP method. Fuzzy COPRAS (COmplex PRoportional ASsessment of alternatives) is used for finding the best strategy. Shafiee et al.
[41] designed a model for evaluating SCM performance by data envelopment analysis (DEA), DEMATEL, and BSC. First, based on four BSC perspectives, performance indicators were created. Then, the relationship among the performance measurements was determined by DEMATEL. Finally, Iranian food companies were evaluated in a case study using the network DEA and BSC methods.

Rezaei et al. [42] measured the port performance using the best-worst method. They stated that costs and times of transportation in the supply chain are the most important factors. Galankashi et al. [43] discussed a hybrid model of BSC and fuzzy AHP for supplier selection in the automobile industry. First, for each supplier, a BSC model was designed, and after that, the performance measurements of each supplier were ranked by fuzzy AHP to find the best supplier. Lin [44] studied the implementation of BSC and a closed-loop ANP together with a fuzzy Delphi method in the higher education sector. They used fuzzy Delphi and ANP to find relationships in the closed-loop structure. Abo-Hamad and Arisha [45] illustrated a model of BSC with the analytical hierarchy process (AHP) and a simulation for measuring the performance of emergency departments. The result indicated how one could improve the efficiency of the processes by using these methods. Bhattacharya et al. [46] measured the performance of a green supply chain by using fuzzy-ANP and a balanced scorecard. Fuzzy-ANP was used to rank the BSC perspectives and performance measurements. The result showed how a supplier’s performance could be aligned with industry standards. Khairalla et al. [47] depicted a model for an outsourcing strategy based on ANP and BSC. In this research, after finding performance measurements, these were ranked by ANP for identifying the best strategies. After that, sensitivity analysis is used for increasing the robustness of the model. Hsu et al. [48] implemented fuzzy Delphi, ANP, and sustainable BSC for the semiconductor industry. They used a revised BSC with sustainability, stakeholders, internal business processes, and learning and growth perspectives. Then, performance measurements were extracted by fuzzy Delphi. Finally, perspectives and performance measurements were ranked by ANP. The result indicated that the sustainability perspective and some performance measurements had high priority.

Bazrkar et al. [49] depicted a model for customer satisfaction with a combination of BSC and Lean Six Sigma (LSS). First, BSC perspectives and indicators are extracted. Then, data envelopment analysis (DEA) is implemented for selecting indicators. Finally, the Define, Measure, Analyze, Improve, and Control (DMAIC) cycle is applied for improving the quality of the process. The results pointed out that sigma levels increased and the time of processes decreased. Wang and Chien [50] illustrated a hybrid model of BSC and DEA for Taiwanese companies. First, the performance measurement of the BSC model is set as the inputs and outputs of the model. Then, companies’ performances are determined by DEA. Wu and Liao [51] used BSC and DEA for evaluating airline companies. They extracted inputs and outputs from the model based on BSC, and then 38 airline companies were evaluated by the DEA method.

Tizroo et al. [52] designed a model of BSC and Interpretive Structural Modeling (ISM) in the steel industry. They found a relationship between the criteria and subcriteria of BSC.
The results indicated how strategies for this industry can be formulated based on the results of the ISM and BSC. The result showed how this approach helps the stakeholders to make better decisions. Lin et al. [53] used hierarchical BSC with fuzzy linguistics in hospitals. After determining performance measurements of BSC, fuzzy linguistics is applied for developing the model. The result indicated how management might use a new approach for the design and implementation of a new strategy in their organizations.

Kaviani et al. [54] used gray numbers while considering hybrid MADM methods for ranking suppliers in the O & G industry. Yazdi et al. [55] used hybrid MADM methods using Z-numbers for evaluating suppliers in the O & G industry.

### 2.2. Hesitant Fuzzy Sets (HFSs) and MADM

Various studies show the importance and reliability of HFS for decision-making under uncertainty and considering the complexity of organizations. Alcantud et al. [56] have introduced hesitant fuzzy sets as a new method. Tüysüz and Şimşek [57] used an AHP method based on hesitant fuzzy sets to evaluate the performance of a shipping company in Turkey. Divsalar et al. [58] developed the DANP technique using interval-valued hesitant fuzzy elements (IVHFEs). Zhai [59] has proposed the hesitant fuzzy linguistic preference relations (HLPRs) method for the performance evaluation of wireless sensor networks. The research findings shed new light on the selection, performance evaluation, and promotion of wireless sensor networks. Pérez-Domínguez et al. [60] focused on performance appraisal in a manufacturing company using a combination of TOPSIS and hesitant fuzzy linguistic term set (HFLTS) models. Using this method, they presented a model for lean manufacturing (LM). Liao et al. [61] used the hesitant fuzzy linguistic BWM method to evaluate the performance of hospitals. They state that the proposed method is more effective than the hesitant fuzzy AHP method. Liu et al. [62] used a combination of probabilistic hesitant fuzzy elements (PHFE) and MADM methods for the selection of venture capital investment projects. Candan [63] focuses on the efficiency and performance of economic research in 15 OECD countries using bibliographic elements for the period 2010–2017. There are seven criteria that are thought to affect the efficiency and performance of economic research. In this study, he used the hesitant fuzzy AHP and the OCRA method. Gong et al. [64] presented a new integrated approach using LHF-TODIM and BWM for E-learning website evaluation and selection. The results show that the proposed method is more effective. Meng et al. [65] introduced a new model using a combination of dual hesitant fuzzy preference relations (DHFPRs) and provided a new group decision-making method. Lin et al. [66] used a combination of the probabilistic hesitant fuzzy best-worst method (PHFBW) and MULTIMOORA for prioritizing distributed stream processing frameworks for IoT applications.

### 2.3. Our Proposed Method

Based on a categorization of the previous research, methods for this subject can be divided into hybrid MADM methods, pairwise comparison methods, DEA methods, and soft computing methods. Some of these methods are based on fuzzy numbers. One of the previous studies [42] used BWM for BSC, and some other studies used fuzzy numbers. In this paper, however, the traditional BSC is first transformed into a revised BSC. Then, BWM is combined with hesitant fuzzy sets.
These changes are not apparent in the previous research. Table 1 summarizes the methods used in previous studies.

Table 1: Previous studies on BSC and other methods. The reviewed studies are Dinçer et al. [33], Rezaei et al. [42], Deng et al. [34], Lu et al. [35], Dinçer et al. [36], Bazrkar et al. [49], Tizroo et al. [52], Galankashi et al. [43], Lin [44], Wang and Chien [50], Zhao and Li [37], Abo-Hamad and Arisha [45], Bhattacharya et al. [46], Meena and Thakkar [38], Quezada and López-Ospina [39], Rabbani et al. [40], Khairalla et al. [47], Wu and Liao [51], Shafiee et al. [41], Lin et al. [53], and Hsu et al. [48].

| Method | Number of reviewed studies using it |
|---|---|
| Fuzzy linguistics | 1 |
| Fuzzy COPRAS | 1 |
| LP | 1 |
| AHP | 2 |
| Fuzzy Delphi | 3 |
| ISM | 2 |
| DEA | 4 |
| LSS | 1 |
| MOORA | 1 |
| Fuzzy ANP | 3 |
| VIKOR | 2 |
| ANP | 7 |
| DEMATEL | 3 |
| BWM | 1 |
| Fuzzy TOPSIS | 2 |
| Fuzzy AHP | 2 |
| Fuzzy DEMATEL | 2 |
| QFD | 1 |

According to Table 1 and the abovementioned review, many papers have been published about BSC, and it is a most popular topic among researchers. In this research, we suggest using a new MADM method based on HFS in combination with BSC, which helps to design a road map for supporting decision-makers and improves some weaknesses of previous studies.
## 3. Multicriteria Decision-Making in an Uncertain Environment

### 3.1. The Best-Worst Method

Many MCDM methods help decision-makers in making better decisions.
One of the new approaches in the area of MCDM methods is the Best-Worst Method (BWM), introduced by Rezaei [27]. This model belongs to the methods based on a finite set of alternatives (also denoted as multiple attribute decision-making, or MADM) and uses pairwise comparisons for finding the weights of the criteria. The DM first identifies the best and the worst criterion and then compares the best criterion with all other criteria and all other criteria with the worst one. The weights are obtained from a nonlinear minimax model that minimizes the maximum absolute difference between the weight ratios and the stated preferences. For finding the weights by BWM, the following steps are needed:

Step 1. The criteria and alternatives of the model are assumed to be specified. The criteria are denoted as $C = \{c_1, c_2, \ldots, c_n\}$.

Step 2. The best and the worst criterion are determined (criteria to be maximized or minimized).

Step 3. Determine the relative preferences of the best criterion (denoted as $B$) in comparison to all other criteria based on a 1–9 scale. The preferences for the best criterion $B$ are indicated as $A_B = (a_{B1}, a_{B2}, \ldots, a_{Bn})$. It is obvious that $a_{BB} = 1$.

Step 4. Determine the relative preferences of the other criteria in comparison to the worst criterion (denoted as $W$) based on a 1–9 scale. These preferences are indicated as $A_W = (a_{W1}, a_{W2}, \ldots, a_{Wn})$. It is obvious that $a_{WW} = 1$.

Step 5. The final weights, shown as $w_1^*, w_2^*, \ldots, w_n^*$, are obtained based on the following approach. The maximum of the absolute differences $|w_B/w_j - a_{Bj}|$ and $|w_j/w_W - a_{Wj}|$ is minimized over all $j$, such that the ratios of the weights correspond as closely as possible to the relative preferences:

$$
\min_{w}\ \max_{j}\left\{\left|\frac{w_B}{w_j}-a_{Bj}\right|,\ \left|\frac{w_j}{w_W}-a_{Wj}\right|\right\}
\quad\text{s.t.}\quad \sum_{j} w_j = 1,\qquad w_j \ge 0\ \text{for all } j.
\tag{1}
$$

This model can be rewritten as follows:

$$
\begin{aligned}
\min\ & \xi\\
\text{s.t.}\ & \left|\frac{w_B}{w_j}-a_{Bj}\right| \le \xi \quad\text{for all } j,\\
& \left|\frac{w_j}{w_W}-a_{Wj}\right| \le \xi \quad\text{for all } j,\\
& \sum_{j} w_j = 1,\qquad w_j \ge 0\ \text{for all } j.
\end{aligned}
\tag{2}
$$
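The following is a minimal sketch of how model (2) can be solved numerically. It assumes SciPy's SLSQP solver in place of the LINGO model used in the paper (see Section 4.4); the function name `bwm_weights`, the variable names, and the sample preference vectors are illustrative, not taken from the study. Each absolute-value constraint is split into two smooth inequalities so that the solver can handle it.

```python
# A minimal sketch of solving the BWM model (2) numerically (assumption:
# SciPy's SLSQP in place of the paper's LINGO model). Each constraint
# |w_B/w_j - a_Bj| <= xi is split into two smooth inequalities.
import numpy as np
from scipy.optimize import minimize

def bwm_weights(a_B, a_W, best, worst):
    """Return (weights, xi) for best-to-others a_B and others-to-worst a_W."""
    n = len(a_B)
    x0 = np.append(np.full(n, 1.0 / n), 0.5)  # start from equal weights, xi = 0.5
    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]  # weights sum to 1
    for j in range(n):
        for s in (1.0, -1.0):  # xi - s*(ratio - preference) >= 0 for both signs
            cons.append({"type": "ineq",
                         "fun": lambda x, j=j, s=s: x[n] - s * (x[best] / x[j] - a_B[j])})
            cons.append({"type": "ineq",
                         "fun": lambda x, j=j, s=s: x[n] - s * (x[j] / x[worst] - a_W[j])})
    res = minimize(lambda x: x[n], x0, method="SLSQP",
                   bounds=[(1e-6, 1.0)] * n + [(0.0, None)], constraints=cons)
    return res.x[:n], res.x[n]

# Hypothetical example with four criteria, best = criterion 0, worst = criterion 3:
w, xi = bwm_weights(a_B=[1, 2, 4, 8], a_W=[8, 4, 2, 1], best=0, worst=3)
print(np.round(w, 3), round(xi, 3))  # fully consistent input, so xi is close to 0
```

For fully consistent preferences the optimal $\xi$ is zero, so $\xi$ also serves as an indicator of how consistent the DMs' comparisons are.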
### 3.2. Hesitant Fuzzy Sets

Let $X$ represent a reference set. A hesitant fuzzy set on $X$ is defined by a function $h$ on $X$ that returns a subset of $[0, 1]$. We usually consider cases where $h(x)$ is a finite set [67]. Special cases of $h(x)$ are as follows:

$$
\begin{aligned}
\text{empty set:}\quad & h(x) = \{0\} \ \text{for all } x \in X,\\
\text{full set:}\quad & h(x) = \{1\} \ \text{for all } x \in X,\\
\text{complete ignorance:}\quad & h(x) = [0, 1] \ \text{for all } x \in X,\\
\text{nonsense set:}\quad & h(x) = \emptyset.
\end{aligned}
\tag{3}
$$

Here $h(x) = \{0\}$ and $h(x) = \{1\}$ characterize the empty and the full set, which should not be confused with the nonsense set or complete ignorance.

For a fuzzy set with membership function $\mu$ on the reference set $[0, 1]$, we can use hesitant fuzzy sets to represent the inverse of $\mu$; i.e., the hesitant fuzzy set $h$ is defined by $h(x) := \mu^{-1}(x)$ or, respectively,

$$
h(x) := \{\alpha \mid \alpha \in X,\ \mu(\alpha) = x\}.
\tag{4}
$$

Hesitant fuzzy sets can also be constructed from several fuzzy sets: consider a set of $N$ membership functions, $M = \{\mu_1, \ldots, \mu_N\}$. The hesitant fuzzy set associated with $M$, $h_M$, is then defined as

$$
h_M(x) = \bigcup_{\mu \in M} \{\mu(x)\}.
\tag{5}
$$

This concept can be used in group decision-making when experts or DMs evaluate a set of alternatives. In this case, $M$ represents the preferences or opinions of the DMs for each alternative, and $h_M$ then represents the opinions of all of them.

Assume that $h_1$ and $h_2$ are HFS. Typical operations on HFS are as follows:

$$
\begin{aligned}
\text{lower bound:}\quad & h^-(x) = \min h(x),\\
\text{upper bound:}\quad & h^+(x) = \max h(x),\\
\alpha\text{-upper bound:}\quad & h_{\alpha}^{+}(x) = \{h \in h(x) \mid h \ge \alpha\},\\
\alpha\text{-lower bound:}\quad & h_{\alpha}^{-}(x) = \{h \in h(x) \mid h \le \alpha\},\\
\text{complement:}\quad & h^{c}(x) = \bigcup_{\gamma \in h(x)} \{1-\gamma\},\\
\text{union:}\quad & (h_1 \cup h_2)(x) = \{h \in h_1(x) \cup h_2(x) \mid h \ge \max(h_1^-(x), h_2^-(x))\},\\
& \text{or, equivalently, } (h_1 \cup h_2)(x) = (h_1(x) \cup h_2(x))_{\alpha}^{+} \text{ for } \alpha = \max(h_1^-(x), h_2^-(x)),\\
\text{intersection:}\quad & (h_1 \cap h_2)(x) = \{h \in h_1(x) \cup h_2(x) \mid h \le \min(h_1^+(x), h_2^+(x))\},\\
& \text{or, equivalently, } (h_1 \cap h_2)(x) = (h_1(x) \cup h_2(x))_{\alpha}^{-} \text{ for } \alpha = \min(h_1^+(x), h_2^+(x)).
\end{aligned}
\tag{6}
$$

The idea behind these definitions is as follows: since a hesitant fuzzy set describes the set of possible membership values, for all $x$ the union $h_1 \cup h_2$ keeps those values that are at least as large as the larger of the two lower bounds $h_1^-(x)$ and $h_2^-(x)$. The definition of the intersection follows a similar consideration.

According to Torra and Narukawa [67], the complement is involutive:

$$
(h^c)^c = h.
\tag{7}
$$

An HFS is a kind of type-2 fuzzy approach [67]. For an HFS $h$, a corresponding type-2 fuzzy set can be defined as follows (note the typo in Torra and Narukawa [67]):

$$
\mu_2(x)(y) =
\begin{cases}
1, & \text{if } y \in h(x),\\
0, & \text{if } y \notin h(x).
\end{cases}
\tag{8}
$$

There are various methods for transforming a hesitant fuzzy number into a crisp number. In this paper, equations (9) and (10) are used, where $a$ denotes the lower bound, $b$ the middle value, and $c$ the upper bound. When all three numbers are given, the formula is

$$
\text{crisp number} = \frac{a+b+c}{3},
\tag{9}
$$

and when only $a$ and $c$ are given,

$$
\text{crisp number} = \frac{a+c}{2}.
\tag{10}
$$
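As a small illustration of rules (9) and (10), the sketch below converts a hesitant fuzzy number, given as a tuple of its bounds, into a crisp value; the function name is an assumption of ours, not taken from the paper.

```python
# A minimal sketch of the defuzzification rules (9) and (10): a triple
# (a, b, c) is replaced by its average, a pair (a, c) by its midpoint.
def hfn_to_crisp(hfn):
    """Convert a hesitant fuzzy number (2- or 3-tuple) to a crisp value."""
    if len(hfn) == 3:
        a, b, c = hfn
        return (a + b + c) / 3  # equation (9)
    if len(hfn) == 2:
        a, c = hfn
        return (a + c) / 2      # equation (10)
    raise ValueError("expected a 2- or 3-element hesitant fuzzy number")

print(hfn_to_crisp((0.6, 0.7, 0.8)))  # approximately 0.7
print(hfn_to_crisp((0.7, 0.9)))       # approximately 0.8
```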
## 4. Research Methodology

### 4.1. Social Responsibility Perspective

After the introduction of BSC, many papers were published about it based on the four traditional perspectives. However, rapid changes in the environment caused changes to this model. One of the new perspectives of BSC is social responsibility [68–70]. The oil and gas industry in Iran should focus on general aspects of society, e.g., preventing environmental pollution. Therefore, social responsibility plays a crucial role in society.

### 4.2. Research Procedure

The research procedure is shown in Figure 1.

Figure 1: Research methodology.

Figure 1 shows that this research starts with the design of the strategic plan. After that, based on strategic planning, performance measurements are extracted. In the next step, the questionnaires based on these performance measurements and BWM are created. Then, the questionnaires are distributed among experts in the oil and gas industry in Iran. After gathering those, the responses are prioritized by HFBWM with related software, which ranks these performance measurements. The results indicate which of these performance measurements and perspectives have high and which have low priority.
The important point is that the present study does not examine cause-and-effect relationships through a statistical process that requires hypotheses for the analysis. Rather, the research approach is to use a combination of multicriteria decision-making methods and a new concept for prioritizing performance criteria in the oil and gas industry, which can be well adopted in the strategic planning approach.

### 4.3. The Reasons for Choosing HFBWM

As mentioned, there are many methods for MCDM, and numerous related papers are being published [71]. Each of these methods has its strengths and weaknesses. BWM has some strengths in comparison to other methods, especially other pairwise comparison methods. First, it requires fewer comparisons than other methods. Second, it provides more consistent comparisons than other methods [27, 28].

Hesitant fuzzy sets, with their ability to model inaccurate information, can be used widely and efficiently in decision-making. In general, in decision-making situations, there are several alternatives, and the goal is to evaluate these options by considering different criteria and then finally select and use the best alternative for the desired purpose. Therefore, the evaluation of these alternatives and the information that is collected about them are important. Basically, several criteria are determined, and a number of experts are asked to comment on each of the alternatives regarding the chosen criteria. Any expert may hesitate to determine the extent to which an alternative satisfies each of the criteria. Instead of a single membership value as for a traditional fuzzy set, experts may prefer to determine nonmembership (intuitionistic fuzzy sets) or a set of membership values. This may be due to the expert’s skepticism about collecting information and selecting the most appropriate alternative based on that information.

Thus, today there are many methods based on uncertainty that help managers to make accurate decisions. One of them that is suitable for MADM methods is HFS. The difference between HFS and other fuzzy methods is that DMs can express the degree of their uncertainty through the hesitant fuzzy representation, which ultimately leads to a better-founded decision. Combining HFS with BWM keeps the computational effort reasonable despite the increase in accuracy. In addition, HFBWM better supports dealing with uncertain data, which is a common situation when dealing with real decision-makers.

### 4.4. Software of BWM

For finding the weights using BWM, the related software LINGO is used. This software solves the programming model related to BWM.

### 4.5. Data Gathering

After defining performance measurements for these companies (e.g., based on suggestions in the literature and company-specific suggestions), questionnaires based on these performance measurements and BWM were designed. Then, these questionnaires were distributed among twelve top and middle managers of the respective companies. After gathering the data, the number that has the highest frequency among the DMs’ preferences (the mode value) is selected as the final response. Table 2 shows the information about the DMs.

Table 2: Information about DMs.

| DM | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Experience (years) | 26 | 28 | 29 | 31 | 28 | 29 | 30 | 34 | 33 | 26 | 27 | 29 |
| Education | PhD | MSc | BSc | BSc | MSc | PhD | MSc | BSc | PhD | MSc | PhD | PhD |

### 4.6. Hesitant Fuzzy Numbers

In this section, the DMs’ opinions on their preferences based on hesitant fuzzy numbers are shown in Table 3.
The table shows how the crisp preferences of the DMs are transferred into hesitant fuzzy sets.

Table 3: Linguistic variables [72].

| Crisp number | Linguistic variable | HF number |
|---|---|---|
| 1 | Very very low | (0.1, 0.1, 0.2) |
| 2 | Very low | (0.1, 0.2, 0.3) |
| 3 | Low | (0.2, 0.3, 0.4) |
| 4 | Moderate | (0.3, 0.4, 0.5) |
| 5 | Fair-moderate | (0.4, 0.5, 0.6) |
| 6 | Fair good | (0.5, 0.6, 0.7) |
| 7 | Good | (0.6, 0.7, 0.8) |
| 8 | Very good | (0.7, 0.8, 0.9) |
| 9 | Very very good | (0.9, 0.9, 1) |

### 4.7. Information about the Sample Population

The companies considered in this research are large companies, i.e., companies with more than 1500 employees. There are quite a few companies that can be categorized as large, but unfortunately, only seven companies provided the information to us. Most locations of O & G sites are in the south of Iran; however, their headquarters are in Tehran. In addition, their names were required to be kept confidential.
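To make the data-gathering and transformation steps concrete, the following sketch aggregates the twelve DMs' crisp answers to one comparison question by their mode, as described in Section 4.5, and maps the result to its hesitant fuzzy number via the Table 3 scale. The sample answers and names are made up for illustration.

```python
# A minimal sketch of the aggregation of Section 4.5: the most frequent
# crisp score among the 12 DMs is selected and then expressed as a hesitant
# fuzzy number using the linguistic scale of Table 3. Answers are hypothetical.
from statistics import mode

HF_SCALE = {  # Table 3: crisp 1-9 scores and their hesitant fuzzy numbers
    1: (0.1, 0.1, 0.2), 2: (0.1, 0.2, 0.3), 3: (0.2, 0.3, 0.4),
    4: (0.3, 0.4, 0.5), 5: (0.4, 0.5, 0.6), 6: (0.5, 0.6, 0.7),
    7: (0.6, 0.7, 0.8), 8: (0.7, 0.8, 0.9), 9: (0.9, 0.9, 1.0),
}

def aggregate_responses(responses):
    """Pick the most frequent DM score and return it with its HFN."""
    final_score = mode(responses)  # the value with the highest frequency
    return final_score, HF_SCALE[final_score]

answers = [7, 7, 6, 7, 8, 7, 6, 7, 7, 8, 7, 6]  # one question, 12 DMs
score, hfn = aggregate_responses(answers)
print(score, hfn)  # 7 (0.6, 0.7, 0.8)
```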
## 5. Data Analysis and Results

First, performance measurement indicators from all five perspectives are extracted. Table 4 shows these performance measurements. Note that actual measurements of these indicators are not required at this point in our analysis.

Table 4: Perspectives and performance measurements of the model.
## 4.7. Information about the Sample Population

The companies considered in this research are large companies, i.e., companies with more than 1500 employees. There are quite a few companies that can be categorized as large, but unfortunately, only seven companies provided the information to us. Most of the O & G sites are located in the south of Iran; however, the headquarters are in Tehran. In addition, the companies required that their names be kept confidential.

## 5. Data Analysis and Results

First, performance measurement indicators from all five perspectives are extracted. Table 4 shows these performance measurements. Note that actual measurements of these indicators are not required at this point in our analysis.

Table 4. Perspectives and performance measurements of the model.

| Perspective | Performance measurements |
| --- | --- |
| Finance (C1) | Total assets (C11), total costs (C12), income (C13), debt (C14) |
| Customer (C2) | Responsibility rate (C21), customer satisfaction rate (C22), sales volume (C23), number of participations in trade fairs (C24) |
| Internal process (C3) | Cost of R & D (C31), number of improvement processes (C32), improvement of the supply chain (C33) |
| Learning and growth (C4) | Motivation (C41), rate of absence (C42), training hours (C43), number of staff suggestions (C44) |
| Social responsibility (C5) | Number of accepted international standards (C51), budget allocated for the protection of the environment (C52), budget for improving social aspects in society (C53) |

The weights, the preferences of the DMs for all perspectives of the model, and the results are shown in Tables 5–7. First, according to Table 3, the DMs specified their preferences related to the variables. Based on equation (2), the linear model of BWM is then created. The HFS numbers are transferred to crisp numbers according to equations (9)-(10). The respective optimization model is shown in Appendix (A.1).

Table 5. The preferences of the DMs for the best perspective.

| Criteria | C1 | C2 | C3 | C4 |
| --- | --- | --- | --- | --- |
| C5 | 4 | 6 | 7 | 5 |
| HFN | (0.3, 0.4, 0.5) | (0.5, 0.6, 0.7) | (0.6, 0.7, 0.8) | (0.4, 0.5, 0.6) |

Table 6. The preferences of the DMs for the worst perspective.

| | C2 | HFN |
| --- | --- | --- |
| C1 | 7 | (0.6, 0.7, 0.8) |
| C3 | 3 | (0.2, 0.3, 0.3) |
| C4 | 5 | (0.4, 0.5, 0.6) |
| C5 | 8 | (0.7, 0.8, 0.9) |

Table 7. Final weights and ranks of the perspectives.

| | C1 | C2 | C3 | C4 | C5 |
| --- | --- | --- | --- | --- | --- |
| Weights | 0.224 | 0.276 | 0.276 | 0.196 | 0.148 |
| Rank | 2 | 1 | 1 | 3 | 4 |

Table 5 presents the comparison of the best perspective of the BSC model with the other perspectives based on the DMs' preferences. In Table 6, the worst perspective of the BSC model is compared with the other perspectives based on the DMs' preferences. In Table 7, the final weights and the ranks of the perspectives are shown. This means that among the five perspectives, the internal process and customer perspectives have the highest priority. Thus, by improving processes and focusing on supply chain aspects, the performance of the organization will be enhanced. Besides, customers have a high impact on the performance of oil and gas companies. The second most important perspective is the financial perspective, which shows that financial issues have strong effects on oil and gas companies. The learning and growth perspective is the third most important; this result indicates that oil and gas companies may focus somewhat less on this issue. The least important perspective is social responsibility, but companies still adhere to environmental and social principles.
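The weights in Table 7 are obtained by solving the BWM linear program referred to above. Since the paper's equation (2) and the Appendix (A.1) model are not reproduced in this excerpt, the sketch below implements the standard min-ξ linear BWM formulation, with hypothetical (already defuzzified) comparison vectors rather than the study's actual data:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_B, a_W, best, worst):
    """Linear BWM: minimize xi subject to |w_best - a_Bj * w_j| <= xi,
    |w_j - a_jW * w_worst| <= xi, sum(w) = 1, w >= 0.
    Decision variables: [w_0 ... w_{n-1}, xi]."""
    n = len(a_B)
    A_ub, b_ub = [], []

    def abs_le_xi(w_part):
        # |w_part . w| <= xi  ->  two inequalities, each with -xi on the left
        for sign in (1.0, -1.0):
            A_ub.append(np.append(sign * w_part, -1.0))
            b_ub.append(0.0)

    for j in range(n):
        if j != best:                       # best-to-others constraints
            row = np.zeros(n)
            row[best], row[j] = 1.0, -a_B[j]
            abs_le_xi(row)
    for j in range(n):
        if j != worst:                      # others-to-worst constraints
            row = np.zeros(n)
            row[j], row[worst] = 1.0, -a_W[j]
            abs_le_xi(row)

    c = np.append(np.zeros(n), 1.0)                    # objective: min xi
    A_eq, b_eq = [np.append(np.ones(n), 0.0)], [1.0]   # weights sum to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[n]              # weights and consistency index xi

# Hypothetical comparison vectors for five perspectives (illustrative only):
a_B = [4.0, 1.0, 2.0, 7.0, 5.0]   # best-to-others, best criterion index 1
a_W = [2.0, 7.0, 4.0, 1.0, 2.0]   # others-to-worst, worst criterion index 3
w, xi = bwm_weights(a_B, a_W, best=1, worst=3)
print(np.round(w, 3), round(xi, 3))
```

A small ξ indicates consistent judgments; in the study, the analogous model is reported to be solved with LINGO (Section 4.4).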
The weights, the preferences of the DMs for the financial performance measurements of the model, and the results are shown in Tables 8–10. The respective optimization model is shown in Appendix (A.2).

Table 8. The preferences of the DMs for the best financial performance measurement.

| Criteria | C11 | C13 | C14 |
| --- | --- | --- | --- |
| C12 | 8 | 4 | 5 |
| HFN | (0.7, 0.8, 0.9) | (0.3, 0.4, 0.5) | (0.4, 0.5, 0.6) |

Table 9. The preferences of the DMs for the worst financial performance measurement.

| | C11 | HFN |
| --- | --- | --- |
| C12 | 8 | (0.7, 0.8, 0.9) |
| C13 | 7 | (0.6, 0.7, 0.8) |
| C14 | 7 | (0.6, 0.7, 0.8) |

Table 10. Final weights and ranks of the financial performance measurements.

| | C11 | C12 | C13 | C14 |
| --- | --- | --- | --- | --- |
| Weights | 0.3 | 0.177 | 0.284 | 0.244 |
| Rank | 1 | 4 | 2 | 3 |

In Table 8, the best performance measurement from the financial perspective is compared with the other performance measurements based on the DMs' preferences. In Table 9, the worst performance measurement from the financial perspective is compared with the other performance measurements based on the DMs' preferences. Table 10 shows the final weights and the ranks of measurements C11–C14.

In the financial perspective, total assets are the first and most important performance measurement. The second most important performance measurement is income; thus, oil and gas companies look forward to increasing it. The next performance measurement is debt, which means companies are attempting to decrease it. The least important performance measurement is total cost, indicating that oil and gas companies are not very concerned with total costs.

The weights, the preferences of the DMs for the customer performance measurements of the model, and the results are shown in Tables 11–13. The respective optimization model is presented in Appendix (A.3).

Table 11. The preferences of the DMs for the best customer performance measurement.

| Criteria | C22 | C23 | C24 |
| --- | --- | --- | --- |
| C21 | 5 | 7 | 9 |
| HFN | (0.4, 0.5, 0.6) | (0.6, 0.7, 0.8) | (0.9, 0.9, 1) |

Table 12. The preferences of the DMs for the worst customer performance measurement.

| | C24 | HFN |
| --- | --- | --- |
| C21 | 9 | (0.9, 0.9, 1) |
| C22 | 8 | (0.7, 0.8, 0.9) |
| C23 | 7 | (0.6, 0.7, 0.8) |

Table 13. Final weights and ranks of the customer performance measurements.

| | C21 | C22 | C23 | C24 |
| --- | --- | --- | --- | --- |
| Weights | 0.21 | 0.261 | 0.209 | 0.32 |
| Rank | 3 | 4 | 2 | 1 |

In Table 11, the best performance measurement from the customer perspective is compared with the other performance measurements based on the DMs' preferences. In Table 12, the worst performance measurement from the customer perspective is compared with the others based on the DMs' preferences. Table 13 shows the final weights and the ranks of measurements C21–C24.

Participation in trade fairs is the most important factor, meaning that attending these fairs is particularly important for oil and gas companies. The second most important performance measurement is sales volume. If oil and gas companies want to increase the export rates of their products, smaller amounts of their products should be sold in their home country. Therefore, decreasing home sales not only helps to increase the export of products but also supports the protection of the environment. The third performance measurement is the responsibility indicator, which shows that oil and gas companies should address customer needs better and faster. The least important performance measurement is customer satisfaction, meaning that customer satisfaction with oil and gas companies does not need to be significantly improved.

The weights, the preferences of the DMs for the internal process performance measurements of the model, and the results are shown in Tables 14–16. The respective optimization model is shown in Appendix (A.4).

Table 14. The preferences of the DMs for the best internal process performance measurement.

| Criteria | C31 | C33 |
| --- | --- | --- |
| C32 | 2 | 7 |
| HFN | (0.1, 0.2, 0.3) | (0.6, 0.7, 0.8) |

Table 15. The preferences of the DMs for the worst internal process performance measurement.

| | C33 | HFN |
| --- | --- | --- |
| C31 | 8 | (0.7, 0.8, 0.9) |
| C32 | 7 | (0.6, 0.7, 0.8) |

Table 16. Final weights and ranks of the internal process performance measurements.

| | C31 | C32 | C33 |
| --- | --- | --- | --- |
| Weights | 0.414 | 0.184 | 0.402 |
| Rank | 1 | 3 | 2 |

In Table 14, the best performance measurement from the internal process perspective is compared with the other performance measurements based on the DMs' preferences. In Table 15, the worst performance measurement from the internal process perspective is compared with the others based on the DMs' preferences. Table 16 shows the final weights and the ranks of measurements C31–C33.

The most important performance measurement of the internal process perspective is the cost of R & D.
This priority points out that these companies need to spend more money on R & D in order to create new products, services, etc. The improvement of the supply chain is the second most important performance measurement. It is very important because any interruption in the supply chain may cause various problems, such as interruptions in public or private transportation, problems in production, and subsequently problems in social conditions. It is also of predominant importance for the economic success of a company. The least important performance measurement is the number of improvement processes.

The weights, the preferences of the DMs for the learning and growth performance measurements of the model, and the results are shown in Tables 17–19. Appendix (A.5) presents the respective optimization model.

Table 17. The preferences of the DMs for the best learning and growth performance measurement.

| Criteria | C41 | C42 | C44 |
| --- | --- | --- | --- |
| C43 | 5 | 8 | 3 |
| HFN | (0.4, 0.5, 0.6) | (0.7, 0.8, 0.9) | (0.2, 0.3, 0.4) |

Table 18. The preferences of the DMs for the worst learning and growth performance measurement.

| | C44 | HFN |
| --- | --- | --- |
| C41 | 9 | (0.9, 0.9, 1) |
| C42 | 8 | (0.7, 0.8, 0.9) |
| C43 | 8 | (0.7, 0.8, 0.9) |

Table 19. Final weights and ranks of the learning and growth performance measurements.

| | C41 | C42 | C43 | C44 |
| --- | --- | --- | --- | --- |
| Weights | 0.287 | 0.229 | 0.172 | 0.312 |
| Rank | 2 | 3 | 4 | 1 |

In Table 17, the best performance measurement from the learning and growth perspective is compared with the remaining criteria based on the DMs' preferences. Table 18 shows the respective comparison of the worst learning and growth performance measurement with the other performance indicators based on the DMs' preferences. The final weights and ranks of measurements C41–C44 are presented in Table 19.

The number of employee suggestions is the first performance measurement of the learning and growth perspective. It shows that these companies should pay sufficient attention to the suggestions of their employees. Motivation is the second most important performance measurement, indicating that the focus of the oil and gas companies should also be on motivation. The rate of absence is the third performance measurement; oil and gas companies should analyze the reasons and factors behind their employees' absences and then develop appropriate improvement programs. Training hours are the least important performance measurement, meaning that for these companies it is not important to pay more attention to the training and further education of their employees.

The weights, the preferences of the DMs for the social responsibility performance measurements of the model, and the results are shown in Tables 20–22. The respective optimization model is shown in Appendix (A.6).

Table 20. The preferences of the DMs for the best social responsibility performance measurement.

| Criteria | C51 | C53 |
| --- | --- | --- |
| C52 | 8 | 7 |
| HFN | (0.1, 0.2, 0.3) | (0.6, 0.7, 0.8) |

Table 21. The preferences of the DMs for the worst social responsibility performance measurement.

| | C51 | HFN |
| --- | --- | --- |
| C52 | 8 | (0.7, 0.8, 0.9) |
| C53 | 5 | (0.4, 0.5, 0.6) |

Table 22. Final weights and ranks of the social responsibility performance measurements.

| | C51 | C52 | C53 |
| --- | --- | --- | --- |
| Weights | 0.4 | 0.271 | 0.329 |
| Rank | 1 | 3 | 2 |

In Table 20, the best performance measurement from the social responsibility perspective is compared with the other performance measurements based on the DMs' preferences.
In Table 21, the worst performance measurement from the social responsibility perspective is compared with the others based on the DMs' preferences. Table 22 shows the final weights and the ranks of measurements C51–C53.

The number of accepted international standards is the most important performance measurement. It shows that standards relating to management, quality, and environmental aspects are very important for these companies and that they must focus on them. The second performance measurement is the budget for the protection of the environment. It shows that it is of great importance for oil and gas companies to allocate budgets for environmental protection and that these should possibly be increased. The budget for improving social aspects of society is the final and least important performance measurement. It illustrates that oil and gas companies do not need to care much about the respective budgets (e.g., the support of football teams).

The final weights of the model are shown in Table 23.

Table 23. Final weights of the performance measurements.

| Perspective | Perspective weight | Performance measurement | Relative weight | Final weight | Ranking |
| --- | --- | --- | --- | --- | --- |
| C1 | 0.224 | C11 | 0.300 | 0.067 | 5 |
| | | C12 | 0.177 | 0.040 | 16 |
| | | C13 | 0.284 | 0.064 | 6 |
| | | C14 | 0.244 | 0.055 | 12 |
| C2 | 0.276 | C21 | 0.210 | 0.058 | 9 |
| | | C22 | 0.261 | 0.072 | 4 |
| | | C23 | 0.209 | 0.058 | 10 |
| | | C24 | 0.320 | 0.088 | 3 |
| C3 | 0.276 | C31 | 0.414 | 0.114 | 1 |
| | | C32 | 0.184 | 0.051 | 13 |
| | | C33 | 0.402 | 0.111 | 2 |
| C4 | 0.196 | C41 | 0.287 | 0.056 | 11 |
| | | C42 | 0.229 | 0.045 | 15 |
| | | C43 | 0.172 | 0.034 | 18 |
| | | C44 | 0.312 | 0.061 | 7 |
| C5 | 0.148 | C51 | 0.400 | 0.059 | 8 |
| | | C52 | 0.271 | 0.040 | 17 |
| | | C53 | 0.329 | 0.049 | 14 |
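The "final weight" column in Table 23 is the product of each perspective's weight (Table 7) and the measurement's relative weight within that perspective. A minimal sketch reproducing the published figures:

```python
# Global weights of Table 23: relative weight within a perspective
# multiplied by that perspective's weight (Table 7).
perspective_w = {"C1": 0.224, "C2": 0.276, "C3": 0.276, "C4": 0.196, "C5": 0.148}
relative_w = {
    "C11": 0.300, "C12": 0.177, "C13": 0.284, "C14": 0.244,
    "C21": 0.210, "C22": 0.261, "C23": 0.209, "C24": 0.320,
    "C31": 0.414, "C32": 0.184, "C33": 0.402,
    "C41": 0.287, "C42": 0.229, "C43": 0.172, "C44": 0.312,
    "C51": 0.400, "C52": 0.271, "C53": 0.329,
}
final_w = {m: round(perspective_w[m[:2]] * w, 3) for m, w in relative_w.items()}
ranking = sorted(final_w, key=final_w.get, reverse=True)
print(final_w["C31"])  # 0.114 -> rank 1, as in Table 23
print(ranking[:3])     # ['C31', 'C33', 'C24'] -> ranks 1-3, as in Table 23
```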
## 6. Conclusions

Today, many companies need to measure their performance because of the increasing importance of efficiency under strong competition. For managers, it would be ideal to supervise performance through a dashboard which, as in an airplane, shows all relevant aspects such as speed, altitude, and fuel. Performance measurement should transparently show where a company stands and what its strengths and weaknesses are. There are many methods for measuring the performance of companies. One of the well-known methods is the balanced scorecard. This method covers all relevant aspects for the evaluation of an organization according to performance criteria related to, e.g., the finance, customer, internal process, and learning and growth perspectives. As companies develop further, new concepts are added to the BSC's perspectives; one of them is social responsibility. Since the introduction of the BSC, many companies have attempted to implement it in their organizations. However, in many cases they fail, one reason being that they need to prioritize the perspectives and performance measurements of the BSC. Many methods are available for prioritization. One of them is the Best-Worst Method (BWM), which belongs to the family of pairwise comparison methods and has some advantages over other pairwise comparison methods, such as fewer comparisons and better consistency. In our paper, we suggested using this method in a fuzzy form, the hesitant fuzzy best-worst method (HFBWM), to account for uncertainties.

The oil and gas industry is the first and most important industry of Iran. Most of Iran's budget depends on oil and gas, and many people use the products of these companies. The oil and gas industry also plays a key role in the production of electricity for both households and factories. For these reasons, it is particularly important for these companies to focus on their performance.

The results of this research are based on the DMs' judgment. As pointed out in Section 4.5, we need to elicit the preferences of the DMs as required by BWM. The research and results are completely based on the viewpoints of the DMs, which were obtained from questionnaires. For filling in these questionnaires, 12 DMs were selected. DMs in this research were required to have deep knowledge of the oil and gas industry, to hold senior positions in the field, and to have more than 25 years of experience. This selection helps ensure that the DMs can reliably specify their preferences based on their experience; as a consequence, the results of BWM can also be considered reliable.

In this paper, a combination of BWM and BSC is applied to design a new and accurate model for the BSC. First, performance measurements are extracted from the strategic planning of oil and gas companies. These are then prioritized based on BWM. Finally, the ranking shows which performance indicators are important and which are less important. The result indicates that, among the five perspectives, customers and internal processes are the most important ones. Customers of the oil and gas industry are divided into internal and external customers. When sufficient focus and facilities are provided for them, the number of external customers may increase significantly, along with the respective revenues, which will help the progress of the country. Although this industry is governmental and does not have many competitors, insufficient attention to internal customers leads to problems with transfers, the provision of food, and other basic human needs. Another highly important perspective is internal processes: when companies focus on improving their processes, costs decrease, while the speed of serving customers and the resulting revenues increase dramatically. In prioritizing the BSC performance measurements, the cost of R & D is the most important among the 18 performance measures. It shows that oil and gas companies must focus on increasing their R & D spending for the development and implementation of new customer services. The least important performance measurement is the number of training hours, suggesting that oil and gas companies already pay sufficient attention to this aspect, although further investment in training could still decrease costs and increase effectiveness.

Let us note that the results of Varmazyar et al. [32] contradict our research. In Varmazyar et al. [32], the financial perspective is the most important, but in our research it has the second highest priority. In addition, internal processes are the least important in Varmazyar et al. [32], whereas this perspective is the most important in our findings. Singh et al. [73] demonstrate that the customer perspective is the most important in their research, which agrees with the outcome of our research; in their study, however, the learning and growth perspective has the lowest priority. Lu et al. [35] illustrate that social responsibility has the highest priority, while it is the least important in our research; internal processes are the least important in Lu et al. [35] but the most important perspective in our study.
## 6.1. Limitations and Future Research

A limitation of this study is the number of indicators that can be analyzed: an increased number would require a higher effort for ranking them by BWM. In particular, the effort required of the DMs to respond to the questionnaires would lead to difficulties. As the DMs work in different cities across the country, accessing them proved difficult. In addition, repeated discussions were needed to familiarize them with the concepts used and to fill in the questionnaires. For future research, other methods based on uncertainty should be used for prioritizing the perspectives and indicators of the BSC and for designing a road map for allocating limited resources to high-priority perspectives and indicators. Moreover, for the implementation of this method in an uncertain environment, other fuzzy numbers, such as Z-numbers or D-numbers, could be used.

---

*Source: 1019779-2022-11-17.xml*
# Review of Ethnobotanical, Phytochemical, and Pharmacological Study of Thymus serpyllum L.

**Authors:** Snežana Jarić; Miroslava Mitrović; Pavle Pavlović
**Journal:** Evidence-Based Complementary and Alternative Medicine (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101978

---

## Abstract

Thymus serpyllum L. (wild thyme) is a perennial shrub, native to areas of northern and central Europe. Its aerial parts are most frequently used in ethnomedicine (mainly for treating illnesses and problems related to the respiratory and gastrointestinal systems), although recently its essential oils are becoming more popular as an important plant-derived product. The composition of these oils is affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions. Wild thyme essential oil has an ever-growing number of uses in contemporary medicine due to its pharmacological properties: antioxidative, antimicrobial, and anticancerogenic activities. The antioxidative and antimicrobial properties of the essential oil are related to the synergistic and cumulative effect of its components. In terms of antitumor and cytotoxic activity, further research into the effects of the essential oil is necessary, aimed at improving its cytotoxic effects, on the basis of which appropriate medicines can be formulated. Due to its pharmacological properties, the essential oil of wild thyme, a plant used in traditional medicine, represents an important natural resource for the pharmaceutical industry. In addition, it can be a source of natural antioxidants, nutritional supplements, or components of functional foods in the food industry.

---

## Body

## 1. Introduction

Thymus serpyllum L. (wild thyme) belongs to the family Lamiaceae, which according to the World Checklist contains 7534 species [1], including the genus Thymus L. with 220 species [2]. This genus is very complex from the taxonomical and systematic points of view, demonstrating significant polymorphism not only in morphological characteristics but also in the composition of ethereal oils. T. serpyllum L. is a perennial shrub, native to regions of northern and central Europe (Figure 1). It is known as Breckland thyme, wild thyme, or creeping thyme; its specific name "serpyllum" is derived from the Greek word meaning "to creep," because of wild thyme's trailing habit. It has a long stem, which is woody at the base but with a sterile leaf rosette at the top. Leaves are oval (rounded at the top, tapered at the base), 4–6 mm long, 2–4 mm wide, and glabrous on the face and underside, while at the base along the edge they have long trichomes, a prominent central vein, and less prominent lateral veins (Figure 2). Inflorescences are 4–7 cm tall and form in a series along a low-lying stem, with a uniform layer of trichomes on all sides. Flowers are located at the top of the stems and form a spherical (or more rarely elongated) verticillaster [3]. It flowers from May to September. Wild thyme grows best on dry, stony ground, open sandy heaths, and grasslands.

Figure 1: Map of distribution, Thymus serpyllum L. (Source: Botanical Museum, Helsinki, Finland, 2014; data from BGBM, Berlin-Dahlem, Germany.)

Figure 2: Thymus serpyllum L.

The medicinal properties of wild thyme have been extensively used in official and traditional medicine for many years and centuries, respectively.
Fresh and dried herbs, particularly the upper part of the aboveground portion of wild thyme collected when the plant is in bloom, possess certain healing properties due to the presence of significant amounts of essential oils. Recent years have seen increased interest in ethnobotanical, phytochemical, and pharmacological investigations into the medicinal properties of the species T. serpyllum, which serves as a high-quality source for many different formulations in the pharmaceutical and chemical industries. The herb is used in preparations of natural herbal remedies, such as syrups, tinctures, infusions, decoctions, tea, and oil. The increase in multidrug-resistant strains of pathogenic microorganisms has led to extensive phytochemical and pharmacological studies of T. serpyllum as an important source of medicinal substances with antioxidant, antimicrobial, antitumor, and cytotoxic properties, their effective medicinal application, and their use in the pharmaceutical, food, and cosmetic industries. In addition, the increased pressure from consumers for natural products as supplements and their clinical application instead of synthetic chemicals, which are generally perceived by the public as being more toxic, has also stimulated research into many medicinal and aromatic plants, among which T. serpyllum occupies a very important place.

## 2. Traditional Uses and Ethnopharmacology

The widespread use of different species of the Thymus genus dates back to ancient Egypt, where they were used for making perfumed balms, for embalming, and for medical purposes. The Greeks and Romans used them in the same way, as we know from the writings of Pliny (1st century), Dioscorides (2nd century), and Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus, 1493/1494–1541). "Everyone knows thyme," wrote the physician Dioscorides in the first line of his discourse on the pharmacological value of this very aromatic herb, a subject supported by more than three millennia of experience. According to Dioscorides, thyme was used to treat asthma and loosen congestion in the throat and stomach [4]. In terms of geography, the use of these plants spread no further north than the Alps. The first recorded information on the medicinal properties of thyme north of the Alps can be found in the manuscript Physica by the abbess Hildegard von Bingen (1098–1179) and in the works of Albertus Magnus (1193–1280). This continued in the 16th century with the Herbal by the herbalist P. Mathiolus (1505–1577), which first mentions the strength and efficacy of thyme. Since then, numerous therapeutic properties have been attributed to thyme, some on an empirical basis, others more debatable [5]. However, the spread of thyme throughout Europe is thought to be due to the Romans, as they used it to purify their rooms and to "give an aromatic flavour to cheese and liqueurs" [6]. In the European Middle Ages, the herb was placed beneath pillows to aid sleep and ward off nightmares [7]. In this period, women would often also give knights and warriors gifts that included thyme leaves, as it was believed to bring courage to the bearer. The pharmacological manuscripts of the Chilandar Medical Codex (15th-16th centuries) mention the use of wild thyme for the treatment of headaches caused by colds, laryngitis, and diseases of the digestive organs and as an antitussive [8]. During the Renaissance period (16th and 17th centuries), wild thyme was used internally to treat malaria and epilepsy [9].

The aerial part of T.
serpyllum has a long tradition of being used in many countries of Europe [10] and worldwide as an anthelmintic, a strong antiseptic, an antispasmodic, a carminative, a deodorant, a diaphoretic, a disinfectant, an expectorant, a sedative, and a tonic [11]. It is most frequently used for treating illnesses and problems related to the gastrointestinal and respiratory systems [12–19]. In the Western Balkans, this species has an important use as a sedative [16, 20], to improve blood circulation, and as an anticholesterolemic and immunostimulant [21]. In the alpine region of northeastern Italy, an infusion or decoction of the plant's aerial parts (in the flowering stage) is used in the treatment of rheumatism [22]. Gairola et al. mention the use of wild thyme in some regions of India for treating menstrual disorders [23], while Shinwari and Gilani state its use as an anthelmintic in Northern Pakistan [24]. T. serpyllum is also used externally as an antiseptic, to treat wounds [14], to combat eczema [13], or to reduce swelling [25]. In some areas of Italy, wild thyme is used as an important herb in cookery, mainly for flavouring meat or fish [26]. In addition, ethnobotanical studies in Catalonia and the Balearic Islands have documented the use of T. serpyllum in ethnoveterinary medicine, particularly as an antidiarrheal [27]. The British Herbal Pharmacopoeia classifies this species as a medicinal plant, and among the indications for its use it mentions bronchitis, bronchial catarrh, whooping cough, and sore throats; whooping cough is singled out as a specific indication. In the monograph, recommendations are given for combining it with other plants (Coltsfoot, Tussilago farfara L., or Horehound, Marrubium vulgare L.). As a gargle for acute pharyngitis, it is recommended in combination with the leaves of blackberry (Rubus fruticosus L.) or Echinacea (Echinacea sp.) [28]. According to the PDR for Herbal Medicines, wild thyme is a component in various standardized preparations with antitussive effects, while alcohol extracts are integral components of drops used for coughs and colds [29]. The recommended daily dose of this drug is 4–6 g.

## 3. Pharmacological Properties

Many studies on the chemical composition and yields of the essential oils from plants belonging to the Thymus genus have been conducted, including those from T. serpyllum. The chemical composition and yield of the essential oil of T. serpyllum are considered to be affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions [35]. As such, its content varies from 0.1 to 0.6% [29, 36, 37] or even from 0.1 to 1% [38]. Analysis of the yield of essential oil of T. serpyllum in Estonia revealed its content to be between 0.6 and 4.4 mL/kg; only in one locality did it amount to 3 mL/kg [39], which is in accordance with European Pharmacopoeia standards. Similarly, the essential oil content of wild thyme from 5 regions in Armenia ranged between 4.5 and 7.4 mL/kg [40]. In samples of wild thyme from Pakistan, yields of 0.48% were achieved [41], or 29 g/kg [42]. In Serbia, the yields of essential oil from samples of this species growing on Mt. Kopaonik were 3 mL/kg (~0.3%) [43] and 4.1 g/kg (~0.1%) in samples from Mt. Pasjača [44].

Over the last two decades, more and more studies have researched the chemical composition of T. serpyllum essential oil (Table 1) [39, 41–50].
It has been established that plant species of the Thymus genus are characterized by chemical polymorphism, meaning that several chemotypes exist (geraniol, germacrene D, citral, linalool, (E)-caryophyllene, α-terpinyl acetate, carvacrol, and thymol) [30, 33, 51]. According to the PDR for Herbal Medicines, the chief component of the essential oil of T. serpyllum is carvacrol, while it also contains borneol, isobutyl acetate, caryophyllene, 1,8-cineole, citral, citronellal, citronellol, p-cymene, geraniol, linalool, α-pinene, γ-terpinene, α-terpineol, terpinyl acetate, and thymol in relatively high concentrations [29]. Carvacrol and thymol are isomers, belonging to the group of monoterpenic phenols with powerful antiseptic properties. They are absorbed very quickly after application and metabolise quickly, as they are not subject to first-phase biotransformation; instead, their conjugation with sulphuric and glucuronic acids occurs directly. They are excreted via urine within 24 hours, mainly in the form of conjugates and less so in their unchanged form [32]. According to the European Pharmacopeia, the herb T. serpyllum must contain at least 1.2% essential oil, in which the total content of carvacrol and thymol is 40% or higher [31]. In addition to essential oil, wild thyme also contains flavonoids, phenol carboxylic acids and their derivatives, triterpenes, and tannins [29, 52]. Besides carvacrol and thymol, Kulišić et al. also include γ-terpinene and p-cymene among the main components of the essential oil [53]. However, research into the composition and concentration of compounds in the essential oil of T. serpyllum in different regions of the world has revealed significant differences. For example, the content of essential oil in populations of wild thyme in the Altai Mountains (Russia) is 0.5–1%, but its chemical composition differs significantly depending on the altitude. In the village of Kolyvan (150 m a.s.l.), the principal components of the oil are β-myrcene (4.0%), p-cymol (3.8%), 1,8-cineole (14.0%), cis-β-terpineol (8.2%), camphor (4.0%), and trans-nerolidol (29.8%), while in the same region, in the village of Mendur-Sokkon (500–750 m a.s.l.), the following were identified as the main components: p-cymol (14.5%), 1,8-cineole (5.6%), γ-terpinene (17.2%), and carvacrol (29.6%) [45]. The essential oil from both areas contained less than 2% thymol. Furthermore, in the essential oil of T. serpyllum growing wild in Lithuania, the presence of thymol and carvacrol was not established [54]. Although noted as dominant components in the literature, thymol and carvacrol are not the principal components of wild thyme essential oil in Estonia either [40]. Differences in the chemical composition of essential oils have been established in other localities, too: the principal components of the essential oils of wild thyme from Mt. Kopaonik (Serbia) are trans-caryophyllene (27.7%), γ-muurolene (10.5%), and α-humulene (7.5%) [33], while on Mt. Pasjača (Serbia) the dominant components of the essential oil are trans-nerolidol (24.2%), germacrene D (16.0%), thymol (7.3%), δ-cadinene (3.7%), and β-bisabolene (3.3%) [32]. The essential oil of T. serpyllum growing in Pakistan contains mainly thymol (53.3%) and carvacrol (10.4%) [41], while Hussain et al., also in Pakistan, but from a different area (the Gilgit Valley), found that the chemical composition of the essential oil was dominated by carvacrol (44.4%) and o-cymene (14.0%) [42]. De Lisi et al.
studied the composition of the essential oils of various ecotypes from the region of southern Italy and established that there are differences in the composition between biotypes, too: in two biotypes (S2 and S3), the concentration of geraniol was highest (35% and 22%, resp.), while in biotype S1 thymol is predominant (32.6%) [55]. Ložiene et al. carried out research on 26 samples from 14 habitats in Lithuania and found that there were wide variations in the composition of the main components of the oils [56]. They recorded the existence of five chemotypes (1,8-cineole, germacrene B, (E)-β-ocimene, α-cadinol, andcis-p-ment-2-en-1-ol), which are directly connected to the oil composition among the studied varieties and chemotypes.Table 1 The chemical composition of the essential oil ofThymus serpyllum L. in some regions. Compound Regions/locality Estonia [30] Serbia (Pasjača) [31] Serbia (Kopaonik) [32] Pakistan [33] “Natures” company and local Greek pharmacy in Thessaloniki [34] Yield percentage Monoterpene hydrocarbons α-Pinene 0,649 0,51 6,9 6,06 2,0 Tricyclene 0,062 0,1 Camphene 2,170 0,35 1,0 0,1 2,4 Sabinene 0,187 0,21 0,8 β-Pinene 0,374 0,67 1,8 1,43 0,2 α-Phellandrene 0,016 0,5 0,05 0,2 Myrcene 6,152 1,64 α-Terpinene 0,041 0,21 1,1 p-Cymene 0,410 2,11 2,0 8,9 o-Cymene 14,0 β-Cymene 0,19 Limonene 0,352 1,03 2,7 0,6 trans-β-Ocimene 0,986 1,55 1,5 0,1 cis-β-Ocimene 0,069 0,7 α-Thujene 0,047 γ-Terpinene 0,147 1,48 1,4 0,02 7,2 Oxidized monoterpenes 1,8-Cineole 1,247 1,38 2,5 3,44 0,4 α-Thujone 0,046 1,1 cis-Thujone 1,89 trans-Thujone 0,084 0,21 cis-Sabinene hydrate 0,047 0,5 Linalool 3,000 0,72 1,2 2,02 2,4 δ3-Carene 0,1 Terpinolene 0,090 α-Campholen 0,24 Camphor 3,545 0,99 3,6 0,7 cis-Chrysanthenol 0,4 Borneol 4,667 0,56 2,45 6,0 Menthol 0,26 Isoborneol 0,028 p-Mentha-3,8-diene 0,18 Terpinene-4-ol 0,594 0,4 0,7 α-Terpineol 2,490 0,52 6,47 0,1 trans-Sabinene hydrate 0,087 cis-Linalol oxide 0,100 (Z)-p-Mentha-2,8-dien-1-ol∗1118 0,017 (Z)-p-Mentha-2-en-1-ol∗ 0,003 cis-Sabinol∗ 0,206 p-Cymen-8-ol∗ 0,037 trans-Sabinol∗ 0,0017 (E)-Dihydrocarvone 0,079 (E)-Carveol 0,088 Nerol 0,170 Neral 0,009 Carvone 0,055 Thymol methyl ether 0,29 3,8 Fenchyl alcohol 0,14 Carvacrol methyl ether 0,49 Thymoquinone 0,43 Geraniol 0,233 1,42 Geranial 0,022 0,5 1-Decanol 0,118 cis-Dihydrocarvone 0,006 0,1 Bornyl acetate 1,291 0,27 7,0 Isobornyl acetate 0,1 β-Citronellol 1,15 Thymol 0,958 7,26 5,6 38,5 Thymol acetate 2,8 Carvacrol 0,620 0,61 44,4 4,7 Carvacrol acetate 0,3 Geranyl acetate 3,303 0,39 Linalyl acetate 1,170 0,38 (E)-Sabinyl acetate 0,140 Terpinyl acetate 0,016 1,01 Sesquiterpene hydrocarbons α-Copaene 0,191 0,27 β-Copaene 0,42 β-Bourbonene 0,350 0,87 α-Selinene 0,05 α-Ylangene 0,222 β-Longipinene 0,17 Longifolene 0,56 β-Cubebene 0,64 α-Cubebene 0,29 α-Guaiene∗ 0,021 Selina-3,7(11)-dien 0,600 Germacrene B 0,297 δ-Elemene 0,013 β-Elemene 0,076 0,71 β-Caryophyllene 6,222 2,76 5,25 1,3 Longicyclene 0,14 Amorphene 0,04 α-Humulene 0,300 0,4 7,5 1,11 Aromadendrene 0,1 Alloaromadendrene 0,285 0,34 0,02 0,1 (E)-β-Farnesene 0,092 2,28 γ-Muurolene 0,019 0,54 10,5 Germacrene D 5,495 16,02 Germacrene D-4-ol 0,088 1,1 epi-Sesquiphellandrene 0,041 0,22 Bicyclogermacrene 0,200 0,63 0,2 Valencene 0,04 α-Muurolene 0,237 0,52 1,6 0,05 β-Bisabolene 0,873 3,33 1,9 1,0 β-Bisabolol 0,100 2,6 trans-Calamenene 0,97 Calamenene 0,41 δ-Cadinene 0,655 3,73 1,3 0,2 α-Cadinene 0,21 γ-Cadinene 0,320 0,1 α-Calacorene 0,21 Oxidized sesquiterpenes cis-Sesquisabinene hydrate 0,29 Elemol 0,29 trans-Nerolidol 24,527 24,2 2,4 Caryophyllene 
oxide 10,288 1,12 1,3 2,32 0,4 Thujopsene-2-α-ol 0,34 Viridiflorol 0,170 0,61 β-Copaen-4-α-ol 0,24 Humulene-epoxide II 0,106 0,46 β-Oplopenone 0,39 1,10-Di-epi-cubenol 0,43 Epi-α-cadinol (τ-cadinol) 1,47 1,1 0,1 Spathulenol 1,220 0,05 0,3 Cadrol Ledol∗ 0,382 δ-Cadinol 0,331 T-Cadinol 0,773 T-Muurolol 0,006 α-Farnesol 0,561 α-Santanol 0,197 α-Bisabolol 0,740 Hedycaryol∗ 0,120 α-Guaiol 0,1 α-Muurolol 0,57 α-Cadinol 0,570 2,45 1,6 Helifolenol A 0,37 α-Eudesmol 0,2 Eudesm-3-en-6-ol 0,6 Germacra-4(15),5,10(14)-trien-1-α-ol 1,54 Others 1-Octen-3-ol 0,724 0,24 0,2 3-Octanol 0,091 0,24 3-Octanone 0,727 6,6 2-Heptenol 0,071 1-Dodecanol 0,152 Neryl acetate 0,074 α-Ionone 0,040 n-Heptadecane 0,043 n-Nonadecane 0,049 n-Heneicosane 0,079

Due to the great variability in the content of the same components in T. serpyllum, the composition of the essential oil cannot be used as a reliable chemotaxonomic marker. However, its composition is of great importance when it comes to medical and cosmetic uses, as well as in industries where essential oils are used as raw materials. The pleasant fragrance of wild thyme essential oil is mainly due to the phenolic monoterpenoids thymol and carvacrol, which inhibit lipid peroxidation and demonstrate powerful antimicrobial properties against various kinds of microorganisms [34]. Numerous compounds in the composition of the essential oil are natural antioxidants that act in metabolic response to the endogenous production of free radicals and other oxidant species. These responses are due to ecological stress or are promoted by toxins produced by pathogenic fungi and bacteria [57].

### 3.1. Antioxidant Activity

The number of published works studying the antioxidant activity of T. serpyllum is relatively small, but some have evaluated it and compared it with other species [42, 44, 46]. For example, Hussain et al. noted that T. serpyllum essential oil demonstrated better radical scavenging activity (IC50: 34.8 mg/mL) than Thymus linearis (Benth. ex Benth) essential oil (IC50: 42.9 mg/mL) [42]. In terms of inhibition of linoleic acid peroxidation, T. serpyllum essential oil again exhibited better antioxidant activity than T. linearis essential oil (84.2% and 76.0%, resp.). They also established that thymol, the major component in the essential oil of both species, demonstrated better antioxidant activity than the entire oil, whereas carvacrol, a major component of T. serpyllum essential oil, exhibited weaker antioxidant activity than the oil itself. Petrović et al. studied the antioxidant capacity of wild thyme essential oil in terms of its ability to neutralise DPPH (1,1-diphenyl-2-picryl-hydrazyl) free radicals, that is, the ability of the components of the essential oil to donate hydrogen atoms and transform DPPH into its reduced form DPPH-H [44]. Their results showed that the essential oil exhibited significantly better antioxidant activity when compared to synthetic antioxidants like butylated hydroxyanisole (BHA) and, in particular, butylated hydroxytoluene (BHT). Research into the antioxidant capacity of the essential oil of T. serpyllum growing in Croatia revealed that it demonstrates poorer ability to neutralise DPPH radicals than BHA, BHT, tocopherol, ascorbic acid, and the essential oil of T. vulgaris [46]. Hussain et al. also established that the essential oil of T. serpyllum growing in Pakistan exhibited less ability to neutralise DPPH radicals than BHT and thymol [42]. Mihailović-Stanojević et al.
proved the pronounced antioxidant activity of wild thyme aqueous extracts containing phenols and flavonoids in terms of their high antioxidant capacity and their potential antihypertensive effect on spontaneously hypertensive and normotensive rats [58]. They showed that a bolus injection of this extract (100 mg/kg body weight) decreases systolic and diastolic blood pressure and total peripheral resistance in the former, without affecting these parameters in the latter. The predominant phenolic compounds were rosmarinic and caffeic acids.

The antioxidant activity exhibited by the tested essential oils justifies the traditional uses of wild thyme. Hazzit et al. found that the antioxidant potential should be attributed to the phenol constituents of the essential oil [59]. The oil's chemoprotective efficacy against oxidative stress-mediated disorders is mainly due to its free radical scavenging and metal chelating properties. However, the antioxidant activity of the essential oil of T. serpyllum is not due to the mere presence of certain dominant components but is the result of the synergism of a larger number of components, including some which are present only in small amounts (trans-nerolidol, germacrene D, δ-cadinene, and β-bisabolene) [34].

### 3.2. Antimicrobial Activity

Many scientists ascribe the antimicrobial activity of species from the Thymus genus to the high concentration of carvacrol in their essential oil [60–62]. Carvacrol has biocidal properties, which lead to bacterial membrane perturbations. Moreover, it may cross cell membranes, reaching the interior of the cell and interacting with intracellular sites vital for antibacterial activities [63, 64]. The biological precursor of carvacrol and another significant component of the plant extracts, p-cymene, has very weak antibacterial properties, but it most likely acts in synergy with carvacrol by expanding the membrane, causing it to become destabilized [65].

An antimicrobial assay revealed that ethanol and aqueous extracts of T. serpyllum demonstrated inhibitory activity against the tested organisms Staphylococcus aureus, Bacillus subtilis, Escherichia coli, and Pseudomonas aeruginosa [66]. A comparative analysis by Lević et al. on the effects of the essential oils of oregano (Origanum majorana L.), thyme (Thymus vulgaris L.), and wild thyme on the growth of the bacterial species Proteus mirabilis, Escherichia coli, Salmonella choleraesuis, Staphylococcus aureus, and Enterococcus faecalis revealed that oregano essential oil demonstrated the greatest antimicrobial activity, while wild thyme essential oil had the least inhibitory effect on the growth of these microorganisms [67]. It was research by Ahmad et al. that showed that wild thyme essential oil has bactericidal, but not bacteriostatic, effects on the bacterial species Escherichia coli, Salmonella Typhi, Shigella ferarie, Bacillus megaterium, Bacillus subtilis, Lactobacillus acidophilus, Micrococcus luteus, Staphylococcus albus, Staphylococcus aureus, and Vibrio cholerae [41]. Sokolić-Mihalak et al. established that the antimicrobial activity of wild thyme essential oil can be attributed to the effects of its phenolic compounds on inhibiting the growth and mycotoxin production of the following species: Aspergillus ochraceus, A. carbonarius, and A. niger, with inhibition totalling 60% [68]. The inhibitory activity of essential oils depends on the conditions and duration of incubation, so a greater inhibitory effect is achieved thanks to the synergistic and cumulative effects of the other components of the essential oil. Nikolić et al.
found a positive correlation between the antimicrobial activity of selected essential oils of T. serpyllum, Thymus algeriensis, and T. vulgaris and their chemical composition, which indicates that the activity may be ascribed to the phenolic compound thymol, because it occurs in high proportions in these oils [50]. This confirms earlier findings that thymol is a good antimicrobial agent [69, 70]. However, although T. serpyllum essential oil had the lowest thymol content, it demonstrated the greatest antimicrobial activity, which confirms the importance of the synergistic effect of the other components.

### 3.3. Antitumor and Cytotoxic Activity

As one of the principal constituents of thyme essential oil, carvacrol has important in vitro cytotoxic effects on tumour cells [71]. Experiments have confirmed that carvacrol from Thymus algeriensis and different wild varieties of Moroccan thyme demonstrates significant cytotoxic activity against P388 leukaemia in mice [72] and against Hep-2 cells [73]. However, according to Tsukamoto et al., thymol, which is also one of the major constituents in the essential oils of T. serpyllum, T. algeriensis, and T. vulgaris, might be involved in stimulating the active proliferation of pulpal fibroblasts [74]. By comparing the antitumor activities of the essential oils of the species mentioned above on the growth of four human tumour cell lines, Nikolić et al. confirmed that it is the essential oil of T. serpyllum that exhibits the greatest antitumor activity [50]. Namely, T. serpyllum was the most potent in all the tested cell lines and contains thymol as its major constituent, a phenolic compound known in the literature for its antiproliferative activity [75].

Of the 21 compounds isolated, carvacrol, thymol, and thymoquinone are the major components of the hexane extract of Thymus serpyllum essential oil, and the hexane extract of this species is cytotoxic to 6 cancer cell lines (MDA-MB-231, MCF-7, HepG2, HCT-116, PC3, and A549). It demonstrated the best anticancer activity in HepG2 (liver carcinoma), followed by HCT-116 (colon cancer), MCF-7 (breast cancer), MDA-MB-231 (breast cancer), PC3 (prostate cancer), and A549 (lung carcinoma), as shown by Baig et al. [76].
## 4. Conclusions

T. serpyllum has a tradition stretching back many centuries of being used in ethnomedicine as an aromatic, analgesic, antiseptic, diaphoretic, anthelmintic, expectorant, diuretic, spasmolytic, carminative, sedative, stimulant, and tonic.
The aerial part of the plant has traditionally been most frequently used in the treatment of illnesses and problems related to the respiratory, digestive, and urogenital tracts. However, the use of the essential oil, as one of the important plant-derived products of this species, is increasing in contemporary medicine due to its pharmacological properties. The chemical composition and yield of the essential oil of T. serpyllum are considered to be affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions. Therefore, its composition is of great importance when it comes to medical and cosmetic uses, as well as in industries where essential oils are used as raw materials. Recent studies have revealed the pronounced antioxidant and antimicrobial properties of the essential oil, which are based on the synergistic and cumulative effect of its components. At the same time, T. serpyllum essential oil demonstrated better overall antioxidant activity than the oils of other Thymus species, since its major component, thymol, shows stronger antioxidant activity than the entire oil, whereas carvacrol, a major component of other Thymus species' essential oils, shows weaker activity. Future research should seek to determine to what extent thymol or carvacrol is individually responsible for cytotoxicity and how much of it results from the combination with other constituents of the essential oil. In terms of its antitumor and cytotoxic activities, it is our opinion that further research is needed into the effects of the hexane extract, aimed at improving its cytotoxic effects on cancer cell lines, particularly liver cancer, on the basis of which appropriate medicines can be formulated.

Due to its pharmacological characteristics, the essential oil of wild thyme represents an important natural resource for the pharmaceutical industry. Additionally, it is a source of natural antioxidants, nutritional supplements, or components of functional foods in the food industry.

---

*Source: 101978-2015-07-22.xml*
101978-2015-07-22_101978-2015-07-22.md
36,910
Review of Ethnobotanical, Phytochemical, and Pharmacological Study ofThymus serpyllum L.
Snežana Jarić; Miroslava Mitrović; Pavle Pavlović
Evidence-Based Complementary and Alternative Medicine (2015)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101978
101978-2015-07-22.xml
--- ## Abstract Thymus serpyllum L. (wild thyme) is a perennial shrub, native to areas of northern and central Europe. Its aerial parts are most frequently used in ethnomedicine (mainly for treating illnesses and problems related to the respiratory and gastrointestinal systems), although recently its essential oils are becoming more popular as an important plant-derived product. The composition of these oils is affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions. Wild thyme essential oil has an ever-growing number of uses in contemporary medicine due to its pharmacological properties: antioxidative, antimicrobial, and anticancerogenic activities. The antioxidative and antimicrobial properties of the essential oil are related to the synergistic and cumulative effect of its components. In terms of antitumor and cytotoxic activity, further research into the effects of essential oil is necessary, aimed at improving its cytotoxic effects, on the basis of which appropriate medicines can be formulated. Due to its pharmacological properties, the essential oil of wild thyme, a plant used in traditional medicine, represents an important natural resource for the pharmaceutical industry. In addition, it can be a source of natural antioxidants, nutritional supplements, or components of functional foods in the food industry. --- ## Body ## 1. Introduction Thymus serpyllum L. (wild thyme) belongs to the family Lamiaceae, which according to the World Checklist contains 7534 species [1], including the genusThymus L. with 220 species [2]. This genus is very complex from the taxonomical and systematic points of view, demonstrating significant polymorphism not only in morphological characteristics but also in composition of ethereal oils.T. serpyllum L. is a perennial shrub, native to regions of northern and central Europe (Figure 1). It is known as Breckland thyme, wild thyme, or creeping thyme; however, its specific name “serpyllum” is derived from the Greek word meaning “to creep,” because of wild thyme’s trailing habit. It has a long stem, which is woody at the base but with a sterile leaf rosette at the top. Leaves are oval (rounded at the top, tapered at the base), 4–6 mm long, 2–4 mm wide, and glabrous on the face and underside, while at the base along the edge they have long trichomes, a prominent central vein, and less prominent lateral veins (Figure 2). Inflorescences are 4–7 cm tall and form in a series along a low-lying stem, with a uniform layer of trichomes on all sides. Flowers are located at the top of the stems and form spherical (or more rarely elongated) verticillaster [3]. It flowers from May to September. Wild thyme grows best on dry, stony ground, open sandy heaths, and grasslands.Figure 1 Map of distribution,Thymus serpyllum L. (Source: Botanical Museum, Helsinki, Finland, 2014, data from BGBM, Berlin-Dahlem, Germany.)Figure 2 Thymus serpyllum L.The medicinal properties of wild thyme have been extensively used in official and traditional medicine for many years and centuries, respectively. Fresh and dried herbs particularly the upper part of the above ground portion of wild thyme, collected when the plant is in bloom, possess certain healing properties due to the presence of significant amounts of essential oils. Recent years have seen increased interest in ethnobotanical, phytochemical, and pharmacological investigations into the medicinal properties of the speciesT. 
serpyllum, which serves as a high-quality source for many different formulations in the pharmaceutical and chemical industries. The herb is used in preparations of natural herbal remedies, such as syrups, tinctures, infusions, decoctions, tea, and oil. The increase in multidrug-resistant strains of pathogenic microorganisms has led to extensive phytochemical and pharmacological studies of T. serpyllum as an important source of medicinal substances with antioxidant, antimicrobial, antitumor, and cytotoxic properties and their effective medicinal application, as well as use in the pharmaceutical, food, and cosmetic industries. In addition, the increased pressure from consumers for natural products as supplements and their clinical application instead of synthetic chemicals, which are generally perceived by the public as being more toxic, has also stimulated research into many medicinal and aromatic plants, among which T. serpyllum occupies a very important place.

## 2. Traditional Uses and Ethnopharmacology

The widespread use of different species of the Thymus genus dates back to ancient Egypt, where they were used for making perfumed balms, for embalming, and for medical purposes. The Greeks and Romans used them in the same way, as we know from the writings of Pliny (1st century), Dioscorides (2nd century), and Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus, 1493/1494–1541). “Everyone knows thyme,” wrote the physician Dioscorides in the first line of his discourse on the pharmacological value of this very aromatic herb, a subject supported by more than three millennia of experience. According to Dioscorides, thyme was used to treat asthma and loosen congestion in the throat and stomach [4]. In terms of geography, the use of these plants spread no further north than the Alps. The first recorded information on the medicinal properties of thyme north of the Alps can be found in the manuscript Physica, by the abbess Hildegard von Bingen (1098–1179), and in the works of Albertus Magnus (1193–1280). This continued in the 16th century with the Herbal by the herbalist P. Mathiolus (1505–1577), which first mentions the strength and efficacy of thyme. Since then, numerous therapeutic properties have been attributed to thyme, some on an empirical basis, others more debatable [5]. However, the spread of thyme throughout Europe is thought to be due to the Romans, as they used it to purify their rooms and to “give an aromatic flavour to cheese and liqueurs” [6]. In the European Middle Ages, the herb was placed beneath pillows to aid sleep and ward off nightmares [7]. In this period, women would often also give knights and warriors gifts that included thyme leaves, as it was believed to bring courage to the bearer. The pharmacological manuscripts of the Chilandar Medical Codex (15th-16th centuries) mention the use of wild thyme for the treatment of headaches caused by colds, laryngitis, and diseases of the digestive organs, and as an antitussive [8]. During the Renaissance period (16th and 17th centuries), wild thyme was used internally to treat malaria and epilepsy [9].

The aerial part of T. serpyllum has a long tradition of use in many countries of Europe [10] and worldwide as an anthelmintic, a strong antiseptic, an antispasmodic, a carminative, deodorant, diaphoretic, disinfectant, expectorant, sedative, and tonic [11]. It is most frequently used for treating illnesses and problems related to the gastrointestinal and respiratory systems [12–19].
In the Western Balkans, this species has an important use as a sedative [16, 20], to improve blood circulation, and as an anticholesterolemic and immunostimulant [21]. In the alpine region of northeastern Italy, an infusion or decoction of the aerial parts of the plant (in the flowering stage) is used in the treatment of rheumatism [22]. Gairola et al. mention the use of wild thyme in some regions of India for treating menstrual disorders [23], while Shinwari and Gilani report its use as an anthelmintic in northern Pakistan [24]. T. serpyllum is also used externally as an antiseptic, to treat wounds [14], to combat eczema [13], or to reduce swelling [25]. In some areas of Italy, wild thyme is used as an important herb in cookery, mainly for flavouring meat or fish [26]. In addition, ethnobotanical studies in Catalonia and the Balearic Islands have documented the use of T. serpyllum in ethnoveterinary practice, particularly as an antidiarrheal [27]. The British Herbal Pharmacopoeia classifies this species as a medicinal plant, and among the indications for its use it mentions bronchitis, bronchial catarrh, whooping cough, and sore throats; whooping cough is singled out as a specific indication. In the monograph, recommendations are given for combining it with other plants (coltsfoot, Tussilago farfara L., or horehound, Marrubium vulgare L.). As a gargle for acute pharyngitis, it is recommended in combination with the leaves of blackberry (Rubus fruticosus L.) or Echinacea (Echinacea sp.) [28]. According to the PDR for Herbal Medicines, wild thyme is a component in various standardized preparations with antitussive effects, while alcohol extracts are integral components of drops used for coughs and colds [29]. The recommended daily dose of this drug is 4–6 g.

## 3. Pharmacological Properties

Many studies on the chemical composition and yields of the essential oils from plants belonging to the Thymus genus have been conducted, including those from T. serpyllum. The chemical composition and yield of the essential oil of T. serpyllum are considered to be affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions [35]. As such, its content varies from 0.1 to 0.6% [29, 36, 37] or even from 0.1 to 1% [38]. Analysis of the yield of essential oil of T. serpyllum in Estonia revealed its content to be between 0.6 and 4.4 mL/kg; only in one locality did it amount to 3 mL/kg [39], which is in accordance with European Pharmacopoeia standards. Similarly, the essential oil content of wild thyme from 5 regions in Armenia ranged between 4.5 and 7.4 mL/kg [40]. In samples of wild thyme from Pakistan, yields of 0.48% [41], or 29 g/kg [42], were achieved. In Serbia, the yields of essential oil from samples of this species were 3 mL/kg (~0.3%) on Mt. Kopaonik [43] and 4.1 g/kg (~0.4%) on Mt. Pasjača [44].

Over the last two decades, more and more studies have examined the chemical composition of T. serpyllum essential oil (Table 1) [39, 41–50]. It has been established that plant species of the Thymus genus are characterized by chemical polymorphism, meaning that several chemotypes exist (geraniol, germacrene D, citral, linalool, (E)-caryophyllene, α-terpinyl acetate, carvacrol, and thymol) [30, 33, 51]. According to the PDR for Herbal Medicines, the chief component of the essential oil of T.
serpyllum is carvacrol, while it also contains borneol, isobutyl acetate, caryophyllene, 1,8-cineole, citral, citronellal, citronellol, p-cymene, geraniol, linalool, α-pinene, γ-terpinene, α-terpineol, terpinyl acetate, and thymol in relatively high concentrations [29]. Carvacrol and thymol are isomers belonging to the group of monoterpenic phenols with powerful antiseptic properties. They are absorbed very quickly after application and are quickly metabolised; as they are not subject to first-phase biotransformation, their conjugation with sulphuric and glucuronic acids occurs directly. They are excreted via urine within 24 hours, mainly in the form of conjugates and less so in their unchanged form [32]. According to the European Pharmacopoeia, the herb T. serpyllum must contain at least 1.2% essential oil, in which the total content of carvacrol and thymol is 40% or higher [31]. In addition to essential oil, wild thyme also contains flavonoids, phenol carboxylic acids and their derivatives, triterpenes, and tannins [29, 52]. Besides carvacrol and thymol, Kulišić et al. also include γ-terpinene and p-cymene among the main components of the essential oil [53].

However, research into the composition and concentration of compounds in the essential oil of T. serpyllum in different regions of the world has revealed significant differences. For example, the content of essential oil in populations of wild thyme in the Altai Mountains (Russia) is 0.5–1%, but its chemical composition differs significantly depending on the altitude. In the village of Kolyvan (150 m a.s.l.), the principal components of the oil are β-myrcene (4.0%), p-cymol (3.8%), 1,8-cineole (14.0%), cis-β-terpineol (8.2%), camphor (4.0%), and trans-nerolidol (29.8%), while in the same region, in the village of Mendur-Sokkon (500–750 m a.s.l.), the following were identified as the main components: p-cymol (14.5%), 1,8-cineole (5.6%), γ-terpinene (17.2%), and carvacrol (29.6%) [45]. The essential oil from both areas contained less than 2% thymol. Furthermore, in the essential oil of T. serpyllum growing wild in Lithuania, the presence of thymol and carvacrol was not established [54]. Although noted as dominant components in the literature, thymol and carvacrol are not the principal components of wild thyme essential oil in Estonia either [40]. Differences in the chemical composition of essential oils have been established in other localities, too: the principal components of the essential oil of wild thyme from Mt. Kopaonik (Serbia) are trans-caryophyllene (27.7%), γ-muurolene (10.5%), and α-humulene (7.5%) [33], while on Mt. Pasjača (Serbia) the dominant components of the essential oil are trans-nerolidol (24.2%), germacrene D (16.0%), thymol (7.3%), δ-cadinene (3.7%), and β-bisabolene (3.3%) [32]. The essential oil of T. serpyllum growing in Pakistan contains mainly thymol (53.3%) and carvacrol (10.4%) [41], while Hussain et al., also in Pakistan but in a different area (the Gilgit Valley), found that the chemical composition of the essential oil was dominated by carvacrol (44.4%) and o-cymene (14.0%) [42]. De Lisi et al. studied the composition of the essential oils of various ecotypes from the region of southern Italy and established that there are differences in composition between biotypes, too: in two biotypes (S2 and S3), the concentration of geraniol was highest (35% and 22%, resp.), while in biotype S1 thymol was predominant (32.6%) [55]. Ložiene et al.
carried out research on 26 samples from 14 habitats in Lithuania and found that there were wide variations in the composition of the main components of the oils [56]. They recorded the existence of five chemotypes (1,8-cineole, germacrene B, (E)-β-ocimene, α-cadinol, and cis-p-menth-2-en-1-ol), which correspond directly to the oil composition of the studied varieties and chemotypes.

Table 1: The chemical composition of the essential oil of Thymus serpyllum L. in some regions. For each compound, the reported contents (yield percentage) are listed in the order: Estonia [30]; Serbia, Pasjača [31]; Serbia, Kopaonik [32]; Pakistan [33]; “Natures” company and local Greek pharmacy in Thessaloniki [34]. Not every compound was reported for every region.

Monoterpene hydrocarbons:
α-Pinene 0.649 0.51 6.9 6.06 2.0
Tricyclene 0.062 0.1
Camphene 2.170 0.35 1.0 0.1 2.4
Sabinene 0.187 0.21 0.8
β-Pinene 0.374 0.67 1.8 1.43 0.2
α-Phellandrene 0.016 0.5 0.05 0.2
Myrcene 6.152 1.64
α-Terpinene 0.041 0.21 1.1
p-Cymene 0.410 2.11 2.0 8.9
o-Cymene 14.0
β-Cymene 0.19
Limonene 0.352 1.03 2.7 0.6
trans-β-Ocimene 0.986 1.55 1.5 0.1
cis-β-Ocimene 0.069 0.7
α-Thujene 0.047
γ-Terpinene 0.147 1.48 1.4 0.02 7.2

Oxidized monoterpenes:
1,8-Cineole 1.247 1.38 2.5 3.44 0.4
α-Thujone 0.046 1.1
cis-Thujone 1.89
trans-Thujone 0.084 0.21
cis-Sabinene hydrate 0.047 0.5
Linalool 3.000 0.72 1.2 2.02 2.4
δ3-Carene 0.1
Terpinolene 0.090
α-Campholen 0.24
Camphor 3.545 0.99 3.6 0.7
cis-Chrysanthenol 0.4
Borneol 4.667 0.56 2.45 6.0
Menthol 0.26
Isoborneol 0.028
p-Mentha-3,8-diene 0.18
Terpinene-4-ol 0.594 0.4 0.7
α-Terpineol 2.490 0.52 6.47 0.1
trans-Sabinene hydrate 0.087
cis-Linalol oxide 0.100
(Z)-p-Mentha-2,8-dien-1-ol∗1118 0.017
(Z)-p-Mentha-2-en-1-ol∗ 0.003
cis-Sabinol∗ 0.206
p-Cymen-8-ol∗ 0.037
trans-Sabinol∗ 0.0017
(E)-Dihydrocarvone 0.079
(E)-Carveol 0.088
Nerol 0.170
Neral 0.009
Carvone 0.055
Thymol methyl ether 0.29 3.8
Fenchyl alcohol 0.14
Carvacrol methyl ether 0.49
Thymoquinone 0.43
Geraniol 0.233 1.42
Geranial 0.022 0.5
1-Decanol 0.118
cis-Dihydrocarvone 0.006 0.1
Bornyl acetate 1.291 0.27 7.0
Isobornyl acetate 0.1
β-Citronellol 1.15
Thymol 0.958 7.26 5.6 38.5
Thymol acetate 2.8
Carvacrol 0.620 0.61 44.4 4.7
Carvacrol acetate 0.3
Geranyl acetate 3.303 0.39
Linalyl acetate 1.170 0.38
(E)-Sabinyl acetate 0.140
Terpinyl acetate 0.016 1.01

Sesquiterpene hydrocarbons:
α-Copaene 0.191 0.27
β-Copaene 0.42
β-Bourbonene 0.350 0.87
α-Selinene 0.05
α-Ylangene 0.222
β-Longipinene 0.17
Longifolene 0.56
β-Cubebene 0.64
α-Cubebene 0.29
α-Guaiene∗ 0.021
Selina-3,7(11)-dien 0.600
Germacrene B 0.297
δ-Elemene 0.013
β-Elemene 0.076 0.71
β-Caryophyllene 6.222 2.76 5.25 1.3
Longicyclene 0.14
Amorphene 0.04
α-Humulene 0.300 0.4 7.5 1.11
Aromadendrene 0.1
Alloaromadendrene 0.285 0.34 0.02 0.1
(E)-β-Farnesene 0.092 2.28
γ-Muurolene 0.019 0.54 10.5
Germacrene D 5.495 16.02
Germacrene D-4-ol 0.088 1.1
epi-Sesquiphellandrene 0.041 0.22
Bicyclogermacrene 0.200 0.63 0.2
Valencene 0.04
α-Muurolene 0.237 0.52 1.6 0.05
β-Bisabolene 0.873 3.33 1.9 1.0
β-Bisabolol 0.100 2.6
trans-Calamenene 0.97
Calamenene 0.41
δ-Cadinene 0.655 3.73 1.3 0.2
α-Cadinene 0.21
γ-Cadinene 0.320 0.1
α-Calacorene 0.21

Oxidized sesquiterpenes:
cis-Sesquisabinene hydrate 0.29
Elemol 0.29
trans-Nerolidol 24.527 24.2 2.4
Caryophyllene oxide 10.288 1.12 1.3 2.32 0.4
Thujopsene-2-α-ol 0.34
Viridiflorol 0.170 0.61
β-Copaen-4-α-ol 0.24
Humulene-epoxide II 0.106 0.46
β-Oplopenone 0.39
1,10-Di-epi-cubenol 0.43
Epi-α-cadinol (τ-cadinol) 1.47 1.1 0.1
Spathulenol 1.220 0.05 0.3
Cadrol
Ledol∗ 0.382
δ-Cadinol 0.331
T-Cadinol 0.773
T-Muurolol 0.006
α-Farnesol 0.561
α-Santanol 0.197
α-Bisabolol 0.740
Hedycaryol∗ 0.120
α-Guaiol 0.1
α-Muurolol 0.57
α-Cadinol 0.570 2.45 1.6
Helifolenol A 0.37
α-Eudesmol 0.2
Eudesm-3-en-6-ol 0.6
Germacra-4(15),5,10(14)-trien-1-α-ol 1.54

Others:
1-Octen-3-ol 0.724 0.24 0.2
3-Octanol 0.091 0.24
3-Octanone 0.727 6.6
2-Heptenol 0.071
1-Dodecanol 0.152
Neryl acetate 0.074
α-Ionone 0.040
n-Heptadecane 0.043
n-Nonadecane 0.049
n-Heneicosane 0.079

Due to the great variability in the content of the same components in T. serpyllum, the composition of the essential oil cannot be used as a reliable chemotaxonomic marker. However, its composition is of great importance when it comes to medical and cosmetic uses, as well as in industries where essential oils are used as raw materials. The pleasant fragrance of wild thyme essential oil is mainly attributable to the phenolic monoterpenoids thymol and carvacrol, which inhibit lipid peroxidation and demonstrate powerful antimicrobial properties against various kinds of microorganisms [34]. Numerous compounds in the essential oil are natural antioxidants that act in the metabolic response to the endogenous production of free radicals and other oxidant species. These responses are due to ecological stress or are promoted by toxins produced by pathogenic fungi and bacteria [57].

### 3.1. Antioxidant Activity

The number of published works studying the antioxidant activity of T. serpyllum is relatively small, but some have evaluated it and compared it with that of other species [42, 44, 46]. For example, Hussain et al. noted that T. serpyllum essential oil demonstrated better radical scavenging activity (IC50: 34.8 mg/mL) than Thymus linearis (Benth. ex Benth) essential oil (IC50: 42.9 mg/mL) [42]. In terms of inhibition of linoleic acid peroxidation, T. serpyllum essential oil again exhibited better antioxidant activity than T. linearis essential oil (84.2% and 76.0%, resp.). They also established that thymol, the major component in the essential oil of both species, demonstrated better antioxidant activity than the entire oil, whereas carvacrol, a major component of T. serpyllum essential oil, exhibited weaker antioxidant activity than the oil itself. Petrović et al. studied the antioxidant capacity of wild thyme essential oil in terms of its ability to neutralise DPPH (1,1-diphenyl-2-picryl-hydrazyl) free radicals, that is, the ability of the components of the essential oil to donate hydrogen atoms and transform DPPH into its reduced form, DPPH-H [44]. Their results showed that the essential oil exhibited significantly better antioxidant activity than synthetic antioxidants such as butylated hydroxyanisole (BHA) and, in particular, butylated hydroxytoluene (BHT). Research into the antioxidant capacity of the essential oil of T. serpyllum growing in Croatia revealed that it demonstrates a poorer ability to neutralise DPPH radicals than BHA, BHT, tocopherol, ascorbic acid, and the essential oil of T. vulgaris [46]. Hussain et al. also established that the essential oil of T. serpyllum growing in Pakistan exhibited less ability to neutralise DPPH radicals than BHT and thymol [42]. Mihailović-Stanojević et al. demonstrated the pronounced antioxidant activity of wild thyme aqueous extracts containing phenols and flavonoids in terms of their high antioxidant capacity and potential antihypertensive effect on spontaneously hypertensive and normotensive rats [58]. They showed that a bolus injection of this extract (100 mg/kg body weight) decreases systolic and diastolic blood pressure and total peripheral resistance in the former, without affecting these parameters in the latter. The predominant phenolic compounds were rosmarinic and caffeic acids.
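The IC50 values quoted in this section come from dose-response radical-scavenging assays: DPPH inhibition is measured at several oil concentrations, and the concentration producing 50% inhibition is interpolated, with a lower IC50 indicating a stronger antioxidant. A minimal sketch of that calculation is given below; the dose-response numbers are placeholders for illustration, not data from the cited studies.

```python
import numpy as np

def estimate_ic50(conc_mg_ml, inhibition_pct):
    """Interpolate the concentration giving 50% DPPH inhibition.

    Assumes inhibition rises monotonically with concentration;
    interpolation is done on a log-concentration scale, as is
    common for dose-response data.
    """
    log_c = np.log10(conc_mg_ml)
    # np.interp expects increasing x-values: x = % inhibition, y = log10(conc.)
    return 10 ** np.interp(50.0, inhibition_pct, log_c)

# Placeholder dose-response data for a hypothetical essential oil
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # mg/mL
inhib = np.array([12.0, 24.0, 41.0, 58.0, 73.0])  # % DPPH scavenged

print(f"estimated IC50 ~ {estimate_ic50(conc, inhib):.1f} mg/mL")
```

On this convention, the T. serpyllum oil above (IC50: 34.8 mg/mL) is the stronger radical scavenger in the comparison with T. linearis (IC50: 42.9 mg/mL).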
The antioxidant activity exhibited by the tested essential oils justifies the traditional uses of wild thyme. Hazzit et al. found that this antioxidant potential should be attributed to the phenolic constituents of the essential oil [59]. The oil’s chemoprotective efficacy against oxidative stress-mediated disorders is mainly due to its free radical scavenging and metal chelating properties. However, the antioxidant activity of the essential oil of T. serpyllum is not due to the mere presence of certain dominant components but is the result of the synergism of a larger number of components, including some which are present only in small amounts (trans-nerolidol, germacrene D, δ-cadinene, and β-bisabolene) [34].

### 3.2. Antimicrobial Activity

Many scientists ascribe the antimicrobial activity of species from the Thymus genus to the high concentration of carvacrol in their essential oils [60–62]. Carvacrol has biocidal properties, which lead to bacterial membrane perturbations. Moreover, it may cross cell membranes, reaching the interior of the cell and interacting with intracellular sites vital for antibacterial activity [63, 64]. The biological precursor of carvacrol and another significant component of the plant extracts, p-cymene, has very weak antibacterial properties, but it most likely acts in synergy with carvacrol by expanding the membrane, causing it to become destabilized [65].

An antimicrobial assay revealed that ethanol and aqueous extracts of T. serpyllum demonstrated inhibitory activity against the tested organisms Staphylococcus aureus, Bacillus subtilis, Escherichia coli, and Pseudomonas aeruginosa [66]. A comparative analysis by Lević et al. of the effects of the essential oils of oregano (Origanum majorana L.), thyme (Thymus vulgaris L.), and wild thyme on the growth of the bacterial species Proteus mirabilis, Escherichia coli, Salmonella choleraesuis, Staphylococcus aureus, and Enterococcus faecalis revealed that oregano essential oil demonstrated the greatest antimicrobial activity, while wild thyme essential oil had the least inhibitory effect on the growth of these microorganisms [67]. Research by Ahmad et al. showed that wild thyme essential oil has bactericidal, but not bacteriostatic, effects on the bacterial species Escherichia coli, Salmonella Typhi, Shigella ferarie, Bacillus megaterium, Bacillus subtilis, Lactobacillus acidophilus, Micrococcus luteus, Staphylococcus albus, Staphylococcus aureus, and Vibrio cholerae [41]. Sokolić-Mihalak et al. established that the antimicrobial activity of wild thyme essential oil can be attributed to the effects of its phenolic compounds on inhibiting the growth and mycotoxin production of Aspergillus ochraceus, A. carbonarius, and A. niger, with inhibition totalling 60% [68]. The inhibitory activity of essential oils depends on the conditions and duration of incubation, so a greater inhibitory effect is achieved thanks to the synergistic and cumulative effects of the other components of the essential oil. Nikolić et al. found a positive correlation between the antimicrobial activity of selected essential oils of T. serpyllum, Thymus algeriensis, and T.
vulgaris and their chemical composition, which indicates that the activity may be ascribed to the phenolic compound thymol, because it occurs in high proportions in these oils [50]. This confirms earlier findings that thymol is a good antimicrobial agent [69, 70]. However, although T. serpyllum essential oil had the lowest thymol content, it demonstrated the greatest antimicrobial activity, which confirms the importance of the synergistic effect of the other components.

### 3.3. Antitumor and Cytotoxic Activity

As one of the principal constituents of thyme essential oil, carvacrol has important in vitro cytotoxic effects on tumour cells [71]. Experiments have confirmed that carvacrol from Thymus algeriensis and different wild varieties of Moroccan thyme demonstrates significant cytotoxic activity against P388 leukaemia in mice [72] and against Hep-2 cells [73]. However, according to Tsukamoto et al., thymol, which is also one of the major constituents in the essential oils of T. serpyllum, T. algeriensis, and T. vulgaris, might be involved in stimulating the active proliferation of pulpal fibroblasts [74]. By comparing the antitumor activities of the essential oils of the species mentioned above on the growth of four human tumour cell lines, Nikolić et al. confirmed that it is the essential oil of T. serpyllum that exhibits the greatest antitumor activity [50]. Namely, T. serpyllum was the most potent in all the tested cell lines, and it contains thymol as its major constituent, a phenolic compound known in the literature for its antiproliferative activity [75].

Of the 21 compounds isolated, carvacrol, thymol, and thymoquinone are the major components of the hexane extract of Thymus serpyllum essential oil, and the hexane extract of this species is cytotoxic to 6 cancer cell lines (MDA-MB-231, MCF-7, HepG2, HCT-116, PC3, and A549). It demonstrated the best anticancer activity against HepG2 (liver carcinoma) cells, followed by HCT-116 (colon cancer), MCF-7 (breast cancer), MDA-MB-231 (breast cancer), PC3 (prostate cancer), and A549 (lung carcinoma) cells, as shown by Baig et al. [76].
## 4. Conclusions

T. serpyllum has a tradition stretching back many centuries of use in ethnomedicine as an aromatic, analgesic, antiseptic, diaphoretic, anthelmintic, expectorant, diuretic, spasmolytic, carminative, sedative, stimulant, and tonic. The aerial part of the plant has traditionally been used most frequently in the treatment of illnesses and problems related to the respiratory, digestive, and urogenital tracts. However, the use of the essential oil, as one of the important plant-derived products of this species, is increasing in contemporary medicine due to its pharmacological properties. The chemical composition and yield of the essential oil of T.
serpyllum are considered to be affected by geographic region, the development stage of the plant, the harvest season, habitat, and climatic conditions. Its composition is therefore of great importance when it comes to medical and cosmetic uses, as well as in industries where essential oils are used as raw materials. Recent studies have revealed the pronounced antioxidant and antimicrobial properties of the essential oil, which are based on the synergistic and cumulative effects of its components. At the same time, T. serpyllum essential oil demonstrated better overall antioxidant activity than the oils of other Thymus species: its major component, thymol, shows stronger antioxidant activity than the entire oil, whereas carvacrol, a major component of the essential oils of other Thymus species, is a weaker antioxidant than the oil itself. Future research should seek to determine to what extent thymol or carvacrol is individually responsible for cytotoxicity and to what extent cytotoxicity results from their combination with other constituents of the essential oil. In terms of its antitumor and cytotoxic activities, it is our opinion that further research is needed into the effects of the hexane extract, aimed at improving its cytotoxic effects on cancer cell lines, particularly liver cancer, on the basis of which appropriate medicines can be formulated.

Due to its pharmacological characteristics, the essential oil of wild thyme represents an important natural resource for the pharmaceutical industry. Additionally, it is a source of natural antioxidants, nutritional supplements, or components of functional foods in the food industry.

---

*Source: 101978-2015-07-22.xml*
2015
# Regulation of Tissue Fibrosis by the Biomechanical Environment

**Authors:** Wayne Carver; Edie C. Goldsmith
**Journal:** BioMed Research International (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101979

---

## Abstract

The biomechanical environment plays a fundamental role in embryonic development, tissue maintenance, and pathogenesis. Mechanical forces play particularly important roles in the regulation of connective tissues including not only bone and cartilage but also the interstitial tissues of most organs. In vivo studies have correlated changes in mechanical load to modulation of the extracellular matrix and have indicated that increased mechanical force contributes to the enhanced expression and deposition of extracellular matrix components, or fibrosis. Pathological fibrosis contributes to dysfunction of many organ systems. A variety of in vitro models have been utilized to evaluate the effects of mechanical force on extracellular matrix-producing cells. In general, application of mechanical stretch, fluid flow, and compression results in increased expression of extracellular matrix components. More recent studies have indicated that tissue rigidity also provides profibrotic signals to cells. The mechanisms whereby cells detect mechanical signals and transduce them into biochemical responses have received considerable attention. Cell surface receptors for extracellular matrix components and intracellular signaling pathways are instrumental in the mechanotransduction process. Understanding how mechanical signals are transmitted from the microenvironment will identify novel therapeutic targets for fibrosis and other pathological conditions.

---

## Body

## 1. Introduction

Mechanical forces play integral roles in embryonic development, homeostasis, and pathogenesis. All cells in multicellular organisms are exposed to mechanical forces of varying degrees. Endothelial cells, for instance, are exposed to shear stress due to the passage of fluid through the cardiovascular system. Chondrocytes and other cells in joints are exposed to repetitive compressive forces. The effects of mechanical forces on cells and tissues have received greater attention as models have been developed to systematically analyze these effects. Many of the early studies in this regard were focused on cells and tissues that are influenced by obvious mechanical force, including the cardiovascular and musculoskeletal systems. Early investigations in the mechanobiology field relied on relatively simple and imprecise systems. For instance, studies have utilized a hanging-drop culture system to examine the effects of tensile forces on connective tissue cells [1]. As interest grew in the mechanobiology field, innovative systems were developed to apply tensile strain to rat calvarial cells cultured on ribbons of collagen [2] and compressive forces to chick long bones [3].

The mechanobiology field began to move forward rapidly as in vitro model systems were developed to more precisely isolate the effects of mechanical forces on cellular processes. Various systems were engineered to apply uniaxial or multiaxial distension or stretch to cells grown on deformable substrata. These systems date back several decades to studies conducted on smooth muscle cells that were cultured on deformable elastin matrices [4, 5].
Among other responses, these studies illustrated a role for mechanical force in the growth and maintenance of skeletal and cardiovascular cells [6–9]. It has become increasingly clear that many aspects of cell behavior can be modulated by mechanical force, including cell proliferation, differentiation, migration, and gene expression. The realization that most cells respond to mechanical stimuli has resulted in enhanced interest in the contribution of these forces to pathogenesis, including tissue fibrosis, and in the mechanisms whereby cells detect and respond to these forces.

Studies by Leung et al. [5] were among the first to illustrate that cyclic mechanical loading promotes the production of extracellular matrix (ECM) components by vascular smooth muscle cells. The ECM is a dynamic network composed primarily of collagens, noncollagenous glycoproteins, and proteoglycans. The ECM was historically appreciated for its function as a three-dimensional scaffold that played an essential role in tissue development and function. Alterations in ECM composition, organization, and accumulation can deleteriously impact embryonic development and organ homeostasis in adults. For instance, deficits in collagen production result in vascular weakness and aneurysms [10]. On the other extreme, increased accumulation of ECM components, or fibrosis, results in dysfunction of many organs.

The expression of ECM components is regulated by diverse biochemical factors including growth factors, cytokines, and hormones (see [11, 12] for recent reviews). In addition, ECM production can be modulated by electrical and mechanical stimuli. Until relatively recently, the role of mechanical forces in regulating gene expression and cell behavior received little attention. This has changed as it has been realized that all cells are exposed to mechanical forces, and with the advent of in vitro testing systems the effects of these forces and the mechanisms of their actions have been and continue to be investigated.

## 2. Mechanical Stretch and Promotion of Tissue Fibrosis

Cells can be exposed to diverse types of extrinsic mechanical forces, including mechanical stretch (tension), compression, and shear stress. A number of early studies utilized cells cultured on deformable membranes to examine the cellular effects of mechanical stretch. These studies illustrated that mechanical stretch of isolated cells mimicked many of the responses that had been characterized in response to increased load in vivo. For instance, mechanical stretch of skeletal myotubes elicited a hypertrophic response that included increased general protein synthesis and enhanced accumulation of contractile proteins [13].

Alterations in mechanical load in vivo had been known for some time to impact synthesis and deposition of the ECM. For instance, increased cardiovascular load has for some time been correlated with increased deposition of ECM components. The period immediately after birth is associated with increased cardiovascular load and rapid growth of the heart [14]. This period of “physiological hypertrophy” is also associated with rapid deposition and organization of ECM components, particularly interstitial collagens [15–18]. Increased mechanical load as seen during aortic constriction or stenosis also promotes myocardial hypertrophy and fibrosis in the adult heart [19, 20].
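The in vitro stretch systems discussed below impose a defined strain waveform on a deformable substrate, typically characterized by a peak strain amplitude and a frequency. As a rough illustration of a cyclic (as opposed to constant) stretch profile, with arbitrary parameters rather than values from any cited study:

```python
import numpy as np

def cyclic_strain(t_s, amplitude=0.10, freq_hz=1.0):
    """Engineering strain applied to a deformable membrane over time.

    An amplitude of 0.10 means the membrane is stretched to 110% of its
    resting length at the peak of each cycle; a constant-stretch protocol
    would instead hold the strain at a fixed value.
    """
    return amplitude * 0.5 * (1.0 - np.cos(2.0 * np.pi * freq_hz * t_s))

t = np.linspace(0.0, 2.0, 201)  # two seconds sampled at 100 Hz
strain = cyclic_strain(t)       # ramps 0 -> 10% -> 0 once per second
print(f"peak strain: {strain.max():.1%}")
```

As noted later in this article, cells can respond differently to cyclic and constant stretch even at comparable strain levels.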
While a number of mechanical stretch devices have been utilized to mimic changes in mechanical forces seen in vivo, all have generally illustrated that mechanical stretch of matrix-producing cells (largely fibroblasts and smooth muscle cells) results in increased production of ECM components, or a profibrotic response [21–24].

To more accurately mimic the in vivo environment, apparatuses are being developed to investigate mechanical forces in three-dimensional in vitro systems. Several recent studies have applied mechanical loads to cells cultured in three-dimensional scaffolds [25, 26]. The use of three-dimensional constructs provides important insight into the effects of complex mechanical forces on tissue properties, and the development of systems to apply and analyze mechanical load in these constructs will also be advantageous to efforts to engineer functional tissue constructs.

## 3. Effects of Tissue Stiffness on Fibrosis

While two-dimensional in vitro systems have been invaluable in elucidating the effects of mechanical forces on cells and the mechanisms of mechanotransduction, cells function within a three-dimensional environment whose mechanical properties can change during development [27] or in various pathological conditions, including fibrosis [28, 29], cancer [30–34], and atherosclerosis [35]. Due to accumulation of ECM components and cross-linking of these components, alterations in tissue stiffness are a common feature of fibrosis. For instance, pathological scars are stiffer relative to unwounded normal skin and typically consist of thicker collagen bundles [36]. Accumulation of ECM components alters the tissue mechanical properties, which in turn can deleteriously impact organ function [37]. The component cells of these tissues sense and respond to ECM rigidity, which can regulate cell growth [38], shape [39], migration [40, 41], and differentiation [42, 43].

Seminal studies by Mauch et al. [44] were among the first to evaluate the effects of the biomechanical microenvironment on the expression of ECM components. The expression of ECM components and ECM-modifying enzymes was compared between cells cultured on tissue culture plastic, a rigid substratum, and in three-dimensional collagen gels, a more flexible substratum. These studies illustrated that collagen expression is markedly decreased in fibroblasts cultured in three-dimensional collagen scaffolds compared to cells grown on tissue culture plastic. This effect was, at least in part, regulated at the mRNA level, as α1(I), α2(I), and α1(III) collagen mRNAs were diminished in cells cultured in the three-dimensional scaffolds. Further studies by this group of investigators illustrated that collagenase activity is enhanced by culture in three-dimensional scaffolds, promoting a collagenolytic phenotype in the less rigid environment of the collagen gels [45]. A number of studies have subsequently supported the concept that matrix rigidity propagates the profibrotic response. Culture of human colon fibroblasts on matrices that mimic the mechanical properties of the normal colon or the pathologically stiff colon of Crohn’s disease patients demonstrated enhanced expression of ECM components and increased proliferation of fibroblasts on the stiffer matrix [46]. Similarly, culture of human dermal fibroblasts in collagen gels that were made stiffer by prestraining resulted in enhanced expression of collagen by dermal fibroblasts relative to that in unstrained scaffolds [47]. Liu et al.
[48] have utilized a novel photopolymerization approach to generate polyacrylamide scaffolds with stiffness gradients that span the range of normal and fibrotic lung tissue (0.1 to 50 kPa). In this system, proliferation of lung fibroblasts was induced by increased scaffold stiffness. In contrast, increased matrix stiffness protected cells from apoptosis induced by serum starvation. The patterns of collagen α1(I) and α1(III) mRNA expression paralleled proliferation, with increasing expression in stiffer regions of the scaffold. The expression of prostaglandin, an endogenous antifibrotic factor, was opposite to that of the collagens, with increased levels in the less rigid portions of the construct. These studies and others indicate that the biomechanical properties of the microenvironment can direct the expression of ECM components and ECM-modifying enzymes, with stiffer tissue properties contributing to enhanced ECM production. Less rigid matrices appear to promote an antifibrotic environment that includes increased production of matrix-degrading proteases and antifibrotic agents like prostaglandin.

Matrix rigidity impacts not only the expression of ECM components but also other parameters associated with fibrosis, including the deposition and organization of these components. Studies by Halliday and Tomasek [49] illustrated that fibroblasts cultured in stabilized three-dimensional collagen gels generate stress that is transmitted throughout the collagen scaffold. These cells develop large actin microfilament bundles and organize fibronectin into extracellular fibrils. Fibroblasts cultured in free-floating collagen gels generate less stress and lack fibronectin-containing fibrils. More recently, Carraher and Schwarzbauer [50] utilized a polyacrylamide model to evaluate the role of matrix stiffness in fibronectin organization. Polyacrylamide scaffolds have become popular three-dimensional models, as their rigidity can be modulated by altering the ratios of the components contributing to polymerization of the scaffold. Similar to previous studies, this work illustrated that growth of cells on more rigid substrates promoted fibronectin assembly and activation of focal adhesion kinase (FAK). Furthermore, activation of ECM receptors of the integrin family by Mn2+ on softer substrates stimulated fibronectin assembly, illustrating that integrin activity is an important mediator of this process (discussed further below). Previous studies have illustrated that the conformation of fibronectin on more rigid substrata is extended, which exposes additional binding sites for cells to fibronectin [51]. This is consistent with other studies illustrating that multiple proteins involved in mechanotransduction become extended in response to mechanical force, thus revealing cryptic interaction sites that mediate activity of the proteins. Indeed, providing exogenous unfolded fibronectin to cells in “soft” polyacrylamide gels increases FAK activation to a similar degree as culture in more rigid gels [50].

## 4. ECM Density and Myofibroblast Formation

An important step in tissue fibrosis of many organs is the formation of myofibroblasts or myofibroblast-like cells. These cells are characterized by enhanced contractile activity, formation of stress fibers, and expression of α-smooth muscle actin. Myofibroblasts are responsible for alterations to connective tissues, including increased synthesis of ECM components.
In addition, these cells produce cytokines and growth factors that promote the fibrotic response in an autocrine/paracrine manner. Myofibroblasts are derived from a variety of cells in response to tissue damage and stress, including quiescent fibroblasts, blood-derived fibrocytes, mesenchymal stem cells, stellate cells of the liver, and others [52, 53]. Regardless of their origin, myofibroblasts likely arise as an acute and beneficial response to repair damaged tissue. Continued myofibroblast contraction and production of ECM components become deleterious and in many cases yield stiff fibrotic tissue that obstructs and destroys organ function [54]. Stiffened tissue further promotes myofibroblast formation, perpetuating scar formation.

Studies using a three-dimensional collagen scaffold system illustrated that collagen deformability, or compliance, is inversely related to the transformation of cells into a myofibroblast phenotype [55]. Culture of cells on plastic coated with thin films of collagen (minimal compliance and maximal generation of intracellular tension) resulted in the highest levels of α-smooth muscle actin expression, routinely used as a marker for myofibroblast formation. Culture of cells in free-floating collagen gels (maximal compliance and least generation of intracellular tension) yielded the lowest relative level of α-smooth muscle actin expression. Similar results have been obtained in experiments examining matrix rigidity and differentiation of bronchial fibroblasts to a myofibroblast phenotype [56]. Culture of bronchial fibroblasts on polydimethylsiloxane substrates of variable stiffness (1–50 kPa) was performed to evaluate the effects of matrix mechanical properties on myofibroblast formation [56]. Increased scaffold stiffness promoted myofibroblast formation and increased α-smooth muscle actin and interstitial collagen expression. In the former studies, the expression of the α1 and α2 integrins, which are collagen receptors, correlated with enhanced myofibroblast formation on collagen-coated plastic [55]. Incubation of cells with function-blocking antibodies to these integrins attenuated myofibroblast formation, indicating that generation of intracellular tension via integrin-ECM interactions is critical to the transformation process. More recent studies have illustrated an interaction between the mechanical properties of three-dimensional collagen gels and the biochemical environment [57]. In these studies, there was no difference in α-smooth muscle actin expression between cells in free-floating and constrained collagen gels cultured in low serum (5%); however, enhanced α-smooth muscle actin expression was seen in constrained gels at higher serum levels (10%). These studies and others illustrate the integration of mechanical and biochemical signals by cells.

The conversion of hepatic stellate cells to a myofibroblast phenotype is a critical step in liver fibrosis and is part of the pathway to cirrhosis in chronic liver disease. Culture of hepatic stellate cells on tissue culture plastic and in high levels of serum results in their spontaneous conversion to a myofibroblast phenotype [58]. Culture of hepatic cells on Matrigel, a relatively soft basement membrane-like matrix, retains the quiescent nature of hepatic stellate cells [59]. Furthermore, culture of differentiated hepatic myofibroblasts on Matrigel results in loss of myofibroblast characteristics [60]. The mechanisms of the dedifferentiation of these cells are not well understood. Recent studies by Olsen et al.
[61] evaluating the role of substrate stiffness in the differentiation of hepatic stellate cells utilized polyacrylamide scaffolds coated with various ECM substrates. These studies illustrated that increased matrix stiffness is capable of promoting myofibroblast formation independent of growth factor or cytokine stimulation. However, addition of TGF-β to the culture medium enhanced differentiation on stiff scaffolds, again indicating interactions between the mechanical and biochemical environments. These studies also illustrated that interactions between the cells and the surrounding ECM and generation of mechanical tension are critical to the conversion to a myofibroblast phenotype. That is, coating of polyacrylamide scaffolds with collagen or fibronectin promoted myofibroblast formation to a much greater degree than coating with poly-L-lysine. Cell adhesion to poly-L-lysine is through electrostatic charges and not via specific integrin receptors. Studies with foreskin fibroblasts have illustrated that alterations in integrin expression accompany changes in substrate rigidity and myofibroblast formation [62]. In these studies, cells cultured on less rigid polyacrylamide gels expressed little α-smooth muscle actin and primarily the α2β1 integrin. Culture of cells on more rigid substrata resulted in enhanced expression of α-smooth muscle actin and a switch to expression primarily of the αvβ3 integrin.

Fibroblasts isolated from diseased patients or animal models typically retain characteristics of their altered phenotype in vitro [63]. Indeed, comparison of fibroblasts from normal individuals and individuals with idiopathic pulmonary fibrosis illustrated differences in proliferation and contractile activity on rigid substrates [64]. However, the fibroblasts from idiopathic pulmonary fibrosis patients remained responsive to alterations in matrix rigidity, with decreased proliferation and contractile properties when plated in soft matrices. This suggests that the myofibroblast phenotype is not a permanent state but can be reversed by alterations in the matrix properties. In contrast to this, studies culturing fibroblasts for prolonged periods on matrices of different mechanical properties suggest the conversion to a myofibroblast phenotype is a more “permanent” condition [65]. Culture of cells on a rigid matrix for three weeks resulted in sustained fibrotic activity, even after moving the cells to softer matrices. Understanding the plasticity of the fibrotic phenotype is critical to the development of novel therapeutic approaches to fibrosis.

Recent studies of myofibroblast conversion in heart valve interstitial cells have utilized a novel photodegradable cross-linker-polyethylene glycol scaffold in which exposure to ultraviolet light can modulate the mechanical properties of the substratum [66]. Similar to studies with other cell types, increased elastic modulus of the scaffold yielded an enhanced proportion of α-smooth muscle actin-containing cells. Interestingly, and of potential therapeutic significance, the proportion of myofibroblasts in the scaffolds decreased by approximately half when the elastic modulus was decreased by photodegradation. This coincided with a reduction in connective tissue growth factor and in proliferation.
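Relationships between substrate stiffness and myofibroblast conversion of the kind described above are commonly summarized by fitting a saturating (Hill-type) curve to the fraction of α-smooth muscle actin-positive cells measured at each modulus. A minimal sketch of such a fit, using placeholder measurements rather than data from the cited studies:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(E_kpa, f_max, E_half, n):
    """Hill-type curve: myofibroblast fraction vs. substrate modulus (kPa)."""
    return f_max * E_kpa**n / (E_half**n + E_kpa**n)

# Placeholder data: substrate Young's modulus vs. fraction of
# alpha-smooth muscle actin-positive cells (illustrative values only)
E = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0])     # kPa
f = np.array([0.05, 0.08, 0.30, 0.55, 0.70, 0.75])  # fraction of cells

(f_max, E_half, n), _ = curve_fit(hill, E, f, p0=[0.8, 8.0, 1.5])
print(f"plateau fraction ~{f_max:.2f}, half-maximal stiffness ~{E_half:.1f} kPa")
```

A half-maximal stiffness in the low-kilopascal range would fall within the 0.1–50 kPa span that the gradient scaffolds described earlier use to bracket normal and fibrotic tissue.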
The classic dogma has been that once fibrosis has begun, it cannot be reversed; however, recent studies have illustrated that fibrosis can be halted or even reversed depending upon the extent of its progression [67]. The above studies suggest that alteration of the ECM biomechanical properties may be an important therapeutic target that is able to modulate myofibroblast formation and fibrosis.

Recent studies with gold nanoparticles have shown that they can be used both for measuring cell-induced deformation of the ECM and for modulating matrix stiffness and the formation of myofibroblasts. Stone et al. [68] described a method using the light scattering properties of gold nanorods as a pattern marker to track cardiac fibroblast deformation of a two-dimensional collagen matrix using digital image correlation. This study detected areas of both tensile and compressive strain within the collagen films and displacements on the order of 18 μm [68]. Recently this method was applied to examine age-dependent differences in cellular mechanical behavior. Cardiac fibroblasts isolated from neonatal and adult rats were examined for their ability to deform a two-dimensional collagen film and three-dimensional collagen gels [69]. While no significant differences in strain were detected between the cell populations on the two-dimensional films, neonatal fibroblasts were significantly more contractile in three-dimensional collagen gels and expressed higher levels of α-smooth muscle actin compared to adult fibroblasts. Inclusion of negatively charged, polyelectrolyte-coated gold nanorods within three-dimensional collagen gels significantly reduced the ability of neonatal cardiac fibroblasts to contract these gels and was accompanied by a significant decrease in the expression of both α-smooth muscle actin and type I collagen [70]. This study suggested that the presence of the surface-modified nanorods impaired the ability of the fibroblasts to transform into myofibroblasts. In addition, it has been shown that negatively charged nanorods accelerated the in vitro assembly of type I collagen, and rheological characterization of the mechanical properties of these constructs revealed that these gels were stiffer and more elastic than controls or gels containing positively charged gold nanorods [71]. These latter studies suggest that nanomaterials may hold promise as a means to alter both the mechanical properties of the ECM and the formation of the myofibroblast phenotype associated with pathological fibrosis.

Another mechanism to take advantage of matrix mechanical properties therapeutically is in targeting cell death via alterations in matrix rigidity. It has long been known that interactions with the ECM are necessary for survival of normal cells. However, the effects of the mechanical properties of the ECM on cell survival have only recently begun to be addressed. Using polyacrylamide gels of varying rigidity coated with type I collagen, Wang et al. [72] illustrated that proliferation of NIH 3T3 cells is enhanced on stiffer scaffolds. These studies also illustrated that apoptosis of NIH 3T3 cells was increased almost twofold on less rigid collagen-coated polyacrylamide gels. The effect of matrix stiffness on apoptosis was absent in H-ras-transformed cells. A similar increase in apoptosis was seen in cells from the rat annulus fibrosus when cultured on softer polyacrylamide scaffolds [73].
These studies suggest that decreasing local matrix stiffness will result in apoptosis, potentially of matrix-producing myofibroblasts or other cells.

The ability of matrix mechanical properties to direct cell behavior is also being integrated into novel tissue engineering approaches, particularly in attempting to develop vascularized tissue constructs [74]. Examination of the invasive activity of endothelial cells plated onto the surface of collagen scaffolds has been used as an angiogenic model. Increasing the stiffness of the collagen scaffolds by cross-linking with microbial transglutaminase resulted in increased numbers of angiogenic sprouts and enhanced cell invasion independent of ECM pore size or density [75]. Under the appropriate biochemical and mechanical conditions, endothelial cells are able to form three-dimensional networks. Utilizing polyacrylamide gels functionalized with peptide sequences derived from cell adhesion sequences, the effect of scaffold mechanical properties on network formation was evaluated [76]. Endothelial cells formed stable networks on relatively soft functionalized polyacrylamide gels (Young’s modulus of 140 Pa) in the absence of angiogenic biochemical factors (bFGF or VEGF). On stiffer polyacrylamide scaffolds (2500 Pa), endothelial cells failed to assemble into networks in the presence or absence of angiogenic factors. Thus, the elastic modulus of hydrogels is able to direct the migration and organization of vascular cells [74].

## 5. Transduction of Mechanical Signals

Studies utilizing in vitro systems have provided fundamental information regarding the molecular mechanisms whereby cells detect and respond to mechanical forces. During the past two decades, extensive progress has been made in understanding “mechanotransduction,” or the mechanisms whereby physical stimuli are converted into chemical signals by cells [77, 78]. Despite the fact that the types of mechanical forces cells experience are variable, including externally applied forces (stretch, shear stress, compression, etc.) and forces generated by cells themselves, the molecular mechanisms whereby this information is transduced appear to have similarities. Alterations in the three-dimensional conformation of mechanosensitive proteins or adhesion structures are often at the foundation of this process. Studies utilizing mechanical stretch systems were fundamental in implicating cell surface integrins as central components of cell adhesion complexes and fundamental to mechanotransduction [79]. Integrins are heterodimers composed of an alpha and a beta chain that serve as the primary family of receptors for ECM components [80–82]. There are over twenty different α/β heterodimer combinations, and specific α/β heterodimers serve as receptors for particular ECM ligand(s). The response of cells to mechanical stretch varies depending upon the ECM substratum, suggesting a role for specific integrin heterodimers [79, 83]. Utilizing function-blocking antibodies to specific integrins (α4 and α5 chains) or arginine-glycine-aspartic acid (RGD) peptides to prevent integrin-ECM interactions, MacKenna et al. [79] were among the first to show roles for specific integrins in the response of fibroblasts to mechanical stretch.

These early studies set the stage for extensive research focused on the mechanisms whereby cells detect mechanical changes in the microenvironment and transduce these into biochemical and molecular alterations in the cytoplasm and nucleus.
The cell-ECM linkage involving integrins and a myriad of associated proteins is a critical component of this process (Figure1). It has become increasingly clear that integrin-based adhesions are dynamic and complex structures that transmit information from the ECM to the cell and vice versa [84]. Integrins, which lack intrinsic enzyme activity, provide a physical linkage from the ECM to the actin cytoskeleton and to a wide array of signaling proteins. In fact, integrin complexes can contain over a hundred different proteins, many that bind in a force-dependent manner [85, 86]. The characterization of the ECM-integrin-cytoskeletal linkage has contributed to the concept of tensegrity in which signals can be transmitted from the ECM to the cytoplasm and nucleus via these physical connections [87, 88]. Several proteins can simultaneously bind integrins and actin and are thus thought to participate in mechanotransduction via the physical ECM-integrin-cytoskeleton linkage including vinculin, talin, and α-actinin [89, 90].Figure 1 This schematic illustrates the transduction of mechanical force from the microenvironment to the cell. Extrinsically applied force results in alteration in the three-dimensional structure of the ECM and activation of integrin-associated signaling and transmission of signals via the actin cytoskeleton. These forces subsequently result in accumulation of ECM components and a stiffer ECM, which exacerbates the fibrotic response.A number of signaling molecules associate directly or indirectly with the integrin cytoplasmic domain including focal adhesion kinase (FAK). FAK was initially identified as a Src kinase substrate [91, 92]. As integrins do not have intrinsic enzyme activity, FAK is a critical mediator of integrin-induced signaling events. The activation of FAK is initiated by autophosphorylation of tyrosine at position 397 and can be induced by clustering of integrins [93, 94]. In turn, FAK can activate integrins, which strengthens cell adhesions with the ECM [95]. Activated FAK can act independently or as part of a Src-containing complex to phosphorylate other signaling proteins or act as a scaffold in the recruitment of additional proteins to cell adhesions.Exposure of cells to mechanical force results in activation of numerous intracellular signaling pathways including protein kinases such as protein kinase C, c-Jun N-terminal kinases (JNK), extracellular signal-regulated kinases (Erk), and others (see [96] for recent review). Activation of these pathways ultimately leads to activation of transcription factors and cell activities that comprise the response of a given cell to mechanical events.While there appear commonalities in signaling pathways induced by various types of mechanical forces,in vitro studies illustrate that cells respond differently to diverse types of mechanical perturbations. The type of mechanical force can modulate differentiation of connective tissue cells. The ratio between tensile and compression type forces can promote either differentiation into cartilage or bone [97]. Exposing heart fibroblasts to constant versus cyclic mechanical stretch resulted in differences in collagen gene expression [98]. Similarly, exposing vascular endothelial cells to cyclic stretch resulted in differences in growth factor expression and branch formation compared to constant stretch [99]. Application of steady mechanical force on aortas resulted in more pronounced FAK activation compared to pulsatile stretch [100]. 
These studies suggest that while generalities may be developed regarding the response of cells to mechanical force, the details of this response likely vary depending on the type of force and in a cell- or tissue-specific manner. ## 6. YAP/TAZ as Mechanotransducers Recent studies have illustrated that signals from the ECM and cell adhesion sites converge on two components of the Hippo pathway, Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ) [101, 102]. Analysis of the expression of YAP and TAZ illustrated that the levels of these proteins were enhanced in endothelial cells cultured on stiff fibronectin-containing polyacrylamide hydrogels (10–40 kPa) compared to cells growing on soft hydrogels (0.7–1.0 kPa) [101]. The expression of YAP and TAZ on stiff hydrogels was similar to that seen in cells cultured on plastic culture dishes. In addition, the subcellular localizations of YAP and TAZ are altered by the ECM mechanical environment. These proteins are predominantly located in the cytoplasm of cells grown in softer matrices but are translocated to the nucleus in cells cultured in stiff substrates. YAP and TAZ modulate the activity of transcription factors, including LEAD, RUNx, and Smads in the nucleus. Among the transcriptional targets of the YAP and TAZ system are connective tissue growth factor and TGF-β, two important biochemical factors that promote fibrosis, and transglutaminase-2, an important component of ECM deposition and turnover [103].Several recent studies have begun to evaluate the functional roles of YAP and TAZ in mediating the response of cells to mechanical forces. In humans, the trabecular meshwork of the eye is approximately twentyfold stiffer in individuals with glaucoma than in normal individuals [104]. Cells from the trabecular meshwork have been cultured on hydrogels of varying stiffness representing normal and glaucomatous conditions (5 kPa and 75 kPa, resp.) to evaluate the role of the YAP/TAZ system in the progression of fibrosis associated with glaucoma. Similar to the above studies, culture of trabecular meshwork cells on stiffer ECM resulted in enhanced expression of TAZ and transglutaminase-2. Interestingly, YAP expression was decreased relative to that on softer scaffolds suggesting that there may be cell-specific regulation of YAP and TAZ in response to altered mechanical properties of the microenvironment. ## 7. Conclusions and Future Directions It has become increasingly clear that most cells in the vertebrate body are exposed to varying degrees of mechanical forces. These forces impact embryonic development, homeostasis, and pathological conditions including fibrosis. Historically most of the studies that focused on mechanical force as a profibrotic stimulus utilized two-dimensional stretch or compression models with isolated matrix-producing cells. These studies have provided substantial knowledge regarding the responses of cells to mechanical force and the underlying mechanisms of this response. However, these systems do not adequately mimic thein vivo three-dimensional environment. This has led to development of three-dimensional models to evaluate the effects of mechanical forces in a more in vivo-like environment. 
The realization that the biomechanical properties of the microenvironment can promote fibrosis and other responses has led to renewed interest in the effects of mechanical forces on cell and tissue behavior.While extensive knowledge has been gained regarding the effects of the mechanical environment on cells and tissues, many questions remain regarding the molecular mechanisms of these effects. Identification of novel mechanoresponsive proteins such as YAP and TAZ will provide new therapeutic targets to modulate the deleterious effects of increased mechanical force. As it is becomingly increasing clear that tissue stiffness may precede fibrosis or at least contribute to ongoing fibrosis, identifying methods to modulate the mechanical properties of the microenvironment may also yield novel therapeutic approaches. Along these lines, specific nanomaterials may provide such reagents. However, the mechanisms whereby these materials regulate tissue properties have not been elucidated. --- *Source: 101979-2013-05-28.xml*
# Regulation of Tissue Fibrosis by the Biomechanical Environment

**Authors:** Wayne Carver; Edie C. Goldsmith

**Journal:** BioMed Research International (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101979
---

## Abstract

The biomechanical environment plays a fundamental role in embryonic development, tissue maintenance, and pathogenesis. Mechanical forces play particularly important roles in the regulation of connective tissues including not only bone and cartilage but also the interstitial tissues of most organs. In vivo studies have correlated changes in mechanical load to modulation of the extracellular matrix and have indicated that increased mechanical force contributes to the enhanced expression and deposition of extracellular matrix components or fibrosis. Pathological fibrosis contributes to dysfunction of many organ systems. A variety of in vitro models have been utilized to evaluate the effects of mechanical force on extracellular matrix-producing cells. In general, application of mechanical stretch, fluid flow, and compression results in increased expression of extracellular matrix components. More recent studies have indicated that tissue rigidity also provides profibrotic signals to cells. The mechanisms whereby cells detect mechanical signals and transduce them into biochemical responses have received considerable attention. Cell surface receptors for extracellular matrix components and intracellular signaling pathways are instrumental in the mechanotransduction process. Understanding how mechanical signals are transmitted from the microenvironment will identify novel therapeutic targets for fibrosis and other pathological conditions.

---

## Body

## 1. Introduction

Mechanical forces play integral roles in embryonic development, homeostasis, and pathogenesis. All cells in multicellular organisms are exposed to mechanical forces of varying degrees. Endothelial cells, for instance, are exposed to shear stress due to the passage of fluid through the cardiovascular system. Chondrocytes and other cells in joints are exposed to repetitive compressive forces. The effects of mechanical forces on cells and tissues have received greater attention as models have been developed to systematically analyze these effects. Many of the early studies in this regard were focused on cells and tissues that are influenced by obvious mechanical force including the cardiovascular and musculoskeletal systems. Early investigations in the mechanobiology field relied on relatively simple and imprecise systems. For instance, studies have utilized a hanging-drop culture system to examine the effects of tensile forces on connective tissue cells [1]. As interest grew in the mechanobiology field, innovative systems were developed to apply tensile strain to rat calvarial cells cultured on ribbons of collagen [2] and compressive forces to chick long bones [3].

The mechanobiology field began to move forward rapidly as in vitro model systems were developed to more precisely isolate the effects of mechanical forces on cellular processes. Various systems were engineered to apply uniaxial or multiaxial distension or stretch to cells grown on deformable substrata. These systems date back several decades to studies conducted on smooth muscle cells that were cultured on deformable elastin matrices [4, 5]. Among other responses, these studies illustrated a role for mechanical force in the growth and maintenance of skeletal and cardiovascular cells [6–9]. It has become increasingly clear that many aspects of cell behavior can be modulated by mechanical force including cell proliferation, differentiation, migration, and gene expression.
The realization that most cells respond to mechanical stimuli has resulted in enhanced interest in the contribution of these forces to pathogenesis including tissue fibrosis and in the mechanisms whereby cells detect and respond to these forces.

Studies by Leung et al. [5] were among the first to illustrate that cyclic mechanical loading promotes the production of extracellular matrix (ECM) components by vascular smooth muscle cells. The ECM is a dynamic network composed primarily of collagens, noncollagenous glycoproteins, and proteoglycans. The ECM was historically appreciated for its function as a three-dimensional scaffold that played an essential role in tissue development and function. Alterations in ECM composition, organization, and accumulation can deleteriously impact embryonic development and organ homeostasis in adults. For instance, deficits in collagen production result in vascular weakness and aneurysms [10]. On the other extreme, increased accumulation of ECM components or fibrosis results in dysfunction of many organs.

The expression of ECM components is regulated by diverse biochemical factors including growth factors, cytokines, and hormones (see [11, 12] for recent reviews). In addition, ECM production can be modulated by electrical and mechanical stimuli. Until relatively recently, the role of mechanical forces in regulating gene expression and cell behavior has received little attention. This has changed as it has been realized that all cells are exposed to mechanical forces, and with the advent of in vitro testing systems the effects of these forces and the mechanisms of their actions have been and continue to be investigated.

## 2. Mechanical Stretch and Promotion of Tissue Fibrosis

Cells can be exposed to diverse types of extrinsic mechanical forces including mechanical stretch (tension), compression, and shear stress. A number of early studies utilized cells cultured on deformable membranes to examine the cellular effects of mechanical stretch. These studies illustrated that mechanical stretch of isolated cells mimicked many of the responses that had been characterized in vivo in response to increased load. For instance, mechanical stretch of skeletal myotubes elicited a hypertrophic response that included increased general protein synthesis and enhanced accumulation of contractile proteins [13].

Alterations in mechanical load in vivo had been known for some time to impact synthesis and deposition of the ECM. For instance, increased cardiovascular load has for some time been correlated with increased deposition of ECM components. The period immediately after birth is associated with increased cardiovascular load and rapid growth of the heart [14]. This period of “physiological hypertrophy” is also associated with rapid deposition and organization of ECM components, particularly interstitial collagens [15–18]. Increased mechanical load as seen during aortic constriction or stenosis also promotes myocardial hypertrophy and fibrosis in the adult heart [19, 20]. While a number of mechanical stretch devices have been utilized to mimic changes in mechanical forces seen in vivo, all have generally illustrated that mechanical stretch of matrix-producing cells (largely fibroblasts and smooth muscle cells) results in increased production of ECM components or a profibrotic response [21–24].

To more accurately mimic the in vivo environment, apparatuses are being developed to investigate mechanical forces in three-dimensional in vitro systems.
Several recent studies have applied mechanical loads to cells cultured in three-dimensional scaffolds [25, 26]. The use of three-dimensional constructs provides important insight into the effects of complex mechanical forces on tissue properties, and the development of systems to apply and analyze mechanical load to these constructs will also be advantageous to efforts to engineer functional tissue constructs.

## 3. Effects of Tissue Stiffness on Fibrosis

While two-dimensional in vitro systems have been invaluable in elucidating the effects of mechanical forces on cells and the mechanisms of mechanotransduction, cells function within a three-dimensional environment whose mechanical properties can change during development [27] or various pathological conditions including fibrosis [28, 29], cancer [30–34], and atherosclerosis [35]. Due to accumulation of ECM components and cross-linking of these components, alterations in tissue stiffness are a common feature of fibrosis. For instance, pathological scars are stiffer relative to unwounded normal skin and typically consist of thicker collagen bundles [36]. Accumulation of ECM components alters the tissue mechanical properties, which in turn can deleteriously impact organ function [37]. Component cells sense and respond to ECM rigidity, which can regulate cell growth [38], shape [39], migration [40, 41], and differentiation [42, 43].

Seminal studies by Mauch et al. [44] were among the first to evaluate the effects of the biomechanical microenvironment on the expression of ECM components. The expression of ECM components and ECM-modifying enzymes was compared between cells cultured on tissue culture plastic, a rigid substratum, and three-dimensional collagen gels, a more flexible substratum. These studies illustrated that collagen expression is markedly decreased in fibroblasts cultured in three-dimensional collagen scaffolds compared to cells grown on tissue culture plastic. This effect was, at least in part, regulated at the mRNA level as α1(I), α2(I), and α1(III) collagen mRNAs were diminished in cells cultured in the three-dimensional scaffolds. Further studies by this group of investigators illustrated that collagenase activity is enhanced by culture in three-dimensional scaffolds promoting a collagenolytic phenotype in the less rigid environment of the collagen gels [45]. A number of studies have subsequently supported the concept that matrix rigidity propagates the profibrotic response. Culture of human colon fibroblasts on matrices that mimic the mechanical properties of the normal colon or the pathologically stiff colon of Crohn’s disease patients demonstrated enhanced expression of ECM components and increased proliferation of fibroblasts on the stiffer matrix [46]. Similarly, culture of human dermal fibroblasts in collagen gels that were made stiffer by prestraining resulted in enhanced expression of collagen by dermal fibroblasts relative to that in unstrained scaffolds [47]. Liu et al. [48] have utilized a novel photopolymerization approach to generate polyacrylamide scaffolds with stiffness gradients that span the range of normal and fibrotic lung tissue (0.1 to 50 kPa). In this system, proliferation of lung fibroblasts was induced by increased scaffold stiffness. In contrast, matrix stiffness protected cells from apoptosis in response to serum starvation. The patterns of collagen α1(I) and α1(III) mRNA expression paralleled proliferation with increasing expression in stiffer regions of the scaffold.
The expression of prostaglandin, which is an endogenous antifibrotic factor, was opposite to that of the collagens, with increased levels in the less rigid portions of the construct. These studies and others indicate that the biomechanical properties of the microenvironment can direct the expression of ECM components and ECM-modifying enzymes, with stiffer tissue properties contributing to enhanced ECM production. Less rigid matrices appear to promote an antifibrotic environment that includes increased production of matrix-degrading proteases and antifibrotic agents like prostaglandin.

Matrix rigidity impacts not only the expression of ECM components but also other parameters associated with fibrosis including the deposition and organization of these components. Studies by Halliday and Tomasek [49] illustrated that fibroblasts cultured in stabilized three-dimensional collagen gels generate stress that is transmitted throughout the collagen scaffold. These cells develop large actin microfilament bundles and organize fibronectin into extracellular fibrils. Fibroblasts cultured in free-floating collagen gels generate less stress and lack fibronectin-containing fibrils. More recently, Carraher and Schwarzbauer [50] utilized a polyacrylamide model to evaluate the role of matrix stiffness in fibronectin organization. Polyacrylamide scaffolds have become popular three-dimensional models as their rigidity can be modulated by altering the ratios of the components contributing to polymerization of the scaffold. Similar to previous studies, this work illustrated that growth of cells on more rigid substrates promoted fibronectin assembly and activation of focal adhesion kinase (FAK). Furthermore, activation of ECM receptors of the integrin family by Mn2+ on softer substrates stimulated fibronectin assembly, illustrating that integrin activity is an important mediator of this process (discussed further below). Previous studies have illustrated that the conformation of fibronectin on more rigid substrata is extended, which exposes additional cell-binding sites on fibronectin [51]. This is consistent with other studies illustrating that multiple proteins that are involved in mechanotransduction become extended in response to mechanical force, thus revealing cryptic interaction sites that mediate activity of the proteins. Indeed, providing exogenous unfolded fibronectin to cells in “soft” polyacrylamide gels increases FAK activation to a similar degree as culture in more rigid gels [50].

## 4. ECM Density and Myofibroblast Formation

An important step in tissue fibrosis of many organs is the formation of myofibroblasts or myofibroblast-like cells. These cells are characterized by enhanced contractile activity, formation of stress fibers, and expression of α-smooth muscle actin. Myofibroblasts are responsible for alterations to connective tissues including increased synthesis of ECM components. In addition, these cells produce cytokines and growth factors that promote the fibrotic response in an autocrine/paracrine manner. Myofibroblasts are derived from a variety of cells in response to tissue damage and stress including quiescent fibroblasts, blood-derived fibrocytes, mesenchymal stem cells, stellate cells of the liver, and others [52, 53]. Regardless of their origin, myofibroblasts likely arise as an acute and beneficial response to repair damaged tissue.
Continued myofibroblast contraction and production of ECM components become deleterious and in many cases yield stiff fibrotic tissue that obstructs and destroys organ function [54]. Stiffened tissue further promotes myofibroblast formation, perpetuating scar formation.

Studies using a three-dimensional collagen scaffold system illustrated that collagen deformability or compliance is inversely related to the transformation of cells into a myofibroblast phenotype [55]. Culture of cells on plastic coated with thin films of collagen (minimal compliance and maximal generation of intracellular tension) resulted in the highest levels of α-smooth muscle actin expression, routinely used as a marker for myofibroblast formation. Culture of cells in free-floating collagen gels (maximal compliance and least generation of intracellular tension) yielded the lowest relative level of α-smooth muscle actin expression. Similar results have been obtained in experiments examining matrix rigidity and differentiation of bronchial fibroblasts to a myofibroblast phenotype [56]. Culture of bronchial fibroblasts on polydimethylsiloxane substrates of variable stiffnesses (1–50 kPa) was performed to evaluate the effects of matrix mechanical properties on myofibroblast formation [56]. Increased scaffold stiffness promoted myofibroblast formation and increased α-smooth muscle actin and interstitial collagen expression. In the former studies, the expression of the α1 and α2 integrins, which are collagen receptors, correlated with enhanced myofibroblast formation on collagen-coated plastic [55]. Incubation of cells with function-blocking antibodies to these integrins attenuated myofibroblast formation, indicating that generation of intracellular tension via integrin-ECM interactions is critical to the transformation process. More recent studies have illustrated an interaction between the mechanical properties of three-dimensional collagen gels and the biochemical environment [57]. In these studies, there was no difference in α-smooth muscle actin expression between cells in free-floating and constrained collagen gels cultured in low serum (5%); however, enhanced α-smooth muscle actin expression was seen in constrained gels at higher serum levels (10%). These studies and others illustrate integration of mechanical and biochemical signals by cells.

The conversion of hepatic stellate cells to a myofibroblast phenotype is a critical step in liver fibrosis and is part of the pathway to cirrhosis in chronic liver disease. Culture of hepatic stellate cells on tissue culture plastic and in high levels of serum results in their spontaneous conversion to a myofibroblast phenotype [58]. Culture of hepatic cells on Matrigel, a relatively soft basement membrane-like matrix, retains the quiescent nature of hepatic stellate cells [59]. Furthermore, culture of differentiated hepatic myofibroblasts on Matrigel results in loss of myofibroblast characteristics [60]. The mechanisms of the dedifferentiation of these cells are not well understood. Recent studies by Olsen et al. [61] evaluated the role of substrate stiffness in the differentiation of hepatic stellate cells, utilizing polyacrylamide scaffolds coated with various ECM substrates. These studies illustrated that increased matrix stiffness is capable of promoting myofibroblast formation independent of growth factor or cytokine stimulation.
However, addition of TGF-β to the culture medium enhanced differentiation on stiff scaffolds, again indicating interactions between the mechanical and biochemical environments. These studies also illustrated that interactions between the cells and the surrounding ECM and generation of mechanical tension are critical to the conversion to a myofibroblast phenotype. That is, coating of polyacrylamide scaffolds with collagen or fibronectin promoted myofibroblast formation to a much greater degree than polyacrylamide scaffolds coated with poly-L-lysine. Cell adhesion to poly-L-lysine is through electrostatic charges and not via specific integrin receptors. Studies with foreskin fibroblasts have illustrated that alterations in integrin expression accompany changes in substrate rigidity and myofibroblast formation [62]. In these studies, cells cultured on less rigid polyacrylamide gels expressed little α-smooth muscle actin and primarily the α2β1 integrin. Culture of cells on more rigid substrata resulted in enhanced expression of α-smooth muscle actin and a switch to expression primarily of the αvβ3 integrin.

Fibroblasts isolated from diseased patients or animal models typically retain characteristics of their altered phenotype in vitro [63]. Indeed, comparison of fibroblasts from normal individuals and individuals with idiopathic pulmonary fibrosis illustrated differences in proliferation and contractile activity on rigid substrates [64]. However, the fibroblasts from idiopathic pulmonary fibrosis patients remained responsive to alterations in matrix rigidity, with decreased proliferation and contractile properties when plated on soft matrices. This suggests that the myofibroblast phenotype is not a permanent state but can be reversed by alterations in the matrix properties. In contrast to this, studies culturing fibroblasts for prolonged periods on matrices of different mechanical properties suggest that the conversion to a myofibroblast phenotype is a more “permanent” condition [65]. Culture of cells on a rigid matrix for three weeks resulted in sustained fibrotic activity, even after moving the cells to softer matrices. Understanding the plasticity of the fibrotic phenotype is critical to development of novel therapeutic approaches to fibrosis.

Recent studies have utilized a novel photodegradable cross-linker-polyethylene glycol scaffold, in which exposure to ultraviolet light can modulate the mechanical properties of the substratum, to evaluate the effects of matrix stiffness on myofibroblast conversion of heart valve interstitial cells [66]. Similar to studies with other cell types, increased elastic modulus of the scaffold yielded an enhanced proportion of α-smooth muscle actin-containing cells. Interestingly, and of potential therapeutic significance, the proportion of myofibroblasts in the scaffolds decreased by approximately half when the elastic modulus was decreased by photodegradation. This coincided with a reduction in connective tissue growth factor and in proliferation. The classic dogma has been that once fibrosis has begun, it cannot be reversed; however, recent studies have illustrated that fibrosis can be halted or even reversed depending upon the extent of its progression [67].
The above studies suggest that alteration of the biomechanical properties of the ECM may be an important therapeutic target that is able to modulate myofibroblast formation and fibrosis.

Recent studies with gold nanoparticles have shown that they can be used both for measuring cell-induced deformation of the ECM and for modulating matrix stiffness and the formation of myofibroblasts. Stone et al. [68] described a method using the light scattering properties of gold nanorods as a pattern marker to track cardiac fibroblast deformation of a two-dimensional collagen matrix using digital image correlation. This study detected areas of both tensile and compressive strain within the collagen films and displacements on the order of 18 μm [68]. Recently, this method was applied to examine age-dependent differences in cellular mechanical behavior. Cardiac fibroblasts isolated from neonatal and adult rats were examined for their ability to deform a two-dimensional collagen film and three-dimensional collagen gels [69]. While no significant differences in strain were detected between the cell populations on the two-dimensional films, neonatal fibroblasts were significantly more contractile in three-dimensional collagen gels and expressed higher levels of α-smooth muscle actin compared to adult fibroblasts. Inclusion of negatively charged, polyelectrolyte-coated gold nanorods within three-dimensional collagen gels significantly reduced the ability of neonatal cardiac fibroblasts to contract these gels and was accompanied by a significant decrease in the expression of both α-smooth muscle actin and type I collagen [70]. This study suggested that the presence of the surface-modified nanorods impaired the ability of the fibroblasts to transform into myofibroblasts. In addition, it has been shown that negatively charged nanorods accelerated the in vitro assembly of type I collagen, and rheological characterization of the mechanical properties of these constructs revealed that these gels were stiffer and more elastic than controls or gels containing positively charged gold nanorods [71]. These latter studies suggest that nanomaterials may hold promise as a means to alter both the mechanical properties of the ECM and the formation of the myofibroblast phenotype associated with pathological fibrosis.

Another way to take advantage of matrix mechanical properties therapeutically is to target cell death via alterations in matrix rigidity. It has long been known that interactions with the ECM are necessary for survival of normal cells. However, the effects of the mechanical properties of the ECM on cell survival have only recently been addressed. Using polyacrylamide gels of varying rigidity coated with type I collagen, Wang et al. [72] illustrated that proliferation of NIH 3T3 cells is enhanced on stiffer scaffolds. These studies also illustrated that apoptosis of NIH 3T3 cells was increased almost twofold on less rigid collagen-coated polyacrylamide gels. The effect of matrix stiffness on apoptosis was absent in H-ras-transformed cells. A similar increase in apoptosis was seen in cells from the rat annulus fibrosus when cultured on softer polyacrylamide scaffolds [73].
These studies suggest that decreasing local matrix stiffness will result in apoptosis, potentially of matrix-producing myofibroblasts or other cells.

The ability of matrix mechanical properties to direct cell behavior is also being integrated into novel tissue engineering approaches, particularly in attempts to develop vascularized tissue constructs [74]. Examination of the invasive activity of endothelial cells plated onto the surface of collagen scaffolds has been used as an angiogenic model. Increasing the stiffness of the collagen scaffolds by cross-linking with microbial transglutaminase resulted in increased numbers of angiogenic sprouts and enhanced cell invasion independent of ECM pore size or density [75]. Under the appropriate biochemical and mechanical conditions, endothelial cells are able to form three-dimensional networks. Utilizing polyacrylamide gels functionalized with peptides derived from cell adhesion sequences, the effect of scaffold mechanical properties on network formation was evaluated [76]. Endothelial cells formed stable networks on relatively soft functionalized polyacrylamide gels (Young’s modulus of 140 Pa) in the absence of angiogenic biochemical factors (bFGF or VEGF). On stiffer polyacrylamide scaffolds (2500 Pa), endothelial cells failed to assemble into networks in the presence or absence of angiogenic factors. Thus, the elastic modulus of hydrogels is able to direct the migration and organization of vascular cells [74].

## 5. Transduction of Mechanical Signals

Studies utilizing in vitro systems have provided fundamental information regarding the molecular mechanisms whereby cells detect and respond to mechanical forces. During the past two decades, extensive progress has been made in understanding “mechanotransduction,” or the mechanisms whereby physical stimuli are converted into chemical signals by cells [77, 78]. Despite the fact that the types of mechanical forces cells experience are variable, including externally applied forces (stretch, shear stress, compression, etc.) and forces generated by cells themselves, the molecular mechanisms whereby this information is transduced appear to have similarities. Alterations in the three-dimensional conformation of mechanosensitive proteins or adhesion structures are often at the foundation of this process. Studies utilizing mechanical stretch systems were fundamental in implicating cell surface integrins as central components of cell adhesion complexes and fundamental to mechanotransduction [79]. Integrins are heterodimers composed of an alpha and a beta chain that serve as the primary family of receptors for ECM components [80–82]. There are over twenty different α/β heterodimer combinations, and specific α/β heterodimers serve as receptors for particular ECM ligand(s). The response of cells to mechanical stretch varies depending upon the ECM substratum, suggesting a role for specific integrin heterodimers [79, 83]. Utilizing function-blocking antibodies to specific integrins (α4 and α5 chains) or arginine-glycine-aspartic acid (RGD) peptides to prevent integrin-ECM interactions, MacKenna et al. [79] were among the first to show roles for specific integrins in the response of fibroblasts to mechanical stretch.

These early studies set the stage for extensive research focused on the mechanisms whereby cells detect mechanical changes in the microenvironment and transduce these into biochemical and molecular alterations in the cytoplasm and nucleus.
The cell-ECM linkage involving integrins and a myriad of associated proteins is a critical component of this process (Figure 1). It has become increasingly clear that integrin-based adhesions are dynamic and complex structures that transmit information from the ECM to the cell and vice versa [84]. Integrins, which lack intrinsic enzyme activity, provide a physical linkage from the ECM to the actin cytoskeleton and to a wide array of signaling proteins. In fact, integrin complexes can contain over a hundred different proteins, many of which bind in a force-dependent manner [85, 86]. The characterization of the ECM-integrin-cytoskeletal linkage has contributed to the concept of tensegrity, in which signals can be transmitted from the ECM to the cytoplasm and nucleus via these physical connections [87, 88]. Several proteins, including vinculin, talin, and α-actinin, can simultaneously bind integrins and actin and are thus thought to participate in mechanotransduction via the physical ECM-integrin-cytoskeleton linkage [89, 90].

Figure 1: This schematic illustrates the transduction of mechanical force from the microenvironment to the cell. Extrinsically applied force results in alteration in the three-dimensional structure of the ECM, activation of integrin-associated signaling, and transmission of signals via the actin cytoskeleton. These forces subsequently result in accumulation of ECM components and a stiffer ECM, which exacerbates the fibrotic response.

A number of signaling molecules associate directly or indirectly with the integrin cytoplasmic domain, including focal adhesion kinase (FAK). FAK was initially identified as a Src kinase substrate [91, 92]. As integrins do not have intrinsic enzyme activity, FAK is a critical mediator of integrin-induced signaling events. The activation of FAK is initiated by autophosphorylation of tyrosine at position 397 and can be induced by clustering of integrins [93, 94]. In turn, FAK can activate integrins, which strengthens cell adhesions with the ECM [95]. Activated FAK can act independently or as part of a Src-containing complex to phosphorylate other signaling proteins or act as a scaffold in the recruitment of additional proteins to cell adhesions.

Exposure of cells to mechanical force results in activation of numerous intracellular signaling pathways, including protein kinases such as protein kinase C, c-Jun N-terminal kinases (JNK), extracellular signal-regulated kinases (Erk), and others (see [96] for a recent review). Activation of these pathways ultimately leads to activation of transcription factors and cell activities that comprise the response of a given cell to mechanical events.

While there appear to be commonalities in the signaling pathways induced by various types of mechanical forces, in vitro studies illustrate that cells respond differently to diverse types of mechanical perturbations. The type of mechanical force can modulate differentiation of connective tissue cells. The ratio between tensile and compressive forces can promote differentiation into either cartilage or bone [97]. Exposing heart fibroblasts to constant versus cyclic mechanical stretch resulted in differences in collagen gene expression [98]. Similarly, exposing vascular endothelial cells to cyclic stretch resulted in differences in growth factor expression and branch formation compared to constant stretch [99]. Application of steady mechanical force on aortas resulted in more pronounced FAK activation compared to pulsatile stretch [100].
These studies suggest that while generalities may be developed regarding the response of cells to mechanical force, the details of this response likely vary depending on the type of force and in a cell- or tissue-specific manner.

## 6. YAP/TAZ as Mechanotransducers

Recent studies have illustrated that signals from the ECM and cell adhesion sites converge on two components of the Hippo pathway, Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ) [101, 102]. Analysis of the expression of YAP and TAZ illustrated that the levels of these proteins were enhanced in endothelial cells cultured on stiff fibronectin-containing polyacrylamide hydrogels (10–40 kPa) compared to cells growing on soft hydrogels (0.7–1.0 kPa) [101]. The expression of YAP and TAZ on stiff hydrogels was similar to that seen in cells cultured on plastic culture dishes. In addition, the subcellular localizations of YAP and TAZ are altered by the ECM mechanical environment. These proteins are predominantly located in the cytoplasm of cells grown on softer matrices but are translocated to the nucleus in cells cultured on stiff substrates. YAP and TAZ modulate the activity of transcription factors, including TEAD, RUNX, and Smads, in the nucleus. Among the transcriptional targets of the YAP and TAZ system are connective tissue growth factor and TGF-β, two important biochemical factors that promote fibrosis, and transglutaminase-2, an important component of ECM deposition and turnover [103].

Several recent studies have begun to evaluate the functional roles of YAP and TAZ in mediating the response of cells to mechanical forces. In humans, the trabecular meshwork of the eye is approximately twentyfold stiffer in individuals with glaucoma than in normal individuals [104]. Cells from the trabecular meshwork have been cultured on hydrogels of varying stiffness representing normal and glaucomatous conditions (5 kPa and 75 kPa, resp.) to evaluate the role of the YAP/TAZ system in the progression of fibrosis associated with glaucoma. Similar to the above studies, culture of trabecular meshwork cells on stiffer ECM resulted in enhanced expression of TAZ and transglutaminase-2. Interestingly, YAP expression was decreased relative to that on softer scaffolds, suggesting that there may be cell-specific regulation of YAP and TAZ in response to altered mechanical properties of the microenvironment.

## 7. Conclusions and Future Directions

It has become increasingly clear that most cells in the vertebrate body are exposed to varying degrees of mechanical forces. These forces impact embryonic development, homeostasis, and pathological conditions including fibrosis. Historically, most of the studies that focused on mechanical force as a profibrotic stimulus utilized two-dimensional stretch or compression models with isolated matrix-producing cells. These studies have provided substantial knowledge regarding the responses of cells to mechanical force and the underlying mechanisms of this response. However, these systems do not adequately mimic the in vivo three-dimensional environment. This has led to development of three-dimensional models to evaluate the effects of mechanical forces in a more in vivo-like environment.
The realization that the biomechanical properties of the microenvironment can promote fibrosis and other responses has led to renewed interest in the effects of mechanical forces on cell and tissue behavior.

While extensive knowledge has been gained regarding the effects of the mechanical environment on cells and tissues, many questions remain regarding the molecular mechanisms of these effects. Identification of novel mechanoresponsive proteins such as YAP and TAZ will provide new therapeutic targets to modulate the deleterious effects of increased mechanical force. As it is becoming increasingly clear that tissue stiffness may precede fibrosis, or at least contribute to ongoing fibrosis, identifying methods to modulate the mechanical properties of the microenvironment may also yield novel therapeutic approaches. Along these lines, specific nanomaterials may provide such reagents. However, the mechanisms whereby these materials regulate tissue properties have not been elucidated.

---
# An Application of Homotopy Perturbation Method to Fractional-Order Thin Film Flow of the Johnson–Segalman Fluid Model

**Authors:** Mubashir Qayyum; Farnaz Ismail; Syed Inayat Ali Shah; Muhammad Sohail; Essam R. El-Zahar; K. C Gokul

**Journal:** Mathematical Problems in Engineering (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1019810

---

## Abstract

Thin film flow is an important theme in fluid mechanics and has many industrial applications. These flows can be observed in the oil refinement process, laser cutting, and nuclear reactors. In this theoretical study, we explore thin film flow of non-Newtonian Johnson–Segalman fluid on a vertical belt in fractional space in lifting and drainage scenarios. Modelled fractional-order boundary value problems are solved numerically using the homotopy perturbation method along with the Caputo definition of the fractional derivative. In this study, instantaneous and average velocities and volumetric flux are computed in the lifting and drainage cases. Validity and convergence of the homotopy-based solutions are confirmed by finding residual errors in each case. Moreover, the consequences of different fractional and fluid parameters on the velocity profile are studied graphically. Analysis shows that the fractional parameters have opposite effects on the fluid velocity.

---

## Body

## 1. Introduction

Modelling and analysis of non-Newtonian fluids is an important and active research theme in industrial engineering. Food processing, paper production, blood flow analysis, and mud drilling are different application areas of non-Newtonian fluids. These fluids are defined through a nonlinear relationship between the rate of deformation and stress tensors, and therefore, there are several models for different scenarios. Johnson–Segalman fluid is one of the most significant fluid models, with numerous engineering and industrial applications.

Thin film flow can be detected in different natural situations, for example, the movement of a raindrop on window glass, tears in the eyes, and lava flow. Industrial applications of such flows include oil refining, nuclear reactors, and laser cutting [1–4]. The initial work on thin film flow was carried out in [3] for Newtonian fluids, but this study has limitations and cannot be generalized to non-Newtonian fluids such as melted plastics, gels, pastes, honey, ketchup, and blood [5]. Siddiqui et al. examined thin film flow of different fluids including PTT (Phan-Thien and Tanner) and third and fourth grade fluids in [6–8]. Landau [9] and Stuart [10] extended these analyses to turbulence. Ullah et al. studied film flow under slip conditions in generalized Maxwell fluids in [11]. Ruan et al. studied thin film flow from a distributed source on a vertical wall in [12]. Ahmad and Xu proposed an improved nanofluid model in thin film under the action of gravity in [13].

Fractional calculus has gained significant attention due to its striking applications as a new modelling tool in a variety of fields, such as fluid dynamics, hydrology, system control theory, signal processing, physics, biology, and finance [14–20]. These fractional models are more appropriate for describing memory and transmissible properties of different materials than integral-order models. In the past few decades, various numerical techniques have been developed by different researchers for nonlinear BVPs. Wazwaz proposed the modified decomposition method (MDM) for BVPs in [21].
Noor and Mohyud-Din used VIM along with He’s polynomials for the solution of higher order BVPs [22]. Liu et al. used the multiscale method for nanoparticle diffusion in sheared cellular blood flow in [23]. Jafari and Gejji used Adomian decomposition for the solution of systems of FDEs in [24]. Hashim et al. applied OHAM to fractional-order fuzzy differential equations in [22]. Rysak and Gregorczyk proposed DTM for fractional dynamical systems in [25]. Zada et al. applied NIM to fractional PDEs in [26]. Yaghouti applied radial basis functions to different families of fractional differential equations in [27]. Al-Kuze et al. used spectral quasi-linearization for irreversibility analysis of magnetized cross-fluid flow [28].

In this study, we extend the theoretical study of thin film flow to fractional space in the case of non-Newtonian Johnson–Segalman fluid, since exact solutions of highly nonlinear fractional differential equations (FDEs) are generally not available. The conventional approach in such scenarios is to use perturbation techniques, which in turn need small or large parameters. To avoid this, we utilize the well-known homotopy perturbation method (HPM) [29–33] along with fractional calculus for solution purposes. This method has been used effectively by many scholars [29–33]. Validation and convergence of the obtained numerical solutions are confirmed by means of finding residual errors. To the best of the authors’ knowledge, the given problem has not been attempted before in fractional space.

## 2. Preliminaries

### 2.1. Fractional Calculus

A few basic definitions of fractional calculus are given below.

Definition 1. Let $f(t)$, $t>0$, be a real function. It is said to be in the space $C_{\mu}$, $\mu\in\mathbb{R}$, if there exists a real number $p>\mu$ such that $f(t)=t^{p}f_{1}(t)$, where $f_{1}(t)\in C[0,\infty)$, and it is said to be in the space $C_{\mu}^{m}$ if and only if

(1) $f^{(m)}\in C_{\mu}$, $m\in\mathbb{N}$.

Definition 2. The Caputo fractional derivative $D^{\alpha}$ is defined as

(2) $D^{\alpha}f(t)=\dfrac{1}{\Gamma(m-\alpha)}\displaystyle\int_{0}^{t}(t-\tau)^{m-\alpha-1}f^{(m)}(\tau)\,d\tau$, for $m-1<\alpha<m$, $m\in\mathbb{N}$, $t>0$.

Definition 3. The Riemann–Liouville fractional integral of order $\alpha\geq 0$ is defined as

(3) $J^{\alpha}f(t)=\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,d\tau$, $k-1<\alpha<k$, $k\in\mathbb{N}$.

### 2.2. Basic Equations

The basic equations of an incompressible Johnson–Segalman fluid are

(4) $\operatorname{div}V=0$,

(5) $\rho\dfrac{DV}{Dt}=\operatorname{div}\sigma+\rho f$,

where $V$ is the velocity vector, $\rho$ is the constant density, $f$ is the body force per unit mass, and $\sigma$ is the Cauchy stress tensor, with

(6) $\sigma=-pI+T$,

(7) $T=S+2\mu D$,

(8) $S+m\left(\dfrac{DS}{Dt}+S(W-aD)+(W-aD)^{T}S\right)=2\eta D$.
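As a concrete illustration of Definition 2 (this sketch is ours, not part of the original paper, and the helper names are hypothetical), the Caputo derivative of a power function $f(t)=t^{p}$ can be evaluated both from the standard power rule $D^{\alpha}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p-\alpha+1)}t^{p-\alpha}$ and by direct quadrature of the integral in (2) with $m=1$:

```python
import math
from scipy.integrate import quad

def caputo_power_rule(p, alpha, t):
    # Standard Caputo power rule for f(t) = t**p with 0 < alpha < 1:
    # D^alpha t^p = Gamma(p+1) / Gamma(p-alpha+1) * t**(p-alpha).
    return math.gamma(p + 1) / math.gamma(p - alpha + 1) * t ** (p - alpha)

def caputo_quadrature(fprime, alpha, t):
    # Direct quadrature of Definition 2 with m = 1:
    # D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t (t-tau)^(-alpha) f'(tau) dtau.
    # quad's 'alg' weight handles the weak singularity (t-tau)^(-alpha).
    val, _ = quad(fprime, 0.0, t, weight="alg", wvar=(0.0, -alpha))
    return val / math.gamma(1 - alpha)

alpha, t = 0.99, 0.5
print(caputo_power_rule(2, alpha, t))                # analytic value for f(t) = t^2
print(caputo_quadrature(lambda s: 2 * s, alpha, t))  # quadrature of f'(t) = 2t agrees closely
```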
## 3. Mathematical Formulation in the Lifting Case

The belt passes through a container filled with Johnson–Segalman fluid and moves vertically with constant speed $U_{0}$. A uniform thickness $\delta$ of the thin film is taken up by the belt, but due to gravity the fluid drains down. The pressure is assumed to be atmospheric, and the flow is uniform, steady, and laminar. The $x$-axis is taken normal to the belt, while the $y$-axis is along the belt.

The boundary conditions are

(9) $v=U_{0}$ at $x=0$; $\quad T_{xy}=0$ at $x=\delta$.

The velocity field is of the form

(10) $V=(0,v(x),0)$,

(11) $S=S(x)$, $T=T(x)$,

where $S$ and $T$ are the extra and total stress tensors, respectively.

Substitution of (10) and (11) into (4) and (5) shows that (4) is identically satisfied, while (5) takes the form

(12) $0=\dfrac{dT_{xx}}{dx}+\rho f_{1}$,

(13) $0=\dfrac{dT_{xy}}{dx}+\rho f_{2}$,

where $f_{1}$ and $f_{2}$ are the components of the body force.

Since the gravitational force acts downward while the $y$-axis points upward, (12) and (13) become

(14) $0=\dfrac{dT_{xx}}{dx}$, $\quad 0=\dfrac{dT_{xy}}{dx}-\rho g$.

By using equations (8), (10), and (11), the components of $S$ are

(15) $S_{xx}=\dfrac{-\eta m(1-a)(dv/dx)^{2}}{1+m^{2}(1-a^{2})(dv/dx)^{2}}$,

(16) $S_{xy}=\dfrac{\eta\,(dv/dx)}{1+m^{2}(1-a^{2})(dv/dx)^{2}}=S_{yx}$,

(17) $S_{yy}=\dfrac{\eta m(1+a)(dv/dx)^{2}}{1+m^{2}(1-a^{2})(dv/dx)^{2}}$.

Using (15)–(17), the stress tensor $T$ in (6) has components

(18) $T_{xx}=\dfrac{-\eta m(1-a)(dv/dx)^{2}}{1+m^{2}(1-a^{2})(dv/dx)^{2}}$,

(19) $T_{xy}=\mu\dfrac{dv}{dx}+\dfrac{\eta\,(dv/dx)}{1+m^{2}(1-a^{2})(dv/dx)^{2}}=T_{yx}$,

(20) $T_{yy}=\dfrac{\eta m(1+a)(dv/dx)^{2}}{1+m^{2}(1-a^{2})(dv/dx)^{2}}$.

Substituting $T_{xy}$ in (14) gives

(21) $\dfrac{d}{dx}\left(\mu\dfrac{dv}{dx}+\dfrac{\eta\,(dv/dx)}{1+m^{2}(1-a^{2})(dv/dx)^{2}}\right)=\rho g$.

The boundary conditions become

(22) $\dfrac{dv}{dx}=0$ at $x=\delta$,

(23) $v=U_{0}$ at $x=0$.

Introduce the nondimensional parameters

(24) $x^{*}=\dfrac{x}{\delta}$, $\quad v^{*}=\dfrac{v}{U_{0}}$, $\quad \varphi=\dfrac{\mu}{\mu+\eta}$,

where $\varphi$ is the ratio of viscosities. Dropping the asterisks, equations (21)–(23) become

(25) $\dfrac{d}{dx}\left(\varphi\dfrac{dv}{dx}+\dfrac{(1-\varphi)(dv/dx)}{1+\mathrm{We}^{2}(1-a^{2})(dv/dx)^{2}}\right)=\mathrm{St}$,

(26) at $x=1$, $\dfrac{dv}{dx}=0$,

(27) at $x=0$, $v=1$,

where $\mathrm{St}=\rho g\delta^{2}/(\mu_{\mathrm{eff}}U_{0})$ and $\mathrm{We}=mU_{0}/\delta$ are the Stokes and Weissenberg numbers, respectively, and $\mu_{\mathrm{eff}}=\mu+\eta$.

Simplification of equation (25) gives

(28) $\dfrac{d^{2}v}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}v}{dx^{2}}-\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}v}{dx^{2}}+2\,\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{2}-\mathrm{St}=0$,

with

(29) $\dfrac{dv}{dx}=0$ at $x=1$, $\quad v=1$ at $x=0$.

Now, using the definitions of fractional calculus given in Section 2.1, the fractional form of equation (28) is

(30) $\dfrac{d^{2}v}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}v}{dx^{2}}-\mathrm{St}\right)\left(D^{\alpha}v\right)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}v}{dx^{2}}+2\,\mathrm{St}\right)\left(D^{\alpha}v\right)^{2}-\mathrm{St}=0$,

with

(31) $v(0)=1$, $\quad v'(1)=0$, $\quad 0<\alpha<1$.

## 4. Homotopy Solution of Johnson–Segalman Fluid in the Lifting Case

A homotopy $\Omega\times[0,1]\rightarrow\Re$ for (30) is

(32) $(1-p)\dfrac{d^{2}v}{dx^{2}}+p\left[\dfrac{d^{2}v}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}v}{dx^{2}}-\mathrm{St}\right)(D^{\alpha}v)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}v}{dx^{2}}+2\,\mathrm{St}\right)(D^{\alpha}v)^{2}-\mathrm{St}\right]=0$.

Using (30) and (31), we obtain the following.

Zeroth-order problem:

(33) $v_{0}''(x)=0$, $\quad v_{0}(0)=1$, $\quad v_{0}'(1)=0$.

First-order problem:

(34) $v_{1}''(x)-\mathrm{St}-2\,\mathrm{St}\,\mathrm{We}^{2}(1-a^{2})(D^{\alpha}v_{0})^{2}-\mathrm{St}\,\mathrm{We}^{4}(1-a^{2})^{2}(D^{\alpha}v_{0})^{4}-\mathrm{We}^{2}(1-a^{2})(1+\varphi)(D^{\alpha}v_{0})^{2}v_{0}''+\mathrm{We}^{4}(1-a^{2})^{2}\varphi(D^{\alpha}v_{0})^{4}v_{0}''=0$, $\quad v_{1}(0)=0$, $\quad v_{1}'(1)=0$.

Second-order problem:

(35) $v_{2}''(x)-4\,\mathrm{St}\,\mathrm{We}^{2}(1-a^{2})(D^{\alpha}v_{0})(D^{\alpha}v_{1})-4\,\mathrm{St}\,\mathrm{We}^{4}(1-a^{2})^{2}(D^{\alpha}v_{0})^{3}(D^{\alpha}v_{1})-2\,\mathrm{We}^{2}(1-a^{2})(1+\varphi)(D^{\alpha}v_{0})(D^{\alpha}v_{1})v_{0}''+4\,\mathrm{We}^{4}(1-a^{2})^{2}\varphi(D^{\alpha}v_{0})^{3}(D^{\alpha}v_{1})v_{0}''-\mathrm{We}^{2}(1-a^{2})(1+\varphi)(D^{\alpha}v_{0})^{2}v_{1}''+\mathrm{We}^{4}(1-a^{2})^{2}\varphi(D^{\alpha}v_{0})^{4}v_{1}''=0$, $\quad v_{2}(0)=0$, $\quad v_{2}'(1)=0$.

Similarly, we can find the higher order problems and their solutions.

The third-order approximate solution, after applying the Caputo definition and keeping $\alpha=0.99$, $\varphi=0.1$, $\mathrm{St}=0.01$, $a=0.01$, and $\mathrm{We}=0.1$ fixed, is

(36) $V(x)=1+\dfrac{1}{2}\left(-0.02x+0.01x^{2}\right)+0.0796711\left(-8.24342\times 10^{-8}x^{2.98}+1.23219\times 10^{-7}x^{4}-8.24098\times 10^{-8}x^{5}+2.04999\times 10^{-8}x^{6}\right)x^{1.98}+0.0398355\left(-9.06776\times 10^{-8}x^{2.98}+1.35541\times 10^{-7}x^{4}-9.06508\times 10^{-8}x^{5}+2.25499\times 10^{-8}x^{6}\right)x^{1.98}$.

The residual is denoted by $R$ and is defined as

(37) $R=\dfrac{d^{2}V}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}V}{dx^{2}}-\mathrm{St}\right)(D^{\alpha}V)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}V}{dx^{2}}+2\,\mathrm{St}\right)(D^{\alpha}V)^{2}-\mathrm{St}$.

## 5. Flow Rate and Average Velocity in the Lifting Case

The flow rate in this case is

(38) Q=1615−16α+4α2Γ4−α2−3+2α−1−7+2α3−1+a2St3We2−3+α2−16+α25+α−13+2α3+ϕ−2−3+St−5+2αΓ4−α2,

and the average velocity is

(39) V̄=1615−16α+4α2Γ4−α2−3+2α−1−7+2α3−1+a2St3We2−3+α2−16+α25+α−13+2α3+ϕ−2−3+St−5+2αΓ4−α2.
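To illustrate how the order-by-order problems of Section 4 are integrated, the following sympy sketch (our own illustration under the stated assumptions, not code from the paper) solves the zeroth-order problem (33) and the first-order problem (34) in the integer-order limit $\alpha\rightarrow 1$: since the zeroth-order solution $v_{0}(x)=1$ is constant, every $D^{\alpha}v_{0}$ term in (34) vanishes and the first-order equation reduces to $v_{1}''=\mathrm{St}$:

```python
import sympy as sp

x, St = sp.symbols('x St')
v0, v1 = sp.Function('v0'), sp.Function('v1')

# Zeroth-order problem (33): v0'' = 0 with v0(0) = 1, v0'(1) = 0  ->  v0 = 1.
sol0 = sp.dsolve(sp.Eq(v0(x).diff(x, 2), 0),
                 ics={v0(0): 1, v0(x).diff(x).subs(x, 1): 0})

# First-order problem (34) with v0 constant (alpha -> 1 limit): all D^alpha v0
# terms drop out, leaving v1'' = St with v1(0) = 0, v1'(1) = 0.
sol1 = sp.dsolve(sp.Eq(v1(x).diff(x, 2), St),
                 ics={v1(0): 0, v1(x).diff(x).subs(x, 1): 0})

print(sp.expand(sol0.rhs + sol1.rhs))  # 1 - St*x + St*x**2/2
```

With $\mathrm{St}=0.01$ this gives $1-0.01x+0.005x^{2}$, which is exactly the polynomial part of the third-order solution (36); the remaining $x^{1.98}$ terms are the genuinely fractional corrections.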
## 6. Mathematical Formulation in the Drainage Case

Let the fluid drain down due to gravity on the infinite stationary belt. Equations (4)–(8) become

(40) $0=\dfrac{dT_{xy}}{dx}+\rho g$.

Using equation (19) in (40) gives

(41) $\dfrac{d}{dx}\left(\mu\dfrac{dv}{dx}+\dfrac{\eta\,(dv/dx)}{1+m^{2}(1-a^{2})(dv/dx)^{2}}\right)=-\rho g$,

with

(42) $\dfrac{dv}{dx}=0$ at $x=\delta$, $\quad v=0$ at $x=0$.

Equation (41) in dimensionless form is

(43) $\dfrac{d}{dx}\left(\dfrac{(1-\varphi)(dv/dx)}{1+\mathrm{We}^{2}(1-a^{2})(dv/dx)^{2}}+\varphi\dfrac{dv}{dx}\right)=-\mathrm{St}$,

(44) at $x=1$, $\dfrac{dv}{dx}=0$; at $x=0$, $v=0$.

Simplification of (43) gives

(45) $\dfrac{d^{2}v}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}v}{dx^{2}}+\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}v}{dx^{2}}-2\,\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{2}+\mathrm{St}=0$.

Three cases of fractional boundary value problems are obtained.

Case 1.

(46) $\dfrac{d^{2}v}{dx^{2}}+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi\dfrac{d^{2}v}{dx^{2}}+\mathrm{St}\right)(D^{\alpha}v)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)\dfrac{d^{2}v}{dx^{2}}-2\,\mathrm{St}\right)(D^{\alpha}v)^{2}+\mathrm{St}=0$,

with

(47) $v(0)=0$, $\quad v'(1)=0$, $\quad 0<\alpha<1$.

Case 2.

(48) $D^{\gamma}v+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi D^{\gamma}v+\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)D^{\gamma}v-2\,\mathrm{St}\right)\left(\dfrac{dv}{dx}\right)^{2}+\mathrm{St}=0$,

(49) $v(0)=0$, $\quad v'(1)=0$, $\quad 1<\gamma<2$.

Case 3. Here both the first- and second-order derivatives are replaced by noninteger order derivatives:

(50) $D^{\gamma}v+\mathrm{We}^{4}(1-a^{2})^{2}\left(\varphi D^{\gamma}v+\mathrm{St}\right)(D^{\alpha}v)^{4}-\mathrm{We}^{2}(1-a^{2})\left((1+\varphi)D^{\gamma}v-2\,\mathrm{St}\right)(D^{\alpha}v)^{2}+\mathrm{St}=0$,

with

(51) $v(0)=0$, $\quad v(1)=0$, $\quad 1<\gamma<2$, $\quad 0<\alpha<1$.
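As a quick sanity check on Case 1 (again our own sketch, not from the paper), consider the integer-order limit $\alpha\rightarrow 1$ with $\mathrm{We}=0$: equation (46) then collapses to $v''+\mathrm{St}=0$ with $v(0)=0$ and $v'(1)=0$, whose solution reproduces the $V(x)$ column of Table 5 below:

```python
import sympy as sp

x, St = sp.symbols('x St')
v = sp.Function('v')

# Integer-order, We = 0 limit of drainage Case 1: v'' + St = 0,
# with v(0) = 0 (no slip at the belt) and v'(1) = 0 (free surface).
sol = sp.dsolve(sp.Eq(v(x).diff(x, 2) + St, 0),
                ics={v(0): 0, v(x).diff(x).subs(x, 1): 0})
profile = sp.expand(sol.rhs)  # St*x - St*x**2/2, i.e. St*(x - x**2/2)

# Spot-check against the V(x) column of Table 5 (St = 0.001):
for xv in [0.1, 0.2, 0.5, 1.0]:
    print(xv, profile.subs({St: 0.001, x: xv}))
# -> 0.000095, 0.00018, 0.000375, 0.0005
```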
## 7. Results and Discussion

In this paper, a fractional study of thin film flow of Johnson–Segalman fluid is carried out in the lifting and drainage cases. The obtained boundary value problems are solved for different values of the involved parameters, and the results are reported in Tables 1–4 for the lifting case and in Tables 5–10 for the drainage case. Tables 1 and 5 present solutions and corresponding errors for different α. Tables 2 and 6 show solutions along with errors for ϕ. Tables 3 and 7 showcase solutions and errors for various St. Table 4 demonstrates solutions and errors for various We in the lifting case.

Table 1: Solution and residual error for α, keeping a = 0.1, We = 0.01, St = 0.001, and ϕ = 0.1 fixed in lifting.

| x | V(x) | Res. error (α = 0.2) | V(x) | Res. error (α = 0.6) | V(x) | Res. error (α = 0.99) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 0.905 | −8.76128 × 10^−25 | 0.905 | −9.08606 × 10^−24 | 0.905 | −6.59422 × 10^−23 |
| 0.2 | 0.82 | −2.5981 × 10^−24 | 0.82 | −1.45747 × 10^−23 | 0.82 | −4.249 × 10^−23 |
| 0.3 | 0.745 | −4.78151 × 10^−24 | 0.745 | −1.74003 × 10^−23 | 0.745 | −2.54931 × 10^−23 |
| 0.4 | 0.68 | −7.11425 × 10^−24 | 0.68 | −1.79924 × 10^−23 | 0.68 | −1.41813 × 10^−23 |
| 0.5 | 0.625 | −9.29068 × 10^−24 | 0.625 | −1.69071 × 10^−23 | 0.625 | −6.9968 × 10^−24 |
| 0.6 | 0.58 | −1.10441 × 10^−23 | 0.58 | −1.4743 × 10^−23 | 0.58 | −3.0318 × 10^−24 |
| 0.7 | 0.545 | −1.2179 × 10^−23 | 0.545 | −1.20434 × 10^−23 | 0.545 | −1.0542 × 10^−24 |
| 0.8 | 0.52 | −1.25923 × 10^−23 | 0.52 | −9.26204 × 10^−24 | 0.52 | −2.52839 × 10^−25 |
| 0.9 | 0.505 | −1.22773 × 10^−23 | 0.505 | −6.71129 × 10^−24 | 0.505 | −2.91563 × 10^−27 |
| 1.0 | 0.5 | −1.13159 × 10^−23 | 0.5 | −4.57323 × 10^−24 | 0.5 | 1.0211 × 10^−26 |

Table 2: Solution and residual error for ϕ, keeping α = 0.99, We = 0.01, St = 0.001, and a = 0.1 fixed in lifting.

| x | V(x) | Res. error (ϕ = 0.1) | V(x) | Res. error (ϕ = 0.6) | V(x) | Res. error (ϕ = 0.9) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 0.905 | −6.59422 × 10^−23 | 0.905 | −9.09923 × 10^−23 | 0.905 | −1.08443 × 10^−22 |
| 0.2 | 0.82 | −4.249 × 10^−23 | 0.82 | −5.8628 × 10^−23 | 0.82 | −6.98245 × 10^−23 |
| 0.3 | 0.745 | −2.54931 × 10^−23 | 0.745 | −3.51797 × 10^−23 | 0.745 | −4.19438 × 10^−23 |
| 0.4 | 0.68 | −1.41813 × 10^−23 | 0.68 | −1.95548 × 10^−23 | 0.68 | −2.32819 × 10^−23 |
| 0.5 | 0.625 | −6.9968 × 10^−24 | 0.625 | −9.6702 × 10^−24 | 0.625 | −1.15297 × 10^−23 |
| 0.6 | 0.58 | −3.0318 × 10^−24 | 0.58 | −4.18483 × 10^−24 | 0.58 | −4.96745 × 10^−24 |
| 0.7 | 0.545 | −1.0542 × 10^−24 | 0.545 | −1.45024 × 10^−24 | 0.545 | −1.7253 × 10^−24 |
| 0.8 | 0.52 | −2.52839 × 10^−25 | 0.52 | −3.53713 × 10^−25 | 0.52 | −4.20658 × 10^−25 |
| 0.9 | 0.505 | −2.91563 × 10^−27 | 0.505 | −1.76326 × 10^−26 | 0.505 | −2.24163 × 10^−26 |
| 1.0 | 0.5 | 1.0211 × 10^−26 | 0.5 | 5.3895 × 10^−27 | 0.5 | 9.2139 × 10^−27 |

Table 3: Solution and residual error for St, keeping α = 0.98, We = 0.001, ϕ = 0.1, and a = 0.1 fixed in lifting.

| x | V(x) | Res. error (St = 0.001) | V(x) | Res. error (St = 0.01) | V(x) | Res. error (St = 0.1) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 0.999905 | −7.23543 × 10^−27 | 0.99905 | −6.29219 × 10^−22 | 0.9905 | −6.30818 × 10^−17 |
| 0.2 | 0.99982 | −3.63823 × 10^−27 | 0.9982 | −4.18657 × 10^−22 | 0.982 | −4.18849 × 10^−17 |
| 0.3 | 0.999745 | −2.86435 × 10^−27 | 0.99745 | −2.5807 × 10^−22 | 0.9745 | −2.57976 × 10^−17 |
| 0.4 | 0.99968 | −1.51698 × 10^−27 | 0.9968 | −1.46438 × 10^−22 | 0.968 | −1.46541 × 10^−18 |
| 0.5 | 0.999625 | −8.70705 × 10^−28 | 0.99625 | −7.53253 × 10^−23 | 0.9625 | −7.52476 × 10^−18 |
| 0.6 | 0.99958 | −4.61286 × 10^−28 | 0.9958 | −3.40645 × 10^−23 | 0.958 | −3.37798 × 10^−18 |
| 0.7 | 0.999545 | −4.34662 × 10^−28 | 0.99545 | −1.28478 × 10^−23 | 0.9545 | −1.25942 × 10^−18 |
| 0.8 | 0.99952 | −4.00347 × 10^−29 | 0.9952 | −3.70343 × 10^−24 | 0.952 | −3.64878 × 10^−19 |
| 0.9 | 0.999505 | −1.66647 × 10^−29 | 0.99505 | −1.07941 × 10^−24 | 0.9505 | −8.14999 × 10^−20 |
| 1.0 | 0.9995 | 9.95937 × 10^−30 | 0.995 | −8.78476 × 10^−26 | 0.95 | −8.71776 × 10^−21 |

Table 4: Solution and residual error for We, keeping α = 0.95, St = 0.001, ϕ = 0.1, and a = 0.1 fixed in lifting.

| x | V(x) | Res. error (We = 0.001) | V(x) | Res. error (We = 0.01) | V(x) | Res. error (We = 0.1) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 0.905 | −5.48732 × 10^−27 | 0.905 | −5.51525 × 10^−23 | 0.905 | −5.4982 × 10^−19 |
| 0.2 | 0.82 | −3.75182 × 10^−27 | 0.82 | −3.98956 × 10^−23 | 0.82 | −3.98946 × 10^−19 |
| 0.3 | 0.745 | −2.5638 × 10^−27 | 0.745 | −2.63606 × 10^−23 | 0.745 | −2.63525 × 10^−19 |
| 0.4 | 0.68 | −1.17028 × 10^−27 | 0.68 | −1.60262 × 10^−23 | 0.68 | −1.6049 × 10^−19 |
| 0.5 | 0.625 | −7.01495 × 10^−28 | 0.625 | −8.9237 × 10^−24 | 0.625 | −8.93613 × 10^−20 |
| 0.6 | 0.58 | −4.80416 × 10^−28 | 0.58 | −4.47664 × 10^−24 | 0.58 | −4.46647 × 10^−20 |
| 0.7 | 0.545 | −3.2225 × 10^−28 | 0.545 | −1.95569 × 10^−24 | 0.545 | −1.9563 × 10^−20 |
| 0.8 | 0.52 | 7.79 × 10^−30 | 0.52 | −7.49229 × 10^−25 | 0.52 | −7.37223 × 10^−21 |
| 0.9 | 0.505 | 1.3884 × 10^−28 | 0.505 | −2.49356 × 10^−25 | 0.505 | −2.42499 × 10^−21 |
| 1.0 | 0.5 | 9.40717 × 10^−29 | 0.5 | −6.13229 × 10^−26 | 0.5 | −5.63892 × 10^−22 |
Table 7: Solution and residual error for St, keeping α = 0.99, We = 0.01, ϕ = 0.01, and a = 0.1 fixed in Case 1 of drainage.

| x | V(x) | Res. error (St = 0.001) | V(x) | Res. error (St = 0.01) | V(x) | Res. error (St = 0.1) |
|---|---|---|---|---|---|---|
| 0.1 | 0.000095 | 5.82872 × 10^−23 | 0.000095 | 5.82511 × 10^−18 | 0.000095 | 5.8252 × 10^−13 |
| 0.2 | 0.00018 | 3.7525 × 10^−23 | 0.00018 | 3.75136 × 10^−18 | 0.00018 | 3.75135 × 10^−13 |
| 0.3 | 0.000255 | 2.24807 × 10^−23 | 0.000255 | 2.25476 × 10^−18 | 0.000255 | 2.25477 × 10^−13 |
| 0.4 | 0.00032 | 1.25172 × 10^−23 | 0.00032 | 1.24919 × 10^−18 | 0.00032 | 1.2492 × 10^−13 |
| 0.5 | 0.000375 | 6.177 × 10^−24 | 0.000375 | 6.22301 × 10^−19 | 0.000375 | 6.22317 × 10^−14 |
| 0.6 | 0.00042 | 2.69667 × 10^−24 | 0.00042 | 2.67581 × 10^−19 | 0.00042 | 2.67572 × 10^−14 |
| 0.7 | 0.000455 | 9.14662 × 10^−25 | 0.000455 | 9.26852 × 10^−20 | 0.000455 | 9.2681 × 10^−15 |
| 0.8 | 0.00048 | 2.38577 × 10^−25 | 0.00048 | 2.30937 × 10^−20 | 0.00048 | 2.3095 × 10^−15 |
| 0.9 | 0.000495 | 2.25046 × 10^−26 | 0.000495 | 3.77223 × 10^−21 | 0.000495 | 3.76414 × 10^−16 |
| 1.0 | 0.0005 | 3.98217 × 10^−27 | 0.0005 | 1.84694 × 10^−22 | 0.0005 | 1.88184 × 10^−17 |

Table 8: Solution and residual error for γ, keeping St = 0.01, ϕ = 0.1, We = 0.001, and a = 0.1 fixed in Case 2 of drainage.

| x | V(x) (γ = 1.2) | Res. error | V(x) (γ = 1.6) | Res. error | V(x) (γ = 1.99) | Res. error |
|---|---|---|---|---|---|---|
| 0.1 | 5.72659 × 10^−4 | −1.32727 × 10^−18 | 1.75703 × 10^−4 | 3.197 × 10^−18 | 5.1638 × 10^−5 | 1.29417 × 10^−17 |
| 0.2 | 1.31563 × 10^−3 | −2.02422 × 10^−18 | 5.3263 × 10^−3 | 3.56035 × 10^−18 | 2.05125 × 10^−4 | 1.22623 × 10^−17 |
| 0.3 | 2.14014 × 10^−3 | −1.98946 × 10^−18 | 1.01899 × 10^−3 | 3.88346 × 10^−18 | 4.59664 × 10^−4 | 1.24389 × 10^−17 |
| 0.4 | 3.02251 × 10^−3 | −2.9885 × 10^−19 | 1.61463 × 10^−3 | 3.46852 × 10^−18 | 8.14833 × 10^−4 | 1.35961 × 10^−17 |
| 0.5 | 3.95057 × 10^−3 | −1.84225 × 10^−18 | 2.30744 × 10^−3 | 3.68767 × 10^−18 | 1.27034 × 10^−3 | 1.31157 × 10^−17 |
| 0.6 | 4.91675 × 10^−3 | −2.46976 × 10^−18 | 3.08901 × 10^−3 | 3.57668 × 10^−18 | 1.82595 × 10^−3 | 1.37891 × 10^−17 |
| 0.7 | 5.91581 × 10^−3 | −2.09831 × 10^−18 | 3.95307 × 10^−3 | 3.36789 × 10^−18 | 2.4815 × 10^−3 | 1.30025 × 10^−17 |
| 0.8 | 6.94391 × 10^−3 | −1.35899 × 10^−18 | 4.89465 × 10^−3 | 3.06616 × 10^−18 | 3.23682 × 10^−3 | 1.40449 × 10^−17 |
| 0.9 | 7.99811 × 10^−3 | −5.26745 × 10^−19 | 5.90971 × 10^−3 | 4.003 × 10^−18 | 4.09177 × 10^−3 | 1.29653 × 10^−17 |
| 1.0 | 9.07604 × 10^−3 | −1.50223 × 10^−18 | 6.99484 × 10^−3 | 2.56045 × 10^−18 | 5.04625 × 10^−3 | 1.32103 × 10^−17 |

Table 9: Solution and residual error for α, keeping γ = 1.95, St = 0.001, ϕ = 0.1, We = 0.01, and a = 0.1 fixed in Case 3 of drainage.

| x | V(x) | Res. error (α = 0.3) | V(x) | Res. error (α = 0.7) | V(x) | Res. error (α = 0.99) |
|---|---|---|---|---|---|---|
| 0.1 | 5.87208 × 10^−6 | −9.076 × 10^−20 | 5.87208 × 10^−6 | 1.16701 × 10^−19 | 5.87208 × 10^−6 | 6.48938 × 10^−21 |
| 0.2 | 2.26882 × 10^−5 | 6.62623 × 10^−20 | 2.26882 × 10^−5 | −3.50514 × 10^−20 | 2.26882 × 10^−5 | −1.64872 × 10^−19 |
| 0.3 | 5.0024 × 10^−5 | 7.16925 × 10^−20 | 5.0024 × 10^−5 | −5.38787 × 10^−20 | 5.0024 × 10^−5 | −5.48153 × 10^−20 |
| 0.4 | 8.76616 × 10^−5 | 6.17061 × 10^−20 | 8.76616 × 10^−5 | 4.81294 × 10^−20 | 8.76616 × 10^−5 | −2.07791 × 10^−19 |
| 0.5 | 1.35451 × 10^−4 | −7.89358 × 10^−20 | 1.35451 × 10^−4 | 8.88535 × 10^−20 | 1.35451 × 10^−4 | 6.49314 × 10^−20 |
| 0.6 | 1.9328 × 10^−4 | 5.62103 × 10^−20 | 1.9328 × 10^−4 | −8.1769 × 10^−20 | 1.9328 × 10^−4 | −1.40755 × 10^−20 |
| 0.7 | 2.61056 × 10^−4 | 2.13652 × 10^−20 | 2.61056 × 10^−4 | 2.46311 × 10^−19 | 2.61056 × 10^−4 | 8.01669 × 10^−21 |
| 0.8 | 3.38702 × 10^−4 | 4.3926 × 10^−20 | 3.38702 × 10^−4 | −3.74249 × 10^−20 | 3.38702 × 10^−4 | −1.55483 × 10^−19 |
| 0.9 | 4.26153 × 10^−4 | −1.25871 × 10^−19 | 4.26153 × 10^−4 | −4.78429 × 10^−20 | 4.26153 × 10^−4 | −1.57049 × 10^−19 |
| 1.0 | 5.2335 × 10^−4 | 7.27423 × 10^−20 | 5.2335 × 10^−4 | 3.94566 × 10^−20 | 5.2335 × 10^−4 | −5.88772 × 10^−20 |

Table 10: Solution and residual error for γ, keeping α = 0.95, St = 0.001, ϕ = 0.1, We = 0.01, and a = 0.1 fixed in Case 3 of drainage.

| x | V(x) (γ = 1.2) | Res. error | V(x) (γ = 1.6) | Res. error | V(x) (γ = 1.99) | Res. error |
|---|---|---|---|---|---|---|
| 0.1 | 5.72659 × 10^−4 | −8.85316 × 10^−20 | 1.75703 × 10^−5 | −9.8013 × 10^−20 | 5.1638 × 10^−6 | 4.95955 × 10^−21 |
| 0.2 | 1.31563 × 10^−4 | −1.35853 × 10^−19 | 5.3263 × 10^−5 | −5.72227 × 10^−20 | 2.0512 × 10^−5 | 1.06505 × 10^−20 |
| 0.3 | 2.14014 × 10^−4 | 1.63841 × 10^−20 | 1.01899 × 10^−4 | 3.02498 × 10^−22 | 4.5966 × 10^−5 | 2.98796 × 10^−20 |
| 0.4 | 3.02251 × 10^−4 | −1.77002 × 10^−19 | 1.61463 × 10^−4 | 2.74106 × 10^−19 | 8.1483 × 10^−5 | 8.85672 × 10^−20 |
| 0.5 | 3.95057 × 10^−4 | 6.75864 × 10^−20 | 2.30744 × 10^−4 | −2.9077 × 10^−20 | 1.2703 × 10^−4 | −1.25093 × 10^−19 |
| 0.6 | 4.91675 × 10^−4 | −1.24341 × 10^−19 | 3.08901 × 10^−4 | 2.8962 × 10^−19 | 1.8259 × 10^−4 | 7.08777 × 10^−21 |
| 0.7 | 5.91581 × 10^−4 | 3.33152 × 10^−20 | 3.95307 × 10^−4 | 4.4664 × 10^−20 | 2.4815 × 10^−4 | 3.73471 × 10^−20 |
| 0.8 | 6.94391 × 10^−4 | −5.4743 × 10^−20 | 4.89465 × 10^−4 | −4.42716 × 10^−20 | 3.2368 × 10^−4 | −2.48835 × 10^−20 |
| 0.9 | 7.99811 × 10^−4 | −4.85706 × 10^−20 | 5.90971 × 10^−4 | −1.45847 × 10^−19 | 4.0917 × 10^−4 | 1.10885 × 10^−19 |
| 1.0 | 9.07604 × 10^−4 | −2.09174 × 10^−19 | 6.99484 × 10^−4 | 1.98754 × 10^−20 | 5.0462 × 10^−4 | 2.57162 × 10^−19 |
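The residual errors reported in Tables 1–10 are obtained by substituting each truncated HPM series back into the governing fractional equation. The sketch below illustrates that check for Case 1 of drainage, equation (46), reusing the Caputo power rule and the same low-order sample profile as in the earlier snippet. It is an assumed reconstruction, not the authors' code, and a profile this crude leaves residuals far larger than the 10^−17 to 10^−23 values that the authors' higher-order iterates achieve.

```python
from math import gamma

def caputo_power(p, alpha, x):
    # Caputo rule: D^alpha x^p = Gamma(p+1)/Gamma(p+1-alpha) * x**(p-alpha), p > 0
    return gamma(p + 1.0) / gamma(p + 1.0 - alpha) * x ** (p - alpha)

def case1_residual(x, alpha, St, We, phi, a):
    """Residual of the Case 1 drainage equation (46) for the candidate
    profile v(x) = St*(x - x**2/2), for which v'' = -St exactly (assumed example)."""
    vxx = -St                                         # second derivative of the profile
    Dav = St * (caputo_power(1.0, alpha, x)           # Caputo derivative of St*x ...
                - 0.5 * caputo_power(2.0, alpha, x))  # ... plus that of -St*x**2/2
    k = We ** 2 * (1.0 - a ** 2)
    return (vxx
            + k ** 2 * (phi * vxx + St) * Dav ** 4
            - k * (vxx + phi * vxx - 2.0 * St) * Dav ** 2
            + St)

for x in (0.1, 0.5, 1.0):
    print(x, case1_residual(x, alpha=0.9, St=0.001, We=0.01, phi=0.1, a=0.1))
```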
Tables 8–10 list solutions and residual errors for different values of the fractional parameters in Cases 2 and 3, respectively. Inspection of these tables confirms that the solutions are consistent. The influence of the different parameters on V(x) in the lifting and drainage cases is examined graphically. Figures 1–7 show the effects of the different parameters on V(x) in the lifting case. Figure 1 shows the effect of α on V(x): increasing α reduces V(x). Figure 2 shows the effect of a on V(x); a has a direct relationship with V(x). Figures 3–5 depict the influence of ϕ, St, and We on V(x), respectively. The Weissenberg number We is a dimensionless parameter used to characterize the motion of the fluid particles; physically, it is defined as the ratio of elastic to viscous forces. At higher values of the Weissenberg number the viscous resistance is enhanced, and the stronger viscous force slows the motion of the fluid particles; the momentum layers likewise shrink as the Weissenberg number increases. It is seen that V(x) has an inverse relationship with these parameters in all cases. Figure 6 indicates the impact of increasing a and St simultaneously on V(x): V(x) decreases as a and St increase, and St is the more influential parameter of the two. Figure 7 shows the combined effect of a and ϕ. The motion of the fluid particles is reduced at a higher viscosity ratio: higher values of ϕ slow the fluid down, so the fluid becomes more viscous and thick. It is seen that V(x) increases with increasing a and ϕ, with a more influential than ϕ. Figures 8–14 show the effects of the different parameters on V(x) in Case 1 of drainage. Figure 8 depicts the effect of α on V(x): V(x) has a direct association with α. Figure 9 shows the impact of a on V(x): V(x) has an inverse association with a. Figures 10–12 show the effects of ϕ, St, and We on V(x), respectively; V(x) has a direct connection with these parameters. Figure 13 shows the combined effect of increasing St and a: V(x) increases as St and a increase, and St is more influential than a. The Stokes number St is a dimensionless parameter used to characterize the behavior of particles carried by a flow; it is defined as the ratio of the characteristic time of a particle to a characteristic time of the flow. Accordingly, the motion of the particles is directly proportional to the velocity of the fluid.
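For concreteness, the two dimensionless groups discussed above can be evaluated from physical inputs via the definitions We = mU0/δ and St = ρgδ²/(μ_eff U0) introduced in the lifting formulation. Every numerical value in the sketch below is a made-up sample input, not data from this study.

```python
# Illustrative evaluation of the dimensionless groups (all inputs are assumed samples).
rho = 1000.0      # fluid density, kg/m^3            (assumed)
g = 9.81          # gravitational acceleration, m/s^2
delta = 1.0e-3    # film thickness, m                (assumed)
U0 = 0.5          # belt speed, m/s                  (assumed)
mu_eff = 5.0      # effective viscosity mu + eta, Pa*s (assumed)
m = 2.0e-4        # Johnson-Segalman relaxation parameter, s (assumed)

We = m * U0 / delta                      # Weissenberg number
St = rho * g * delta ** 2 / (mu_eff * U0)  # Stokes number
print(f"We = {We:.3f}, St = {St:.5f}")   # -> We = 0.100, St = 0.00392
```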
Furthermore, Figure 14 shows the effect of increasing ϕ and a on V(x); the investigation shows that a is more influential than ϕ. Figure 15 shows the effect of γ on V(x) in Case 2 of drainage: V(x) has an inverse association with γ. Figures 16 and 17 indicate the effects of α and γ on V(x) in Case 3 of drainage; α and γ exhibit opposite behaviors on V(x) in this case. In addition to the above results, the volumetric flow rates and average velocities are also computed in the lifting and drainage cases.

Figure 1: Effect of α on V(x) for a = 0.1, We = 1, St = 1, and ϕ = 0.1 fixed in the lifting case.
Figure 2: Effect of a on V(x) for α = 0.95, St = 1, ϕ = 0.01, and We = 0.1 fixed in the lifting case.
Figure 3: Effect of ϕ on V(x) for α = 0.98, a = 0.1, We = 1, and St = 1 fixed in the lifting case.
Figure 4: Effect of St on V(x) for α = 0.95, a = 0.1, We = 1, and ϕ = 0.1 fixed in the lifting case.
Figure 5: Effect of We on V(x) for α = 0.98, a = 0.1, St = 1, and ϕ = 0.1 fixed in the lifting case.
Figure 6: Effect of increasing St and a simultaneously on V(x) for α = 0.99, We = 0.1, and ϕ = 0.1 fixed in the lifting case.
Figure 7: Effect of increasing ϕ and a simultaneously on V(x) for α = 0.95, We = 1, and St = 0.1 fixed in the lifting case.
Figure 8: Effect of α on V(x) for a = 0.1, We = 1, St = 1, and ϕ = 0.1 fixed for Case 1 in the drainage case.
Figure 9: Effect of a on V(x) for α = 0.95, We = 0.1, St = 1, and ϕ = 0.01 fixed for Case 1 in the drainage case.
Figure 10: Effect of ϕ on V(x) for α = 0.97, a = 0.1, We = 1, and St = 1 fixed for Case 1 in the drainage case.
Figure 11: Effect of St on V(x) for α = 0.95, a = 0.2, We = 1, and ϕ = 0.1 fixed for Case 1 in the drainage case.
Figure 12: Effect of We on V(x) for α = 0.95, a = 0.1, St = 1, and ϕ = 0.1 fixed for Case 1 in the drainage case.
Figure 13: Effect of increasing St and a simultaneously on V(x) for α = 0.95, We = 1, and ϕ = 0.01 fixed for Case 1 in the drainage case.
Figure 14: Effect of increasing ϕ and a simultaneously on V(x) for α = 0.95, We = 1, and St = 1 fixed for Case 1 in the drainage case.
Figure 15: Effect of γ on V(x) for a = 0.1, We = 1, St = 1, and ϕ = 0.1 fixed for Case 2 in the drainage case.
Figure 16: Effect of α on V(x) for γ = 1.95, a = 0.1, We = 1, St = 1, and ϕ = 0.1 fixed for Case 3 in the drainage case.
Figure 17: Effect of γ on V(x) for α = 0.99, a = 0.1, We = 1, St = 1, and ϕ = 0.1 fixed for Case 3 in the drainage case.

## 8. Conclusions

In this study, the thin film flow of a Johnson–Segalman fluid is investigated in fractional space for the lifting and drainage cases. Solutions of the boundary value problems are obtained using fractional calculus and HPM. The validity of the obtained solutions is established through the computation of residual errors. Several important results on the different parameters are established in fractional space. It is found that the fractional parameter α has opposite effects in the drainage and lifting cases. The study also discloses that, in the fractional setting, the Stokes and Weissenberg numbers show similar effects on the fluid velocity in the drainage and lifting cases. It is further seen that the Stokes number is the most influential parameter in both the lifting and drainage situations [34–39].

--- *Source: 1019810-2022-02-23.xml*
1019810-2022-02-23_1019810-2022-02-23.md
30,565
An Application of Homotopy Perturbation Method to Fractional-Order Thin Film Flow of the Johnson–Segalman Fluid Model
Mubashir Qayyum; Farnaz Ismail; Syed Inayat Ali Shah; Muhammad Sohail; Essam R. El-Zahar; K. C Gokul
Mathematical Problems in Engineering (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1019810
1019810-2022-02-23.xml
2022
# Therapeutic Effect of Ultrasound-Guided Peripherally Inserted Central Catheter Combined with Predictive Nursing in Patients with Large-Area Severe Burns

**Authors:** Baiyan He; Aiqiong Zhang; Shuting He
**Journal:** Computational and Mathematical Methods in Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019829

---

## Abstract

This study aimed to explore the application value of ultrasound-guided peripherally inserted central catheter (PICC) placement combined with predictive nursing in the treatment of large-area severe burns. 88 patients with large-area severe burns who visited the hospital were chosen as the research objects. They were randomly divided into an observation group and a control group, with 44 cases in each. The patients in the observation group were treated with ultrasound-guided PICC combined with predictive nursing, while those in the control group were treated with traditional PICC and nursing methods. The anxiety of the patients was then compared between the groups with the Self-rating Anxiety Scale (SAS), and depression was compared with the Self-rating Depression Scale (SDS). Pain was analyzed with the McGill Pain Questionnaire (MPQ), and a self-made nursing satisfaction questionnaire was adopted to evaluate nursing satisfaction. The surgery-related indicators of the patients were recorded (the success rate of one-time puncture, the success rate of one-time catheter placement, incidence of complications, heart rate, blood pressure, etc.). The success rates of one-time puncture (93% vs. 86%) and of catheter placement (95% vs. 81%) in the observation group were significantly higher than those in the control group, P<0.05. The pain scores of the observation group were much lower than those of the control group at each time period, P<0.05. The number of patients with negative emotions such as anxiety and depression in the observation group was markedly smaller than that in the control group. The incidence of complications in the observation group was notably lower than that in the control group (4.5% vs. 18%), P<0.05. The nursing satisfaction of the observation group was significantly higher than that of the control group (93% vs. 79.5%), P<0.05. In conclusion, ultrasound-guided PICC and predictive nursing have high clinical application value in the treatment of patients with large-area severe burns.

---

## Body

## 1. Introduction

Burns have a huge impact on patients' health, life, work, and study. They weaken the social labor force and increase the economic burden on families and society. Statistics show that the incidence of burns in China is much higher than that in overseas countries [1].

Generally, burns are classified into four grades: first-degree burns, superficial second-degree burns, deep second-degree burns, and third-degree burns. The specific clinical manifestations of each grade are as follows. First-degree burns are mild and are generally characterized by mild redness, swelling, and heat pain, with no blisters and no skin damage. They usually recover within a week without any scarring, although the local skin color may be darker in the short term. In superficial second-degree burns, blisters of different sizes form, and the blister fluid is clear and transparent, a pale yellow or egg-white-like fluid. Ruptured blisters expose a rosy and moist wound [2]. Patients may experience significant pain and local redness and swelling.
The wound usually heals in 1-2 weeks without scarring, but the newly grown skin may sometimes show pigment changes. In deep second-degree burns, there is local swelling, and the epithelial tissue turns white or brownish-yellow. There are also scattered small blisters; the wound exposed by ruptured blisters is slightly wet, red and white or red-in-white in color. Many red dots or small vascular branches can be observed, cutaneous sensation is dulled, and pain is not obvious. If there is no infection, healing generally takes about 3-4 weeks; in the event of infection, not only is the healing time prolonged, but scars are left after healing. In third-degree burns, the wound surface is dry and waxy white, brown, or charcoal black, with no blisters and no pain. It is tough and leather-like, and a thick coagulated vascular network lies under the eschar, caused by venous embolism in the fat layer. In summary, second- and third-degree burns pose a serious threat to the life and health of patients. In addition, the prognosis of these patients is generally very poor, and the treatment time is relatively long. Therefore, the treatment and nursing of burns is a long-term and difficult process [3].

Since burns cause extensive damage to the protective barrier of the skin, further loss of body fluids can occur. Clinically, long-term fluid supplementation, anti-infection treatment, and postoperative repair are often required, a process that typically takes months or even years. Intravenous infusion is commonly used for fluid supplementation in clinical practice. However, the scarred skin of burn patients often makes it difficult to find veins, which increases the difficulty of venipuncture, and ordinary puncture sites must be alternated repeatedly [4]. This causes great suffering to patients and increases the difficulty of clinical nursing. Therefore, finding a method that can relieve patients' pain and enable long-term infusion administration is a hot topic of current clinical research [5]. In a peripherally inserted central catheter (PICC), the tip of the catheter is located in the superior vena cava, where the drug is quickly diluted; PICC can thus avoid problems such as phlebitis and tissue necrosis caused by drug leakage [6, 7]. In addition, PICC has the advantages of a long indwelling time and no risk of pneumothorax or arterial injury [8, 9]. With the development of imaging technologies such as ultrasound, the modified Seldinger technique for PICC placement under ultrasound guidance has gradually been developed [10]. A large number of clinical studies have shown that the success rate of traditional PICC placement is only 78%, the success rate of PICC placement using the optimized Seldinger technique alone is 84%, and that of optimized ultrasound-guided Seldinger PICC placement reaches 98%. In general, the ultrasound-guided optimized Seldinger technique for PICC placement has wide and effective clinical applications. However, there is no direct report worldwide about its application in patients with large-area severe burns [11]; further in-depth research on its application effect in this population is therefore needed.

There are also some defects and deficiencies in the optimized Seldinger PICC placement technique guided by ultrasound. For example, venipuncture and blade dilation are required during catheter placement, which can cause local tissue damage and pain [12].
Pain caused by ultrasound-guided optimized Seldinger PICC placement can lead to a series of physiological and pathological changes, and these changes are important factors causing postoperative complications. Therefore, it is necessary and urgent to adopt appropriate nursing interventions to improve the quality of life and prognosis of severely burned patients. Predictive nursing is a method that is widely used worldwide and has been recognized and confirmed by many scholars [13]. In work by some foreign scholars, predictive nursing reduced the incidence of coronary heart disease by 50%. For patients with advanced head and neck tumors, some scholars have adopted predictive enteral nutrition support nursing and found that this measure can keep patients with neck tumors adequately nourished. Research by domestic scholars shows that predictive nursing can improve comfort, satisfaction, and compliance with clinical treatment, and there are many similar studies [14]. From the above, predictive nursing has achieved good outcomes in clinical work and has been widely promoted; it can effectively relieve patients' negative moods and reduce the incidence of complications. However, hardly any of the existing research worldwide directly reports the application of predictive nursing to PICC placement in burn patients under the ultrasound-guided modified Seldinger technique, and the application effect of the ultrasound-guided modified Seldinger technique in the PICC placement of burn patients requires further research [15].

Patients with large-area severe burns were selected as the research objects in this study to explore the application value of ultrasound-guided PICC placement combined with predictive nursing in their treatment. The study was expected to provide a reference and basis for the clinical treatment of related diseases as well as for the application of the related technologies.

## 2. Research Methods

### 2.1. Objects

In this study, 100 patients with severe burns admitted to the hospital from July 2020 to January 2022 were selected and randomly divided into a control group and an observation group, with 50 cases in each. Ultrasound-guided PICC combined with predictive nursing was given in the observation group, while the control group received traditional PICC combined with routine nursing. Inclusion criteria required that the patients be 18-66 years old, have no skin damage to the auricle, have no history of alcohol allergy, and receive PICC placement for the first time. In addition, the patients had to meet the indications for PICC, have no mental illness, and be able to correctly express pain. No systemic or local pain relief measure was taken for 24 hours before PICC placement. Exclusion criteria were as follows: patients who had received deep venous catheter placement (intravenous access port; subclavian, internal jugular, or femoral venous catheter placement); patients in whom the diameter of the basilic, brachial, and median cubital veins under B-mode ultrasound was <5 mm; patients with upper extremity hemiplegia, a history of surgery, or an unsuccessful one-time venipuncture; and patients who received radiotherapy, chemotherapy, drugs, surgery, or other treatments that could relieve pain during the research. Informed consent forms were obtained from the patients, and this study was approved by the ethics committee of the hospital.

### 2.2. PICC Puncture Methods

In the control group, ordinary deep vein puncture was adopted for PICC placement.
A disposable puncture catheter was used, the patient was placed in a supine position, the arm to be catheterized was abducted by 90°, and a vein below the elbow was selected for puncture. After the blood vessel was selected, sterilization and draping were carried out. Puncture was performed with a puncture needle, the needle core was withdrawn after venous return was observed, and the catheter was advanced along the outer cannula of the puncture needle to the predetermined length. Then, the guide wire was withdrawn, the catheter was connected to the installer, and sterile saline gauze was used to clean the skin around the puncture point. The puncture point was covered with sterile gauze and fixed with a transparent dressing. After the catheter was fixed, its position was verified under X-ray.

In the observation group, the patients underwent PICC placement using the ultrasound-guided technique. The body position and disinfection method were the same as those in the control group. A vein below the elbow was selected for puncture. The approximate position of the vein was displayed on the transverse section with a color Doppler ultrasound apparatus. The longitudinal section was then scanned to observe the blood flow, wall thickness, and vessel diameter of the vein. The probe was turned back to the transverse section, its midpoint was aligned with the transverse section of the vein, and this point, the surface projection of the vein, was marked. Taking this point as the starting point, a further point was located every 1 cm; a total of 3 points were located and kept on the same straight line. After routine disinfection and draping, the lowest located point was taken as the needle insertion point. The probe was held at right angles to the vein as well as to the skin. Under the guidance of ultrasound, the puncture needle was advanced parallel to the vein. After good blood return was obtained, the position of the needle core was kept unchanged and the guide wire was inserted 10 cm. The puncture angle was then reduced, and insertion of the guide wire continued. The needle core was then withdrawn, the catheter sheath was advanced, and the catheter was placed.

### 2.3. Nursing Intervention

The patients in the control group received routine nursing, which mainly included admission introduction, medication guidance, routine observation, auxiliary treatment, health education, and other basic care.

The patients in the observation group were given predictive nursing. A predictive nursing intervention team was established; the team organized its members for nursing knowledge training every week and conducted regular assessments. The specific interventions were as follows. (1) On the basis of a full understanding of the patients' existing knowledge, a systematic theoretical knowledge system was constructed. Through repeated communication, distribution of brochures, and other methods, the patients' correct understanding of the disease was deepened and strengthened, and the patients were helped to establish correct beliefs and attitudes. (2) During the treatment and nursing period, the Self-rating Anxiety Scale (SAS) and the Self-rating Depression Scale (SDS) were adopted to observe and evaluate whether negative emotional states occurred in the patients.
If a patient showed depression, anxiety, etc., a targeted psychological intervention was given, carried out every 7 days for 25 minutes each time. (3) Patients with severe burns might need to stay in bed for a long time, and negative emotions could also affect the vagus nerve, resulting in constipation; the nursing intervention team therefore carried out reasonable dietary interventions for the patients. (4) For nursing after catheter placement, chitin-type wound dressings were used for fixation to promote hemostasis and healing at the puncture point as soon as possible. Relevant information was recorded in detail on the catheter maintenance record sheet, including the catheter model, batch number, placement position, and placement length. The precautions, possible adverse reactions, daily-life precautions, and common knowledge of self-maintenance after catheter placement were explained in detail to the patients and their families. The importance of regularly maintaining the catheter and keeping it in good condition was also explained.

### 2.4. Observation Indicators

Pain score: the McGill Pain Questionnaire (MPQ) was used for estimation. On the Pain Rating Index (PRI) scale (0-3 points), 0 points stood for no pain, 1 point for mild pain, 2 points for moderate pain, and 3 points for severe pain. Under the Present Pain Intensity (PPI) scoring (0-5 points), no pain, mild pain, pain causing discomfort, moderate pain, severe pain, and unbearable pain were indicated by 0, 1, 2, 3, 4, and 5 points, respectively. Observations and comparisons were made at six time points: local anesthesia, venipuncture, withdrawal of the puncture needle, blade expansion, vascular sheath insertion, and vascular sheath withdrawal.

Anxiety: the SAS was adopted for the evaluation of patients' anxiety, with scores ranging from 0 to 100. A SAS standard score <50 indicated no anxiety, 50-59 mild anxiety, 60-69 moderate anxiety, and 70 and above severe anxiety. The comparison was made before nursing and 15 days after the nursing intervention.

Depression: the SDS was used for evaluation; a standard score <53 meant no depression, while 53-62, 63-72, and ≥73 denoted mild, moderate, and severe depression, respectively. It was compared before nursing as well as 15 days after the nursing intervention.

Surgery-related indicators: the success rate of one-time puncture, the success rate of one-time catheter placement, the incidence of complications, and basic physiological indicators such as heart rate (measured by the doctors using a stethoscope) and blood pressure (measured with a sphygmomanometer) were recorded for both groups and compared 15 days after the nursing intervention.

Bacterial culture: the condition of bacterial infection was compared between the two groups during the nursing intervention period.

### 2.5. Statistical Methods

The observed data were filled in the observation table, and the values were entered into SPSS 11.0 for statistical analysis. The data were described by the mean ± standard deviation, and measurement data were analyzed by t-test. Scores before and after the nursing intervention were compared within each group using the paired-sample t-test, while scores between the groups were compared using the independent-sample t-test.
P<0.05 meant that a difference was statistically significant.
## 3. Results

### 3.1. General Information

The general data of the two groups are listed in Table 1. The observation group included 28 male and 22 female patients, with an average age of 44.2 ± 9.8 years. The control group included 30 male and 20 female patients, with an average age of 55.3 ± 11.3 years. There was no statistically significant difference in the general data between the two groups, P>0.05.

Table 1: General information of the patients in the two groups.

| Items | Observation group (n = 50) | Control group (n = 50) | X²/t value | P |
| --- | --- | --- | --- | --- |
| Gender | | | 0.388 | 0.614 |
| Male | 28 | 30 | | |
| Female | 22 | 20 | | |
| Age (years) | 44.2 ± 9.8 | 55.3 ± 11.3 | 0.120 | 0.133 |
| Nationality | | | 2.131 | 1.810 |
| Han Chinese | 36 | 38 | | |
| Minorities | 14 | 12 | | |
| Education level | | | 0.335 | 0.198 |
| Primary school and below | 8 | 9 | | |
| Junior high school | 13 | 11 | | |
| High school or technical secondary school | 20 | 21 | | |
| College and above | 9 | 9 | | |
| Marital status | | | 0.512 | 0.837 |
| Married | 23 | 25 | | |
| Single | 17 | 16 | | |
| Divorced | 5 | 4 | | |
| Widowed | 5 | 5 | | |
| Monthly income per capita (yuan) | | | 0.383 | 0.193 |
| <1,000 | 11 | 10 | | |
| 1,000-3,000 | 18 | 16 | | |
| 3,001-5,000 | 16 | 17 | | |
| >5,000 | 5 | 7 | | |
| Payment method | | | 0.449 | 0.527 |
| Urban medical insurance | 20 | 21 | | |
| Rural cooperative medical care | 22 | 18 | | |
| Commercial insurance | 5 | 6 | | |
| Self-paying | 3 | 5 | | |

### 3.2. Comparison of the Success Rates of Puncture and Catheter Placement

The success rates of puncture and catheter placement in the two groups are shown in Figure 1. One-time puncture was successful in 48 (96%) and 41 (82%) patients in the observation and control groups, respectively, and one-time catheter placement in 47 (94%) and 39 (78%). Both success rates were markedly higher in the observation group than in the control group, P<0.05.

Figure 1: Comparison of the success rates of puncture and catheter placement in the two groups. *Compared with the control group, P<0.05.
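Table 1 reports chi-square statistics for the categorical baseline items, and the success-rate comparison in Figure 1 is a contingency comparison of the same form. The snippet below shows how such 2×2 comparisons can be computed, assuming scipy is available; the exact statistic may differ slightly from the table depending on whether a continuity correction is applied.

```python
# Illustrative 2x2 chi-square comparisons in the style of Table 1 / Figure 1.
from scipy.stats import chi2_contingency

# Gender distribution: [male, female] per group (observation, control).
gender = [[28, 22], [30, 20]]
chi2, p, dof, expected = chi2_contingency(gender)
print(f"gender: chi2={chi2:.3f}, p={p:.3f}")

# One-time puncture success: [success, failure] per group.
puncture = [[48, 2], [41, 9]]
chi2, p, dof, expected = chi2_contingency(puncture)
print(f"puncture: chi2={chi2:.3f}, p={p:.3f}")
```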
### 3.3. Comparison of Pain Scores

The MPQ pain scores of the two groups are displayed in Figure 2. Before the nursing intervention, the MPQ scores of the two groups were 36.3 ± 9.88 and 35.5 ± 11.3, respectively, with no significant difference, P>0.05. After the intervention, the scores were 18.6 ± 7.11 and 28.9 ± 6.3, respectively; the MPQ score of the observation group was significantly lower than that of the control group, P<0.05.

Figure 2: Comparison of the MPQ pain scores between the two groups. *Compared with the control group, P<0.05.

### 3.4. Comparison of Anxiety and Depression

The anxiety condition of the two groups is presented in Figure 3. Before the intervention, the numbers of patients with no, mild, moderate, and severe anxiety were 6, 24, 18, and 2 in the observation group and 7, 23, 19, and 1 in the control group, respectively. The SAS scores of the two groups were 55.3 ± 11.3 and 56.1 ± 9.6, respectively, with no statistically significant difference in anxiety between the groups, P>0.05. After the nursing intervention, there were 18, 30, 2, and 0 patients with no, mild, moderate, and severe anxiety in the observation group, versus 13, 16, 19, and 2 in the control group. The SAS scores became 40.3 ± 8.7 and 55.1 ± 10.2, respectively; the anxiety of patients in the observation group was markedly milder than that in the control group, P<0.05.

Figure 3: Comparison of anxiety between the two groups. (a) Before intervention. (b) After intervention. (c) SAS score. *Compared with the control group, P<0.05.

The depression results of the two groups are displayed in Figure 4. Before the intervention, the numbers of patients with no, mild, moderate, and severe depression were 11, 30, 5, and 4 in the observation group and 13, 28, 6, and 3 in the control group, respectively. The SDS scores were 60.2 ± 9.9 and 62.2 ± 10.2, respectively, with no statistically significant difference in depression between the groups, P>0.05. After the nursing intervention, 25, 20, 4, and 1 patients in the observation group and 8, 10, 30, and 2 patients in the control group had no, mild, moderate, and severe depression, respectively. The SDS score was 48.8 ± 9.9 in the observation group versus 60.2 ± 11.2 in the control group; the depression status of patients in the observation group was remarkably milder than that in the control group, P<0.05.

Figure 4: Comparison of depression status between the two groups. (a) Before intervention. (b) After intervention. (c) SDS score. *Compared with the control group, P<0.05.
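The four-category anxiety and depression distributions in Figures 3 and 4 can also be compared between groups as contingency tables. Below is a brief sketch of that comparison, again assuming scipy; with small expected counts in the severe category, an exact test may be preferable, and the paper does not state which test it used for these distributions.

```python
# Comparing post-intervention anxiety categories between groups (Figure 3 data).
from scipy.stats import chi2_contingency

# Rows: observation, control; columns: none, mild, moderate, severe anxiety.
after = [[18, 30, 2, 0],
         [13, 16, 19, 2]]
chi2, p, dof, _ = chi2_contingency(after)
print(f"anxiety after intervention: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```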
### 3.5. Comparison of Physiological Indicators

The physiological indicators of the two groups are compared in Figure 5. Before the nursing intervention, the systolic blood pressure was 115 ± 11.3 mmHg in the observation group and 116 ± 10.2 mmHg in the control group, with no significant difference between the groups. After the intervention, the systolic blood pressure was 110 ± 8.8 mmHg and 125 ± 9.3 mmHg in the observation and control groups, respectively. The diastolic blood pressure before the intervention was 68.8 ± 5.2 mmHg and 69.3 ± 6.7 mmHg, respectively, with no significant difference between the groups. After the intervention, the diastolic blood pressure was 69.3 ± 10.2 mmHg and 73.8 ± 11.4 mmHg in the observation and control groups, respectively, a statistically significant difference, P<0.05. The heart rates before the intervention were 70.3 ± 4.4 beats/min and 70.8 ± 5.2 beats/min in the two groups, with no significant difference. After the intervention, the heart rates were 70.8 ± 3.8 beats/min and 79.3 ± 3.7 beats/min in the observation and control groups, respectively, a significant difference between the groups, P<0.05.

Figure 5: Comparison of physiological indicators between the two groups. (a) Systolic blood pressure. (b) Diastolic blood pressure. (c) Heart rate. *Compared with the control group, P<0.05.

### 3.6. Comparison of Complications

The incidence of complications in the two groups is shown in Figure 6. In the observation group, there were 0, 1, 1, and 0 patients with local hematoma, local infection, thrombosis, and phlebitis, respectively, and the incidence of complications was 4.5%. In the control group, 2, 3, 2, and 1 patients had these complications, respectively, and the incidence of complications was 18%.

Figure 6: Comparison of the incidence of complications in the two groups. *Compared with the control group, P<0.05.

### 3.7. Comparison of Nursing Satisfaction

The nursing satisfaction results of the two groups are presented in Figure 7. In the observation group, 36, 11, and 3 patients were satisfied, basically satisfied, and dissatisfied, respectively, for a satisfaction rate of 94%. In the control group, 24, 17, and 9 patients were satisfied, basically satisfied, and dissatisfied, respectively, for a satisfaction rate of 82%. The satisfaction of the observation group was significantly higher than that of the control group, P<0.05.

Figure 7: Comparison of nursing satisfaction between the two groups.

### 3.8. Comparison of the Incidence of Bacterial Infection

The incidence of bacterial infection was compared between the two groups. Bacterial infection occurred in 3 (6%) patients in the observation group and in 9 (18%) patients in the control group. The number of patients with bacterial infection in the observation group was significantly lower than that in the control group, P<0.05.
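For small counts such as the infection figures above (3/50 vs. 9/50), a Fisher exact test is the usual check on a 2×2 table. The snippet below is an illustrative computation, assuming scipy; the paper itself does not state which test produced its P-values, so this is a sketch rather than a reproduction of the analysis.

```python
# Illustrative comparison of bacterial infection rates (Section 3.8).
from scipy.stats import fisher_exact

# Rows: observation, control; columns: infected, not infected.
table = [[3, 47], [9, 41]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio={odds_ratio:.3f}, p={p:.4f}")  # p < 0.05 would indicate significance
```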
## 4. Discussion

Burns not only endanger patients' life and health but can also disrupt their life, work, and study. Burns weaken the social labor force and lay an increased economic burden on the family and society [16, 17]. When a burn occurs, the protective barrier of large areas of skin is damaged, which can lead to a massive loss of body fluids.
Thus, long-term fluid supplementation, anti-infection treatment, and postoperative repair are often required in clinical practice, a course that typically takes months or even years. These procedures are generally delivered by intravenous infusion. However, burn patients generally have scarred skin, which makes veins difficult to find and greatly increases the difficulty of venipuncture. In addition, conventional puncture or central venous catheters require repeated alternation and replacement within a short period [18]. These problems often bring pain to patients and increase the difficulty of clinical nursing. Given the treatment characteristics of burns, it is urgent to find a method that can relieve patients' pain, avoid local skin infection, and support good long-term infusion administration [19].

PICC refers to the technique of inserting a central venous catheter through a peripheral vein puncture so that the tip of the catheter reaches the superior vena cava or subclavian vein [20–22]. Because the tip of the PICC lies in the superior vena cava, drugs are quickly diluted, avoiding tissue necrosis caused by phlebitis and drug leakage [23, 24]. Moreover, PICC placement is generally performed independently by a professional nurse and offers a long indwelling time with no risk of pneumothorax or arterial injury. With the development of imaging technologies such as ultrasound, the optimized Seldinger PICC placement technique under ultrasound guidance has gradually been derived [25]. In a large number of clinical studies, the success rate of traditional PICC placement is only 78%; it rises to 84% when the modified Seldinger technique is used and reaches 98% with optimized ultrasound-guided Seldinger PICC placement [26, 27]. The ultrasound-guided optimized Seldinger technique therefore has broad clinical applicability for PICC placement, and PICC placement has high application value in burn treatment. Accordingly, in this work, patients with severe burns were selected as the research objects and randomly divided into the observation group and the control group. In the observation group, ultrasound-guided PICC technology was used for catheter placement, while traditional PICC was adopted in the control group. The success rate of one-time puncture (96% vs. 82%) and the success rate of one-time catheter placement (94% vs. 78%) were notably higher in the observation group than in the control group, P<0.05. This suggests that the application value of ultrasound-guided PICC technology is much higher than that of traditional catheter placement technology in the treatment of severe burns, consistent with the findings of previous related studies.

Despite its many advantages, the ultrasound-guided optimized Seldinger technique for PICC placement also has some flaws. Venipuncture, blade dilation, and similar steps are needed during catheter placement, and these operations can cause local tissue damage and pain [28]. With the continuous development of the pain specialty, pain has become the fifth vital sign after breathing, pulse, blood pressure, and body temperature [29]. Pain caused by PICC placement can bring about a series of physiological and pathological changes, which are important factors in postoperative complications.
Therefore, appropriate nursing intervention is necessary and urgent for severely burned patients in order to improve their quality of life and prognosis. Predictive nursing is widely applied and has been recognized by numerous scholars across the world [30]. In this work, patients with large-area severe burns were included as the research objects and randomly divided into the observation group and the control group. The observation group received ultrasound-guided PICC technology combined with predictive nursing, while traditional PICC combined with traditional nursing was adopted in the control group. The MPQ scores of the observation group were significantly lower than those of the control group at each time point, P<0.05. The patients in the observation group had significantly fewer negative emotions such as anxiety and depression than those in the control group. The incidence of complications in the observation group was significantly lower than that in the control group (4.5% vs. 18%), P<0.05, and nursing satisfaction in the observation group was significantly higher than in the control group (94% vs. 82%), P<0.05. In summary, ultrasound-guided PICC combined with predictive nursing has high clinical application value in the treatment of patients with large-area severe burns, consistent with our expectations.

## 5. Conclusion

Patients with severe burns were selected as the research objects and randomly divided into the observation and control groups. The observation group received ultrasound-guided PICC with predictive nursing, while traditional PICC and traditional nursing were given to the control group. Ultrasound-guided PICC combined with predictive nursing showed high clinical application value in treating large-area severe burns; this work thus provides a reference and basis for clinical treatment. However, owing to the limited sample size and scope, this work still has certain limitations; in future work, the sample will be expanded for further research.
1019829-2022-07-31_1019829-2022-07-31.md
47,953
Therapeutic Effect of Ultrasound-Guided Peripherally Inserted Central Catheter Combined with Predictive Nursing in Patients with Large-Area Severe Burns
Baiyan He; Aiqiong Zhang; Shuting He
Computational and Mathematical Methods in Medicine (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1019829
1019829-2022-07-31.xml
--- ## Abstract This study was aimed to explore the application value of ultrasound-guided peripherally inserted central catheter (PICC) combined with predictive nursing in the treatment of large-area severe burns. 88 patients with large-area severe burns who visited hospital were chosen as the research objects. They were randomly divided into the observation group and the control group, with 44 cases in each. The patients in the observation group were treated with ultrasound-guided PICC combined with predictive nursing, while those in the control group were treated with traditional PICC and nursing methods. Then, the anxiety of patients was compared between groups by the Self-rating Anxiety Scale (SAS), while the depression was compared by the Self-rating Depression Scale (SDS). The pain of the patients was analyzed by the McGill Pain Questionnaire (MPQ), and a self-made nursing satisfaction questionnaire was adopted to evaluate the nursing satisfaction. The surgery-related indicators of the patients were detected and recorded (the success rate of one-time puncture, the success rate of one-time catheter placement, incidence of complications, heart rate, blood pressure, etc.). The success rates of one-time puncture (93% vs. 86%) and of catheter placement (95% vs. 81%) in the observation group were significantly higher than those in the control group,P<0.05. The pain scores of the observation group were much lower than those of the control group at each time period, P<0.05. The number of patients with negative emotions such as anxiety and depression in the observation group was markedly less than that in the control group. The incidence of complications in the observation group was notably lower than that in the control group (4.5% vs 18%), P<0.05. The nursing satisfaction of the observation group was significantly higher than that of the control group (93% vs 79.5%), P<0.05. In conclusion, ultrasound-guided PICC and predictive nursing had high clinical application values in the treatment of patients with large-area severe burns. --- ## Body ## 1. Introduction Burns bring patients a huge impact on health, life, work, and study. It will weaken the social labor force and increase the economic burden on the family and society. Statistics show that the incidence of burns in China is much higher than that in oversea countries [1]. Generally, burns are classified into four grades: first-degree burns, superficial second-degree burns, deep second-degree burns, and third-degree burns. The specific clinical manifestations of each grade are as follows. For the first-degree burns, the mild burns are generally characterized by mild redness, swelling, and heat pain, with no blisters and no skin damage. It can usually recover to normal within a week without any scarring, but the color of local skin may be darker in a short term. For superficial second-degree burns, blisters of different sizes are formed, and the blister fluid is clear and transparent, which is pale yellow or egg white-like fluid. The ruptured blisters expose a rosy and moist wound [2]. Patients may experience significant pain and local redness and swelling. The wound usually heals in 1-2 weeks without scarring, but sometimes the newly grown skin may have pigment changes. For deep second-degree burns, there is local swelling, and the epithelial tissue turns to be white or brownish-yellow. There are also scattered small blisters; the wound of the ruptured blisters is slightly wet with the color of red and white or red in white. 
Many red dots or small vascular branches can be observed, the cutaneous sensation is insensitive, and the pain is not obvious. If there is no infection, the healing generally takes about 3-4 weeks. In the event of infection, not only the healing time will be prolonged but also scars will be left after healing. For the third-degree burns, the wound surface is dry and is in waxy white, brown, or charcoal black, with no blisters and no pain. It is tough and leather-like, and thick vascular network coagulates under the eschar, which is caused by venous embolism in the fat layer. In summary, the second- and third-degree burns pose a serious threat to the life and health of patients. In addition, the prognosis of patients is generally very poor, and the treatment time is relatively long. Therefore, the treatment and nursing for burns take a long-term and difficult process [3].Since burns can cause extensive damage to the protective barrier of the skin, further loss of body fluids can occur. Clinically, long-term fluid supplementation, anti-infection, and postoperative repair are often required for patients. This process typically takes months or even years. Intravenous infusion is commonly used for fluid supplementation in clinical practice. However, the scarred skin formed in burn patients often makes it difficult to find veins, which increases the difficulty of venipuncture. The general puncture requires alternation repeatedly [4]. These can cause great suffering to patients and increase the difficulty of clinical nursing. Therefore, finding a method that can relieve the pain of patients and enable long-term infusion administration is a hot topic of current clinical research [5]. For peripherally inserted central catheter (PICC), the tip of the catheter is located in the superior vena cava, which can quickly dilute the drug. Thus, it can avoid problems such as phlebitis and drug leakage caused by tissue necrosis [6, 7]. In addition, PICC also has the advantages of long indwelling time as well as no risk of pneumothorax and arterial injury [8, 9]. With the development of imaging technologies such as ultrasound, the improved Seldinger PICC placement technique under ultrasound guidance has been gradually derived [10]. A large number of clinical studies have shown that the success rate of traditional PICC placement is only 78%, the success rate of PICC placement using optimized Seldinger technique alone is 84%, and that of optimized ultrasound-guided Seldinger PICC placement reaches 98%. In general, ultrasound-guided optimized Seldinger technique for PICC placement has the wide and good clinical applications. However, there is no direct report worldwide about its application in patients with large-area severe burns [11]. Therefore, further in-depth research is needed on its application effect in patients with large-area severe burns.There are also some defects and deficiencies in the optimized Seldinger PICC placement technique guided by ultrasound. For example, venipuncture and blade dilation are required during catheter placement, which can cause local tissue damage and pain in patients [12]. Pain caused by ultrasound-guided optimized Seldinger PICC placement can lead to a series of physiological and pathological changes, and these changes are important factors causing postoperative complications. Therefore, it is necessary and urgent to take appropriate nursing intervention methods to improve the quality of life and prognosis of severely burned patients. 
Predictive nursing is a method that is widely used worldwide and has been recognized and confirmed by many scholars [13]. Predictive nursing, conducted by some foreign scholars, has reduced the incidence of coronary heart disease by 50%. For patients with advanced head and neck tumors, some scholars have adopted predictive enteral nutrition support nursing, and found that this nursing measure can make the patients with neck tumors nourished. Researches by domestic scholars show that predictive nursing can improve the comfort, satisfaction, and compliance of clinical treatment. There are many similar studies [14]. From the above, predictive nursing has achieved good outcomes in clinical work and has been widely promoted. It can effectively relieve the negative moods of patients and reduce the incidence of complications. However, all the existing related researches in the world directly reported the application of predictive nursing in the PICC of burn patients under the ultrasound-guided modified Seldinger technique. As for the application effect of the modified Seldinger technique in the PICC of burn patients under the guidance of ultrasound, further research is needed [15].The patients with large-area severe burns were preselected as the research objects in this research, so as to explore the application value of ultrasound-guided PICC placement combined with predictive nursing in the treatment. It was expected to provide reference and basis for the clinical treatment of related diseases as well as the application of related technologies. ## 2. Research Methods ### 2.1. Objects In this study, 100 patients with severe burn admitted to the hospital from July 2020 to January 2022 were selected and randomly divided into control group and observation group, with 50 cases in each. The ultrasound-guided PICC combined with predictive nursing was given in the observation group, while the control group received traditional PICC combined with nursing. Inclusion criteria required the patients had an age of 18-66 years old, no skin damage to the auricle, no history of alcohol allergy, and PICC placement for the first time. Besides, the patients were suitable for the indications of PICC; they had no mental illness and could correctly express pain. No systemic or local pain relief measure was taken for 24 hours before PICC placement. Exclusion criteria were as follows. The patients received deep venous catheter placement (intravenous access port, subclavian or internal jugular, and femoral venous catheter placement). The diameter of the basilic vein, brachial vein, and median cubital vein under B-mode ultrasound was <5 mm2. The patients suffered from upper extremity hemiplegia, had a history of surgery, or had an unsuccessful one-time venipuncture. They received radiotherapy, chemotherapy, drugs, surgery, or other treatments that could relieve pain during the research. The informed consent forms were obtained from patients, and this study had been approved by ethics committee of hospital. ### 2.2. PICC Puncture Methods In the control group, ordinary deep vein puncture was adopted for PICC placement. A disposable puncture catheter was used, the patient was in a supine position, the arm to be catheterized was abducted by 90°, and the vein below the elbow was selected for puncture. After the blood vessel is selected, sterilization and draping were carried out. 
Puncture was performed with a puncture needle, the needle core was withdrawn after the venous return was observed, and the catheter was sent into along the outer cannula of the puncture needle until to the predetermined length. Then, the guide wire was withdrawn, the catheter was connected to the installer, and the sterile saline gauze was used to clean the skin around the puncture point. The puncture point was covered with sterile gauze and fixed with transparent application. After the catheter was fixed, it was positioned under X-ray.In the observation group, the patients underwent PICC placement using the ultrasound-guided technique. The body position and disinfection method were the same as those in the control group. The vein below the elbow was selected for puncture. The approximate position of the vein was displayed on the transverse section under color Doppler ultrasonic apparatus. Then, the longitudinal section was scanned to observe the blood flow, wall thickness, and blood vessel diameter of the vein. It was turned to the transverse section, the midpoint of the probe was located at the same point as the transverse section of the vein, and this point was marked, which was just the position on the body surface of the vein. This point was as the starting point, and a point was located after detection every 1 cm. A total of 3 points were located, and the 3 points were kept on the same straight line. After routine disinfection and draping, the lowest located point was the needle insertion point of puncture. The probe was at right angles to the vein as well as to the skin. Under the guidance of ultrasound, the puncture needle and the vein were advanced in parallel. After good blood return was obtained, the position of the needle core was kept unchanged, and the guide wire was put into for 10 cm. After, the puncture angle was reduced, and it was continued to insert the guide wire. The needle core was then withdrawn, the catheter sheath was advanced, and the catheter was placed well. ### 2.3. Nursing Intervention The patients in the control group received routine nursing. The nursing mainly included admission introduction, medication guidance, routine observation, auxiliary treatment, health education, and other basic nursing care.The patients in the observation group were given with predictive nursing. A predictive nursing intervention team was established. The team organized team members for nursing knowledge training every week and conducted regular assessments. The specific interventions were as follows. (1) On the basis of fully understanding of the patients’ acquisition of knowledge, a systematic knowledge theory system was constructed. Through repeated communications, distribution of brochures, and other methods, the correct cognition of disease understanding in patients was deepened and strengthened. The patients were also helped to establish correct beliefs and attitudes. (2) During the treatment and nursing period, the Self-rating Anxiety Scale (SAS) and the Self-rating Depression Scale (SDS) were adopted for observing and evaluating whether the negative situation occurred in patients. If the patient had depression, anxiety, etc., it was necessary to give a targeted psychological intervention measure, and the intervention was carried out every 7 days for 25 minutes each time. (3) Patients with severe burns might need to stay in bed for a long time, and some negative emotions could also affect the vagus nerve, resulting in constipation. 
Nursing intervention team needed to carry out reasonable dietary intervention for patients. (4) For nursing after catheter placement, chitin-type wound dressings should be used for fixation, to promote hemostasis and healing at the puncture point as soon as possible. Relevant information was recorded in detail on the catheter maintenance record sheet, including catheter model, batch number, the position of catheter placement, catheter placement length, and other information. The precautions, possible adverse reactions, daily life precautions, and common sense of self-maintenance after catheter placement were explained in detail to the patients and their families. The importance of regularly maintaining the catheter and keeping the catheter in good condition was also expounded. ### 2.4. Observation Indicators Pain score: The McGill pain questionnaire (MPQ) was used for estimation. With standards of the pain rating index (PRI) score (0-3 points), 0 point stood for no pain, 1 point for mild pain, 2 points for moderate pain, and 3 points for severe pain. Under the Present Pain Intensity (PPI) scoring (0-5 points), no pain, mild pain, pain causing discomfort, moderate pain, severe pain, and unbearable pain were indicated by 0, 1, 2, 3, 4, and 5 points, respectively. The observation and comparison were at six time points, including local anesthesia, venipuncture, withdrawal of puncture needle, blade expansion, vascular sheath insertion, and vascular sheath withdrawal.Anxiety situation: The SAS was adopted for the evaluation of patients’ anxiety, with a score of 0 to 100. The SAS standard score <50 indicated no anxiety, 50-59 indicated mild anxiety, 60-69 meant moderate anxiety, and 70 and above represented severe anxiety. The comparison was made before nursing and 15 days after nursing intervention, respectively.Depression: As SDS was used for evaluation, SDS standard score <53 meant no depression. 53-62, 63-72, and ≥73 were denoted as mild depression, moderate depression, and severe depression, respectively. It was compared before nursing as well as 15 days after nursing intervention.Surgery-related indicators: The success rate of one-time puncture, the success rate of one-time catheter placement, the incidence of complications, and the basic physiological indicators such as heart rate (measured by the doctors using a stethoscope) and blood pressure (measured with a sphygmomanometer) of the patients in the two groups were recorded. These were compared 15 days after nursing intervention.Bacterial culture: The condition of bacterial infection was compared between the two groups during the nursing intervention period. ### 2.5. Statistical Methods The observed data were filled in the observation table, and the values were entered into the SPSS11.0 software for statistical analysis. The data were statistically described by the mean ± standard deviation, and the measurement data were analyzed byt test. The scores before and after nursing intervention were compared using paired-sample T test within the same group, while the scores were compared using the independent-sample T test between the groups. P<0.05 meant a difference was statistically significant. ## 2.1. Objects In this study, 100 patients with severe burn admitted to the hospital from July 2020 to January 2022 were selected and randomly divided into control group and observation group, with 50 cases in each. 
The ultrasound-guided PICC combined with predictive nursing was given in the observation group, while the control group received traditional PICC combined with nursing. Inclusion criteria required the patients had an age of 18-66 years old, no skin damage to the auricle, no history of alcohol allergy, and PICC placement for the first time. Besides, the patients were suitable for the indications of PICC; they had no mental illness and could correctly express pain. No systemic or local pain relief measure was taken for 24 hours before PICC placement. Exclusion criteria were as follows. The patients received deep venous catheter placement (intravenous access port, subclavian or internal jugular, and femoral venous catheter placement). The diameter of the basilic vein, brachial vein, and median cubital vein under B-mode ultrasound was <5 mm2. The patients suffered from upper extremity hemiplegia, had a history of surgery, or had an unsuccessful one-time venipuncture. They received radiotherapy, chemotherapy, drugs, surgery, or other treatments that could relieve pain during the research. The informed consent forms were obtained from patients, and this study had been approved by ethics committee of hospital. ## 2.2. PICC Puncture Methods In the control group, ordinary deep vein puncture was adopted for PICC placement. A disposable puncture catheter was used, the patient was in a supine position, the arm to be catheterized was abducted by 90°, and the vein below the elbow was selected for puncture. After the blood vessel is selected, sterilization and draping were carried out. Puncture was performed with a puncture needle, the needle core was withdrawn after the venous return was observed, and the catheter was sent into along the outer cannula of the puncture needle until to the predetermined length. Then, the guide wire was withdrawn, the catheter was connected to the installer, and the sterile saline gauze was used to clean the skin around the puncture point. The puncture point was covered with sterile gauze and fixed with transparent application. After the catheter was fixed, it was positioned under X-ray.In the observation group, the patients underwent PICC placement using the ultrasound-guided technique. The body position and disinfection method were the same as those in the control group. The vein below the elbow was selected for puncture. The approximate position of the vein was displayed on the transverse section under color Doppler ultrasonic apparatus. Then, the longitudinal section was scanned to observe the blood flow, wall thickness, and blood vessel diameter of the vein. It was turned to the transverse section, the midpoint of the probe was located at the same point as the transverse section of the vein, and this point was marked, which was just the position on the body surface of the vein. This point was as the starting point, and a point was located after detection every 1 cm. A total of 3 points were located, and the 3 points were kept on the same straight line. After routine disinfection and draping, the lowest located point was the needle insertion point of puncture. The probe was at right angles to the vein as well as to the skin. Under the guidance of ultrasound, the puncture needle and the vein were advanced in parallel. After good blood return was obtained, the position of the needle core was kept unchanged, and the guide wire was put into for 10 cm. After, the puncture angle was reduced, and it was continued to insert the guide wire. 
The needle core was then withdrawn, the catheter sheath was advanced, and the catheter was placed well. ## 2.3. Nursing Intervention The patients in the control group received routine nursing. The nursing mainly included admission introduction, medication guidance, routine observation, auxiliary treatment, health education, and other basic nursing care.The patients in the observation group were given with predictive nursing. A predictive nursing intervention team was established. The team organized team members for nursing knowledge training every week and conducted regular assessments. The specific interventions were as follows. (1) On the basis of fully understanding of the patients’ acquisition of knowledge, a systematic knowledge theory system was constructed. Through repeated communications, distribution of brochures, and other methods, the correct cognition of disease understanding in patients was deepened and strengthened. The patients were also helped to establish correct beliefs and attitudes. (2) During the treatment and nursing period, the Self-rating Anxiety Scale (SAS) and the Self-rating Depression Scale (SDS) were adopted for observing and evaluating whether the negative situation occurred in patients. If the patient had depression, anxiety, etc., it was necessary to give a targeted psychological intervention measure, and the intervention was carried out every 7 days for 25 minutes each time. (3) Patients with severe burns might need to stay in bed for a long time, and some negative emotions could also affect the vagus nerve, resulting in constipation. Nursing intervention team needed to carry out reasonable dietary intervention for patients. (4) For nursing after catheter placement, chitin-type wound dressings should be used for fixation, to promote hemostasis and healing at the puncture point as soon as possible. Relevant information was recorded in detail on the catheter maintenance record sheet, including catheter model, batch number, the position of catheter placement, catheter placement length, and other information. The precautions, possible adverse reactions, daily life precautions, and common sense of self-maintenance after catheter placement were explained in detail to the patients and their families. The importance of regularly maintaining the catheter and keeping the catheter in good condition was also expounded. ## 2.4. Observation Indicators Pain score: The McGill pain questionnaire (MPQ) was used for estimation. With standards of the pain rating index (PRI) score (0-3 points), 0 point stood for no pain, 1 point for mild pain, 2 points for moderate pain, and 3 points for severe pain. Under the Present Pain Intensity (PPI) scoring (0-5 points), no pain, mild pain, pain causing discomfort, moderate pain, severe pain, and unbearable pain were indicated by 0, 1, 2, 3, 4, and 5 points, respectively. The observation and comparison were at six time points, including local anesthesia, venipuncture, withdrawal of puncture needle, blade expansion, vascular sheath insertion, and vascular sheath withdrawal.Anxiety situation: The SAS was adopted for the evaluation of patients’ anxiety, with a score of 0 to 100. The SAS standard score <50 indicated no anxiety, 50-59 indicated mild anxiety, 60-69 meant moderate anxiety, and 70 and above represented severe anxiety. The comparison was made before nursing and 15 days after nursing intervention, respectively.Depression: As SDS was used for evaluation, SDS standard score <53 meant no depression. 
53-62, 63-72, and ≥73 were denoted as mild depression, moderate depression, and severe depression, respectively. It was compared before nursing as well as 15 days after nursing intervention.Surgery-related indicators: The success rate of one-time puncture, the success rate of one-time catheter placement, the incidence of complications, and the basic physiological indicators such as heart rate (measured by the doctors using a stethoscope) and blood pressure (measured with a sphygmomanometer) of the patients in the two groups were recorded. These were compared 15 days after nursing intervention.Bacterial culture: The condition of bacterial infection was compared between the two groups during the nursing intervention period. ## 2.5. Statistical Methods The observed data were filled in the observation table, and the values were entered into the SPSS11.0 software for statistical analysis. The data were statistically described by the mean ± standard deviation, and the measurement data were analyzed byt test. The scores before and after nursing intervention were compared using paired-sample T test within the same group, while the scores were compared using the independent-sample T test between the groups. P<0.05 meant a difference was statistically significant. ## 3. Results ### 3.1. General Information The general data of the two groups of patients are listed in Table1. The observation group included 28 male patients and 22 female patients, with an average age of 44.2 ± 9.8 years old. In the control group, 30 male patients and 20 female patients were included, having an average age of 55.3 ± 11.3 years old. There was no statistical difference in these general data between two groups, P<0.05.Table 1 General information of patients in the two groups. ItemsObservation group (n =50 cases)Control group (n =50 cases)X2/t valuePGender0.3880.614Male2830Female2220Age (years old)0.1200.13344.2 ± 9.855.3 ± 11.3Nationality2.1311.810Han Chinese3638Minorities1412Education level0.3350.198Primary school and below89Junior high school1311High school or technical secondary school2021College and above99Marital status0.5120.837Married2325Single1716Divorced54Widowed55Monthly income per capita (yuan)0.3830.193<1,00011101,000-3,00018163,001-5,0001617>5,00057Payment method0.4490.527Urban medical insurance2021Rural cooperative medical care2218Commercial insurance56Self-paying35 ### 3.2. Comparison of the Success Rates of Puncture and Catheter Placement The success rates of puncture and catheter placement were compared between the two groups as shown in Figure1. The number of successful one-time puncture was 48 (96%) and 41 (82%), respectively, in the two groups. The success rate of one-time catheter placement was 47 (94%) and 39 (78%), respectively. The success rates of both one-time puncture and one-time catheter placement were markedly higher in the observation group than those in the control group, P<0.05.Figure 1 Comparison of the success rates of puncture and catheter placement in two groups.∗Compared with control group, P<0.05. ### 3.3. Comparison of Pain Scores The comparison results of the MPQ pain scores in the two groups are displayed in Figure2. The MPQ scores of the two groups of patients were 36.3 ± 9.88 and 35.5 ± 11.3, respectively, before nursing intervention, with no significant difference, P>0.05. The MPQ scores after nursing intervention turned to be 18.6 ± 7.11 and 28.9 ± 6.3, respectively. 
After intervention, the MPQ score of the observation group was greatly lower than that of the control group, P<0.05.Figure 2 Comparison of MPQ pain scores of patients between the two groups.∗Compared with control group, P<0.05. ### 3.4. Comparison of Anxiety and Depression The anxiety condition of the two groups of patients is presented in Figure3. Before intervention, the number of patients without anxiety, mild anxiety, moderate anxiety, and severe anxiety were counted as 6, 24, 18, and 2, respectively, in the observation group. Those were 7, 23, 19, and 1, respectively, in the control group. The SAS scores of the two groups were 55.3 ± 11.3 and 56.1 ± 9.6, respectively, having no statistical difference in anxiety between the two groups, P>0.05. After nursing intervention, there were 18, 30, 2, and 0 patients with no anxiety, mild anxiety, moderate anxiety, and severe anxiety, respectively, in the observation group, while 13, 16, 19, and 2 patients in control group, respectively. The SAS scores of the two groups became 40.3 ± 8.7 and 55.1 ± 10.2, respectively. The anxiety of patients in the observation group was pretty milder than that of the control group, P<0.05.Figure 3 Comparison results of anxiety of patients between two groups. (a), (b), and (c) represented before intervention, after intervention, and SAS score, respectively.∗Compared with control group, P<0.05. (a)(b)(c)The comparative results of depression of patients in the two groups are displayed in Figure4. The number of patients with no, mild, moderate, and severe depression was counted to be 11, 30, 5, and 4, respectively, before intervention in the observation group. 13, 28, 6, and 3 patients got no, mild, moderate, and severe depression, respectively, in the control group. The SDS scores of the two groups were 60.2 ± 9.9 and 62.2 ± 10.2, respectively, before intervention, without a statistical difference in anxiety between the two groups as P>0.05. After nursing intervention, 25, 20, 4, and 1 patient in the observation group and 8, 10, 30, and 2 patients in the control group had no depression, mild depression, moderate depression, and severe depression, respectively. The SDS score was 48.8 ± 9.9 in the observation group while 60.2 ± 11.2 in the control group. The depression status of patients in the observation group was remarkably milder than that in the control group, P<0.05.Figure 4 Comparison results of depression status of patients in two groups. (a) Before intervention. (b) After intervention. (c) SDS score.∗Compared with control group, P<0.05. (a)(b)(c) ### 3.5. Comparison of Physiological Indicators The physiological indicators of patients in the two groups are compared in Figure5. The systolic blood pressure before nursing intervention was 115 ± 11.3 mmHg in the observation group and 116 ± 10.2 mmHg in the control group, showing no significant difference between the groups. After intervention, the systolic blood pressure turned to be 110 ± 8.8 mmHg and 125 ± 9.3 mmHg, respectively, in the observation and control groups. The diastolic blood pressure before intervention was 68.8 ± 5.2 mmHg and 69.3 ± 6.7 mmHg, respectively; not a significant difference was found between the groups. The diastolic blood pressure after intervention was 69.3 ± 10.2 mmHg and 73.8 ± 11.4 mmHg in the observation and control groups, respectively; a significant difference was shown between groups, P<0.05. 
The heart rates before the intervention were 70.3 ± 4.4 beats/min and 70.8 ± 5.2 beats/min in the two groups, with no significant difference between the groups. After the intervention, the heart rates were 70.8 ± 3.8 beats/min and 79.3 ± 3.7 beats/min, respectively, in the observation and control groups, a significant difference between the groups, P<0.05.

Figure 5. Comparison of physiological indicators between the two groups. (a) Systolic blood pressure. (b) Diastolic blood pressure. (c) Heart rate. *Compared with the control group, P<0.05.

### 3.6. Comparison of Complications

The incidence of complications in the two groups is shown in Figure 6. In the observation group, there were 0, 1, 1, and 0 patients with local hematoma, local infection, thrombosis, and phlebitis, respectively, giving an incidence of complications of 4.5%. In the control group, 2, 3, 2, and 1 patients had these complications, respectively, giving an incidence of 18%.

Figure 6. Comparison of the incidence of complications in the two groups. *Compared with the control group, P<0.05.

### 3.7. Comparison of Nursing Satisfaction

The comparative results of nursing satisfaction in the two groups are presented in Figure 7. In the observation group, 36, 11, and 3 patients were satisfied, basically satisfied, and dissatisfied, respectively; the satisfaction rate reached 94%. In the control group, 24, 17, and 9 patients were satisfied, basically satisfied, and dissatisfied, respectively, with a satisfaction rate of 82%. The satisfaction of the observation group was observably higher than that of the control group, P<0.05.

Figure 7. Comparison of nursing satisfaction of patients in the two groups.

### 3.8. Comparison of the Incidence of Bacterial Infection

The incidence of bacterial infection was compared between the two groups. Bacterial infection occurred in 3 (6%) patients in the observation group and in 9 (18%) patients in the control group. The number of patients with bacterial infection in the observation group was considerably lower than that in the control group, P<0.05.
## 4. Discussion

Burns not only affect patients' life and health but can even destroy their life, work, and study. Burns weaken the social labor force and also place an increased economic burden on the family and society [16, 17]. When a burn occurs, the protective barrier of large areas of the skin is damaged, which can lead to a massive loss of body fluids. Thus, long-term fluid supplementation, anti-infection measures, and postoperative repair are often required for patients in clinical practice. Typically, this course takes months or even years.
These procedures are generally completed by intravenous infusion. However, burn patients generally have scarred skin, which makes the veins difficult to find and greatly increases the difficulty of venipuncture. In addition, conventional puncture or central venous catheters require repeated alternation and replacement within a short time period [18]. These problems and shortcomings often bring pain to patients and increase the difficulty of clinical nursing. Given the treatment characteristics of burns, it is indispensable and urgent to find a method that can relieve patients' pain, avoid local skin infection, and achieve good long-term infusion administration [19].

PICC refers to the technique of inserting a central venous catheter through peripheral vein puncture, so that the tip of the catheter reaches the superior vena cava or subclavian vein [20–22]. As the tip of the PICC is in the superior vena cava, drugs can be quickly diluted, thus avoiding tissue necrosis caused by phlebitis and drug leakage [23, 24]. Moreover, PICC placement is generally performed independently by a professional nurse and has the advantages of long indwelling time and no risk of pneumothorax or arterial injury. With the development of imaging technologies including ultrasound, the optimized Seldinger PICC placement technique has gradually been derived under ultrasound guidance [25]. In a large number of clinical studies, the success rate of traditional PICC placement is only 78%, rising to 84% when the modified Seldinger technique is used and reaching 98% with optimized ultrasound-guided Seldinger PICC placement [26, 27]. The ultrasound-guided optimized Seldinger technique therefore has a wide range of clinical applications for PICC placement, and PICC placement has a high application value in burn treatment. Accordingly, in this work, patients with severe burns were selected as the research objects and randomly divided into the observation group and the control group. In the observation group, ultrasound-guided PICC technology was utilized for catheter placement, while traditional PICC was adopted in the control group. The success rate of one-time puncture (96% vs. 82%) and the success rate of one-time catheter placement (94% vs. 78%) in the observation group were notably higher than those in the control group, P<0.05. This suggests that the application value of ultrasound-guided PICC technology is much higher than that of traditional catheter placement technology in the treatment of severe burns, consistent with the findings of previous related studies.

Despite the many advantages of the ultrasound-guided optimized Seldinger technique for PICC placement, it also has some flaws and deficiencies. Venipuncture, blade dilation, and other steps are needed during catheter placement, and these PICC operations can cause local tissue damage as well as pain to the patients [28]. With the continuous development of the pain specialty, pain has become the fifth vital sign after breathing, pulse, blood pressure, and body temperature [29]. Pain caused by PICC placement can bring about a series of physiological and pathological changes, which are important factors for postoperative complications. Therefore, adopting appropriate nursing intervention methods is necessary and urgent for severely burned patients, to improve their quality of life and prognosis. Predictive nursing is widely applied and has been recognized by numerous scholars across the world [30].
In this work, patients with large-area severe burns were included as the research objects and randomly divided into the observation group and the control group. The observation group received ultrasound-guided PICC technology combined with predictive nursing, while traditional PICC combined with traditional nursing was adopted in the control group. The MPQ scores of the observation group were significantly lower than those of the control group at each time point, P<0.05. The patients in the observation group had significantly fewer negative emotions such as anxiety and depression than those in the control group. The incidence of complications in the observation group was significantly lower than that in the control group (4.5% vs. 18%), P<0.05. The nursing satisfaction in the observation group was considerably higher than that of the control group (94% vs. 82%), P<0.05. In summary, ultrasound-guided PICC and predictive nursing had high clinical application value in the treatment of patients with large-area severe burns, consistent with our expectations.

## 5. Conclusion

Patients with severe burns were selected as the research objects and randomly divided into the observation and control groups. In the observation group, patients received ultrasound-guided PICC with predictive nursing, while traditional PICC and traditional nursing were given to the control group. Ultrasound-guided PICC with predictive nursing showed high clinical application value in treating large-area severe burns; thus, this work provides a reference and basis for clinical treatment. However, due to the limited sample size and space, this work still has certain limitations. In the future, the sample will be expanded for further research.

---

*Source: 1019829-2022-07-31.xml*
2022
# Transformer Fault Diagnosis Based on BP-Adaboost and PNN Series Connection

**Authors:** Chun Yan; Meixuan Li; Wei Liu

**Journal:** Mathematical Problems in Engineering (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1019845

---

## Abstract

Dissolved gas-in-oil analysis (DGA) is a powerful method to diagnose and detect transformer faults. It is of profound significance for the accurate and rapid determination of transformer faults and for the stability of the power system. Under different transformer faults, the concentrations of dissolved gases in oil also differ. Commonly used gases include hydrogen (H2), methane (CH4), acetylene (C2H2), ethane (C2H6), and ethylene (C2H4). This paper first combines a BP neural network with an improved Adaboost algorithm, then connects a PNN neural network in series to form a series diagnosis model for transformer faults, and finally applies dissolved gas-in-oil analysis to diagnose transformer faults. The experimental results show that the accuracy of the proposed series diagnosis model is greatly improved compared with the BP neural network, GA-BP neural network, PNN neural network, and BP-Adaboost.

---

## Body

## 1. Introduction

In recent years, with the rapid development of China's economy, the power system has been developing towards ultrahigh voltage, large power grids, large capacity, and automation. Domestic demand for electricity has increased dramatically, and the national power industry is in a stage of rapid development. At present, the number of 110 kV (66 kV) and above voltage transformers put into operation by the State Grid Corporation has reached more than 30,000, with a total capacity of 3.4 TVA. Because the power transformer occupies a central position in the power grid, its operating environment is complex, and, under the impact of various adverse operating conditions, faults occur easily. Transformer faults have caused large-area outages, resulting in substantial economic losses. Therefore, the effective diagnosis of transformer faults is of great significance.

At present, the main testing and monitoring methods for the transformer operation state are DC resistance measurement [1], dissolved gas-in-oil analysis (DGA) [2], oil temperature monitoring (OTM) [3], insulation experiment (IE), acoustic partial discharge measurement (APDM) [4], detection of the characteristic curve by the repeated pulse method [5], winding deformation testing and low-voltage short-circuit impedance testing [6], etc. DGA is a relatively ideal monitoring and analysis method, because it allows online oil chromatography monitoring and sample analysis at any time. At present, dissolved gas analysis in oil is recognized at home and abroad as a powerful means of finding latent faults in transformers [2].

During the normal operation of a transformer, the transformer oil and solid insulation materials gradually age and decompose into a small amount of gas. However, when the power equipment is faulty, especially in the case of overheating, discharge, or humidity, the amount of these gases increases rapidly. Long-term practice has proved that the gas content in oil is directly related to the severity of transformer faults [7]. Over the years, researchers at home and abroad have devoted themselves to the development of online monitoring devices and systems characterized by the content of dissolved gas in oil.
The three-ratio method is the most basic method for fault diagnosis of oil-filled power equipment. However, the three-ratio method has the drawbacks that its coding cannot cover all DGA analysis results and that the coding boundaries are too absolute [8]. As research has developed, more and more artificial intelligence methods have been introduced into transformer fault diagnosis, for example, artificial neural networks [9], BP neural networks [10], fuzzy logic reasoning [11], rough set theory [12], extreme learning machines [13], support vector machines [14], and Bayesian networks [15]. However, all of these artificial intelligence methods have limitations: the network structure and weights of an artificial neural network are difficult to determine, and it easily falls into local minima and overfitting; the Bayesian network requires a large number of sample data [16]; and the inference rules of fuzzy logic and the fuzzy membership functions depend to a great extent on experience [6].

The BP neural network is a multilayer feedforward neural network with a simple structure, many adjustable parameters, many training algorithms, and good operability, and it has been widely used. According to statistics, 80%–90% of neural network models are based on the BP network or its variants [17]. The traditional BP neural network has the disadvantage of random initial weights, which leads to low learning efficiency, slow convergence, and easily falling into a local minimum. Therefore, many scholars use intelligent algorithms to optimize the weights of the BP neural network. Liang uses the genetic algorithm to optimize the weights of a BP neural network for soil moisture inversion [18]. Salman predicts palm oil prices using a BP neural network based on particle swarm optimization [19]. Kuang uses the ant colony algorithm to optimize a BP neural network for macroeconomic prediction [20]. However, using intelligent algorithms to optimize the weights of a BP neural network greatly increases the running time and makes model diagnosis inefficient [21].

In this paper, a diagnostic model combining the BP-Adaboost algorithm and PNN in series is proposed. Adaboost is a simple and practical algorithm that can combine several weak classifiers into a strong classifier, and the upper bound of its classification error rate does not increase with overtraining. In addition, the Adaboost algorithm has the advantages of requiring no parameter tuning and having a low generalization error rate. Because the Adaboost algorithm can construct a strong learner of high accuracy from multiple weak predictors of lower accuracy, this paper combines the BP neural network, as a weak classifier, with the Adaboost algorithm. Considering that the Adaboost algorithm is usually used for binary classification problems, while transformer faults are often divided into many types, this paper converts the multiclassification problem into multiple Adaboost binary classification problems. In the Adaboost algorithm, the error rate of each weak classifier is only required to be slightly less than 1/2, but in the actual training process special cases can still arise that affect the operation of the algorithm. To solve this problem, this paper revalues individual variables under these special circumstances. The transformer fault is then diagnosed by the improved BP-Adaboost algorithm.
Samples that have not been successfully classified in the diagnosis results (a sample assigned to two or more different faults, or to no fault at all) are treated as new prediction samples, and the original training samples are reused as training samples and put into a PNN neural network for diagnosis. This series model fully combines the advantages of BP-Adaboost and PNN: samples that BP-Adaboost fails to diagnose are diagnosed a second time by the PNN, which effectively improves the accuracy of the model.

The main contributions of this paper are as follows. (1) Firstly, the Adaboost algorithm is improved. This removes the restriction that the diagnostic error of each weak classifier in the traditional Adaboost algorithm must lie within $(0, 1/2)$, and the two-class Adaboost algorithm is extended to a multiclass algorithm. (2) Then, the BP-Adaboost multiclassification diagnosis algorithm is formed by combining the BP neural network, as a weak classifier, with the multiclass Adaboost algorithm. Samples misdiagnosed by the BP-Adaboost diagnosis model are put into the PNN neural network for rediagnosis. (3) Finally, the sample set is selected. Inspired by the IEC three-ratio method, this paper takes not only the five commonly used gases as characteristic parameters but also C2H2/C2H4, CH4/H2, and C2H4/C2H6 as characteristic parameters of transformer fault diagnosis.

Section 1 introduces the background of transformer fault diagnosis and the methods commonly used in recent years. Section 2 first introduces the models used in this paper and then presents the proposed series multiclassification algorithm. Section 3 introduces the selection of the sample set and the sample characteristic parameters. Section 4 compares the diagnostic results of the proposed model with those of the other four models.

## 2. Materials and Methods

### 2.1. Improved BP-Adaboost Diagnostic Model

The BP neural network is an error back-propagation algorithm that uses the steepest descent method to continuously adjust the weights and biases of the network so as to minimize the network's sum of squared errors. A BP neural network consists of one input layer, one output layer, and several hidden layers. The training process is as follows.

(1) Establish the BP neural network and initialize its weights and biases.

(2) Preprocess the sample data and set the number of neurons in each layer. Suppose $x = (x_{ij})$ $(i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, m)$ is a sample input matrix, the output of the hidden layer is $b_j$, and the biases of the neurons in the hidden layer and the output layer are $\theta_j$ and $\theta_k$, respectively. The output $b_j$ of the $j$-th hidden neuron is

$$b_j = f_1\Big(\sum_{i=1}^{n} w_{ij} x_i - \theta_j\Big), \tag{1}$$

and the output $y_k$ of the output layer is

$$y_k = f_2\Big(\sum_{j=1}^{m} w_{jk} b_j - \theta_k\Big), \tag{2}$$

where, in formulas (1) and (2), $f_1$ and $f_2$ are the S-type tangent function and the S-type logarithmic function, respectively.

(3) The error $e$ between the actual output $y_k$ and the expected output $t_k$ of the BP neural network is

$$e = \sum_{k} (t_k - y_k)^2. \tag{3}$$

(4) If the error does not meet the requirements, the steepest descent method is used to back-propagate the error and adjust the weights and biases. The cycle iterates until the error meets the requirement.

Adaptive boosting (Adaboost) is an efficient algorithm that combines weak classifiers into a strong classifier. It was proposed by Yoav Freund and Robert Schapire in 1995. The main idea is as follows. Firstly, each training sample is given the same weight.
Then the weak classifier is run iteratively $T$ times; after each run, the weights of the training data are updated according to the classification results, and wrongly classified samples are given larger weights. After $T$ runs of the weak classifiers, a sequence of classification results on the training samples is obtained, and each classification function is given a weight: the better the classification result, the greater the corresponding weight. The steps of the Adaboost algorithm are as follows.

Step 1 (randomly select $m$ samples as training data). Initialize the data distribution weights $D_t(i) = 1/m$. The structure of the neural network is determined by the input and output dimensions of the samples, and the weights and biases of the neural network are initialized.

Step 2 (calculate the prediction error of the weak classifier). When the $t$-th weak classifier is trained, the prediction sequence $g(t)$ and its prediction error $e_t$ are obtained as

$$e_t = \sum_{i:\, g_i(t) \neq y_i} D_t(i), \quad i = 1, 2, \ldots, m, \tag{4}$$

where $g_i(t)$ is the predicted result of the $i$-th sample and $y_i$ is the expected classification result of the $i$-th sample. In the BP neural network, by default an output greater than 0 belongs to class "1" and an output less than 0 belongs to class "-1".

Step 3 (calculate the sequence weight). The sequence weight $a_t$ based on the prediction error is

$$a_t = \frac{1}{2} \ln \frac{1 - e_t}{e_t}. \tag{5}$$

Formula (5) shows that when $e_t$ is less than 1/2, $(1 - e_t)/e_t$ increases as $e_t$ decreases; since $\ln((1 - e_t)/e_t)$ is an increasing function, the weight $a_t$ of the weak classifier increases as the error decreases. However, as the number of iterations increases, the classification error gradually decreases; according to formula (5), once the error shrinks to zero the weight of the weak classifier can no longer be computed, which affects the classification results [22]. Therefore, this paper reassigns the error in that case. Since there are 80 training samples in this paper, formula (5) fails only when the error is exactly zero; so when a weak classifier predicts the training data with zero error, this paper sets $e_t = 0.0125$, i.e., the equivalent of one sample predicted wrongly. The formula for the sequence weight is established under the assumption that the error is less than 1/2, but in practice it cannot be ruled out that a weak classifier's accuracy on the samples is below 1/2. This paper also handles that situation: when $e_t$ is more than 1/2, the accuracy of the weak classifier in the binary classification (-1 and 1) is less than 1/2, so the output of the weak classifier is negated and $e_t = 1 - e_t$ is used, which can again be substituted into formula (5).

Step 4 (adjust the data weights). The weights of the training samples for the next round are adjusted according to the sequence weight:

$$D_{t+1}(i) = \frac{D_t(i)}{B_t} \exp\big(-a_t y_i g_t(x_i)\big), \quad i = 1, 2, \ldots, m, \tag{6}$$

where $B_t$ is a normalization factor whose purpose is to make the distribution weights sum to 1 while keeping their proportions unchanged. When a predicted result differs from the actual result, $y_i g_t(x_i)$ in formula (6) is less than 0, and the greater the absolute value of the predicted result, the greater the value of $\exp(-a_t y_i g_t(x_i))$, thus satisfying the requirement that wrongly classified samples are given larger weights.

Step 5 (construct a strong classifier).
After the weak classifiers are trained for $T$ rounds, $T$ weak classifier functions $f(g_t, a_t)$ are obtained, and they are combined into a strong classifier function $h(x)$:

$$h(x) = \operatorname{sign}\Big(\sum_{t=1}^{T} a_t \cdot f(g_t, a_t)\Big). \tag{7}$$

The algorithm flow of the BP-Adaboost model is shown in Figure 1.

Figure 1. BP-Adaboost algorithm flow.

### 2.2. PNN Neural Network

The probabilistic neural network (PNN) is a parallel algorithm based on Bayes classification rules and the Parzen window estimate of the probability density function. PNN is an artificial neural network with a simple structure, simple training, and wide applicability. It consists of an input layer, a pattern layer, a summation layer, and an output layer; its basic structure is shown in Figure 2.

Figure 2. Basic structure of the probabilistic neural network.

The input layer first receives the values of the training samples; the number of neurons in the input layer equals the dimension of the input sample vector. The data are then transmitted through the input layer to the pattern layer of the second layer.

The pattern layer computes the matching relationship between the input sample and each pattern in the training set; the number of neurons in the pattern layer equals the total number of training samples. Assuming the input vector is $X = (x_1, x_2, \ldots, x_n)^T$, the data are mapped from the input layer to the pattern layer, the input of the $j$-th pattern-layer neuron is $X$, and the output of the $j$-th pattern-layer neuron of class $i$ is

$$f_{ij}(X) = \frac{1}{(2\pi)^{p/2} \delta^{p}} \exp\Big(-\frac{(X - w_{ij})^T (X - w_{ij})}{2\delta^2}\Big), \tag{8}$$

where $p$ is the total number of categories, $m$ is the number of neurons in the pattern layer, $w_{ij}$ is the connection weight from the input layer to the pattern layer, and $\delta$ is the smoothing factor.

The summation layer accumulates the probabilities belonging to the same class, and its conditional probability density is

$$P(i \mid X) = \sum_{j=1}^{m} w_{ij} f_{ij}(X), \quad i = 1, 2, \ldots, p, \tag{9}$$

where

$$\sum_{j=1}^{m} w_{ij} = 1, \quad w_{ij} \in [0, 1], \quad i = 1, 2, \ldots, p. \tag{10}$$

The output layer receives the probability density of each class from the summation layer and selects the class of maximum probability, $P(X) = \max_{i=1,\ldots,p} P(i \mid X)$.

### 2.3. A Multiclassification Series Model for BP-Adaboost and PNN Neural Networks

Generally, Adaboost combines weak classifiers into a strong classifier to solve binary classification problems, but transformer fault classification involves more than two fault types.
Therefore, BP-Adaboost needs to be extended to multiclassification. In this paper, according to the total number of fault types to be classified, several BP-Adaboost binary classification models are established to classify each fault in turn; the specific classification operation is shown in Figure 3.

Figure 3. BP-Adaboost multiclassification model.

In Figure 3, the output of each BP-Adaboost model is "-1" or "1": the output for fault "A" in the training samples is "1", and the output for all other fault types is "-1". In this way, a test sample whose result is "1" is classified by that BP-Adaboost model as fault "A", and each fault type can be diagnosed by binary classification in this way. Although Adaboost improves the predictions of the weak classifier through its powerful result-correction ability, such a multiclassification model still has some defects. For example, the same test sample may be assigned to different fault classes (e.g., sample "d" in Table 1), or a test sample may not be assigned to any class (e.g., sample "a" in Table 1). Either case shows that the multiclassification model cannot accurately classify these samples.

Table 1. Multiple BP-Adaboost classification results.

| Sample | a | b | c | d | e |
| --- | --- | --- | --- | --- | --- |
| BP-Adaboost classification result 1 | -1 | 1 | -1 | 1 | -1 |
| BP-Adaboost classification result 2 | -1 | -1 | 1 | 1 | -1 |
| BP-Adaboost classification result 3 | -1 | -1 | -1 | -1 | 1 |

Based on the possible misclassifications, the classification results of the BP-Adaboost multiclassification model are determined as follows. Firstly, the diagnosis types are coded, and the fault types are diagnosed in coding order. Then, the BP-Adaboost binary classification results are assembled into an $m \times n$ matrix $T$, where $m$ is the total number of fault types and $n$ is the number of test samples. Finally, the class of the $i$-th sample, $i = 1, 2, \ldots, n$, is determined by the position of "1" in column $i$ of $T$: the diagnostic result of the $i$-th sample is the row index of the "1" in column $i$. When column $j$ contains more than one "1" or no "1" at all, the BP-Adaboost multiclassification model has misclassified the $j$-th sample, and the classification result for the $j$-th sample is set to "0". The specific operation results are shown in Table 2.

Table 2. BP-Adaboost multiclassification results.

| Sample | a | b | c | d | e |
| --- | --- | --- | --- | --- | --- |
| BP-Adaboost classification result 1 | -1 | 1 | -1 | 1 | -1 |
| BP-Adaboost classification result 2 | -1 | -1 | 1 | 1 | -1 |
| BP-Adaboost classification result 3 | -1 | -1 | -1 | -1 | 1 |
| Classification result | 0 | 1 | 2 | 0 | 3 |

Usually we do not know the accuracy of a diagnostic model's results before comparing them with the real results. However, for the BP-Adaboost multiclassification results in this paper, whenever "0" appears in the diagnostic results, the diagnosis is certainly wrong. To improve the accuracy of a single algorithm, many scholars combine it with another algorithm so that the final result is better than either alone [23, 24]. To improve the diagnostic accuracy here, a PNN neural network is connected in series after the BP-Adaboost diagnosis: the recognition ability of the PNN is used to rediagnose the samples on which BP-Adaboost erred. The algorithm diagram is shown in Figure 4. This fully combines the strengths of the BP-Adaboost multiclassification model with those of the PNN model, thereby improving prediction accuracy.

Figure 4. Diagnostic model of BP-Adaboost in tandem with PNN.

To make these components concrete, minimal illustrative sketches of the BP forward pass, the modified boosting loop, the decoding of the matrix T, and the PNN follow.
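First, the forward pass of Section 2.1 in a minimal Python/NumPy sketch of formulas (1)-(3). The tansig/logsig choices follow the stated $f_1$ and $f_2$; the layer sizes, random weights, and target value are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def tansig(z):
    # S-type tangent function f1 (hyperbolic tangent)
    return np.tanh(z)

def logsig(z):
    # S-type logarithmic function f2 (logistic sigmoid)
    return 1.0 / (1.0 + np.exp(-z))

def bp_forward(x, W1, theta1, W2, theta2):
    """One-hidden-layer BP forward pass, formulas (1)-(2).
    x: (n,) input; W1: (n, m) input-to-hidden weights; theta1: (m,) hidden biases;
    W2: (m, k) hidden-to-output weights; theta2: (k,) output biases."""
    b = tansig(x @ W1 - theta1)   # hidden outputs b_j, formula (1)
    y = logsig(b @ W2 - theta2)   # network outputs y_k, formula (2)
    return b, y

def sse(y, t):
    # sum-of-squares error e between actual and expected outputs, formula (3)
    return float(np.sum((t - y) ** 2))

# Illustrative sizes: 8 inputs (5 gases + 3 ratios), 10 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1, theta1 = rng.normal(size=(8, 10)), np.zeros(10)
W2, theta2 = rng.normal(size=(10, 1)), np.zeros(1)
_, y = bp_forward(rng.normal(size=8), W1, theta1, W2, theta2)
print(sse(y, np.array([1.0])))
```

In training, the steepest-descent step (4) would back-propagate the gradient of $e$ to update the weights and biases; that loop is omitted here for brevity.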
The output bj of the j -th neuron in the hidden layer is(1)bj=f1∑i=1nwijxi-θjThe output layer output yk is(2)yk=f2∑j=1mwikbi-θkIn formulas (1) and (2), f1 and f2 are S-type tangent function and S-type logarithmic function, respectively.(3) Errore of actual output yk and expected output tk in BP neural network are(3)e=∑k=1tk-yk2(4) If the errors produced do not meet the requirements, the steepest descent method is used to backpropagate the errors and adjust the weights and biases. Iterative cycle until the error meets the requirement.Adaptive boosting (Adaboost) is a strong efficient algorithm that combines weak classifiers into strong classifiers. It was proposed by Yoav Freund and Robert Schapire in 1995. The main idea is as follows. Firstly, each training sample is given the same weight. Then the weak classifier is used to run iteratively T times; after each operation, the weight of training data is updated according to the classification results of training samples, and the wrong samples are usually given larger weight. For multiple weak classifiers, after running T times, a sequence of classification results of training samples is obtained. Each classification function is given a weight. The better the classification result, the greater the corresponding weight. The steps of the Adaboost algorithm are as follows.Step 1 (randomly selectm samples from the samples as training data). Initialization data distribution weightsDt(i)=1/m. The structure of the neural network is determined according to the input and output dimensions of the samples, and the weights and biases of the neural network are initialized.Step 2 (calculate the prediction error sum of weak classifier). When thet-th weak classifier is trained, the prediction error of prediction sequence g(t) and et are obtained. In formula (4), gi(t) is the predicted result of the i-th sample and yi is the expected classification result of the i-th sample. In BP neural network, the default output of more than 0 belongs to “1” category in classification, and the output of less than 0 belongs to “-1” category in classification.(4)et=∑iDiii=1,2,…,mgit≠yiStep 3. Calculation of sequence weightsat based on prediction error is(5)at=12ln⁡1-etet Formula (5) shows that when et is less than 1/2, (1-et)/et increases with the decrease of et. ln⁡(1-et/et) is an increasing function, so the weight at of weak classifier increases with the decrease of error sum. However, as the number of iterations increases, the classification error decreases gradually. According to formula (5), when the error is reduced to a certain extent, it is easy to be unable to calculate the weight of weak classifiers, thus affecting the classification results [22]. Therefore, this paper reassigns the sum of errors in this case. Because there are 80 training samples in this paper, formula (5) cannot calculate the weight; only the error is zero. So when the error of weak classifier predicting training data is zero, this paper makes error et be 0.0125, that is, only one sample predicting error. The calculation formula of sequence weight is established in the case of error and less than 1/2, but in the actual case, it is not ruled out that the prediction accuracy of weak classifier to the sample is less than 1/2. Therefore, this paper improves this situation. 
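Next, the boosting loop of Steps 1-5, including the paper's two modifications: re-setting $e_t$ to 0.0125 when a weak classifier is error-free, and negating the weak output when $e_t > 1/2$. A weighted decision stump stands in for the BP weak classifier so the snippet stays self-contained; the function names are illustrative.

```python
import numpy as np

def fit_stump(X, y, D):
    # Weighted decision stump: a stand-in for the BP weak classifier.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] > thr, 1, -1)
                err = D[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, s)
    _, j, thr, s = best
    return lambda Z: s * np.where(Z[:, j] > thr, 1, -1)

def train_adaboost(X, y, fit_weak, T=20):
    """Steps 1-5. X: (m, d) samples; y: (m,) labels in {-1, +1}."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                    # Step 1: uniform weights D_t(i) = 1/m
    learners, alphas = [], []
    for _ in range(T):
        g = fit_weak(X, y, D)
        pred = g(X)
        e_t = D[pred != y].sum()               # Step 2: weighted error, formula (4)
        if e_t > 0.5:                          # accuracy below 1/2: negate the weak output
            g = (lambda h: (lambda Z: -h(Z)))(g)
            pred, e_t = -pred, 1.0 - e_t
        if e_t == 0.0:
            e_t = 0.0125                       # paper's re-assignment: one sample's worth of error
        a_t = 0.5 * np.log((1.0 - e_t) / e_t)  # Step 3: sequence weight, formula (5)
        D = D * np.exp(-a_t * y * pred)        # Step 4: reweight samples, formula (6)
        D /= D.sum()                           # normalization factor B_t
        learners.append(g)
        alphas.append(a_t)

    def strong(Z):                             # Step 5: strong classifier, formula (7)
        return np.sign(sum(a * g(Z) for a, g in zip(alphas, learners)))
    return strong
```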
For the case where et is more than 1/2, it shows that the accuracy of weak classifier is less than 1/2 in the binary classification (-1 and 1), so that the output of weak classifier can be taken as the opposite number et=1-et, which can also be calculated by formula (5).Step 4 (adjust the data weight). Adjusting the weight of the next training sample according to the sequence weight,(6)Dt+1i=DtiBt×exp⁡-atyigtxii=1,2,…,mIn formula (6), Bt is a normalization factor. The purpose is to make the sum of distribution weights equal to 1 under the condition that the proportion of weights is unchanged. When the predicted results are not the same as the actual results, yigt(xi) in formula (6) is less than 0, and the greater the absolute value of the predicted results, the greater the value of exp⁡[-atyigt(xi)], thus satisfying the condition that the samples with wrong classification are usually given a larger weight.Step 5 (construct a strong classifier). After several weak classifiers are trained by T-rounds, the T-group weak classifier functionsf(gt,at) are obtained, and then the T-group weak classifier functions are combined to form a strong classifier function h(x).(7)hx=sign⁡∑t=1Tat·fgt,atThe algorithm flow based on BP-Adaboost model is shown in Figure 1.Figure 1 BP-Adaboost algorithm flow. ## 2.2. PNN Neural Network Probabilistic neural network (PNN) is a parallel algorithm based on Bayes classification rules and Parzen window for probability density function estimation. PNN is a kind of artificial neural network with a simple structure, simple training, and wide application. Its structure consists of an input layer, mode layer, summation layer, and output layer. Its basic structure is shown in Figure2.Figure 2 Basic structure of probabilistic neural network.The values of training samples are first received through the input layer, and the number of neurons in the input layer is equal to the dimension of the input sample vector. Then the data information is transmitted to the pattern layer of the second layer through the input layer.The pattern layer is used to calculate the matching relationship between input samples and each pattern in the training set. The number of neurons in the pattern layer is equal to the total number of training samples. Assuming that the vector of the input layer isX=(x1,x2,...,xn)T, the data is mapped from the input layer to the pattern layer through the mapping mechanism, then the input of the j-th neuron of the pattern layer is X, and the output of the j-th neuron of the pattern layer is (8)fijX=12πp/2δpNR1exp⁡-x-wijTx-wij2δ2In formula (8), p is the total number of categories, m is the number of neurons in the mode layer, wij is the connection weight from the output layer to the mode layer, and δ is the smoothing factor.The summation layer is the accumulation of probabilities belonging to the same class, and its conditional probability density is(9)Pi∣X=∑j-1mwijfijXi=1,2,…,pof which(10)∑j=1mwij=1,i=1,2,…,p.wij∈0,1The output layer receives the probability density function of each class of summation layer output and then finds the maximum probability P(X)=max⁡(P(i∣X),(i=1,2,...,P) of one of them. ## 2.3. A Multiclassification Series Model for BP-Adaboost and PNN Neural Networks Generally, Adaboost combines with weak classifier to form a strong classifier to solve the problem of binary classification. But in transformer fault classification, there are not only two types of transformer faults. 
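The decoding rule of Table 2 then reduces to counting the "+1" entries in each column of $T$: exactly one "+1" yields that fault's code, anything else yields 0. A small sketch, using the sample columns of Tables 1 and 2 as a check:

```python
import numpy as np

def decode_multiclass(T):
    """T: (m, n) matrix of m one-vs-rest outputs (+1/-1) for n samples.
    Returns fault codes 1..m, or 0 for columns with no '+1' or several '+1's."""
    T = np.asarray(T)
    hits = (T == 1)
    counts = hits.sum(axis=0)
    codes = hits.argmax(axis=0) + 1   # row index of the first '+1', 1-based
    return np.where(counts == 1, codes, 0)

# Columns a..e of Table 1 reproduce the "Classification result" row of Table 2.
T = [[-1,  1, -1,  1, -1],
     [-1, -1,  1,  1, -1],
     [-1, -1, -1, -1,  1]]
print(decode_multiclass(T))   # -> [0 1 2 0 3]
```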
Therefore, BP-Adaboost needs to be transformed into multiclassification problems. In this paper, according to the total fault types to be classified, several BP-Adaboost two-classification models are established to classify each fault in turn. The specific classification operation is shown in Figure3.Figure 3 BP-Adaboost multiclassification model.In Figure3, we make the output of each BP-Adaboost model “- 1” or “1”, so that the output of “A” fault in the training sample is “1”, and the output of the other fault types is “- 1”. In this way, the test sample whose result is “1” can be classified by BP-Adaboost, and its prediction result can be regarded as “A” fault. Each fault type can be diagnosed by binary classification in this way. Although Adaboost improves the prediction results of the weak classifier by its powerful result correction ability, such a multiclassification model still has some defects. For example, the same test sample is divided into different fault models (e.g., Sample “d” in Table 1), or a test sample is not divided into any type (e.g., Sample “a” in Table 1). The occurrence of these two cases can determine that the multiclassification model can not accurately classify these samples.Table 1 Multiple BP-Adaboost classification result. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1Based on the possible error classification, the classification results of BP-Adaboost multiclassification model are set as follows. Firstly, the diagnosis type is coded and then the fault type is diagnosed according to the coding order. Then, each BP-Adaboost binary classification result is constructed into aTm×n matrix, where m is the total fault type and n is the number of test samples. Finally, according to the position of “1” in column i,i=1,2,...,n of matrix T, the class of the i-th sample is determined. That is, the diagnostic result of sample i-th in multiclassification results is the location of “1” in i column. When there are more than one “1” in column j or no “1” in column j, this indicates that the BP-Adaboost multiclassification diagnostic model is wrong in classifying the j-th sample, so the classification result of the j-th individual in the BP-Adaboost multiclassification result is “0”. The specific operation results are shown in Table 2.Table 2 BP-Adaboost multiclassification results. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1 Classification result 0 1 2 0 3Usually we do not know the accuracy of the diagnostic results of the diagnostic model before comparing them with the real results. But for BP-Adaboost multiclassification diagnostic results in this paper, when “0” appears in the diagnostic results, it shows that the diagnosis must be wrong. In order to improve the accuracy of one algorithm, many scholars usually combine one algorithm with another to make the final experimental results better than any of them [23, 24]. In order to improve the diagnostic accuracy, a PNN neural network is connected in series after the diagnosis result of BP-Adaboost. By utilizing the recognition ability of PNN neural network, the samples of BP-Adaboost diagnosis error are rediagnosed. The algorithm diagram is shown in Figure 4. 
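Finally, the PNN of Section 2.2 can be sketched as a kernel-density classifier: each training sample contributes a Gaussian kernel in the pattern layer (formula (8)), the summation layer averages the kernels per class (formula (9), with equal weights $w_{ij}$ within a class), and the output layer takes the class of maximum density. The smoothing factor `delta` plays the role of the SPREAD parameter set later; this is a minimal illustration, not the MATLAB implementation used in the paper.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, delta=1.5):
    """Classify each test sample by the class of maximum summed kernel density."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances to every pattern
        k = np.exp(-d2 / (2.0 * delta ** 2))     # pattern-layer outputs f_ij(x); the constant
                                                 # (2*pi)^(p/2)*delta^p cancels in the argmax
        dens = [k[y_train == c].mean() for c in classes]  # summation layer, formula (9)
        preds.append(classes[int(np.argmax(dens))])       # output layer: max P(i|X)
    return np.array(preds)
```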
## 3. Results and Discussion

### 3.1. Selection of Sample Sets

In transformer fault diagnosis, selecting representative data samples is more conducive to establishing a simulation model. The basic principles of sample selection in this paper are therefore as follows: (1) the selected fault samples are representative; (2) the selected samples should cover the fault types as completely as possible; (3) the sample set should be compact. Accordingly, this paper selected 100 representative samples from the historical faults of several oil-immersed power transformers in a 220 kV substation for empirical analysis. The transformer fault types include medium- and low-temperature overheating, arc discharge, discharge and overheating fault, low-energy discharge fault, and high-temperature overheating fault.

For the selection of training and test sets, 20 samples were randomly selected as test data, and the remaining 80 samples were used as training data. To diagnose transformer faults accurately, the test samples are randomly selected in proportion to the different sample types. The specific fault codes and sample numbers are shown in Table 3.

Table 3. Sample data description.

| Fault type | Code | Total number of samples | Number of training samples | Number of testing samples |
| --- | --- | --- | --- | --- |
| Medium and low temperature overheating | 1 | 29 | 23 | 6 |
| Arc discharge | 2 | 12 | 10 | 2 |
| Discharge and overheating | 3 | 11 | 9 | 2 |
| Low energy discharge fault | 4 | 18 | 14 | 4 |
| High temperature overheating | 5 | 30 | 24 | 6 |

### 3.2. Feature Fault Selection

Because transformer internal faults differ, the gas produced by each fault is not exactly the same. In principle, the commonly used fault-related gases are hydrogen (H2), carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), acetylene (C2H2), ethane (C2H6), and ethylene (C2H4). Since the components H2, CH4, C2H6, C2H4, and C2H2 are closely related to transformer fault types, all five of these gases are taken as characteristic parameters of transformer fault diagnosis in this paper [25]. Inspired by the IEC three-ratio method, this paper also takes C2H2/C2H4, CH4/H2, and C2H4/C2H6 as characteristic parameters of transformer fault diagnosis.

### 3.3. Parameter Setting and Running Environment

In the series model of BP-Adaboost and PNN, the number of BP neural networks used as weak classifiers is 20. The training target of each BP neural network is 0.00004, the learning rate is 0.1, and the number of training iterations is 5. The SPREAD parameter of the PNN neural network is set to 1.5. For the standalone BP neural network diagnosis, the training target is 0.01, the learning rate is 0.1, and the number of training iterations is 1000. For the traditional genetic-algorithm-optimized BP neural network (GA-BP), the population is 20 and the number of iterations is 50. (Testing environment: Core i5-3230M dual-core processor, MATLAB R2016a.)

### 3.4. Comparative Analysis of Prediction Examples

Eight variables are selected as input vectors of the model (a sketch of this feature construction is given below), and different output vectors are set according to the different BP-Adaboost binary classification models.
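Section 3.2's eight characteristic parameters (the five gas concentrations plus the three IEC-style ratios) can be assembled as follows; the dictionary keys, the epsilon guard against zero concentrations, and the numeric values in the example call are assumptions for illustration only.

```python
import numpy as np

GASES = ["H2", "CH4", "C2H2", "C2H6", "C2H4"]

def dga_features(sample):
    """Build the 8-dimensional feature vector from a DGA measurement:
    the 5 raw concentrations plus C2H2/C2H4, CH4/H2, C2H4/C2H6."""
    eps = 1e-9   # guard against division by zero (assumption, not from the paper)
    g = {k: float(sample[k]) for k in GASES}
    ratios = [g["C2H2"] / (g["C2H4"] + eps),
              g["CH4"] / (g["H2"] + eps),
              g["C2H4"] / (g["C2H6"] + eps)]
    return np.array([g[k] for k in GASES] + ratios)

# Example with made-up concentrations (e.g., in ppm):
print(dga_features({"H2": 56.0, "CH4": 61.0, "C2H2": 0.1, "C2H6": 19.0, "C2H4": 66.0}))
```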
According to Table 3, samples of each type are randomly drawn as the training and test data of five models: the BP neural network, GA-BP neural network, BP-Adaboost model, PNN neural network, and the diagnostic model proposed in this paper. After training, the test samples are input into the five models and the corresponding prediction results are obtained; the results are then compared with the real data to evaluate how well each model diagnoses transformer faults.

Because the test samples drawn in each experiment are random, each of the five models is tested 10 times, and in each trial the five models share the same training and test samples. The error counts and average running times of the five models over the 10 tests are shown in Table 4.

Table 4. Number of diagnostic error samples in each test.

| Model | Number of diagnostic error samples (10 trials) | Average error rate | Average running time |
| --- | --- | --- | --- |
| BP-Adaboost | 4, 4, 2, 6, 4, 2, 5, 2, 4, 4 | 18.5% | 10.495 s |
| BP | 7, 7, 9, 8, 9, 6, 8, 7, 9, 9 | 39.5% | 0.6159 s |
| PNN | 5, 5, 8, 5, 9, 5, 7, 1, 6, 5 | 28% | 0.1877 s |
| GA-BP | 2, 0, 3, 4, 4, 1, 3, 3, 3, 2 | 12.5% | 723.52 s |
| BP-Adaboost_PNN | 1, 3, 1, 4, 4, 0, 2, 2, 2, 2 | 10.5% | 11.253 s |

From Table 4 we can see that the test results of BP-Adaboost are superior to the BP neural network in almost every trial, which effectively shows that Adaboost can combine weak classifiers into a strong classifier. Moreover, the proposed BP-Adaboost and PNN series model outperforms both the BP-Adaboost and the PNN results in every trial, showing that the proposed series method effectively combines the recognition advantages of the BP-Adaboost and PNN models. Although the diagnostic accuracy of the BP-Adaboost and PNN series model is similar to that of the GA-BP model, the time spent by GA-BP greatly exceeds that of the series model.

To illustrate the validity of the proposed model, the results of one of the tests are taken as an example. Figures 5, 6, and 7 compare the diagnostic results of the BP, PNN, and GA-BP neural networks with the real results. The figures show that the diagnostic accuracy of BP is only 65%, the accuracy of PNN is relatively higher but still only 75%, and the accuracy of the GA-BP model is 90%.

Figure 5. Diagnosis result of the BP neural network.

Figure 6. Diagnosis result of the PNN neural network.

Figure 7. Diagnosis result of the GA-BP neural network.

The BP-Adaboost output matrix $T$ for this test (rows: the 20 test samples; columns: the outputs of BP-Adaboost classifiers 1–5) is

$$T = \begin{pmatrix} 1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 \\ 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 \\ -1 & 1 & -1 & -1 & -1 \\ -1 & 1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & 1 & -1 & -1 \\ -1 & -1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & 1 & -1 \\ -1 & -1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 \end{pmatrix} \tag{11}$$

and the matrix $T$ converted to a vector is

$$X = (1, 0, 1, 1, 1, 0, 2, 2, 0, 3, 4, 0, 4, 4, 5, 5, 5, 5, 5, 5). \tag{12}$$

Equation (12) is the diagnostic result $X$ of BP-Adaboost, transformed from the output matrix $T$ of (11) according to Table 2. The entries "0" in $X$ are the samples left undiagnosed by the BP-Adaboost model. From (12), we can see that the outputs for test samples 2, 6, 9, and 12 are 0.
Therefore, these four samples in the BP-Adaboost multiclassification result carry classification errors. The four samples are used as new test samples and input into the PNN neural network. From Figure 6, the prediction accuracy of PNN for samples 2, 6, 9, and 12 is 75%. The PNN predictions for these four samples are put into the corresponding positions of the BP-Adaboost prediction, yielding the final diagnostic result of the BP-Adaboost and PNN series model; the comparison between the diagnostic results and the real results is shown in Figure 8. From Figure 8, we can see that the diagnostic accuracy of BP-Adaboost in series with PNN is as high as 95%, which is clearly higher than that of the BP-Adaboost, GA-BP, and PNN models.

Figure 8. Diagnostic results of the BP-Adaboost and PNN tandem model.

In this paper, the BP neural network and the improved Adaboost are used to form a strong classifier, and several BP-Adaboost binary classifiers are combined into a multiclassifier. From the result matrix $T$ formed by the multiple binary classifiers, we can directly identify some misclassified samples and then put these samples into the PNN neural network for rediagnosis. The reason the proposed method can combine the diagnostic advantages of the BP-Adaboost and PNN models is that, in BP-Adaboost multiclass recognition, the classification accuracy for samples that are assigned to exactly one category is relatively high: such a sample must not only be classified into one type but also not be classified into any other type. Under such stringent requirements, the samples that BP-Adaboost assigns to exactly one category are classified with relatively high accuracy. Finally, the experimental results show that the diagnostic accuracy of the proposed series model in transformer fault diagnosis is significantly higher than that of the BP neural network, BP-Adaboost, and PNN models. Although its accuracy is only slightly better than that of the GA-BP model, its diagnostic time is substantially shorter than that of GA-BP.
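To summarize the end-to-end flow of Figure 4 in code: train one one-vs-rest BP-Adaboost classifier per fault code, decode the output matrix $T$, and route the samples decoded as 0 to the PNN. A schematic sketch, assuming the helper functions train_adaboost, fit_stump, decode_multiclass, and pnn_predict from the earlier snippets are in scope:

```python
import numpy as np

def series_diagnose(X_train, y_train, X_test, n_faults=5, T_rounds=20):
    """BP-Adaboost multiclass stage followed by PNN rediagnosis of the
    samples the first stage leaves undiagnosed (decoded as 0)."""
    X_train, X_test = np.asarray(X_train), np.asarray(X_test)
    y_train = np.asarray(y_train)

    # Stage 1: one one-vs-rest BP-Adaboost classifier per fault code.
    outputs = []
    for c in range(1, n_faults + 1):
        y_bin = np.where(y_train == c, 1, -1)
        h = train_adaboost(X_train, y_bin, fit_stump, T=T_rounds)
        outputs.append(h(X_test))
    pred = decode_multiclass(np.vstack(outputs))   # matrix T -> vector X

    # Stage 2: rediagnose the ambiguous samples with the PNN.
    undiagnosed = np.flatnonzero(pred == 0)
    if undiagnosed.size:
        pred[undiagnosed] = pnn_predict(X_train, y_train, X_test[undiagnosed])
    return pred
```

Only the samples that stage 1 leaves ambiguous pay the cost of the second stage, which is consistent with the series model's running time in Table 4 being close to that of BP-Adaboost alone.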
## 4. Conclusions

The power transformer plays an important role in power transmission and distribution, and its performance directly affects the operation of the whole power system. It is therefore very important to discover transformer faults in advance: whether early faults can be eliminated as soon as possible is the key to a stable power supply for users.

This paper presents a new diagnostic model that connects BP-Adaboost in series with PNN. The BP-Adaboost binary classification model is extended to a multiclassification model that retains the advantages of the binary model while providing a double guarantee for the samples it classifies: for a sample to be assigned a fault type, the classifier for that type must accept it and the classifiers for all other types must reject it. This is obviously not always achievable, but it means that the samples the multiclass BP-Adaboost does classify are recognized with very high accuracy, as in the two-class case. Therefore, this paper transforms the result matrix $T$ into the vector $X$, uses $X$ to find the samples whose fault type has not been diagnosed, and puts them into the PNN for further diagnosis. By connecting BP-Adaboost with PNN in series, we can not only remedy the defect that the BP-Adaboost algorithm leaves some samples undiagnosed, but also remedy the defect that the diagnostic accuracy of the PNN model alone is not very high.

---
*Source: 1019845-2019-07-03.xml*
--- ## Abstract Dissolved gas-in-oil analysis (DGA) is a powerful method to diagnose and detect transformer faults. It is of profound significance for the accurate and rapid determination of the fault of the transformer and the stability of the power. In different transformer faults, the concentration of dissolved gases in oil is also inconsistent. Commonly used gases include hydrogen (H2), methane (CH4), acetylene (C2H2), ethane (C2H6), and ethylene (C2H4). This paper first combines BP neural network with improved Adaboost algorithm, then combines PNN neural network to form a series diagnosis model for transformer fault, and finally combines dissolved gas-in-oil analysis to diagnose transformer fault. The experimental results show that the accuracy of the series diagnosis model proposed in this paper is greatly improved compared with BP neural network, GA-BP neural network, PNN neural network, and BP-Adaboost. --- ## Body ## 1. Introduction In recent years, with the rapid development of China's economy, power system is developing towards the direction of ultrahigh voltage, large power grid, large capacity, and automation. Domestic demand for electricity has increased dramatically, and the national power industry is experiencing a rapid development stage. At present, the number of 110KV (66KV) and above voltage transformers transported by the State Grid Corporation has reached more than 30,000, with a total capacity of 3.4 TVA. Because the power transformer is in the central position of the power grid, the operation environment is complex, and, under the impact of various bad operating conditions, it is easy that fault occurs. Transformer faults has caused large area of breakage, resulting in a large number of economic losses. Therefore, the effective diagnosis of transformer faults is of great significance.At present, the main testing and monitoring methods of transformer operation state are DC resistance measurement [1], dissolved gas-in-oil analysis (DGA) [2], oil temperature monitoring (OTM) [3], insulation experiment (IE), acoustic partial discharge measurement (APDM) [4], detection of characteristic curve by repeated pulse method [5], Winding Deformation Testing and Low Voltage Short Circuit Impedance Testing [6], etc. DGA analysis is a relatively ideal monitoring and analysis method, because it can monitor and analyze oil chromatography online and sample analysis at any time. At present, it is recognized that dissolved gas analysis technology in oil is a powerful measure to find latent faults in transformers at home and abroad [2].Due to the normal operation of transformer, transformer oil and solid insulation materials will gradually age and decompose into a small amount of gas. However, when the power equipment is faulted, especially in the case of overheating, discharge, or humidity, the amount of these gases will increase rapidly. It has been proved by long-term practice that the content of gas in oil is directly related to the faulty degree of transformer [7]. Over the years, both at home and abroad have devoted themselves to the development of online monitoring devices and systems characterized by the content of dissolved gas in oil. The three-ratio method is the most basic method for fault diagnosis of oil-filled power equipment. However, the three-ratio method exposes that the coding cannot include all DGA analysis results and the coding is too absolute [8]. 
With the development of research, more and more artificial intelligence methods are introduced into transformer fault diagnosis, for example, artificial neural network [9], BP neural network [10], fuzzy logic reasoning [11], rough set theory [12], Extreme Learning Machine [13], support vector machine [14], and Bayesian network [15]. However, all kinds of artificial intelligence methods have some limitations, such as the network structure and weight of artificial neural network which are difficult to determine and easy to fall into local minimum and overfitting. The Bayesian network requires a large number of sample data [16]. The inference rules of fuzzy logic and the function of fuzzy membership degree depend to a great extent on experience [6].BP neural network is a kind of multilayer feedforward neural network, because of its simple structure, many adjustable parameters, many training algorithms, and good maneuverability. BP neural network has been widely used. According to statistics, 80%~90% of the neural network models are based on the BP network or its deformation [17]. The traditional BP neural network has the disadvantage of random initial weights, which lead to low learning efficiency, slow convergence speed, and easy to fall into a local minimum. So many scholars use intelligent algorithm to optimize the weight of BP neural network. Liang uses the genetic algorithm to optimize the weight of BP neural network to realize the inversion method of soil moisture [18]. Salman Predicts Palm Oil Price Using BP Neural Network Based on Particle Swarm Optimization [19]. Kuang uses ant colony algorithm to optimize BP neural network for macroeconomic prediction [20]. But using intelligent algorithms to optimize the weight of BP neural network greatly enlarges the operation time and makes the model diagnosis inefficient [21].In this paper, a diagnostic model which combines BP-Adaboost algorithm and PNN in series is proposed. Adaboost algorithm is a simple and easy algorithm, which can combine several weak classifiers to form a strong classifier. At the same time, the upper limit of the classification error rate will not increase with the overfitting of training. In addition, the Adaboost algorithm has the advantages of no need to adjust parameters and low generalization error rate. Because the Adaboost algorithm can construct multiple weak predictors with lower accuracy into a strong learner with higher accuracy, therefore, this paper combines BP neural network as a weak classifier with Adaboost algorithm. Considering that Adaboost algorithm is usually used to deal with binary classification problems, and transformer faults are often divided into many types of faults, this paper changes the multiclassification problem into multiple Adaboost binary classification problems to be solved. In the Adaboost algorithm, only the error rate of the weak classifier is slightly less than 1/2, but in the actual training process, there will still be special cases, which will affect the operation of the algorithm. In order to solve this problem, this paper revalues individual variables under special circumstances. Then the transformer fault is diagnosed by the improved BP-Adaboost algorithm. Samples that have not been successfully classified in the diagnosis results (A sample is divided into two or more different faults or not into any faults.) are reclassified as prediction samples, and the original training samples are reclassified as training samples and put into PNN neural network for diagnosis. 
The advantages of BP-Adaboost and PNN are fully combined by this series model. With samples which have not been successfully diagnosed by BP-Adaboost algorithm, the second diagnosis can be carried out by PNN, which effectively improves the accuracy of the model.The main contributions of this paper are as follows.(1) Firstly, the Adaboost algorithm is improved. It solves the defect that the diagnostic error of each weak classifier in the traditional Adaboost algorithm can only be within(0,1/2). The two-class Adaboost algorithm is improved to multiclass algorithm.(2) Then, BP-Adaboost multiclassification diagnosis algorithm is formed by combining the BP neural network as a weak classifier with multiclassification Adaboost algorithm. For samples diagnosed wrong in BP-Adaboost diagnosis model, they are put into PNN neural network for diagnosis again.(3) Finally, the sample set is selected. Inspired by IEC three-ratio method, this paper not only takes five commonly used gas as characteristic parameters, but also takes C2H2/C2H4, CH4/H2, C2H4/C2H6 as characteristic parameters of transformer fault diagnosis.Section1 introduces the background significance of transformer fault diagnosis and the methods commonly used to diagnose transformer fault in recent years. Section 2 first introduces the model used in this paper and finally introduces the series multiclassification algorithm proposed in this paper. Section 3 introduces the selection of the sample set and sample characteristic parameters. Section 4 compares the diagnostic results of the proposed model with those of the other four models. ## 2. Materials and Methods ### 2.1. Improved BP-Adaboost Diagnostic Model BP neural network is an error back propagation algorithm, which uses the steepest descent method to continuously adjust the weights and biases of the network are continuously adjusted to minimize the sum of square errors of the network. BP neural network consists of one input layer, one output layer, and several hidden layers. The training process is as follows.(1) Establish BP neural network and initialize the weight and biases of BP neural network.(2) Preprocess the sample data and set the number of neurons in each layer. Supposex=(xij)(i=1,2,3,...,n;j=1,2,3,...,m) is a sample input matrix. The output of the hidden layer is bj. The biases of neurons in the hidden layer and the output layer are θj and θk, respectively. The output bj of the j -th neuron in the hidden layer is(1)bj=f1∑i=1nwijxi-θjThe output layer output yk is(2)yk=f2∑j=1mwikbi-θkIn formulas (1) and (2), f1 and f2 are S-type tangent function and S-type logarithmic function, respectively.(3) Errore of actual output yk and expected output tk in BP neural network are(3)e=∑k=1tk-yk2(4) If the errors produced do not meet the requirements, the steepest descent method is used to backpropagate the errors and adjust the weights and biases. Iterative cycle until the error meets the requirement.Adaptive boosting (Adaboost) is a strong efficient algorithm that combines weak classifiers into strong classifiers. It was proposed by Yoav Freund and Robert Schapire in 1995. The main idea is as follows. Firstly, each training sample is given the same weight. Then the weak classifier is used to run iteratively T times; after each operation, the weight of training data is updated according to the classification results of training samples, and the wrong samples are usually given larger weight. 
For multiple weak classifiers, after running T times, a sequence of classification results of training samples is obtained. Each classification function is given a weight. The better the classification result, the greater the corresponding weight. The steps of the Adaboost algorithm are as follows.Step 1 (randomly selectm samples from the samples as training data). Initialization data distribution weightsDt(i)=1/m. The structure of the neural network is determined according to the input and output dimensions of the samples, and the weights and biases of the neural network are initialized.Step 2 (calculate the prediction error sum of weak classifier). When thet-th weak classifier is trained, the prediction error of prediction sequence g(t) and et are obtained. In formula (4), gi(t) is the predicted result of the i-th sample and yi is the expected classification result of the i-th sample. In BP neural network, the default output of more than 0 belongs to “1” category in classification, and the output of less than 0 belongs to “-1” category in classification.(4)et=∑iDiii=1,2,…,mgit≠yiStep 3. Calculation of sequence weightsat based on prediction error is(5)at=12ln⁡1-etet Formula (5) shows that when et is less than 1/2, (1-et)/et increases with the decrease of et. ln⁡(1-et/et) is an increasing function, so the weight at of weak classifier increases with the decrease of error sum. However, as the number of iterations increases, the classification error decreases gradually. According to formula (5), when the error is reduced to a certain extent, it is easy to be unable to calculate the weight of weak classifiers, thus affecting the classification results [22]. Therefore, this paper reassigns the sum of errors in this case. Because there are 80 training samples in this paper, formula (5) cannot calculate the weight; only the error is zero. So when the error of weak classifier predicting training data is zero, this paper makes error et be 0.0125, that is, only one sample predicting error. The calculation formula of sequence weight is established in the case of error and less than 1/2, but in the actual case, it is not ruled out that the prediction accuracy of weak classifier to the sample is less than 1/2. Therefore, this paper improves this situation. For the case where et is more than 1/2, it shows that the accuracy of weak classifier is less than 1/2 in the binary classification (-1 and 1), so that the output of weak classifier can be taken as the opposite number et=1-et, which can also be calculated by formula (5).Step 4 (adjust the data weight). Adjusting the weight of the next training sample according to the sequence weight,(6)Dt+1i=DtiBt×exp⁡-atyigtxii=1,2,…,mIn formula (6), Bt is a normalization factor. The purpose is to make the sum of distribution weights equal to 1 under the condition that the proportion of weights is unchanged. When the predicted results are not the same as the actual results, yigt(xi) in formula (6) is less than 0, and the greater the absolute value of the predicted results, the greater the value of exp⁡[-atyigt(xi)], thus satisfying the condition that the samples with wrong classification are usually given a larger weight.Step 5 (construct a strong classifier). 
After several weak classifiers are trained by T-rounds, the T-group weak classifier functionsf(gt,at) are obtained, and then the T-group weak classifier functions are combined to form a strong classifier function h(x).(7)hx=sign⁡∑t=1Tat·fgt,atThe algorithm flow based on BP-Adaboost model is shown in Figure 1.Figure 1 BP-Adaboost algorithm flow. ### 2.2. PNN Neural Network Probabilistic neural network (PNN) is a parallel algorithm based on Bayes classification rules and Parzen window for probability density function estimation. PNN is a kind of artificial neural network with a simple structure, simple training, and wide application. Its structure consists of an input layer, mode layer, summation layer, and output layer. Its basic structure is shown in Figure2.Figure 2 Basic structure of probabilistic neural network.The values of training samples are first received through the input layer, and the number of neurons in the input layer is equal to the dimension of the input sample vector. Then the data information is transmitted to the pattern layer of the second layer through the input layer.The pattern layer is used to calculate the matching relationship between input samples and each pattern in the training set. The number of neurons in the pattern layer is equal to the total number of training samples. Assuming that the vector of the input layer isX=(x1,x2,...,xn)T, the data is mapped from the input layer to the pattern layer through the mapping mechanism, then the input of the j-th neuron of the pattern layer is X, and the output of the j-th neuron of the pattern layer is (8)fijX=12πp/2δpNR1exp⁡-x-wijTx-wij2δ2In formula (8), p is the total number of categories, m is the number of neurons in the mode layer, wij is the connection weight from the output layer to the mode layer, and δ is the smoothing factor.The summation layer is the accumulation of probabilities belonging to the same class, and its conditional probability density is(9)Pi∣X=∑j-1mwijfijXi=1,2,…,pof which(10)∑j=1mwij=1,i=1,2,…,p.wij∈0,1The output layer receives the probability density function of each class of summation layer output and then finds the maximum probability P(X)=max⁡(P(i∣X),(i=1,2,...,P) of one of them. ### 2.3. A Multiclassification Series Model for BP-Adaboost and PNN Neural Networks Generally, Adaboost combines with weak classifier to form a strong classifier to solve the problem of binary classification. But in transformer fault classification, there are not only two types of transformer faults. Therefore, BP-Adaboost needs to be transformed into multiclassification problems. In this paper, according to the total fault types to be classified, several BP-Adaboost two-classification models are established to classify each fault in turn. The specific classification operation is shown in Figure3.Figure 3 BP-Adaboost multiclassification model.In Figure3, we make the output of each BP-Adaboost model “- 1” or “1”, so that the output of “A” fault in the training sample is “1”, and the output of the other fault types is “- 1”. In this way, the test sample whose result is “1” can be classified by BP-Adaboost, and its prediction result can be regarded as “A” fault. Each fault type can be diagnosed by binary classification in this way. Although Adaboost improves the prediction results of the weak classifier by its powerful result correction ability, such a multiclassification model still has some defects. 
For example, the same test sample is divided into different fault models (e.g., Sample “d” in Table 1), or a test sample is not divided into any type (e.g., Sample “a” in Table 1). The occurrence of these two cases can determine that the multiclassification model can not accurately classify these samples.Table 1 Multiple BP-Adaboost classification result. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1Based on the possible error classification, the classification results of BP-Adaboost multiclassification model are set as follows. Firstly, the diagnosis type is coded and then the fault type is diagnosed according to the coding order. Then, each BP-Adaboost binary classification result is constructed into aTm×n matrix, where m is the total fault type and n is the number of test samples. Finally, according to the position of “1” in column i,i=1,2,...,n of matrix T, the class of the i-th sample is determined. That is, the diagnostic result of sample i-th in multiclassification results is the location of “1” in i column. When there are more than one “1” in column j or no “1” in column j, this indicates that the BP-Adaboost multiclassification diagnostic model is wrong in classifying the j-th sample, so the classification result of the j-th individual in the BP-Adaboost multiclassification result is “0”. The specific operation results are shown in Table 2.Table 2 BP-Adaboost multiclassification results. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1 Classification result 0 1 2 0 3Usually we do not know the accuracy of the diagnostic results of the diagnostic model before comparing them with the real results. But for BP-Adaboost multiclassification diagnostic results in this paper, when “0” appears in the diagnostic results, it shows that the diagnosis must be wrong. In order to improve the accuracy of one algorithm, many scholars usually combine one algorithm with another to make the final experimental results better than any of them [23, 24]. In order to improve the diagnostic accuracy, a PNN neural network is connected in series after the diagnosis result of BP-Adaboost. By utilizing the recognition ability of PNN neural network, the samples of BP-Adaboost diagnosis error are rediagnosed. The algorithm diagram is shown in Figure 4. This fully combines the excellent results of BP-Adaboost multiclassification diagnosis model with those of PNN diagnosis model, so as to improve the accuracy of sample prediction.Figure 4 Diagnostic model of BP-Adaboost in tandem with PNN. ## 2.1. Improved BP-Adaboost Diagnostic Model BP neural network is an error back propagation algorithm, which uses the steepest descent method to continuously adjust the weights and biases of the network are continuously adjusted to minimize the sum of square errors of the network. BP neural network consists of one input layer, one output layer, and several hidden layers. The training process is as follows.(1) Establish BP neural network and initialize the weight and biases of BP neural network.(2) Preprocess the sample data and set the number of neurons in each layer. Supposex=(xij)(i=1,2,3,...,n;j=1,2,3,...,m) is a sample input matrix. The output of the hidden layer is bj. The biases of neurons in the hidden layer and the output layer are θj and θk, respectively. 
The output bj of the j -th neuron in the hidden layer is(1)bj=f1∑i=1nwijxi-θjThe output layer output yk is(2)yk=f2∑j=1mwikbi-θkIn formulas (1) and (2), f1 and f2 are S-type tangent function and S-type logarithmic function, respectively.(3) Errore of actual output yk and expected output tk in BP neural network are(3)e=∑k=1tk-yk2(4) If the errors produced do not meet the requirements, the steepest descent method is used to backpropagate the errors and adjust the weights and biases. Iterative cycle until the error meets the requirement.Adaptive boosting (Adaboost) is a strong efficient algorithm that combines weak classifiers into strong classifiers. It was proposed by Yoav Freund and Robert Schapire in 1995. The main idea is as follows. Firstly, each training sample is given the same weight. Then the weak classifier is used to run iteratively T times; after each operation, the weight of training data is updated according to the classification results of training samples, and the wrong samples are usually given larger weight. For multiple weak classifiers, after running T times, a sequence of classification results of training samples is obtained. Each classification function is given a weight. The better the classification result, the greater the corresponding weight. The steps of the Adaboost algorithm are as follows.Step 1 (randomly selectm samples from the samples as training data). Initialization data distribution weightsDt(i)=1/m. The structure of the neural network is determined according to the input and output dimensions of the samples, and the weights and biases of the neural network are initialized.Step 2 (calculate the prediction error sum of weak classifier). When thet-th weak classifier is trained, the prediction error of prediction sequence g(t) and et are obtained. In formula (4), gi(t) is the predicted result of the i-th sample and yi is the expected classification result of the i-th sample. In BP neural network, the default output of more than 0 belongs to “1” category in classification, and the output of less than 0 belongs to “-1” category in classification.(4)et=∑iDiii=1,2,…,mgit≠yiStep 3. Calculation of sequence weightsat based on prediction error is(5)at=12ln⁡1-etet Formula (5) shows that when et is less than 1/2, (1-et)/et increases with the decrease of et. ln⁡(1-et/et) is an increasing function, so the weight at of weak classifier increases with the decrease of error sum. However, as the number of iterations increases, the classification error decreases gradually. According to formula (5), when the error is reduced to a certain extent, it is easy to be unable to calculate the weight of weak classifiers, thus affecting the classification results [22]. Therefore, this paper reassigns the sum of errors in this case. Because there are 80 training samples in this paper, formula (5) cannot calculate the weight; only the error is zero. So when the error of weak classifier predicting training data is zero, this paper makes error et be 0.0125, that is, only one sample predicting error. The calculation formula of sequence weight is established in the case of error and less than 1/2, but in the actual case, it is not ruled out that the prediction accuracy of weak classifier to the sample is less than 1/2. Therefore, this paper improves this situation. 
For the case where et is more than 1/2, it shows that the accuracy of weak classifier is less than 1/2 in the binary classification (-1 and 1), so that the output of weak classifier can be taken as the opposite number et=1-et, which can also be calculated by formula (5).Step 4 (adjust the data weight). Adjusting the weight of the next training sample according to the sequence weight,(6)Dt+1i=DtiBt×exp⁡-atyigtxii=1,2,…,mIn formula (6), Bt is a normalization factor. The purpose is to make the sum of distribution weights equal to 1 under the condition that the proportion of weights is unchanged. When the predicted results are not the same as the actual results, yigt(xi) in formula (6) is less than 0, and the greater the absolute value of the predicted results, the greater the value of exp⁡[-atyigt(xi)], thus satisfying the condition that the samples with wrong classification are usually given a larger weight.Step 5 (construct a strong classifier). After several weak classifiers are trained by T-rounds, the T-group weak classifier functionsf(gt,at) are obtained, and then the T-group weak classifier functions are combined to form a strong classifier function h(x).(7)hx=sign⁡∑t=1Tat·fgt,atThe algorithm flow based on BP-Adaboost model is shown in Figure 1.Figure 1 BP-Adaboost algorithm flow. ## 2.2. PNN Neural Network Probabilistic neural network (PNN) is a parallel algorithm based on Bayes classification rules and Parzen window for probability density function estimation. PNN is a kind of artificial neural network with a simple structure, simple training, and wide application. Its structure consists of an input layer, mode layer, summation layer, and output layer. Its basic structure is shown in Figure2.Figure 2 Basic structure of probabilistic neural network.The values of training samples are first received through the input layer, and the number of neurons in the input layer is equal to the dimension of the input sample vector. Then the data information is transmitted to the pattern layer of the second layer through the input layer.The pattern layer is used to calculate the matching relationship between input samples and each pattern in the training set. The number of neurons in the pattern layer is equal to the total number of training samples. Assuming that the vector of the input layer isX=(x1,x2,...,xn)T, the data is mapped from the input layer to the pattern layer through the mapping mechanism, then the input of the j-th neuron of the pattern layer is X, and the output of the j-th neuron of the pattern layer is (8)fijX=12πp/2δpNR1exp⁡-x-wijTx-wij2δ2In formula (8), p is the total number of categories, m is the number of neurons in the mode layer, wij is the connection weight from the output layer to the mode layer, and δ is the smoothing factor.The summation layer is the accumulation of probabilities belonging to the same class, and its conditional probability density is(9)Pi∣X=∑j-1mwijfijXi=1,2,…,pof which(10)∑j=1mwij=1,i=1,2,…,p.wij∈0,1The output layer receives the probability density function of each class of summation layer output and then finds the maximum probability P(X)=max⁡(P(i∣X),(i=1,2,...,P) of one of them. ## 2.3. A Multiclassification Series Model for BP-Adaboost and PNN Neural Networks Generally, Adaboost combines with weak classifier to form a strong classifier to solve the problem of binary classification. But in transformer fault classification, there are not only two types of transformer faults. 
Therefore, BP-Adaboost needs to be transformed into multiclassification problems. In this paper, according to the total fault types to be classified, several BP-Adaboost two-classification models are established to classify each fault in turn. The specific classification operation is shown in Figure3.Figure 3 BP-Adaboost multiclassification model.In Figure3, we make the output of each BP-Adaboost model “- 1” or “1”, so that the output of “A” fault in the training sample is “1”, and the output of the other fault types is “- 1”. In this way, the test sample whose result is “1” can be classified by BP-Adaboost, and its prediction result can be regarded as “A” fault. Each fault type can be diagnosed by binary classification in this way. Although Adaboost improves the prediction results of the weak classifier by its powerful result correction ability, such a multiclassification model still has some defects. For example, the same test sample is divided into different fault models (e.g., Sample “d” in Table 1), or a test sample is not divided into any type (e.g., Sample “a” in Table 1). The occurrence of these two cases can determine that the multiclassification model can not accurately classify these samples.Table 1 Multiple BP-Adaboost classification result. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1Based on the possible error classification, the classification results of BP-Adaboost multiclassification model are set as follows. Firstly, the diagnosis type is coded and then the fault type is diagnosed according to the coding order. Then, each BP-Adaboost binary classification result is constructed into aTm×n matrix, where m is the total fault type and n is the number of test samples. Finally, according to the position of “1” in column i,i=1,2,...,n of matrix T, the class of the i-th sample is determined. That is, the diagnostic result of sample i-th in multiclassification results is the location of “1” in i column. When there are more than one “1” in column j or no “1” in column j, this indicates that the BP-Adaboost multiclassification diagnostic model is wrong in classifying the j-th sample, so the classification result of the j-th individual in the BP-Adaboost multiclassification result is “0”. The specific operation results are shown in Table 2.Table 2 BP-Adaboost multiclassification results. Sample a b c d e BP-Adaboost classification result 1 -1 1 -1 1 -1 BP-Adaboost classification result 2 -1 -1 1 1 -1 BP-Adaboost classification result 3 -1 -1 -1 -1 1 Classification result 0 1 2 0 3Usually we do not know the accuracy of the diagnostic results of the diagnostic model before comparing them with the real results. But for BP-Adaboost multiclassification diagnostic results in this paper, when “0” appears in the diagnostic results, it shows that the diagnosis must be wrong. In order to improve the accuracy of one algorithm, many scholars usually combine one algorithm with another to make the final experimental results better than any of them [23, 24]. In order to improve the diagnostic accuracy, a PNN neural network is connected in series after the diagnosis result of BP-Adaboost. By utilizing the recognition ability of PNN neural network, the samples of BP-Adaboost diagnosis error are rediagnosed. The algorithm diagram is shown in Figure 4. 
This fully combines the excellent results of BP-Adaboost multiclassification diagnosis model with those of PNN diagnosis model, so as to improve the accuracy of sample prediction.Figure 4 Diagnostic model of BP-Adaboost in tandem with PNN. ## 3. Results and Discussion ### 3.1. Selection of Sample Sets In transformer fault diagnosis, selecting representative data samples is more conducive to the establishment of a simulation model. Therefore, the basic principles of sample selection in this paper are as follows. (1) The fault samples selected are representative. (2) The selected samples should involve more complete fault type. (3) The samples should be compact. Therefore, this paper selected 100 representative samples from the historical faults of several oil-immersed power transformers in a 220V substation for empirical analysis. Types of transformer faults include medium and low temperature overheating, arc discharge, discharge and overheating fault, low energy discharge fault, and high temperature overheating fault.For selection of training and test sets, in this paper, 20 samples were randomly selected as test data, and the remaining 80 samples were used as training data. In order to diagnose transformer faults accurately, the test samples are randomly selected according to the proportion of different types of samples. Specific fault codes and sample numbers are shown in Table3.Table 3 Sample data description. The fault types code Total number of samples Number of training samples Number of testing samples medium and low temperature overheating 1 29 23 6 arc discharge 2 12 10 2 discharge and overheating 3 11 9 2 low energy discharge fault 4 18 14 4 high temperature overheating 5 30 24 6 ### 3.2. Feature Fault Selection Because of the difference of transformer internal faults, the gas produced by each fault is not completely the same. Principally, the fault related gases commonly used are hydrogen (H2), carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), acetylene (C2H2), ethane (C2H6), and ethylene (C2H4). Since the components of H2, CH4, C2H6, C2H4, and C2H2 are closely related to the fault types of transformers, all the five gases are taken as the characteristic parameters of transformer fault diagnosis in this paper [25]. Inspired by IEC three-ratio method, this paper also takes C2H2/C2H4, CH4/H2, and C2H4/C2H6 as the characteristic parameters of transformer fault diagnosis. ### 3.3. Parameter Setting and Running Environment In the series model of BP-Adaboost and PNN, the number of BP neural networks with weak classifiers is 20. The training target of each BP neural network is 0.00004, the learning rate is 0.1, and the training time is 5. The selection of SPREAD in PNN neural network is 1.5. In the diagnosis of BP neural network, the training target is 0.01, the learning rate is 0.1, and the training time is 1000. In traditional genetic algorithm optimization of BP neural network (GA-BP), the population is 20 and the number of iterations is 50 (Testing environment: Core i5-3230M dual-core processor, running in the 2016a version of MATLAB). ### 3.4. Comparative Analysis of Prediction Examples Eight variables are selected as input vectors of the model, and different output vectors are set according to different BP-Adaboost binary classification models. 
According to Table3, the number of samples of each type is randomly selected as the training data and test data of five models: BP neural network, GA-BP neural network, BP-Adaboost model, PNN neural network, and the diagnostic model proposed in this paper. The test samples are input into the five models after training, and the corresponding prediction results are obtained. Then the results are compared with the real data to evaluate the good degree of each model for transformer fault diagnosis.Because the test samples produced in each experiment are random, this paper tests the five models 10 times, and the test samples of the five models are the same as the training samples. The error results and the average running time of the four models tested 10 times are shown in Table4.Table 4 Number of samples per diagnostic error. Number of diagnostic error samples Average error rate Average running time BP-Adaboost 4 4 2 6 4 2 5 2 4 4 18.5% 10.495s BP 7 7 9 8 9 6 8 7 9 9 39.5% 0.6159s PNN 5 5 8 5 9 5 7 1 6 5 28% 0.1877s GA-BP 2 0 3 4 4 1 3 3 3 2 12.5% 723.52s BP-Adaboost_PNN 1 3 1 4 4 0 2 2 2 2 10.5% 11.253sFrom Table4, we can see that the test results of BP-Adaboost are superior to BP neural network almost every time, so it can be effectively explained that Adaboost can combine weak classifiers into a strong classifier. In addition to the proposed BP-Adaboost and PNN series model in transformer fault diagnosis, each test result is better than the BP-Adaboost and PNN test results, which show that the proposed series method effectively combines the advantages of BP-Adaboost and PNN model recognition. Although the diagnostic accuracy of BP-Adaboost and PNN series model is similar to that of GA-BP model, the time spent by GA-BP greatly exceeds that of BP-Adaboost and PNN series model.In order to illustrate the validity of the model proposed in this paper, the results of one of the tests are taken as an example to analyze the effectiveness of the proposed model. Figures5, 6 and 7 are diagrams comparing the diagnostic results of BP neural network, PNN neural network, and GA-BP neural network with the real results. It can be seen from the figure that the diagnostic accuracy of BP is only 65%, and the diagnosis accuracy of PNN is relatively high, but only 75%; the diagnostic accuracy of GA-BP model is 90%.Figure 5 Diagnosis result of BP neural network.Figure 6 Diagnosis result of PNN neural network.Figure 7 Diagnosis result of GA-BP neural network.BP-Adaboost Output Matrix T (11) s a m p l e s B P – A d a b o o s t 1 B P – A d a b o o s t 2 B P – A d a b o o s t 3 B P – A d a b o o s t 4 B P – A d a b o o s t 5 T = 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1BP-Adaboost Output Matrix T Converted to Vector (12) X = 1 0 1 1 1 0 2 2 0 3 4 0 4 4 5 5 5 5 5 5Equation (12) is the diagnostic result X of BP-Adaboost. X is transformed from the BP-Adaboost output matrix T of (11) according to Table 2. The diagnostic result “0” in X is the undiagnosed sample of BP-Adaboost diagnostic model. From (12), we can see that the output results by 2, 6, 9, and 12 in the test samples are 0. 
Therefore, it is shown that the four samples in the BP-Adaboost multiclassification result have certain classification errors. These four wrong classification samples are used as new test samples and input into PNN neural network. Through Figure 6, we can see that the prediction accuracy of PNN for 2,6,9,12 samples is 75%. The predicted results of PNN for these four samples are put into the corresponding position of the predicted results in BP-Adaboost, and the final diagnostic results of BP-Adaboost and PNN in series are obtained. The comparison between the diagnostic results and the real results is shown in (12). From Figure 8, we can see that the diagnostic accuracy of BP-Adaboost in series with PNN is as high as 95%, which is obviously higher than that of BP-Adaboost, GA-BP, and PNN models.Figure 8 Diagnostic results of BP-Adaboost and PNN tandem model.In this paper, BP neural network and improved Adaboost are used to form a strong classifier. At the same time, several BP-Adaboost binary classifiers are established to form a multiclassifier. For the result matrix T formed by multiple binary classifiers, we can directly find some samples of classification errors and then put these samples of diagnosis errors into PNN neural network for rediagnosis. The reason why the proposed method can combine the diagnostic advantages of BP-Adaboost model and PNN model is that in BP-Adaboost multiclassification recognition, the classification accuracy of those samples which are only classified into one category is relatively high. Because it not only requires such samples to be classified into one type, but also ensures that the samples are not classified into other types. Under such stringent requirements, the classification accuracy of samples classified into only one category in BP-Adaboost is relatively high. Finally, the experimental results show that the diagnostic accuracy of the proposed series model in transformer fault diagnosis is significantly higher than that of BP neural network model, BP-Adaboost model, and PNN model. Although the accuracy is slightly better than that of GA-BP model, the diagnostic time of the proposed series model is obviously better than that of GA-BP model. ## 3.1. Selection of Sample Sets In transformer fault diagnosis, selecting representative data samples is more conducive to the establishment of a simulation model. Therefore, the basic principles of sample selection in this paper are as follows. (1) The fault samples selected are representative. (2) The selected samples should involve more complete fault type. (3) The samples should be compact. Therefore, this paper selected 100 representative samples from the historical faults of several oil-immersed power transformers in a 220V substation for empirical analysis. Types of transformer faults include medium and low temperature overheating, arc discharge, discharge and overheating fault, low energy discharge fault, and high temperature overheating fault.For selection of training and test sets, in this paper, 20 samples were randomly selected as test data, and the remaining 80 samples were used as training data. In order to diagnose transformer faults accurately, the test samples are randomly selected according to the proportion of different types of samples. Specific fault codes and sample numbers are shown in Table3.Table 3 Sample data description. 
The fault types code Total number of samples Number of training samples Number of testing samples medium and low temperature overheating 1 29 23 6 arc discharge 2 12 10 2 discharge and overheating 3 11 9 2 low energy discharge fault 4 18 14 4 high temperature overheating 5 30 24 6 ## 3.2. Feature Fault Selection Because of the difference of transformer internal faults, the gas produced by each fault is not completely the same. Principally, the fault related gases commonly used are hydrogen (H2), carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), acetylene (C2H2), ethane (C2H6), and ethylene (C2H4). Since the components of H2, CH4, C2H6, C2H4, and C2H2 are closely related to the fault types of transformers, all the five gases are taken as the characteristic parameters of transformer fault diagnosis in this paper [25]. Inspired by IEC three-ratio method, this paper also takes C2H2/C2H4, CH4/H2, and C2H4/C2H6 as the characteristic parameters of transformer fault diagnosis. ## 3.3. Parameter Setting and Running Environment In the series model of BP-Adaboost and PNN, the number of BP neural networks with weak classifiers is 20. The training target of each BP neural network is 0.00004, the learning rate is 0.1, and the training time is 5. The selection of SPREAD in PNN neural network is 1.5. In the diagnosis of BP neural network, the training target is 0.01, the learning rate is 0.1, and the training time is 1000. In traditional genetic algorithm optimization of BP neural network (GA-BP), the population is 20 and the number of iterations is 50 (Testing environment: Core i5-3230M dual-core processor, running in the 2016a version of MATLAB). ## 3.4. Comparative Analysis of Prediction Examples Eight variables are selected as input vectors of the model, and different output vectors are set according to different BP-Adaboost binary classification models. According to Table3, the number of samples of each type is randomly selected as the training data and test data of five models: BP neural network, GA-BP neural network, BP-Adaboost model, PNN neural network, and the diagnostic model proposed in this paper. The test samples are input into the five models after training, and the corresponding prediction results are obtained. Then the results are compared with the real data to evaluate the good degree of each model for transformer fault diagnosis.Because the test samples produced in each experiment are random, this paper tests the five models 10 times, and the test samples of the five models are the same as the training samples. The error results and the average running time of the four models tested 10 times are shown in Table4.Table 4 Number of samples per diagnostic error. Number of diagnostic error samples Average error rate Average running time BP-Adaboost 4 4 2 6 4 2 5 2 4 4 18.5% 10.495s BP 7 7 9 8 9 6 8 7 9 9 39.5% 0.6159s PNN 5 5 8 5 9 5 7 1 6 5 28% 0.1877s GA-BP 2 0 3 4 4 1 3 3 3 2 12.5% 723.52s BP-Adaboost_PNN 1 3 1 4 4 0 2 2 2 2 10.5% 11.253sFrom Table4, we can see that the test results of BP-Adaboost are superior to BP neural network almost every time, so it can be effectively explained that Adaboost can combine weak classifiers into a strong classifier. In addition to the proposed BP-Adaboost and PNN series model in transformer fault diagnosis, each test result is better than the BP-Adaboost and PNN test results, which show that the proposed series method effectively combines the advantages of BP-Adaboost and PNN model recognition. 
Although the diagnostic accuracy of BP-Adaboost and PNN series model is similar to that of GA-BP model, the time spent by GA-BP greatly exceeds that of BP-Adaboost and PNN series model.In order to illustrate the validity of the model proposed in this paper, the results of one of the tests are taken as an example to analyze the effectiveness of the proposed model. Figures5, 6 and 7 are diagrams comparing the diagnostic results of BP neural network, PNN neural network, and GA-BP neural network with the real results. It can be seen from the figure that the diagnostic accuracy of BP is only 65%, and the diagnosis accuracy of PNN is relatively high, but only 75%; the diagnostic accuracy of GA-BP model is 90%.Figure 5 Diagnosis result of BP neural network.Figure 6 Diagnosis result of PNN neural network.Figure 7 Diagnosis result of GA-BP neural network.BP-Adaboost Output Matrix T (11) s a m p l e s B P – A d a b o o s t 1 B P – A d a b o o s t 2 B P – A d a b o o s t 3 B P – A d a b o o s t 4 B P – A d a b o o s t 5 T = 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1 - 1 - 1 - 1 - 1 1BP-Adaboost Output Matrix T Converted to Vector (12) X = 1 0 1 1 1 0 2 2 0 3 4 0 4 4 5 5 5 5 5 5Equation (12) is the diagnostic result X of BP-Adaboost. X is transformed from the BP-Adaboost output matrix T of (11) according to Table 2. The diagnostic result “0” in X is the undiagnosed sample of BP-Adaboost diagnostic model. From (12), we can see that the output results by 2, 6, 9, and 12 in the test samples are 0. Therefore, it is shown that the four samples in the BP-Adaboost multiclassification result have certain classification errors. These four wrong classification samples are used as new test samples and input into PNN neural network. Through Figure 6, we can see that the prediction accuracy of PNN for 2,6,9,12 samples is 75%. The predicted results of PNN for these four samples are put into the corresponding position of the predicted results in BP-Adaboost, and the final diagnostic results of BP-Adaboost and PNN in series are obtained. The comparison between the diagnostic results and the real results is shown in (12). From Figure 8, we can see that the diagnostic accuracy of BP-Adaboost in series with PNN is as high as 95%, which is obviously higher than that of BP-Adaboost, GA-BP, and PNN models.Figure 8 Diagnostic results of BP-Adaboost and PNN tandem model.In this paper, BP neural network and improved Adaboost are used to form a strong classifier. At the same time, several BP-Adaboost binary classifiers are established to form a multiclassifier. For the result matrix T formed by multiple binary classifiers, we can directly find some samples of classification errors and then put these samples of diagnosis errors into PNN neural network for rediagnosis. The reason why the proposed method can combine the diagnostic advantages of BP-Adaboost model and PNN model is that in BP-Adaboost multiclassification recognition, the classification accuracy of those samples which are only classified into one category is relatively high. Because it not only requires such samples to be classified into one type, but also ensures that the samples are not classified into other types. 
Under such stringent requirements, the classification accuracy of samples classified into exactly one category by BP-Adaboost is relatively high. Finally, the experimental results show that the diagnostic accuracy of the proposed series model in transformer fault diagnosis is significantly higher than that of the BP neural network, BP-Adaboost, and PNN models. Although its accuracy is only slightly better than that of the GA-BP model, the diagnostic time of the proposed series model is far shorter.

## 4. Conclusions

The power transformer plays an important role in power transmission and distribution, and its performance directly affects the operation of the whole power system. It is therefore very important to discover transformer faults in advance: eliminating an early fault as soon as possible is the key to ensuring a stable power supply for users.

This paper presents a new diagnostic model that places BP-Adaboost in series with a PNN. By combining BP-Adaboost binary classifiers into a multiclassification model, the approach retains the advantages of the binary classifiers while providing a double guarantee for the samples they classify: for a sample to be assigned to a given type, the corresponding classifier must diagnose it as that type, and no other classifier may claim it. These two requirements cannot always both be met, but when they are, the recognition accuracy of the two-class BP-Adaboost is very high. This paper therefore transforms the result matrix T into the vector X, uses X to find the samples whose fault type was not diagnosed, and puts them into the PNN for further diagnosis. By connecting BP-Adaboost with the PNN in series, we can not only remedy the defect that the BP-Adaboost algorithm leaves some samples undiagnosed, but also compensate for the relatively low diagnostic accuracy of the PNN model.

---

*Source: 1019845-2019-07-03.xml*
2019
# SmartMal: A Service-Oriented Behavioral Malware Detection Framework for Mobile Devices

**Authors:** Chao Wang; Zhizhong Wu; Xi Li; Xuehai Zhou; Aili Wang; Patrick C. K. Hung
**Journal:** The Scientific World Journal (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101986

---

## Abstract

This paper presents SmartMal—a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into malware detection paradigms. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, and the server's main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to concatenate the results of the detectors. We also propose a cycle-based statistical approach for mobile device anomaly detection, accomplished by analyzing users' regular usage patterns. Empirical results suggest that the proposed framework and the novel anomaly detection algorithm are highly effective in detecting malware on Android devices.

---

## Body

## 1. Introduction

Personal digital assistants (PDAs), mobile phones, and recently smartphones have evolved from simple devices into sophisticated yet compact minicomputers which can connect to a wide spectrum of networks, including the Internet and corporate intranets. Designed as open, programmable, networked devices, smartphones are susceptible to various malware threats such as viruses, Trojan horses, and worms, all of which are well known from desktop platforms. These devices enable users to access and browse the Internet, receive and send emails and short message service (SMS) messages, connect to other devices for exchanging or synchronizing information, and install various applications, which makes them ideal attack targets [1].

Above all, mobile devices have become popular companions in people's daily lives, as illustrated in Figure 1. They allow users to access news and entertainment, carry out research, or make purchases via e-businesses. Unfortunately, cyberspace is a double-edged sword: new malware and viruses appearing on mobile devices have dramatically impacted the safety and security of users, and this side effect of Internet access has become a serious problem. According to the Internet Filter Reviews statistics [2], the amount of malware detected doubles each year. In particular, at least 7.12 million smartphones have been infected by various malware and viruses.

Figure 1 Mobile devices have become a common place for both Internet and telecom networks. They have been combined into a sound framework which allows different media to communicate with each other immediately and efficiently.

The challenges for smartphone security are becoming very similar to those that personal computers encounter, and common desktop security solutions are often downsized to mobile devices. Unfortunately, the increasing popularity of smartphones and their ability to run third-party software have also attracted the attention of virus writers [3, 4]. Malware can make a smartphone partially or fully unusable, cause unwanted billing, steal private information, and so on.
If we can detect an attack as soon as it occurs, we can stop it from doing any damage to the system or to personal data. This is where an intrusion detection system comes in. There are two types of intrusion detection systems: signature-based and anomaly-based. Signature-based approaches can only detect known malware and require frequent signature updates to keep the signature database up to date; they are often used by antivirus software on desktop systems. Researchers are therefore trying to develop anomaly-based approaches which can detect unknown malware.

Recently, behavioral programming has been shown [5] to be an efficient way to formalize requirements in the form of use cases and scenarios, and it has also been introduced into malware detection mechanisms [1]. Although the behavior analysis technique is worth pursuing, clearly identifying behaviors for distinct embedded applications still poses a significant challenge.

To address this problem, we demonstrate the effectiveness of service-oriented architecture (SOA) in the design of a detection framework. Traditionally, SOA provides effective measures with better flexibility and extensibility at lower cost by adopting reusable software modules. SOA can also reduce the complexity of integration and application development through uniform service descriptions and integration interfaces. Therefore, an SOA-based design is more convenient when building systems, as it provides a common way for interaction and communication.

Exploring the benefits of SOA concepts, we can identify at least three significant advantages of integrating them into malware detection. First, it greatly reduces the local workload of the detection algorithm. This allows users to run a light-weight client, which works especially well for mobile devices, because all the processing threads run on the servers. Second, the user behavior analyses, such as CPU/memory utilization, battery endurance, and network traffic flow, are located on central or distributed servers, which improves load balancing for global optimization. Finally, with the back-end management module (e.g., web pages), the malware information is easier to keep up to date and to distribute to clients for real-time synchronization.

This paper proposes, for the first time, service-oriented malware detection with distributed behavior analysis mechanisms, called SmartMal. The paper is extended from the previous publication [6]. The contributions of this paper are as follows. First, it describes the SOA-based malware detection framework with its distributed detection algorithm; abnormal messages and irregular behaviors are provided through services. Second, it proposes and realizes a behavior analysis algorithm built on SOA concepts: we integrate distributed components into a hierarchical kernel model, and diverse optimizing measures are taken to profile battery endurance, CPU/memory utilization, and network traffic flow. Experimental results are presented to demonstrate the effectiveness of SmartMal.

The rest of the paper is organized as follows. Related work and motivation are summarized in Section 2. Section 3 discusses the architecture and main concepts of SmartMal, including the overall architecture, client organization, and server hierarchical models. Section 4 describes the behavior analysis model. Section 5 demonstrates SmartMal with a typical case: DoS attacks.
Finally, conclusions and further work are presented in Section 6.

## 2. Related Work and Motivation

Safety and security problems on mobile devices have been a major focus in the past decade. In this section, we review general malware detection techniques for mobile devices.

There has been a considerable amount of research into anomaly detection in computing systems and network traffic, including statistics-based approaches, data-mining based methods, and machine learning based techniques. A wide set of anomaly detection approaches on smartphones are built from these techniques.

Statistical approaches were originally used for anomaly detection on smartphones. Cheng et al. [7] propose a collaborative virus detection and alert system for smartphones in which smartphones run a light-weight agent that collects and reports information to a proxy. The proxy detects viruses through a statistical approach, keeping track of the average number of communications. Buennemeyer et al. [8] present a scheme that monitors abnormal changes of smartphones using smart batteries.

Bose et al. [9] propose a novel behavior-based detection framework for smartphones. The behavior signatures are constructed at run time by monitoring events and API calls via a proxy DLL. They use support vector machines (SVMs) to train a classifier from normal and malicious data, and the evaluation shows that the scheme identifies current mobile malware with more than 96% accuracy. A distributed SVM algorithm is presented in [10]; with the distributed scheme, the participating clients perform their computation in parallel and update the support vectors simultaneously, so the overhead of the machine learning algorithm is effectively decreased.

Schmidt et al. [11] present programs that monitor smartphones running the Symbian and Windows Mobile operating systems. They demonstrate that only a few features are needed to achieve acceptable detection performance. Machine learning methods, such as artificial immune systems (AIS) and self-organizing maps (SOM), are applied to detect abnormal behavior on a remote server, and they propose an algorithm called linear prediction that detects change by checking four predecessors of a chosen feature. In [12], they present a novel approach to detect malware in which function calls are extracted from binaries. A centroid machine classifies an executable via clustering, with each cluster defined by a centroid: a binary is classified as malicious if it is closer to the malicious cluster, using Euclidean distance as the distance metric.

Game theory has also been introduced into anomaly detection for mobile phones. Shabtai et al. [1] propose a light-weight malware detection system for Android smartphones and developed four malicious applications for experiments. Several common classification and feature selection algorithms are evaluated to find the best-performing detection systems. Alpcan et al. [13] present a novel probabilistic diffusion scheme for anomaly detection based on mobile device usage patterns. The scheme models the normal behavior and its features as a bipartite graph, which constitutes the basis for a stochastic diffusion process; the Kullback-Leibler divergence is used to measure the distance between distributions. uCLAVS [14] is a web service-oriented ontology framework for malware and intrusion detection.
uCLAVS is based on the idea that file analysis can perform better when moved to the network instead of running on every host. It enables each host to intercept system files, send them to the network, and then decide whether they may be executed according to the delivered threat report. Reference [15] proposes a model to reduce on-device CPU, memory, and power consumption whereby mobile antivirus functionality is moved to an off-device network service employing multiple virtualized malware detection engines. TaintDroid [16] is a system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data.

Meanwhile, behavioral programming [5] is a newer mechanism that has been integrated into malware detection approaches [1, 9]; it has also been widely used in business processes [17], cache optimization [18], and operating systems [19]. However, current research programs have serious common drawbacks: (1) most of the verification operations, as well as the databases, are handled locally, which may cause significant security issues if the databases are hacked, and (2) the local approaches lack modularity, which causes excessive and inefficient workloads for programmers.

This paper introduces SOA concepts into malware detection mechanisms in order to construct a distributed malware detection framework with a behavior analysis model. SOA is widely applied in software services, web services, operating systems, and so on. Various SOA frameworks have been proposed in many fields, such as chip design [20], mobile computing systems [21], classroom scheduling [22], enterprise architecture [23], Internet browsers [24], and electronic products [25]. The advantage of SOA is that it integrates various services and provides unified interfaces within different solutions.

To learn from SOA concepts, we have also surveyed cutting-edge SOA research. Alam et al. [17] present a behavioral attestation method for business processes. Zhang et al. [26] provide an approach for proactively recommending services in a workflow composition process based on service usage history. Haki and Forte [23] demonstrate that bringing the SOA concept into the enterprise architecture (EA) framework makes the best of the synergy between these two approaches. Zhou et al. [6] explore service composition and service dependency and propose an extended dependency-aware SOA model. A loosely coupled service-oriented implementation is presented in [27]; the architecture takes advantage of Octave models in creating and using prediction models, and every method is applied as an Octave script in a plug-in fashion. Achbany et al. [28] present a method for allocating services to tasks, but the algorithm is not applied in realistic systems. In conclusion, since SOA enables software across organizations and network boundaries to collaborate efficiently, it has been widely employed across research areas.

Although there is much research related to SOA and to malware detection for mobile device platforms separately, there are only a few studies on integrating SOA and malware detection in order to construct a service-oriented abnormal behavior analysis framework.

To utilize the benefits of the SOA architecture, this paper places service-oriented malware detection and authentication mechanisms at the server side.
All clients send requests to the web servers at run time in order to obtain a list of malware behaviors. This paper is extended from previous work [29] and proposes a distributed malware detection framework with the following features: (1) a light-weight profiling and information collection application on mobile devices to record all normal and irregular behaviors; (2) malware detection and abnormal behavior mechanisms as separate modules; (3) a set of system behavior analysis schemes which integrate CPU/memory utilization, battery endurance, and network traffic flow.

## 3. SOA Architecture Model

The idea of this paper is to apply SOA concepts to the design of a malware detection framework. By integrating remote irregular behavior analysis, the aim is an integrated client-server application with abnormal behavior maintenance. The architecture framework for SmartMal is illustrated in Figure 2. The system runs in client-server mode: the application running on each client mobile device is in charge of keeping a record of the smartphone and collecting abnormal information. The selected abnormal information, represented as feature vectors in extensible markup language (XML) messages, is sent to the remote servers via general packet radio service (GPRS), 3G, or WiFi networks. The servers are organized as one communication server and multiple malware detection servers. The communication server is responsible for exchanging messages with client users; the communication data go mainly through web services, in which data are packaged in formats such as XML and JSON. After abnormal information is received, the communication server forwards the message to a specific detection server according to the client ID and the system load balancing status. The detection algorithm running on the distributed servers identifies anomalies and returns the results to the major server. The information is stored in the database, and the corresponding terminal device is alerted when an attack or abnormal message occurs.

Figure 2 Architecture framework for SmartMal.

The SmartMal architecture provides a set of administrative control web pages. Once the data are updated, the new information is pushed to the communication servers and user clients simultaneously.

To manage the massive mobile data, the architecture maintains a global status table that records the current server traffic data. Each server has a unique tag, and the major server has the smallest tag. The major server is elected from all the candidate servers using the election procedure of Algorithm 1; a minimal code sketch follows. Let S denote the set of candidate servers, with size n and tags tag_j, j ∈ [1, n].

Algorithm 1: Algorithm to elect the major server.

INPUT: server set S with tags tag_j, j ∈ [1, n]
OUTPUT: major server ID
(1) for each server tag_i in S
(2)   send a request with tag_i to all other servers
(3)   receive the requests from all other servers
(4)   if tag_i < min{tag_j, j ∈ [1, n], j ≠ i} then
(5)     set server tag_i as the major server
(6)   else
(7)     set server min{tag_j, j ∈ [1, n], j ≠ i} as the major server
(8)   end if
(9) end for
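A minimal, self-contained sketch of this election (the message-passing of steps (2)-(3) is abstracted away; names are illustrative, not from the paper):

```python
def elect_major_server(tags):
    """Elect the major server per Algorithm 1: after every server has
    exchanged its tag with all the others, the smallest tag wins.

    tags: list of unique integer tags, one per candidate server.
    Returns the tag of the elected major server.
    """
    # In the deployed system each server compares its own tag against the
    # tags it received; here we simulate the same decision centrally.
    major = tags[0]
    for tag in tags[1:]:
        if tag < major:
            major = tag
    return major

# Example: servers tagged 7, 3, and 9 elect server 3 as the major server.
print(elect_major_server([7, 3, 9]))  # -> 3
```

Because every server applies the same rule to the same set of tags, all servers agree on the winner without any further coordination.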
### 3.1. Client Architecture for Mobile Devices

The client's main function is to extract abnormal features, as follows.

(1) Feature Extractor. This is the main module of the client. All features are extracted through APIs provided by the Android application framework or read from the Linux kernel. The collected features are clustered into three primary levels: the Linux kernel level (e.g., CPU, RAM), the application level (e.g., messages, calls), and the user behavior level. The user behavior level includes significant features that reflect user behavior, such as screen on/off events and the key-press frequency. The feature extraction frequency is controlled by a timer whose value can be changed by the user, with a default of 30 seconds. A total of 29 features are collected at every extraction, and a vector data structure is used to store them. As the data size of each transmission is very small (less than 200 bytes), compression mechanisms would bring little benefit.

(2) Communication Module. This module sends feature vectors to the remote servers and receives anomaly alerts from the servers if the features are detected as anomalous. When the client connects to the server for the first time, the communication module requests registration with the unique international mobile equipment identity (IMEI) of the smartphone.

(3) Graphical User Interface (GUI). This module provides users with a means to configure client parameters, such as the extractor timer value and the server IP address.

### 3.2. Anomaly Detection Server

The anomaly detection server's major task is to classify the feature vectors as normal or abnormal. Its components are the following.

(1) Database. MySQL is used to store the massive feature vectors together with a classValue (normal or anomaly), and database interfaces are provided for the various operations. In the database, a total_table relation includes all vector information, and each detector corresponds to a detector_table; the primary key is (extract_time, phone_tag). All newly received feature vectors are stored in total_table. Each feature vector is assigned to the corresponding detection server according to the phone_tag and the processing history; if the client is newly registered, the vector is assigned to the lowest-load server.

(2) Detecting Module. This is the major module of the detection server, where the complex detection algorithms are implemented. It consists of several detectors with a detector manager. Each detector corresponds to a classification algorithm which distinguishes between normal and abnormal feature vectors, such as the J48 detector implemented with the C4.5 decision-tree algorithm. When new feature vectors come in, each detector fetches its set of vectors from detector_table, builds the classifier (if it does not exist), and classifies the feature vectors. The detector manager then produces the final result by fusing all detectors' results and stores the results in total_table and detector_table. The detector manager can also configure the parameters of the detectors.

(3) Communication Module. This module communicates with the client and deals with various requests and messages. The module passes received feature vectors to the detecting module, and if a vector is detected as anomalous, the module sends an anomaly alert to the client.

(4) Client Manager. When a new client requests registration, this module registers the client with its IMEI.

(5) GUI. The server's GUI configures the database and visualizes the current detectors and connected clients. A minimal sketch of the client-server exchange described above follows.
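To make the client-server exchange concrete, here is a hedged sketch of a client packaging a feature vector and a server-side stub classifying it. The field names, the JSON framing, and the majority-vote fusion are illustrative assumptions, not SmartMal's actual wire format:

```python
import json
import time

# 4 of the 29 features, for brevity (names are hypothetical)
FEATURE_NAMES = ["cpu_load", "ram_free", "sms_sent", "screen_on"]

def package_feature_vector(imei, values):
    """Client side: wrap one extraction (default: every 30 s) into a message."""
    assert len(values) == len(FEATURE_NAMES)
    return json.dumps({
        "phone_tag": imei,                 # client registered by IMEI
        "extract_time": int(time.time()),  # primary key together with phone_tag
        "features": dict(zip(FEATURE_NAMES, values)),
    })

def handle_message(message, detectors):
    """Server side: run every detector and fuse the results by majority vote."""
    vector = json.loads(message)["features"]
    votes = [detect(vector) for detect in detectors]   # True = anomaly
    return sum(votes) > len(votes) / 2                 # alert client if anomalous

# Toy detector: flags sustained high CPU load
detectors = [lambda v: v["cpu_load"] > 0.9]
msg = package_feature_vector("356938035643809", [0.95, 120, 3, 1])
print(handle_message(msg, detectors))  # -> True (anomaly alert sent to client)
```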
### 3.3. Service-Oriented Hierarchical Model

Figure 3 presents the hierarchical model of the distributed servers, which consists of three layers: services, service scheduler, and transmitter. The functionalities of each layer are introduced as follows.

Figure 3 Hierarchical layers of the servers.

First, the service provider layer offers service access points (SAPs) to clients. Each SAP is in charge of one specific kind of service, and all SAPs are provided with a data-format packaging mechanism. When a request arrives, the SAP first decodes the target request and identifies which service is requested; the specified request is then sent to the service scheduler.

Second, the service scheduler is in charge of service scheduling and mapping. Each Internet request may include several service requests. Therefore, if more than one servant is available, each service request must be mapped and scheduled to a certain servant according to the system's load balancing status.

Finally, the transmitter dispatches the subtasks to different servants for execution. After a task is completed, its results are collected by the transmitter.

With respect to the period-like features of the 3G/GPRS/WiFi client modules, the SmartMal server provides three services for demonstration: CPU/memory utilization, battery endurance, and network traffic flow. State-of-the-art studies show that malware applications commonly either drag down network performance, causing congestion, or illegally waste CPU/memory resources or energy.

The high-level services are mapped to different servants. To provide a feasible system, at least one servant for each service is integrated into the system, and each service request is transmitted to a specific servant. All servants are managed for efficient use. Data transmission between the servant and service layers goes through communication interfaces and a status checking interface.

The status checking interface is responsible for providing synchronization information about the diverse servants for service mapping and scheduling, such as load balancing and service bottleneck exploration.

The physical layer consists of the database and object-relational (O-R) mapping mechanisms. Generally, all the analyzed irregular behaviors are stored in databases, which may be located in distributed areas. To deal with current relational database models, O-R mapping methods such as Hibernate and TopLink are widely employed for object-oriented abstraction. Benefiting from these approaches, high-level objects are mapped to relational databases. We utilize TopLink for demonstration, mapping the database tuples to standard C++ classes. A minimal sketch of the SAP-scheduler-transmitter flow appears below.
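A minimal sketch of the SAP → scheduler → servant flow described above (class and method names are illustrative assumptions, not SmartMal's API):

```python
class Servant:
    """One back-end worker handling a single kind of service."""
    def __init__(self, service):
        self.service = service
        self.load = 0  # consulted by the scheduler for load balancing

    def execute(self, request):
        self.load += 1
        return f"{self.service} result for {request!r}"

def schedule(servants, service, request):
    """Scheduler: map the request to the least-loaded servant of that service."""
    candidates = [s for s in servants if s.service == service]
    return min(candidates, key=lambda s: s.load).execute(request)

# SAP layer: decode an incoming request and dispatch it to the scheduler.
servants = [Servant("cpu_mem"), Servant("cpu_mem"),
            Servant("battery"), Servant("traffic")]
incoming = {"service": "cpu_mem", "payload": {"cpu_load": 0.97}}
print(schedule(servants, incoming["service"], incoming["payload"]))
```

Keeping at least one servant per service, as the text requires, guarantees that `candidates` is never empty for a valid request.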
### 3.4. Remarked Features

It is quite true that spelling out the requirements for a software system under development is not an easy task, and translating captured requirements into correct operational software can be even harder [5]. Many technologies (languages, modeling tools, and programming paradigms) and methodologies (agile, test-driven, and model-driven) are designed, among other things, to help address these challenges. One widely accepted practice is to formalize requirements through behavioral programming in the form of use cases and scenarios.

However, in an abnormality-detection based analysis method, not all collected behaviors reflect abnormal messages. It is therefore challenging to choose the required information among the extracted set of phases or behaviors. It has been shown that detection efficiency can be significantly improved by reducing the dimensionality and eliminating superfluous information [11].

In this paper, we model the weight for each feature as a synthesized combination of a subjective and an objective weight to identify the behavior and its characteristics. The synthesized weight is

(1) $w_i = u\,w_{si} + (1 - u)\,w_{oi}$.

For each feature $i$, $w_i$ represents the synthesized weight, $w_{si}$ denotes the subjective weight, $w_{oi}$ the objective weight, and $u$ the proportion of the subjective weight. The major contribution of this method is the subjective weight, which can leverage the default analysis results obtained from the behaviors. We use an analytic hierarchy process (AHP) algorithm to construct a three-layer model that divides the complex strategic decision problem into different subjects aiming at multiple targets. For each subject, a fuzzy quantitative approach is employed to calculate the weight of each feature, which is then merged hierarchically.

Figure 4 illustrates the hierarchical model for the remarked features, which is composed of three layers. The top layer denotes the final target of the behavioral analysis: identifying the optimal features. The middle layer lists three classes of abnormal behavior: DoS attack malware, user-information-stealing malware, and irregular software/hardware resource-consuming malware. Finally, all behaviors on the operating system application interfaces in the bottom layer reflect these three abnormal behaviors.

Figure 4 Hierarchical layers for the remarked features.

In the hierarchical analysis method, the relative weight $a_{ij}$ represents the correlation between the $i$th element and the $j$th element. Assume that there are $n$ elements in total; then $A = (a_{ij})_{n \times n}$ is the correlation matrix. For the elements of the matrix, we have $a_{ji} = 1/a_{ij}$, $a_{ij} = a_{ik} \cdot a_{kj}$ (consistency), and $a_{ii} = 1$. The values and their interpretation are described in Table 1.

Table 1 Configurations in the correlation matrix.

| Value | Representation |
| --- | --- |
| $a_{ij} = 1$ | the $i$th and $j$th elements have the same effect |
| $a_{ij} = 3$ | the $i$th element is slightly more important than the $j$th |
| $a_{ij} = 5$ | the $i$th element is more important than the $j$th |
| $a_{ij} = 7$ | the $i$th element is much more important than the $j$th |
| $a_{ij} = 9$ | the $i$th element is extremely more important than the $j$th |
| $a_{ij} = 2n$ | the superiority of the $i$th element over the $j$th lies between that of $2n-1$ and $2n+1$ |

Then, we normalize the matrix $A$ into a matrix $Q$:

(2) $Q = (q_{ij})_{n \times n}$, $q_{ij} = a_{ij} / \sum_{k=1}^{n} a_{kj}$.

Adding the elements of $Q$ by rows gives the vector $\alpha$:

(3) $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)^T$, in which $\alpha_i = \sum_{j=1}^{n} q_{ij}$.

The vector $\alpha$ is normalized to the weight vector $W$:

(4) $W = (w_1, w_2, \dots, w_n)^T$, in which $w_i = \alpha_i / \sum_{j=1}^{n} \alpha_j$.

After $W$ has been calculated, we need to check the consistency of the subjective judgments. The consistency index (CI) is used as the evaluation metric:

(5) $\mathrm{CI} = (\lambda_{\max} - n) / (n - 1)$.

In (5), $\lambda_{\max}$ refers to the maximum eigenvalue of $A$, estimated by

(6) $\lambda_{\max} = \frac{1}{n} \sum_{i=1}^{n} \frac{(AW)_i}{w_i}$.

Moreover, we also use the consistency ratio $\mathrm{CR} = \mathrm{CI}/\mathrm{RI}$ to characterize the proportion of consistency, where RI is the random index, the expected CI when the entries $a_{ij}$ are selected completely at random. Obviously, the value of CR depends on the order $n$ of the matrix. A runnable sketch of this weighting step follows.
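A runnable sketch of the AHP step in (2)-(6), using NumPy. This is illustrative only; the RI values in the dictionary are the standard Saaty random-index table, which the paper does not reproduce:

```python
import numpy as np

def ahp_weights(A):
    """Compute AHP weights and consistency per (2)-(6).

    A: n x n positive reciprocal correlation matrix (a_ji = 1/a_ij, a_ii = 1).
    Returns (W, CI, CR).
    """
    n = A.shape[0]
    Q = A / A.sum(axis=0)            # (2) normalize each column
    alpha = Q.sum(axis=1)            # (3) row sums
    W = alpha / alpha.sum()          # (4) weight vector
    lam_max = np.mean(A @ W / W)     # (6) estimate of the maximum eigenvalue
    CI = (lam_max - n) / (n - 1)     # (5) consistency index
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty random index
    CR = CI / RI if RI > 0 else 0.0
    return W, CI, CR

# 3 feature groups compared pairwise on the 1-9 scale of Table 1
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
W, CI, CR = ahp_weights(A)
print(W, CR < 0.1)  # weights, and whether consistency is accepted
```

The resulting subjective weights W would then enter (1) as $w_{si}$, blended with the objective weights via the proportion $u$.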
The consistency is accepted only if CR < 0.1; otherwise the correlation matrix should be adjusted until the condition is met.

After the weight for each target has been calculated, the different weight vectors are combined hierarchically. Specifically, the steps to combine all the vectors are as follows (a small sketch follows this list).

(1) Calculate the importance of every level with respect to the top level, proceeding from top to bottom.

(2) Assume that there are $n_{k-1}$ elements in the $(k-1)$th level, with weight vector

(7) $W^{(k-1)} = (w_1^{(k-1)}, w_2^{(k-1)}, \dots, w_{n_{k-1}}^{(k-1)})^T$.

(3) Assume that there are $n_k$ elements in the $k$th level; the weight vector of the impact of the $j$th element of the $(k-1)$th level is

(8) $p_j^{(k)} = (w_{1j}^{(k)}, w_{2j}^{(k)}, \dots, w_{n_k j}^{(k)})^T$,

where $w_{ij} = 0$ if the $i$th element is independent of the $j$th element.

(4) From (2) and (3), the weight vector of the $k$th level with respect to the top level is

(9) $W^{(k)} = (p_1^{(k)}, p_2^{(k)}, \dots, p_{n_{k-1}}^{(k)}) \, W^{(k-1)}$.

After both the subjective and objective weights are evaluated, the proportion coefficient $u$ is applied as in (1).
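As a small illustration of step (4), with hypothetical numbers: for $n_{k-1} = 2$ middle-level targets and $n_k = 3$ bottom-level features, the composite weights of (9) are a single matrix-vector product.

```python
import numpy as np

# Middle-level weights with respect to the top goal: W^(k-1), shape (2,)
W_prev = np.array([0.6, 0.4])

# Column j holds the bottom-level weights with respect to middle target j:
# P^(k) = (p_1, p_2), shape (3, 2); each column sums to 1
P = np.array([[0.5, 0.2],
              [0.3, 0.3],
              [0.2, 0.5]])

W = P @ W_prev            # (9): composite bottom-level weights
print(W, W.sum())         # -> [0.38 0.3 0.32] 1.0 (still a weight vector)
```

Because each column of P and the vector W_prev sum to 1, the composite weights automatically sum to 1 as well.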
## 4. Behavior Analysis

To demonstrate the effectiveness of the SmartMal architecture, we have implemented a prototype application for both behavioral analysis and abnormal malware attack detection. For abnormal malware detection on mobile devices there is no acknowledged data set, so it is extremely important to select a fair and reasonable benchmark set for behavior analysis. We selected 32 of the most highly ranked applications in the Android market, comprising 11 game applications and 21 software tools, presented in Table 2.
All the software programs are installed on three mobile devices for malware detection (one Moto Me722 handset and two Samsung S5830 handsets).

Table 2 Applications for profiling.

| Category | Applications |
| --- | --- |
| Games (11) | Fruit Ninja, Angry Birds, Can Knockdown, X Construction, Cut String, Gold Miner, Bubble Ball, Shift, Flight Chess, Sudoku, Talking Tom |
| Software tools (21) | Office Suite, 360 Guard, Root Explorer, King Soft, iReader, PowerAMP, Mobo Player, UCWeb, Fetion, Task Manager, MSN, Google Map, Google Music, King Reader, Mobile TV, Storm Player, Tencent QQ, Shang Mail, Sina Weibo, RenRen, Adobe Reader |

All 32 applications are installed and run on the smartphones for at least one hour, during which the malware detection engines run in the background. The background engine is configured to sample the running status of the mobile device applications every 30 seconds, so after an execution finishes there should be up to 120 items of characterized features. For some special-purpose behaviors, such as battery consumption, text messages, and incoming/outgoing phone calls, it is not sensible to gather fingerprints every 30 seconds; in these application-specific scenarios, we use a value accumulated over the most recent 9 intervals of 30 seconds each. Finally, all these items are marked as normal behaviors for later comparison and detection.

In addition, the three Android smartphones were given to three persons for regular daily use, while the background engine kept tracking the featured behaviors; this period lasted for more than 90 days.

In this paper, we focus on the CPU/memory utilization rate, battery consumption, and network traffic flow to analyze the workload behavior. The experimental results are illustrated in Figure 5.

Figure 5 CPU/memory utilization.

### 4.1. CPU/Memory Utilization

To evaluate CPU and memory utilization, we use the top command of the Linux operating system to observe the CPU and memory utilization of the client. We also compare the statistics against typical software applications, including the IM software MSN, the media player PowerAMP, and UCWeb, as illustrated in Table 3. In terms of average CPU utilization, the profiling application occupies less than 1% of the CPU, which is negligible. Because the profiling application extracts features every 30 seconds, the peak utilization reaches 20%–24%, but each sampling burst is too short to be noticed, so it does not affect the user experience. Finally, regarding memory utilization, our profiling application takes up to 26 MB, smaller than the MSN and PowerAMP applications; the overhead is below 5% and thus affordable, considering that a typical smartphone integrates more than 512 MB of internal memory.

Table 3 Battery endurance statistics.

| | 100% to 15% | 100% to 5% |
| --- | --- | --- |
| Profiling offline | 525 minutes | 590 minutes |
| Profiling online | 480 minutes | 540 minutes |

### 4.2. Battery Endurance

One of the most serious challenges for smartphones is power consumption and battery endurance. To analyze the impact of our profiling application on battery endurance, we evaluated endurance trials in two scenarios, profiling online and offline, as described in Table 3. On one hand, when the client application is offline, it takes approximately 525 minutes for the battery to drop from a full charge (100%) to 15%, and 590 minutes to drop to 5%.
On the other hand, with the profiling application online, the battery lasts only 480 minutes from 100% to 15%, and 540 minutes from 100% to 5%.

### 4.3. Network Traffic Flow

Because our profiling application requires the Internet to transfer extracted behaviors to the major server, we also analyze the network traffic flow to evaluate the impact of our approach on Internet utilization. We employ the TrafficStats facility provided by the Android operating system to monitor the traffic flow for both single-upload transactions and the batch uploading process. In the single-upload procedure, a packaged message is delivered as soon as abnormal behavior is detected; the upload traffic is 9 Kb/10 min, and the download traffic is 4 Kb/10 min. In the batch procedure, up to 10 feature messages are batched into one package before uploading. The experiment, however, yields exactly the same result as the single-upload transaction: the upload traffic is 9 Kb/10 min and the download traffic is 4 Kb/10 min. Because each message is very small, multiple messages can reside in the same package while still only reaching the minimum size of one packet. Considering 3G and WiFi network bandwidths, the 13 Kb/10 min network traffic flow is acceptable.
## 5. Malware Detection: A DoS Attack Case

In this section, we introduce abnormal malware detection in a real DoS attack case. The network flow statistics of mobile devices are commonly periodic; for example, TCP SYN packets and RAB establishment/release procedures repeat in a period-like manner. The mathematical notation used in this section is defined in Table 4.

Table 4 Notation defined for the DoS attack case.

| Symbol | Meaning |
| --- | --- |
| τ | statistics interval |
| v_i^τ(k) | behavior of the ith device in interval k |
| N^τ(k) | total number of devices in interval k |
| v^τ(k) | {v_i^τ(k), i = 1, 2, …, N^τ(k)}, the vector in interval k |
| X^τ(k) | probability distribution of the normalized v^τ(k) |
| L(k1, k2) | similarity between X^τ(k1) and X^τ(k2) |
| W0(k) | observation window for X(k) |
| W1(k) | sampling observation window for X(k) |
| D_I(k) | internal distance for the probability correlation of X(k) |
| D_E(k) | external distance for the probability correlation of X(k) |
| M | set of abnormal probability distributions |

### 5.1. Detection Algorithm

Generally speaking, τ decides the degree of aggregation of the accumulated data, and distinct DoS attack behaviors on mobile networks exhibit different degrees of abnormality in the aggregated data. We set the minimum interval τ0 = 30 seconds, from which coarser-grained intervals such as 1 minute, 5 minutes, 10 minutes, and 30 minutes are accumulated.

Given two time slices with the same feature and interval, their probability distributions are normalized to X^τ(k1) and X^τ(k2), and L(k1, k2) denotes the distance between the two distributions. The observation windows are then sampled in two phases. First, for a given interval k and observation window W0(k), we select the time slices whose total number of registered mobile devices is closest to that of k, using the threshold S in (10), with an experimental value of S between 5% and 15%.
(10) $W_0'(k) = \{\, k_i \in W_0(k) : |N(k) - N(k_i)| / N(k) \le S \,\}$.

Second, the probability distributions most similar to $X(k)$ are chosen from this window to form the sampling observation window $W_1(k)$.

After $W_1(k)$ has been identified, the internal distance $D_I(k)$ among the members of $W_1(k)$ and the external distance $D_E(k)$ between $X(k)$ and $W_1(k)$ are calculated as in (11) and (12), and the processing flow is described in Algorithm 2:

(11) $D_I(k) = \{\, L(k_i, k_j) : k_i, k_j \in W_1(k),\ k_i \ne k_j \,\}$,

(12) $D_E(k) = \{\, L(k, k_i) : k_i \in W_1(k) \,\}$.

Algorithm 2: Pseudocode of abnormal behavior analysis for mobile networks.

INITIALIZATION: set the values of τ, N, M
INPUT: V(k), τ, N, M
OUTPUT: X(k)
Begin
(1) obtain X(k) and N(k) from V(k)
(2) define the observation window W0(k)
(3) select W1(k) from W0(k) \ M by sampling
(4) calculate the distances D_I(k) and D_E(k)
(5) if D_E(k) > D_I(k) then
(6)   X(k) is detected as an anomaly
(7)   set M = M ∪ {k}
(8) else X(k) is normal
(9) increase k by one and go back to (1)
End

### 5.2. Similarity Evaluation

To decide whether a behavior is a normal operation or a malware attack, we use a Kullback-Leibler (KL) divergence based approach. Let p and q be the probability distributions of two data sets; the KL divergence measures the relative entropy between them:

(13) $D(p \,\|\, q) = E\!\left[\log \frac{p(\omega)}{q(\omega)}\right] = \sum_{\omega \in \Omega} p(\omega) \log \frac{p(\omega)}{q(\omega)}$,

in which $0 \log(0/q) = 0$ and $p \log(p/0) = \infty$. The KL divergence is 0 only when p equals q. Since the KL divergence is not a metric, we propose a revised measure of distance:

(14) $L(p, q) = \frac{1}{2} \left( \frac{D(p \,\|\, q)}{H_p} + \frac{D(q \,\|\, p)}{H_q} \right)$,

where $D(p\,\|\,q)$ and $D(q\,\|\,p)$ are the KL divergences, and $H_p$ and $H_q$ are the entropies of p and q, respectively, computed as

(15) $H(X) = H(P_1, P_2, \dots, P_n) = -\sum_i P(x_i) \log P(x_i)$.

In (15), X represents the probability distribution, and $P(x_i)$ is the probability that the source emits the $i$th signal, with $\sum_i P(x_i) = 1$. Normalizing each divergence by the corresponding entropy accounts for the extra information each direction carries, and averaging the two directions makes the measure symmetric, so that $L(p, q) = L(q, p)$.

The two distributions may have different dimensions, and the case $p \log(p/0) = \infty$ may occasionally arise. To avoid this, we take the maximum number of mobile devices as the common dimension and substitute a small value ε for any zero probability, so that $p \log(p/0) = \infty$ cannot occur. In this paper, we set ε = 10^{-10}. A runnable sketch of this distance and of the anomaly test in Algorithm 2 follows.
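A runnable sketch of the distance in (13)-(15) and of the $D_E$ versus $D_I$ comparison in Algorithm 2. Aggregating the two distance sets by their means is our assumption; the paper does not spell out the aggregation:

```python
import numpy as np

EPS = 1e-10  # the paper's epsilon, substituted for zero probabilities

def kl(p, q):
    """KL divergence D(p || q) per (13), with zeros replaced by EPS."""
    p = np.maximum(np.asarray(p, float), EPS)
    q = np.maximum(np.asarray(q, float), EPS)
    return float(np.sum(p * np.log(p / q)))

def entropy(p):
    """Shannon entropy H(p) per (15)."""
    p = np.maximum(np.asarray(p, float), EPS)
    return float(-np.sum(p * np.log(p)))

def distance(p, q):
    """Symmetric, entropy-normalized distance L(p, q) per (14)."""
    return 0.5 * (kl(p, q) / entropy(p) + kl(q, p) / entropy(q))

def is_anomaly(x, window):
    """Algorithm 2 core test: external distance vs. internal distance.

    x: distribution X(k) under test; window: list of distributions W1(k).
    Aggregating each distance set by its mean is an assumption here.
    """
    d_i = np.mean([distance(p, q) for i, p in enumerate(window)
                   for q in window[i + 1:]])          # (11) internal
    d_e = np.mean([distance(x, p) for p in window])   # (12) external
    return d_e > d_i

window = [[0.5, 0.3, 0.2], [0.48, 0.32, 0.2], [0.52, 0.28, 0.2]]
print(is_anomaly([0.1, 0.1, 0.8], window))  # -> True (SYN-flood-like shift)
print(is_anomaly([0.5, 0.3, 0.2], window))  # -> False
```

Intuitively, X(k) is flagged when it sits farther from its own sampling window than the window's members sit from each other.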
### 5.2. Similarity Evaluation

To decide whether a behavior is a normal operation or a malware attack, we use a Kullback-Leibler (KL) divergence based approach. Let p and q denote the probability distributions of two data sets; the KL divergence then measures the relative entropy between the two distributions:

(13) D(p ∥ q) = E[log(p(ω)/q(ω))] = Σ_{ω∈Ω} p(ω) log(p(ω)/q(ω)),

with the conventions 0 log(0/q) = 0 and p log(p/0) = ∞. The KL divergence is 0 only when p equals q. Since the KL divergence is not a metric, we propose a revised measure of distance:

(14) L(p, q) = (1/2) (D(p ∥ q)/H_p + D(q ∥ p)/H_q),

where D(p ∥ q) and D(q ∥ p) are the KL divergences, while H_p and H_q are the entropies of p and q, respectively. The entropy is calculated as

(15) H(X) = H(P_1, P_2, …, P_n) = −Σ_i P(x_i) log P(x_i).

In (15), X represents the probability distribution, and P(x_i) indicates the probability that the source emits the ith signal, with Σ_i P(x_i) = 1. Since the calculation of D(p ∥ q) alone requires additional information, the term D(q ∥ p)/H_q accounts for the extra workload of the calculation; to maintain accuracy, the final distance L is set to the average of L(p, q) and L(q, p).

For two probability distributions, the dimensions may differ, and the degenerate case p log(p/0) = ∞ may occur. To avoid this situation, we choose the maximum mobile device count as the uniform dimension and, for the entries missing from the shorter distribution, substitute a small constant ε for 0, so that p log(p/0) = ∞ never arises. In this paper, we set ε = 10^−10.
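A direct Java transcription of (13) to (15) is sketched below; the ε-smoothing mirrors the choice ε = 10^−10 above, while guarding the entropies away from zero for degenerate distributions is our own safeguard.

```java
// Sketch of equations (13)-(15): KL divergence, entropy, and the normalized
// symmetric distance L(p, q). The ε-smoothing mirrors the paper's ε = 1e-10;
// guarding the entropies away from zero is our own safeguard for degenerate
// distributions.
public final class Similarity {
    private static final double EPS = 1e-10;

    /** Equation (13): D(p || q) = Σ p(ω) log(p(ω)/q(ω)), with 0·log(0/q) = 0. */
    public static double kl(double[] p, double[] q) {
        double d = 0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0) {
                d += p[i] * Math.log(p[i] / Math.max(q[i], EPS)); // ε avoids p·log(p/0) = ∞
            }
        }
        return d;
    }

    /** Equation (15): H(X) = -Σ P(x_i) log P(x_i). */
    public static double entropy(double[] p) {
        double h = 0;
        for (double pi : p) {
            if (pi > 0) {
                h -= pi * Math.log(pi);
            }
        }
        return h;
    }

    /** Equation (14): L(p, q) = (D(p||q)/H_p + D(q||p)/H_q) / 2. */
    public static double distance(double[] p, double[] q) {
        double hp = Math.max(entropy(p), EPS);
        double hq = Math.max(entropy(q), EPS);
        return 0.5 * (kl(p, q) / hp + kl(q, p) / hq);
    }
}
```

With this class in place, the distance can be supplied to DosDetector.isAnomaly above simply as the method reference Similarity::distance.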
### 5.3. Experimental Results and Analysis

We set up a simulation platform to verify the DoS behavior detection for periodic probability distributions. The NET_SEND behavior is implemented as the TCP SYN simulation, while the back-stage servers send abnormal malware messages emulating a SYN flooding attack. In order to simulate a relatively large-scale experimental platform, we combine the messages from 3 smartphones every 5 minutes into a chain of length 10, giving a 30-dimensional NET_SEND vector, which is then normalized into the probability distribution of TCP SYN behaviors. We ran the applications on the smartphones continuously for two months; as one probability distribution vector covers the information collected every 5 minutes, we obtain 2 × 30 × 24 × 12 = 17,280 normalized vectors in total.
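For clarity, the following sketch shows one way the raw 30-dimensional NET_SEND count vector could be normalized into the probability distribution X(k) consumed by the detector; the class name and the uniform fallback for an all-zero vector are our own assumptions.

```java
import java.util.Arrays;

// Sketch of how a raw 30-dimensional NET_SEND count vector (3 phones × a
// chain of 10 five-minute slots) could be normalized into the probability
// distribution X(k); the uniform fallback for an all-zero vector is ours.
public final class VectorBuilder {
    /** Normalize raw per-slot counts so that the entries sum to 1. */
    public static double[] toDistribution(long[] netSendCounts) {
        double total = 0;
        for (long c : netSendCounts) {
            total += c;
        }
        double[] x = new double[netSendCounts.length];
        if (total == 0) { // no traffic observed in the whole chain
            Arrays.fill(x, 1.0 / x.length);
            return x;
        }
        for (int i = 0; i < x.length; i++) {
            x[i] = netSendCounts[i] / total;
        }
        return x;
    }
}
```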
Figure 6 illustrates the detection accuracy for DoS-attacking malware. The X-axis represents the number of attacking smartphones, while the Y-axis shows the detection accuracy for malware attacks, denoted as the true positive rate (TPR). The TPR increases with the number of attacking smartphones: with only one device, the TPR is just 10%, meaning that more than 90% of the attacks go undetected; with 5 devices, the TPR grows to 50%; and with more than 10 devices in total, the TPR stabilizes as high as 99.1%. Our approach therefore detects the abnormal behaviors and malware attacks both efficiently and accurately.

Figure 6: The detection accuracy for malware applications under different amounts of attacks.

## 6. Conclusions

This paper proposed a service-oriented malware detection framework, SmartMal, which is the first work to combine SOA concepts with state-of-the-art behavior-based malware detection methodologies. By applying SOA to the framework, irregular behaviors can be processed on parallel servers instead of locally. Through this distributed irregular behavior analysis, SmartMal largely reduces the clients' computational load while offering great flexibility and modularity.

Moreover, as a test case, we proposed a randomization method to defend against signaling DoS attacks on 3G cellular networks from the system-design perspective. By drawing the parameter that is crucial to attacking efficiency from a random distribution, the parameter becomes much harder to measure, the measured value is only the maximum of the random values, and the cost of launching an attack increases enormously. Our simulation of a signaling attack via RAB establishment/release shows that our randomization method can detect as much as 99.1% of the malware and irregular behaviors. The randomization method is simple and effective against this kind of signaling attack, including paging attacks.

The initial results are promising, but much work remains. Future work includes extending the server to distributed cloud systems to achieve high throughput and integration of services. We also plan to integrate design-space exploration into the malware detection methods and behavior analysis to improve accuracy and flexibility.

---

*Source: 101986-2014-08-05.xml*
--- ## Abstract This paper presents SmartMal—a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into the malware detection paradigms. The proposed framework relies on client-server architecture, the client continuously extracts various features and transfers them to the server, and the server’s main task is to detect anomalies using state-of-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors and information fusion is used to concatenate the results of detectors. We also propose a cycle-based statistical approach for mobile device anomaly detection. We accomplish this by analyzing the users’ regular usage patterns. Empirical results suggest that the proposed framework and novel anomaly detection algorithm are highly effective in detecting malware on Android devices. --- ## Body ## 1. Introduction Personal digital assistants (PDAs), mobile phones, and recently smartphones have evolved from simple devices into sophisticated yet compact minicomputers which can connect to a wide spectrum of networks, including the Internet and corporate intranets. Designed as open, programmable, networked devices, smartphones are susceptible to various malware threats such as viruses, Trojan horses, and worms, all of which are well known from desktop platforms. These devices enable users to access and browse the Internet, receive and send emails, and short message service (SMS), connect to other devices for exchanging/synchronizing information, and install various applications, which make these devices ideal attack targets [1].Above all, mobile devices have become popular companions in people’s daily life, as is illustrated in Figure1. It allows users to access news, entertainment, carry out research, or make purchases via e-businesses. Unfortunately, cyberspace is a double-edged sword; the new malware and viruses appearing on mobile devices have dramatically impacted the safety and security of users; this side effect of Internet access has become a serious problem. According to the Internet Filter Reviews statistics [2], the amount of malware detected is each year the double. In particular, there are at least 7.12 million smartphones that have been infected by various malware and virus.Figure 1 Mobile devices have become a common place for both Internet and telecom networks. They have been combined into a sound framework which allows different media to communicate with each other immediately and efficiently.The challenges for smartphone security are becoming very similar to those that personal computers encounter and common desktop security solutions are often being downsized to mobile devices. Unfortunately, the increasing popularity smartphones and their ability to run third-party software have also attracted the attention of virus writers [3, 4]. Malware can make a smartphone partially or fully unusable, causing unwanted billing; stealing private information, and so on. If we have the ability to detect the attack as soon as it occurs, we can stop it from doing any damage to the system or personal data. This is where an intrusion detection system comes in, there are two types of intrusion detection systems: signature-based and anomaly-based systems. 
Signature-based approaches can only detect existing malwares and require frequent signature updates to keep the signature database up-to-date. Signature-based systems are often used for antivirus software on desktop systems. Researchers are trying to develop anomaly based approaches which can detect unknown malwares.Recently, behavior-based programming has been proved [5] to be an efficient way to detect abnormal utilizations to formalize requirements in the form of use cases and scenarios. It has also been introduced to the malware detection mechanism [1]. However, the behavior analysis technique is worth pursuing, it still poses significant challenge to clearly identify behaviors for distinct embedded applications.In order to solve this problem, we will demonstrate the effectiveness of service-oriented architecture (SOA) in browser design. Traditionally, SOA provides effective measures with better flexibility and extensibility at lower cost by adopting reusable software modules. SOA can also reduce the complexity of integration and application development through uniform service description and integration interfaces. Therefore, SOA-based design is more convenient when building systems by providing a common way for interaction and communication.By the exploration of benefits of SOA concepts, we can conclude that there are at least two significant advantages of integrating SOA concepts into malware detections. Firstly, it can greatly reduce the local workload of the detection algorithm. This feature allows users to run a light-weight client which works especially well for mobile devices, because all the processing threads will run on the servers. Secondly, the user behavior analyses, such as CPU/memory utilization, battery endurance, and network traffic flow, are located on central or distributed servers. This improves the load balancing status for global optimization. Finally, with the back-end management module (e.g., web pages), the malware information is easier to be kept up-to-date and be distributed to clients for real time synchronization.This paper proposes service-oriented malware detection with distributed behavior analysis mechanisms for the first time, called SmartMal. The paper is extended from the previous publication at [6]. The contributions of this paper are listed as follows. The paper starts by describing the SOA-based malware with distributed detection algorithm. The abnormal messages and irregular behaviors are provided through services. Secondly, this paper proposes and realizes a behavior analysis algorithm with SOA concepts. We integrate distributed components into a hierarchical kernel model. Diverse optimizing measures are taken to exploit the battery endurance, CPU/memory utilization and network traffic flow. Experimental results are presented to demonstrate the effectiveness of SmartMal.The rest of the paper is organized as follows. Related work and motivation are summarized in Section2. Section 3 discusses the architecture and main concepts of SmartMal, including architecture, client organization, and server hierarchical models. Section 4 describes the behavior analysis model. Section 5 demonstrates the SmartMal with a typical case: DoS attacks. Finally, conclusion and further work are presented in Section 6. ## 2. Related Work and Motivation Safety and security problems on mobile devices have been a major focus in the past decade. 
In this section, we present a review on general malware detection techniques for mobile devices.There has been a considerable amount of research into anomaly detection in computing systems and network traffic. These include statistics-based approaches, data-mining based methods, and machine learning based techniques. A wide set of anomaly detection approaches on smartphones are built from the above techniques.Statistical-based approaches were originally used in anomaly detection on smartphones. Cheng et al. [7] propose a collaborative virus detection and alert system for smartphones where smartphones run a light-weight agent then collect and report information to a proxy. The proxy detects viruses through a statistical approach, and it keeps track of the average number of communications. Buennemeyer et al. [8] present a scheme that monitors abnormal changes of smartphones using smart batteries.Bose et al. [9] propose a novel behavior-based detection framework for smartphones. The behavior signatures are constructed at run time by monitoring the events and API calls via Proxy DLL. They use support vector machines (SVMs) to train a classifier from normal and malicious data. The evaluation results show that the scheme can identify current mobile malwares with more than 96% accuracy. A distributed SVM algorithm is presented in [10]; with the distributed scheme, the participating clients perform their computation in parallel and update the support vectors simultaneously, so the overhead of machine learning algorithm is efficiently decreased.Schmidt et al. [11] present programs that monitor smartphones running Symbian and Windows Mobile OS. They demonstrate that only a few features are needed to achieve acceptable detection performance. Machine learning methods, like artificial immune system (AIS) and self-organizing maps (SOM), are applied to detect abnormal behavior on remote server, and they proposed an algorithm called linear prediction to detect change by checking four predecessors of a chosen feature. In [12], they present a novel approach to detect malware, where function calls are extracted from binaries. The centroid machine classifies an executable via clustering, in which each cluster is defined by a centroid. That is, a binary is classified as malicious if it is closer to malicious cluster, naming the distance metric as Euclidean distance.Game theory has been introduced into the anomaly detection area of mobile phone. Shabtai et al. [1] propose a light-weight malware detection system for Android smartphone, and they developed four malicious applications for experiment. Several usual classification algorithms and feature selection algorithms are evaluated to find the best performance in these detection systems. Alpcan et al. [13] present a novel probabilistic diffusion scheme for anomaly detection based on mobile device usage patterns. The scheme models the normal behavior and their features as a bipartite graph, which constitutes the basis for the stochastic diffusion process. In the stochastic diffusion algorithm, the Kullback-Leibler divergence is used to measure the distance between the distributions. uCLAVS [14] is a web service-oriented ontology framework for malware and intrusion detection. uCLAVS is based on the idea that the files analysis can improve their performance if they are moved to the network instead of running on every host. 
It enables each process to enter the system files, send them to the network, and then to decide whether they are executed according to the threat delivered report. Reference [15] proposes a model to reduce on-device CPU, memory, and power resources whereby mobile antivirus functionality is moved to an off-device network service employing multiple virtualized malware detection engines. TaintDroid [16] is a system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data.Meanwhile, programming behavior [5] is a new mechanism that has been integrated into the malware detection approaches [1, 9]. However, it has been widely used in business process [17], cache optimization [18], and operating systems [19]. However, the current research programs have serious common drawbacks: (1) most of the verification operations as well as the databases are performed locally, which may cause significant security issues if the databases are hacked, and (2) the approaches of local browsers lack modularity, which will cause excessive and inefficient workloads for programmers.This paper introduces a SOA concept for malware detection mechanisms, in order to construct a distributed malware detection framework with behavior analysis model. SOA is widely applied in software services, web services, operating systems, and so on. Various SOA frameworks have been proposed in many fields, such as chip design [20], mobile computing system [21], classroom scheduling [22], enterprise architecture [23], Internet browser [24], and electronic productions [25]. The advantages of SOA are to integrate various services and provide unified interfaces within different solutions.In order to learn from the SOA concepts, we have also summarized the cutting-edge SOA researches. Alam et al. [17] present a behavioral attestation method for business processes. Zhang et al. [26] provide a presentation for proactively recommending services in a workflow composition process, based on service usage history. Haki and Forte [23] demonstrate that using the SOA concept into the enterprise architecture (EA) framework makes the best of the synergy existing between these two approaches. Zhou et al. [6] explore service composition and service dependency and propose an extended dependency-aware SOA model. A loosely coupled service-oriented implementation is presented in [27]. The architecture takes advantage of Octave models in creating and using prediction models. In this framework, every method is applied as an Octave script in a plug-in fashion. Achbany et al. [28] present an allocation method of services to tasks, but the algorithm is not applied in realistic systems. In conclusion, since SOA has the ability for software across organizations and network boundaries to collaborate efficiently, it has been widely employed in aspects of research areas to facilitate researchers.Although there is a lot of research works related to SOA and malware detection for mobile devices platform, respectively, there is only a few studies on integrating SOA and malware detection in order to construct a service-oriented abnormal behavior analysis framework.To utilize SOA architecture’s benefits, this paper presents a services-oriented malware detection and authentication mechanisms at the server side. All clients send requests to the web servers at run time in order to obtain a list of malware behaviors. This paper is extended from previous work at [29]. 
The paper proposes a distributed malware detection framework with the following features:(1) a light-weight profiling and information collection application on mobile devices to record all the normal and irregular behaviors; (2) a malware detection and abnormal behavior mechanism as separate modules; (3) a set of system behavior analysis schemes which integrate CPU/memory utilization, battery endurance, and network traffic flow. ## 3. SOA Architecture Model The concept of this paper is to apply SOA concepts into malware detection framework design. By integrating remote irregular and behavior analysis, the aim is to design an integrated client-server application with abnormal behavior maintenance. The architecture framework for SmartMal is illustrated in Figure2. The system runs at client-server mode, of which the applications running on each client mobile device are in charge of keeping the record of the smartphones and collecting abnormal information. The selected abnormal information which is represented as vectors and extensible markup language (XML) messages are sent to the remote servers via general packet radio service (GPRS), 3G, or WiFi networks. Also the servers are distributed in one communication server and multiple malware detection servers. The communication server is responsible for exchanging messages with client users. The communication data are mainly through web services, in which data are packaged in certain data formats such as XML and JSON. After abnormal information is received, the communication server should forward the message to a specific server, in according to the client ID and system load balancing status. The detection algorithm running on the distributed servers will identify and return the results to major servers. The information will be stored in database and alerted to corresponding terminal devices when an attack or abnormal message occurs.Figure 2 Architecture framework for SmartMal.The SmartMal architecture provides a set of administrative control web pages. Once the data is updated, the new information will be pushed into communication servers and user clients simultaneously.To manage the massive mobile data, the architecture maintains aglobal status stable that records the current server traffic amount data. Each server has a uniquetag, while the major server has the smallesttag. The major server is elected from all the candidate servers, under the election algorithm illustrated in Algorithm 1. Denoted S indicates all the candidate server set, and the size of the set is n, represented as tag j, j ∈ [ 1 , n ].Algorithm 1:Algorithm to elect the major server. INPUT: Server Set S and tag j , j ∈ [ 1 , n ]. OUTPUT: Major Server ID (1)for each server tag i in S (2) send request witht a g s to all other servers (3) receive requests from all other servers (4)if tag i < Min ⁡ { tag j , j ∈ [ 1 , n ] , j ≠ i } then (5)  set servertag i as the main server (6)else then (7)  set serverMin ⁡ { tag j , j ∈ [ 1 , n ] , j ≠ i } as main server (8)end if (9)end for ### 3.1. Client Architecture for Mobile Devices The Client’s main function is to extract abnormal features as follows.( 1) Feature Extractor. This is the main module of the client. All the features are extracted through APIs provided by the Android application framework or information read from the Linux kernel. The collected features are clustered into three primary levels: Linux Kernel level (e.g., CPU, RAM, etc.), application level (e.g., messages, calling, etc.), and user behavior level. 
The user behavior level includes significant features that can reflect the user behavior, such as the screen on/off and the key pressed frequency. The feature extracting frequency is controlled by a timer whose value can be changed by user with the default value of 30 seconds. A total of 29 features are collected during every extracting, and the vector data structure is used to store features. As the data size of each transmission is very small (less than 200 bytes), compression mechanisms may not be able to achieve efficient performance.( 2) Communication Module. This module sends feature vectors to remote servers and receives anomaly alerts from the servers if the features are detected as anomaly. If the client is connected to the server for the first time, the communication module will request registration with the unique international mobile equipment identity (IMEI) of the smartphones.( 3) Graphical User Interface (GUI). This module provides the users with a mean to configure client parameters, such as the value of extractor timer and server IP address. ### 3.2. Anomaly Detection Server The anomaly detection server’s major task is to classify the feature vectors as normal or abnormal. The components include the following.( 1) Database. MySQL is used to store massive feature vectors with classValue (normal or anomaly). Database interfaces are provided for various operations. In the database, a total_table relation includes all vector information, and each detector corresponds to a detector_table, while the primary key is extract_time and phone_tag. All newly received feature vectors were stored in total_table. For each feature vector, it was assigned to the corresponding detection server according to the phone_tag and processing history. If the client was newly registered, the vector was assigned to the lowest load server.( 2) Detecting Module. This is the major module of the detection server, and complex detecting algorithms are implemented here. It consists of several detectors with a detector manager. Each detector is corresponding to a classification algorithm which distinguishes between normal and abnormal feature vectors, such as J48 Detector implemented with the C4.5 decision-tree algorithm. When new feature vectors come, each detector fetches the set of vectors from detector_table, builds the classifier (if it does not exist), and classifies the feature vectors. Then, the detector manager gives out the final results by integrating all detectors’ results and stores the results into total_table and detector_table. The detector manager can also configure the parameters of detectors.( 3) Communication Module. This module communicates with the client and deals with various requests and messages. The module passes received feature vectors to detecting module, and if the vector is detected as anomaly the module will send an anomaly alert to the clients.( 4) Client Manager. When a new client requests for registration, this module will register the client with the IMEI.( 5) GUI. The server’s GUI configures database and visualizes current detectors and connected clients. ### 3.3. Service-Oriented Hierarchical Model Figure3 presents the hierarchical model of distributed servers, which consist of three layers: services, service scheduler, and transmitter. The functionalities of each layer are introduced as follows.Figure 3 Hierarchical layer of servers.First of all, services provider provides service access points (SAPs) to clients. Each SAP is in charge of one specific kind of service. 
All the SAPs are provided with a data format packaging mechanism. When a request arrives, the SAP first decodes the target request and then identifies which service is requested. Then, the specified request will be sent to service scheduler.Second, services scheduler is in charge of service scheduling and mapping. Each Internet request may include several service requests. Therefore, if more than one servant is available, then each service request must be mapped and scheduled to a certain servant according to the system’s load balancing status.Finally, transmitter dispatches the subtasks to different servants for execution. After the task is completed, the results are collected by transmitter.With respect to the period-like features of 3G/GPRS/WiFi client modules, SmartMal server provides three services for demonstration: CPU/memory utilization, battery endurance, and network traffic flow. From the exploration of the state-of-the-art studies, it is quite common that the malware applications will either drag down the network flow performance, resulting in the congestion, or illegally waste the CPU/memory utilization, or the energy.The high level services are mapped to different servants. In order to provide a feasible system for services, at least one servant for each service is integrated into the system. Each service request is transmitted to a specific servant. All the servants are managed for efficiency use. The data transmission between servant and service layer is through communication interfaces and status checking interface.Status checking interface is responsible for providing synchronization information of diverse servants for services mapping and scheduling, such as load balancing and services bottleneck exploration.The physical layer consists of database and object-relation (O-R) mapping mechanisms. Generally, all the analyzed irregular behaviors are stored in databases, which may be located at distributed areas. Dealing with the current relation based database models, O-R mapping methods are widely employed for object-oriented abstractions, such as Hibernate and Toplink. Benefiting from these approaches, the high-level objects are mapped to relational databases. We hereby utilize TopLink for demonstration to map the database tuples to the standard C++ classes. ### 3.4. Remarked Features It is quite true that spelling out the requirements for a software system under development is not an easy task, and translating captured requirements into correct operational software can be even harder [5]. Many technologies (languages, modeling tools, and programming paradigms) and methodologies (agile, test-driven, and model-driven) are designed, among other things, to help address these challenges. One widely accepted practice is to formalize requirements by behavioral programming skills in the form of use cases and scenarios.However, in realizing abnormal detection based analysis method, not all the collected behaviors are reflecting abnormal message. Therefore, it is challenging to choose the required information among the extracted set of phases or behaviors. It has been proved that the detection efficiency can be significantly improved by the refinements of degrading the dimensions and eliminating the superfluous information [11].In this paper, we model the weight for each feature as a synthesized combination of subjective and objective weight to identify the behavior and characteristics. 
The weight of the synthesized weight is represented as follows:(1) w i = u w s i + ( 1 - u ) w o i .For each featurei, w i represents the synthesized weight, w s i denotes the subjective weight, w o i refers to objective weight, and u indicates the proportion of subjective weight. The major contribution of this method is to introduce the subjective weight that can leverage the default analyzed results obtained from the behaviors. We use an analytic hierarchy process (AHP) algorithm to construct a three-layer model to divide the complex strategic decision problem into different subjects aiming at multiple targets. For each subject, a fuzzy quantitative approach is employed to calculate the weight for each feature and then merge it hierarchically.Figure4 illustrates the hierarchical model for the remarked features that is composed of three layers. The top layer denotes the final target of behavioral analysis to identify the optimal features. In the middle layer, three classifications are listed according to different abnormal behaviors such as DoS attack malware, user information stealing malware, and irregular software/hardware resource consuming malware. Finally, all the behaviors on operating system application interfaces are reflecting the three abnormal behaviors in bottom level.Figure 4 Hierarchical layers for the remarked features.In the hierarchical analysis method, the relative weighta i j represents the correlation between the ith element and the jth element. Assume that there are n × m elements in total, and then A = ( a i j ) n × n is denoted as the correlation matrix. For the elements in the matrix, we have a j i = 1 / a i j, a i j = a i j · a i j and a i i = 1. The values and representation of different parameters are described in Table 1.Table 1 Configurations in the correlation matrix. Value Representation a i j = 1 ith element and jth element have the same effect a i j = 3 ith element is a little more important than jth element a i j = 5 ith element is important than jth element a i j = 7 ith element is much more important than jth element a i j = 9 ith element is extremely more important than jth element a i j = 2 n Superior ofith than jth element between 2 n - 1 and 2 n + 1Then, we normalize the matrixA to matrix Q: (2) Q = ( q i j ) n × n d d , q i j = a i j ∑ k = 0 n a k j .Add the elements in matrixQ by rows to get the vector α: (3) α = ( α 1 , α 2 , … , α n ) T in which α i = ∑ j = 1 n q i j .The vectorα is normalized to the weight vector W: (4) W = ( w 1 , w 1 , … , w n ) T in which w i = α i ∑ j = 1 n α j .AfterW has been calculated, we need to maintain the consistency with respect to the subjective perceptive. Consistency Index (CI) is utilized as the evaluation metrics: (5) CI = λ max ⁡ - n n - 1 .In (5), λ max ⁡ refers to the peak value of the feature, which is derived in the following equation: (6) λ max ⁡ = 1 n ∑ i = 1 n ( A W ) i w i .Moreover, we also utilize consistency rate (CR) as to characterize and to model the proportion of the consistencyCR = CI / RI. RI refers to random index that is the maximum value that a i j is selected completely at random. Obviously, the value of CR depends on the order of the matrix n. The consistency is accepted only if CR < 0.1, and otherwise the correlation matrix should be leveraged until the condition is met.After the weight for each target has been calculated, they could be moved forward to the next step, where different weight vectors are systematized as a combination. 
Specially, the steps to combine all the vectors are as follows.(1) Calculate the importance for every level to the top level. This calculation process is carried out from top to bottom. (2) Assume that there aren k - 1 elements resided in the ( k - 1 )th level and the weight vector is calculated as (7) W ( k - 1 ) = ( w 1 ( k - 1 ) , w 2 ( k - 1 ) , … , w n ( k - 1 ) ) T . (3) Assume that there aren k elements resided in the kth level and the weight vector of the impact on ( k - 1 )th level refers to (8) p ( k ) = ( w 1 j ( k ) , w 2 j ( k ) , … , w n j ( k ) ) T . If the ith element is independent with the jth element, then w i j = 0. (4) From (2) and (3), we can get that the weight vector of the impact on kth level is (9) W ( k ) = ( p 1 ( k ) , p 2 ( k ) , … , p n ( k ) ) W ( k - 1 ) . After both the subjective and objective weights are evaluated, the proportion coefficient u can be calculated in step 1. ## 3.1. Client Architecture for Mobile Devices The Client’s main function is to extract abnormal features as follows.( 1) Feature Extractor. This is the main module of the client. All the features are extracted through APIs provided by the Android application framework or information read from the Linux kernel. The collected features are clustered into three primary levels: Linux Kernel level (e.g., CPU, RAM, etc.), application level (e.g., messages, calling, etc.), and user behavior level. The user behavior level includes significant features that can reflect the user behavior, such as the screen on/off and the key pressed frequency. The feature extracting frequency is controlled by a timer whose value can be changed by user with the default value of 30 seconds. A total of 29 features are collected during every extracting, and the vector data structure is used to store features. As the data size of each transmission is very small (less than 200 bytes), compression mechanisms may not be able to achieve efficient performance.( 2) Communication Module. This module sends feature vectors to remote servers and receives anomaly alerts from the servers if the features are detected as anomaly. If the client is connected to the server for the first time, the communication module will request registration with the unique international mobile equipment identity (IMEI) of the smartphones.( 3) Graphical User Interface (GUI). This module provides the users with a mean to configure client parameters, such as the value of extractor timer and server IP address. ## 3.2. Anomaly Detection Server The anomaly detection server’s major task is to classify the feature vectors as normal or abnormal. The components include the following.( 1) Database. MySQL is used to store massive feature vectors with classValue (normal or anomaly). Database interfaces are provided for various operations. In the database, a total_table relation includes all vector information, and each detector corresponds to a detector_table, while the primary key is extract_time and phone_tag. All newly received feature vectors were stored in total_table. For each feature vector, it was assigned to the corresponding detection server according to the phone_tag and processing history. If the client was newly registered, the vector was assigned to the lowest load server.( 2) Detecting Module. This is the major module of the detection server, and complex detecting algorithms are implemented here. It consists of several detectors with a detector manager. 
Each detector is corresponding to a classification algorithm which distinguishes between normal and abnormal feature vectors, such as J48 Detector implemented with the C4.5 decision-tree algorithm. When new feature vectors come, each detector fetches the set of vectors from detector_table, builds the classifier (if it does not exist), and classifies the feature vectors. Then, the detector manager gives out the final results by integrating all detectors’ results and stores the results into total_table and detector_table. The detector manager can also configure the parameters of detectors.( 3) Communication Module. This module communicates with the client and deals with various requests and messages. The module passes received feature vectors to detecting module, and if the vector is detected as anomaly the module will send an anomaly alert to the clients.( 4) Client Manager. When a new client requests for registration, this module will register the client with the IMEI.( 5) GUI. The server’s GUI configures database and visualizes current detectors and connected clients. ## 3.3. Service-Oriented Hierarchical Model Figure3 presents the hierarchical model of distributed servers, which consist of three layers: services, service scheduler, and transmitter. The functionalities of each layer are introduced as follows.Figure 3 Hierarchical layer of servers.First of all, services provider provides service access points (SAPs) to clients. Each SAP is in charge of one specific kind of service. All the SAPs are provided with a data format packaging mechanism. When a request arrives, the SAP first decodes the target request and then identifies which service is requested. Then, the specified request will be sent to service scheduler.Second, services scheduler is in charge of service scheduling and mapping. Each Internet request may include several service requests. Therefore, if more than one servant is available, then each service request must be mapped and scheduled to a certain servant according to the system’s load balancing status.Finally, transmitter dispatches the subtasks to different servants for execution. After the task is completed, the results are collected by transmitter.With respect to the period-like features of 3G/GPRS/WiFi client modules, SmartMal server provides three services for demonstration: CPU/memory utilization, battery endurance, and network traffic flow. From the exploration of the state-of-the-art studies, it is quite common that the malware applications will either drag down the network flow performance, resulting in the congestion, or illegally waste the CPU/memory utilization, or the energy.The high level services are mapped to different servants. In order to provide a feasible system for services, at least one servant for each service is integrated into the system. Each service request is transmitted to a specific servant. All the servants are managed for efficiency use. The data transmission between servant and service layer is through communication interfaces and status checking interface.Status checking interface is responsible for providing synchronization information of diverse servants for services mapping and scheduling, such as load balancing and services bottleneck exploration.The physical layer consists of database and object-relation (O-R) mapping mechanisms. Generally, all the analyzed irregular behaviors are stored in databases, which may be located at distributed areas. 
Dealing with the current relation based database models, O-R mapping methods are widely employed for object-oriented abstractions, such as Hibernate and Toplink. Benefiting from these approaches, the high-level objects are mapped to relational databases. We hereby utilize TopLink for demonstration to map the database tuples to the standard C++ classes. ## 3.4. Remarked Features It is quite true that spelling out the requirements for a software system under development is not an easy task, and translating captured requirements into correct operational software can be even harder [5]. Many technologies (languages, modeling tools, and programming paradigms) and methodologies (agile, test-driven, and model-driven) are designed, among other things, to help address these challenges. One widely accepted practice is to formalize requirements by behavioral programming skills in the form of use cases and scenarios.However, in realizing abnormal detection based analysis method, not all the collected behaviors are reflecting abnormal message. Therefore, it is challenging to choose the required information among the extracted set of phases or behaviors. It has been proved that the detection efficiency can be significantly improved by the refinements of degrading the dimensions and eliminating the superfluous information [11].In this paper, we model the weight for each feature as a synthesized combination of subjective and objective weight to identify the behavior and characteristics. The weight of the synthesized weight is represented as follows:(1) w i = u w s i + ( 1 - u ) w o i .For each featurei, w i represents the synthesized weight, w s i denotes the subjective weight, w o i refers to objective weight, and u indicates the proportion of subjective weight. The major contribution of this method is to introduce the subjective weight that can leverage the default analyzed results obtained from the behaviors. We use an analytic hierarchy process (AHP) algorithm to construct a three-layer model to divide the complex strategic decision problem into different subjects aiming at multiple targets. For each subject, a fuzzy quantitative approach is employed to calculate the weight for each feature and then merge it hierarchically.Figure4 illustrates the hierarchical model for the remarked features that is composed of three layers. The top layer denotes the final target of behavioral analysis to identify the optimal features. In the middle layer, three classifications are listed according to different abnormal behaviors such as DoS attack malware, user information stealing malware, and irregular software/hardware resource consuming malware. Finally, all the behaviors on operating system application interfaces are reflecting the three abnormal behaviors in bottom level.Figure 4 Hierarchical layers for the remarked features.In the hierarchical analysis method, the relative weighta i j represents the correlation between the ith element and the jth element. Assume that there are n × m elements in total, and then A = ( a i j ) n × n is denoted as the correlation matrix. For the elements in the matrix, we have a j i = 1 / a i j, a i j = a i j · a i j and a i i = 1. The values and representation of different parameters are described in Table 1.Table 1 Configurations in the correlation matrix. 
Value Representation a i j = 1 ith element and jth element have the same effect a i j = 3 ith element is a little more important than jth element a i j = 5 ith element is important than jth element a i j = 7 ith element is much more important than jth element a i j = 9 ith element is extremely more important than jth element a i j = 2 n Superior ofith than jth element between 2 n - 1 and 2 n + 1Then, we normalize the matrixA to matrix Q: (2) Q = ( q i j ) n × n d d , q i j = a i j ∑ k = 0 n a k j .Add the elements in matrixQ by rows to get the vector α: (3) α = ( α 1 , α 2 , … , α n ) T in which α i = ∑ j = 1 n q i j .The vectorα is normalized to the weight vector W: (4) W = ( w 1 , w 1 , … , w n ) T in which w i = α i ∑ j = 1 n α j .AfterW has been calculated, we need to maintain the consistency with respect to the subjective perceptive. Consistency Index (CI) is utilized as the evaluation metrics: (5) CI = λ max ⁡ - n n - 1 .In (5), λ max ⁡ refers to the peak value of the feature, which is derived in the following equation: (6) λ max ⁡ = 1 n ∑ i = 1 n ( A W ) i w i .Moreover, we also utilize consistency rate (CR) as to characterize and to model the proportion of the consistencyCR = CI / RI. RI refers to random index that is the maximum value that a i j is selected completely at random. Obviously, the value of CR depends on the order of the matrix n. The consistency is accepted only if CR < 0.1, and otherwise the correlation matrix should be leveraged until the condition is met.After the weight for each target has been calculated, they could be moved forward to the next step, where different weight vectors are systematized as a combination. Specially, the steps to combine all the vectors are as follows.(1) Calculate the importance for every level to the top level. This calculation process is carried out from top to bottom. (2) Assume that there aren k - 1 elements resided in the ( k - 1 )th level and the weight vector is calculated as (7) W ( k - 1 ) = ( w 1 ( k - 1 ) , w 2 ( k - 1 ) , … , w n ( k - 1 ) ) T . (3) Assume that there aren k elements resided in the kth level and the weight vector of the impact on ( k - 1 )th level refers to (8) p ( k ) = ( w 1 j ( k ) , w 2 j ( k ) , … , w n j ( k ) ) T . If the ith element is independent with the jth element, then w i j = 0. (4) From (2) and (3), we can get that the weight vector of the impact on kth level is (9) W ( k ) = ( p 1 ( k ) , p 2 ( k ) , … , p n ( k ) ) W ( k - 1 ) . After both the subjective and objective weights are evaluated, the proportion coefficient u can be calculated in step 1. ## 4. Behavior Analysis To demonstrate the effectiveness of the SmartMal architecture, we have implemented a prototype application for both behavioral analysis and abnormal malware attack detection. Due to the abnormal malware detection for mobile devices, there is no acknowledged data set. For behavioral analysis, it is extremely important to select a fair and reasonable benchmark set for behavior analysis. We have selected 32 most highly ranked applications in the Android market, including 11 game applications and 21 software tools, presented in Table2. All the software programs are installed in three mobile devices for malware detection (1 Moto Me722 handset and 2 Samsung S5830 handsets).Table 2 Applications for profiling. 
Game applications Fruit Ninja Angry Birds Can Knockdown X Construction Cut String Gold Miner Bubble Ball Shift Flight Chess Sudoku Talking Tom Software tool applications Office Suite 360 Guard Root Explorer King Soft iReader PowerAMP Mobo Player UCWeb Fetion Task Manager MSN Google Map Google Music King Reader Mobile TV Storm Player Tencent QQ Shang Mail Sina Weibo RenRen Adobe ReaderAll the 32 applications are installed and run on the smartphones for at least one hour, during which the malware detection engines are running at back stage. The back stage engine is configured to sample the mobile device application running status every 30 seconds. After the execution is finished, there should be up to 120 items of the characterized features. For some special purposed behavior, such as battery consumption, text messages, and incoming/outgoing phone calls, it is not fair to gather the fingerprints every 30 seconds. In these application-specific scenarios, we use an accumulative value for the recent 9 intervals, each with 30 seconds. Finally, all these items are marked with normal behaviors for further comparison and detection schemes.Alternatively, these three Android smartphones are delivered to three persons for regular daily use. Meanwhile, the back stage engine keeps on tracking the featured behavior. This period has lasted for more than 90 days.In this paper, we focus on the CPU/memory utilization rate, battery consumption, and network traffic flow to analyze the workload behavior. The experimental results are illustrated in Figure5.Figure 5 CPU/memory utilization. ### 4.1. CPU/Memory Utilization To evaluate the CPU and memory utilization, we usetop command in Linux operating system to observe the CPU and memory utilization of the client. Meanwhile, we have also compared the statistics for typical software applications including IM software MSN, media player PowerAMP, and UCWeb, which is illustrated in Table 3. Taking CPU average utilization rate into consideration, the profiling account only occupies less than 1% of CPU, which is ignorable. Alternatively, as our profiling application explores the feature every 30 seconds, thereby the peak utilization achieves 20%–24%, but the duration of the sample is too short to be noticed. Therefore, it does not cause any influence for user experiences. Finally, for the sake of the memory utilization, our profiling application takes up to 26 MB, which is smaller than MSN and PowerAMP applications, and, consequently, the overhead is affordable for less than 5%. Note that a general smartphone can integrate more than 512 MB internal memory.Table 3 Battery endurance statistics. 100% to 15% 100% to 5% Profiling offline 525 minutes 590 minutes Profiling online 480 minutes 540 minutes ### 4.2. Battery Endurance One of the most serious challenges for smartphones is the power consumption and battery endurance. To analyze the behavior of how our profiling application has an impact on the battery endurance, we have evaluated the endurance trial on both scenarios, profiling online and offline, described in Table3. On one hand, when the client application is offline, it takes approximately 525 minutes from the full battery charge 100% to 15%, while it takes 590 minutes when only 5% battery is left. On the other hand, with respect to the online profiling application, the duration only lasts for 480 minutes from 100% to 15%, while it lasts for 540 minutes from 100% to 5%. ### 4.3. 
Network Traffic Flow Due to the fact that our profiling application requires Internet to transfer extracted behaviors to the major server, consequently, we also need to analyze the behavior for the network traffic flow to evaluate how our approach has an impact on the Internet utilization. We employ TrafficStats toolset provided in Android operating system to monitor the traffic flow for both single upload transaction and batch uploading process. For the sake of single uploading procedure, the packaged message should be delivered once the abnormal behavior is detected, and the uploading flow speed is 9 Kb/10 min, while the downloading speed is 4 Kb/10 min. With respect to the batch procedure, up to 10 featured messages will be batched together into a package for uploading operation. However, from the experiment, we can get a result which is exactly the same as the single uploading transaction; that is, the uploading flow speed is 9 Kb/10 min, while the downloading speed is 4 Kb/10 min. Due to the small volume for each packet, multiple messages can reside in the same package to reach a minimum size for one pack. Considering the 3G and WiFi network bandwidth, the 13 Kb/10 min network traffic flow is acceptable. ## 4.1. CPU/Memory Utilization To evaluate the CPU and memory utilization, we usetop command in Linux operating system to observe the CPU and memory utilization of the client. Meanwhile, we have also compared the statistics for typical software applications including IM software MSN, media player PowerAMP, and UCWeb, which is illustrated in Table 3. Taking CPU average utilization rate into consideration, the profiling account only occupies less than 1% of CPU, which is ignorable. Alternatively, as our profiling application explores the feature every 30 seconds, thereby the peak utilization achieves 20%–24%, but the duration of the sample is too short to be noticed. Therefore, it does not cause any influence for user experiences. Finally, for the sake of the memory utilization, our profiling application takes up to 26 MB, which is smaller than MSN and PowerAMP applications, and, consequently, the overhead is affordable for less than 5%. Note that a general smartphone can integrate more than 512 MB internal memory.Table 3 Battery endurance statistics. 100% to 15% 100% to 5% Profiling offline 525 minutes 590 minutes Profiling online 480 minutes 540 minutes ## 4.2. Battery Endurance One of the most serious challenges for smartphones is the power consumption and battery endurance. To analyze the behavior of how our profiling application has an impact on the battery endurance, we have evaluated the endurance trial on both scenarios, profiling online and offline, described in Table3. On one hand, when the client application is offline, it takes approximately 525 minutes from the full battery charge 100% to 15%, while it takes 590 minutes when only 5% battery is left. On the other hand, with respect to the online profiling application, the duration only lasts for 480 minutes from 100% to 15%, while it lasts for 540 minutes from 100% to 5%. ## 4.3. Network Traffic Flow Due to the fact that our profiling application requires Internet to transfer extracted behaviors to the major server, consequently, we also need to analyze the behavior for the network traffic flow to evaluate how our approach has an impact on the Internet utilization. We employ TrafficStats toolset provided in Android operating system to monitor the traffic flow for both single upload transaction and batch uploading process. 
## 5. Malware Detection: A DoS Attack Case

In this section, we introduce abnormal malware detection in a real DoS attack case. Network-flow statistics of mobile devices are frequently periodic; for example, TCP SYN packets and RAB establishment/release procedures can repeat in a period-like manner. The mathematical notation used in malware detection is defined in Table 4.

Table 4: Notation and terms used in the DoS attack case.

| Symbol | Meaning |
|--------|---------|
| τ | Statistics interval |
| v_i^τ(k) | Behavior of the i-th device in interval k |
| N^τ(k) | Total number of devices in interval k |
| v^τ(k) | {v_i^τ(k), i = 1, 2, …, N^τ(k)}, the behavior vector in interval k |
| X^τ(k) | Probability distribution of the normalized v^τ(k) |
| L(k_1, k_2) | Similarity of X^τ(k_1) and X^τ(k_2) |
| W_0(k) | Observation window for X(k) |
| W_1(k) | Sampling observation window for X(k) |
| D_I(k) | Internal distance for the probability correlation of X(k) |
| D_E(k) | External distance for the probability correlation of X(k) |
| M | Set of abnormal probability distributions |

### 5.1. Detection Algorithm

Generally speaking, τ determines the aggregation degree of the accumulated data, and distinct DoS attack behaviors on mobile networks show differently aggressive degrees of abnormal data. We set the minimum interval τ_0 = 30 seconds; coarser-grained statistics, such as over 1 minute, 5 minutes, 10 minutes, or 30 minutes, are accumulated from this base interval.

Given two time slices with the same feature and interval, their probability distributions are normalized to X^τ(k_1) and X^τ(k_2), and we define L(k_1, k_2) as the distance between the two distributions (see Section 5.2). The observation windows are then sampled in two phases.

First, for a given interval k and observation window W_0(k), we keep the time slices whose total number of registered mobile devices is close to that of k, using a threshold S chosen experimentally between 5% and 15%:

(10) W_0′(k) = { k_i ∈ W_0(k) : |N(k) − N(k_i)| / N(k) ≤ S }.

Second, we choose from W_0′(k) the probability distributions X^τ(k_i) most similar to X(k) to form the sampling observation window W_1(k).

Once W_1(k) has been identified, the internal distance D_I(k) within W_1(k) and the external distance D_E(k) between X(k) and W_1(k) are calculated as

(11) D_I(k) = { L(k_i, k_j) : k_i, k_j ∈ W_1(k), k_i ≠ k_j },
(12) D_E(k) = { L(k, k_i) : k_i ∈ W_1(k) }.

The processing flow is summarized in Algorithm 2 below.
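Before the pseudocode, here is a minimal Python sketch of how (10)–(12) fit together. The distance function `L` is assumed to be the one defined in (14) below; the text does not spell out how the distance *sets* D_E(k) and D_I(k) are compared in Algorithm 2, so comparing their means is our own reading, not the authors' stated rule.

```python
def restrict_window(k, W0, N, S=0.10):
    """Eq. (10): keep slices whose device count is within S of N(k);
    S is the experimental threshold (5%-15%)."""
    return [ki for ki in W0 if abs(N[k] - N[ki]) / N[k] <= S]

def sample_window(k, W0_prime, X, L, size):
    """Phase two: pick the `size` slices whose distributions are
    closest to X(k)."""
    return sorted(W0_prime, key=lambda ki: L(X[k], X[ki]))[:size]

def internal_distance(W1, X, L):
    """Eq. (11): pairwise distances inside the sampling window
    (L is symmetric, so each unordered pair is taken once)."""
    return [L(X[ki], X[kj])
            for i, ki in enumerate(W1) for kj in W1[i + 1:]]

def external_distance(k, W1, X, L):
    """Eq. (12): distances between X(k) and the window members."""
    return [L(X[k], X[ki]) for ki in W1]

def is_anomalous(k, W1, X, L):
    """Step (5) of Algorithm 2, reading D_E(k) > D_I(k) as a
    comparison of mean distances (our interpretation)."""
    d_i = internal_distance(W1, X, L)
    d_e = external_distance(k, W1, X, L)
    return sum(d_e) / len(d_e) > sum(d_i) / len(d_i)
```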
Algorithm 2: Pseudocode of abnormal behavior analysis for mobile networks.

```
INITIATION: set values of τ, N, M
INPUT:  V(k), τ, N, M
OUTPUT: X(k)
Begin
  (1) Obtain X(k) and N(k) from V(k)
  (2) Define the observation window W_0(k)
  (3) Select W_1(k) from W_0(k) \ M by sampling
  (4) Calculate the distances D_I(k) and D_E(k)
  (5) If D_E(k) > D_I(k)
  (6)   X(k) is detected as an anomaly
  (7)   Set M = M ∪ {k}
  (8) Else X(k) is normal
  (9) Increase k by one and go back to step (1)
End
```

### 5.2. Similarity Evaluation

To decide whether a behavior is a normal operation or a malware attack, we use a Kullback-Leibler (KL) divergence based approach. Let p and q be the probability distributions of two data sets; the KL divergence measures the relative entropy between them:

(13) D(p ∥ q) = E[log(p(ω)/q(ω))] = Σ_{ω∈Ω} p(ω) log(p(ω)/q(ω)),

with the conventions 0 · log(0/q) = 0 and p · log(p/0) = ∞. The KL divergence is 0 only when p equals q. Since the KL divergence is not a metric, we propose a revised distance measure:

(14) L(p, q) = (1/2) (D(p ∥ q)/H_p + D(q ∥ p)/H_q),

where D(p ∥ q) and D(q ∥ p) are the KL divergences and H_p and H_q are the entropies of p and q, respectively. The entropy is calculated as

(15) H(X) = H(P_1, P_2, …, P_n) = −Σ_i P(x_i) log P(x_i),

where X is the probability distribution and P(x_i) is the probability that the source emits the ith signal, with Σ_i P(x_i) = 1. Normalizing each divergence by the corresponding entropy accounts for the differing information content of p and q, and averaging the two normalized divergences makes the final distance L symmetric in p and q.

Since the two distributions may have different dimensions, the degenerate case p log(p/0) = ∞ can occur. To avoid it, we take the maximum observed number of mobile devices as the uniform dimension and substitute a small value ε for the missing (zero) entries of the shorter distribution, so that p log(p/0) = ∞ never arises. In this paper, we set ε = 10⁻¹⁰.
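A direct Python rendering of (13)–(15), including the ε-smoothing and dimension padding just described, might look as follows (a minimal sketch, not the authors' code; natural logarithms are assumed):

```python
import math

EPS = 1e-10  # the paper's epsilon, substituted for zero probabilities

def _smooth(p, dim):
    """Pad p to the common dimension and replace zeros with EPS (Sec. 5.2)."""
    padded = list(p) + [0.0] * (dim - len(p))
    return [x if x > 0 else EPS for x in padded]

def kl(p, q):
    """Eq. (13): D(p || q), with the 0*log(0/q) = 0 convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    """Eq. (15): H(p) = -sum_i p_i log p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def distance(p, q):
    """Eq. (14): symmetrized, entropy-normalized KL distance L(p, q)."""
    dim = max(len(p), len(q))
    p, q = _smooth(p, dim), _smooth(q, dim)
    return 0.5 * (kl(p, q) / entropy(p) + kl(q, p) / entropy(q))
```

By construction, `distance(p, p)` is 0 for any distribution p, and the value grows as the two distributions diverge.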
### 5.3. Experimental Results and Analysis

We set up a simulation platform to verify DoS behavior detection on periodic probability distributions. The NET_SEND behavior serves as the TCP SYN simulation, and the back-end servers inject abnormal malware messages emulating a SYN flooding attack. To simulate a relatively large-scale experimental platform, every 5 minutes we combine the messages from the 3 smartphones into a chain of length 10 carrying a 30-dimensional NET_SEND vector, which is then normalized into the probability distribution of TCP SYN behaviors.

We ran the applications on the smartphones continuously for two months. Since one probability-distribution vector covers the information collected over 5 minutes, we obtained 2 × 30 × 24 × 12 = 17,280 normalized vectors in total.

Figure 6 illustrates the detection accuracy for DoS-attacking malware. The X-axis is the number of attacking smartphones, and the Y-axis is the detection accuracy for malware attacks, denoted as the true positive rate (TPR). The TPR increases with the number of attacking smartphones: with only one device the TPR is just 10%, meaning more than 90% of attacks go undetected; with 5 devices the TPR grows to 50%; and with more than 10 devices in total, the TPR stabilizes as high as 99.1%. Our approach thus detects the abnormal behaviors and malware attacks both efficiently and accurately.

Figure 6: Detection accuracy for malware applications under different numbers of attacking devices.
## 6. Conclusions

This paper proposed a service-oriented malware detection framework, SmartMal, which is the first work to combine SOA concepts with state-of-the-art behavior-based malware detection methodologies. By applying SOA to the framework, irregular behaviors can be processed on parallel servers instead of locally on the device. Through this distributed analysis of irregular behaviors, SmartMal greatly reduces the clients' computational burden while retaining flexibility and modularity.

Moreover, as a test case, we proposed a randomization method to defend against signaling DoS attacks on 3G cellular networks from the system-design perspective. By drawing the parameter that is crucial to attack efficiency from a random distribution, we make the parameter harder to measure; at best, an attacker's measurement captures the maximum of the random values.
Consequently, the cost of launching an attack increases enormously. Our simulation of a signaling attack via RAB establishment/release shows that the randomization method detects as much as 99.1% of the malware and irregular behaviors; it is simple and effective against this kind of signaling attack, including paging attacks.

The initial results are promising, but much work remains. Future work includes extending the server side to distributed cloud systems to achieve high throughput and service integration. We also plan to integrate design-space exploration into the malware detection methods and behavior analysis to improve accuracy and flexibility.
# The Hen or the Egg: Inflammatory Aspects of Murine MPN Models

**Authors:** Jonas S. Jutzi; Heike L. Pahl

**Journal:** Mediators of Inflammation (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/101987

---

## Abstract

It has been known for some time that solid tumors, especially gastrointestinal tumors, can arise on the basis of chronic inflammation. However, the role of inflammation in the genesis of hematological malignancies has not been extensively studied. Recent evidence clearly shows that changes in the bone marrow niche can suffice to induce myeloid diseases. Nonetheless, while it has been demonstrated that myeloproliferative neoplasms (MPN) are associated with a proinflammatory state, it is not clear whether inflammatory processes contribute to the induction or maintenance of MPN. More provocatively stated: which comes first, the hen or the egg, inflammation or MPN? In other words, can chronic inflammation itself trigger an MPN? In this review, we will describe the evidence supporting a role for inflammation in initiating and promoting MPN development. Furthermore, we will compare and contrast the data obtained in gastrointestinal tumors with observations in MPN patients and models, pointing out the opportunities provided by novel murine MPN models to address fundamental questions regarding the role of inflammatory stimuli in the molecular pathogenesis of MPN.

---

## Body

## 1. Introduction

“Dass Carcinome nicht selten auf einfach entzündliche Reize, wie Traumen, entstehen, ist bekannt” (that carcinomas arise, not seldom, at the site of inflammatory stimuli, such as traumas, is known) wrote Virchow in 1869 [1]. This far-sighted statement, worded as a fact rather than a hypothesis, was validated almost 150 years later when Hanahan and Weinberg named “inflammation” as an underlying principle that contributes to and fosters the newly named “hallmarks of cancer” [2].

## 2. Inflammatory Etiology of Solid Tumors

In the interval between these two pivotal publications, a large collection of data was accrued that supports the postulated role for inflammation in carcinogenesis. It is now known that solid tumors can arise on the basis of chronic inflammation, most notably Gastrointestinal Stromal Tumor (GIST) following Helicobacter pylori infection. Additional examples include enteropathy-associated T cell lymphoma and adenocarcinomas in patients with coeliac disease as well as the increased risk of colorectal carcinoma in patients with inflammatory bowel disease [3, 4].

The model for neoplastic transformation in these disorders implies a multistep process (Figure 1). Initially, chronic inflammation causes epithelial cells as well as stromal macrophages to release cytokines and other stimulatory molecules that promote proliferation of surrounding cells, for example, the interstitial cells of Cajal in the stomach during active H. pylori infection [5]. In a second series of steps, enhanced proliferation increases the chance of stochastic mutations, leading first to hyperplasia and subsequently, with the accumulation of additional aberrations, to neoplasia. While this model has been validated experimentally for several solid tumor entities, the role of inflammation in the genesis of hematological malignancies has not been extensively studied.

Figure 1: Multistep process for inflammation-driven neoplastic transformation.
Stress, induced by various intrinsic and extrinsic factors, causes epithelial cells as well as stromal macrophages to release cytokines and other proliferation-promoting molecules, which lead to enhanced proliferation of surrounding cells. In a second step, enhanced proliferation increases the chance of stochastic mutations, leading first to hyperplasia and subsequently, with the accumulation of additional aberrations, to neoplasia.

## 3. Cell Extrinsic Influences on the Development of Myeloid Malignancies

The microenvironment and stromal tissue that surround solid tumors can be seen as analogous in function and in cell-cell interactions to the bone marrow niche cells that surround hematopoietic stem cells. During the past years, several observations have strengthened the hypothesis that the bone marrow niche can contribute to the development of myeloid malignancies. In one seminal study, Raaijmakers and colleagues demonstrated that altering gene expression by deletion of Dicer1 specifically in osteoprogenitor cells, but not in the bone marrow, led first to the development of myelodysplasia and, subsequently, to the emergence of acute myeloid leukemia [6]. Leukemia arose in hematopoietic cells that expressed Dicer1 but had acquired other genetic abnormalities. Importantly, transplantation of BM from anemic, thrombocytopenic mice, in which Dicer1 had been deleted in the osteoprogenitors, into lethally irradiated wild-type recipient mice led to complete resolution of the cytopenias, demonstrating that they were niche-induced and not attributable to cell autonomous changes in the hematopoietic stem cells themselves [6]. Conversely, transplanting wild-type bone marrow cells into mice which carried the Dicer1 deletion in osteoprogenitors resulted in an MDS phenotype and induction of AML. These data clearly demonstrate that changes in the bone marrow niche can be sufficient to induce leukemia. Interestingly, deleting Dicer1 in mature osteoblasts did not induce either MDS or leukemia, demonstrating that very specific alterations in the bone marrow are required for niche-induced oncogenesis. The precise nature of these changes is currently being investigated, and it is not known whether inflammatory mechanisms contribute to leukemia induction in this model.

## 4. Association of MPN with Inflammatory and Autoimmune Diseases

While the data by Raaijmakers and colleagues thus constitute a proof of principle that leukemia can be induced by changes in the bone marrow microenvironment, the question remains whether inflammatory processes in particular contribute to the induction or maintenance of myeloid malignancies, specifically to myeloproliferative neoplasms (MPN). Several studies have recently suggested an inflammatory etiology for MDS, AML, and MPN [7–10], most notably a large epidemiological study in Sweden, which demonstrated a significantly increased risk of AML or MDS in patients with a history of any infectious disease [9]. Esplin et al. have shown that continuous TLR activation by chronic exposure to lipopolysaccharides (LPS) alters the self-renewal capacity of HSCs in mice. Prolonged TLR activation occurs in various bacterial infections, for example, during oral infections such as Gram-negative periodontitis and during subacute bacterial endocarditis [11]. In their mice, Esplin and colleagues were able to show a myeloid bias and, conversely, a selective loss of lymphopoietic potential, as well as an increased proportion of CD150hi CD48− long-term HSCs [12].
The emergence of a myeloid bias has also been witnessed during normal aging of HSCs [13–15]. Signer et al. point out that the risk of developing myeloid and lymphoid leukemias increases with age [16]. It seems likely that HSCs acquire random genetic hits either under chronic TLR activation induced by LPS or during normal aging. These parallels strengthen the hypothesis of inflammation-driven myeloid malignancies, in some cases perhaps induced directly by an infectious cause.

While inflammatory processes involve various factors, including cytokines, reactive oxygen species, and immune cells like macrophages, autoimmune phenomena are characterized by activation of T and B cells, including the production of autoantibodies. Autoimmune diseases thus mainly involve altered T and B cell function but might share aspects of inflammatory processes resulting from altered cytokine release, such as increased IL-6 levels [17].

MPN patients with an antecedent autoimmune disorder carried a 1.7- and 2.1-fold increased risk of developing AML or MDS, respectively [9, 18]. In particular, patients with MPN-associated myelofibrosis may show various autoimmune phenomena, including antibodies against red blood cells or anti-nuclear [9, 18] or anti-mitochondrial antibodies. To some extent, this might explain the pathogenesis of anemia and the accompanying compensatory reticulocytosis in this cohort of patients [19, 20]. The resulting increase in malignant and nonmalignant myeloproliferation in turn raises the risk of stochastic secondary (epi-)genetic hits and disease progression. However, neither the inflammatory nor the autoimmune hypothesis regarding MPN etiology has yet been directly confirmed by experimental studies.

## 5. The Inflammatory Hypothesis of MPN

MPN patients show elevated serum levels of various proinflammatory cytokines including IL-1, IL-6, IL-8, IL-11, IL-17, TNF-α, and TGF-β, as well as of the anti-inflammatory IL-10 [21–26]. Treatment with Ruxolitinib, a JAK1/JAK2 inhibitor, significantly decreased the level of circulating cytokines [27]. While these data demonstrate that MPN is accompanied by inflammatory changes, the causal order of events has not been determined. Does the malignant clone trigger an inflammatory response, or (and this would constitute a change in perspective) can chronic inflammation itself trigger an MPN? In the latter model, sustained low-level, probably subclinical, inflammation initially increases the proliferation of healthy, polyclonal hematopoietic stem and progenitor cells. Since each cell division carries the risk of acquiring a mutation, a malignant MPN clone arises and evolves on the basis of chronic, inflammation-induced proliferation.

Is there evidence supporting such a change in perspective, or can it be procured using recently established, novel murine MPN models?

## 6. Murine Models to Test the Inflammatory Hypothesis of MPN

The field of gastrointestinal tumors has made use of sophisticated mouse models to detail the role of inflammation in the initiation and promotion of carcinomas. Multiple tissue-specific knockout and transgenic lines have been generated to study the underlying molecular mechanisms and signal transduction pathways [34]. During the past five years, various mouse models with a myeloproliferative neoplasm- (MPN-) like phenotype have also been reported [32, 33, 45–52]. In this review, we will describe the evidence supporting a role for inflammation in initiating and promoting MPN development.
Furthermore, we will compare and contrast the data from GI tumors with observations in MPN patients and models, pointing out the opportunities provided by the novel murine MPN models to address fundamental questions regarding the role of inflammatory stimuli in the molecular pathogenesis of MPN.

Various murine MPN models based on the most commonly occurring mutations have been developed. The alleles, which were introduced either in bone marrow transplant models, as transgenes, or as constitutively or inducibly active knock-ins, include JAK2V617F, JAK2 exon 12, cMplW515L, TET2, ASXL1, and NFE2 (see Table 1) [32, 33, 45–52]. Of these, the NFE2 mice consistently show spontaneous transformation to acute leukemia, suggesting that elevated NFE2 activity promotes not only MPN development but also a sustained acquisition of additional aberrations leading to leukemic transformation [32, 33]. The transcription factor NFE2 is overexpressed in the majority of MPN patients, irrespective of the underlying driver mutation [53, 54]. NFE2 is central to the inflammatory process. On the one hand, it is induced by inflammatory cytokines, such as IL-1β [55]. Elevated NFE2 activity in turn increases cell proliferation by increasing transcription of cell cycle regulators and promoting G1/S transition [33]. On the other hand, NFE2 itself promotes inflammation, as it has been shown to directly regulate transcription of IL-8, a proinflammatory cytokine [56]. Interestingly, inhibition of NFE2 by shRNA abrogates endogenous erythroid colony (EEC) formation [57], a pathognomonic hallmark of PV, supporting a central role for this inflammatory axis in promoting growth of the neoplastic clone.

Table 1: Disease models involving inflammation.

| Affected compartment | Cause | Intervention | Phenotype | Reference |
|---|---|---|---|---|
| **Genetic alteration** | | | | |
| Hematopoiesis | JAK2V617F | TNF-α deletion | Attenuation of MPN development | [28] |
| Hematopoiesis | Gata-1lo | | Myelofibrosis | [29] |
| Hematopoiesis | Gata-1lo | TGF-β inhibition | Restored hematopoiesis, reduced fibrosis | [30] |
| Hematopoiesis | TPOhi | TGF-β inhibition | Restored hematopoiesis | [31] |
| Hematopoiesis | NFE2 overexpression/mutations | | MPN, sAML | [32, 33] |
| Gastrointestinal mucosa | APC mutations | | Colorectal cancer | Reviewed in [34] |
| Gastrointestinal mucosa | APCΔ716 | COX-2 knockout | Suppression of intestinal polyposis | [35] |
| Gastrointestinal mucosa | APCΔ716 | PGE2-receptor-2 knockout | Suppression of intestinal polyposis | [36] |
| Gastrointestinal mucosa | APCΔ716 | Prostaglandin synthase knockout | Suppression of intestinal polyposis | [37] |
| Gastrointestinal mucosa | APCΔ716 | 15-prostaglandin dehydrogenase (15-PDGH) knockout | Disease exacerbation | [38] |
| Gastrointestinal mucosa | APCΔ716 | Deletion of either IL-17, IL-6, CCR2, or TNFR p55 | Suppression of intestinal polyposis | [39–42] |
| **Infectious cause** | | | | |
| Hematopoiesis | Cell intrinsic and extrinsic TLR activation by bacterial infection | | HSC exhaustion | [12] |
| **Chemical cause** | | | | |
| Gastrointestinal mucosa | Azoxymethane (AOM) + Dextran Sodium Sulfate (DSS) | | Colitis-associated colon cancer (CAC) | Reviewed in [34] |
| Gastrointestinal mucosa | Azoxymethane (AOM) | COX-2 transgene | Increased development of tumors | [43] |
| Gastrointestinal mucosa | Azoxymethane (AOM) + Dextran Sodium Sulfate (DSS) | COX-2 deletion | Increased development of tumors | [44] |
| Gastrointestinal mucosa | AOM or DSS | Deletion of either IL-17, IL-6, CCR2, or TNFR p55 | Suppression of CAC | [39–42] |

Two distinct groups of murine models are used to study the role of inflammation in GI cancers (reviewed in [34]).
The first are genetically altered mice, either transgenic or knock-in strains, that carry mutations in the “adenomatous polyposis coli” (APC) gene or in genes affecting the Wnt signaling pathway. The APC gene is mutated in 80% of human colorectal cancers, while a further 10% carry mutations in beta-catenin, a central regulator of the Wnt-signaling pathway [58, 59]. In the second type of models, chemical carcinogens and promoters of inflammation, frequently azoxymethane (AOM) and dextran sodium sulfate (DSS), are used to induce the development of colitis-associated colon cancer (CAC) [34].

## 7. The Role of the COX2/PGE2 Axis

By generating double or triple mutant mice, for example, strains that carry APC mutations in addition to tissue-specific knockouts of critical signal transducing molecules, the role of various molecular pathways was investigated. The data reveal a critical role for the cyclooxygenase-2 (COX-2)/prostaglandin-E2 (PGE2) pathway even in mice that carry APC mutations [35–37, 43]. COX-2 is a central mediator of inflammation. It oxidizes arachidonic acid to prostaglandin H2, which is subsequently converted to PGE2. PGE2 promotes inflammation by affecting a variety of cellular functions. In contrast to COX-1, which is constitutively expressed, COX-2 is specifically induced by proinflammatory stimuli and mitogens.

Knockout of COX-2 in mice carrying the APCΔ716 mutation drastically suppressed the development of intestinal polyposis, as did treatment of the mice with COX-2 inhibitors [35]. Conversely, transgenic overexpression of COX-2 in colon epithelium increased the development of intestinal tumors [43]. A similar strategy could easily be used to test the importance of the COX-2/PGE2 axis in MPN models. The COX-2 knockout is not tissue specific, so that development of the MPN phenotype in the presence or absence of systemic COX-2 could be investigated. In this context, the use of inducible models appears especially interesting, as the role of inflammatory processes in disease initiation could be investigated [48, 50, 60].

The logic described above was applied to various other genes in the COX-2/PGE2 axis, and the results consistently underwrite an essential role for an inflammatory response in the development of APC-driven cancers. For example, knockout of the gene for either the PGE2-receptor-2 or the microsomal PGE synthase resulted in the suppression of intestinal polyp formation [37]. Conversely, deletion of the gene for 15-prostaglandin dehydrogenase (15-PDGH), an enzyme that catabolizes and inactivates prostaglandins, resulted in disease exacerbation: animals carrying mutant APC but lacking 15-PDGH developed significantly more polyps than their control littermates [38]. In addition, and perhaps less surprisingly, the COX-2/PGE2 axis was also shown to be essential in the AOM/DSS inflammation-associated colon tumor model, as deletion of COX-2 exacerbates CAC development [44, 61].

Equivalent mouse strains could be generated in the context of various MPN mutations to investigate the contribution of the COX-2/PGE2 inflammatory axis to MPN disease initiation or maintenance. Inducible expression of MPN alleles in the background of a constitutive COX-2/PGE2 knockout will test the role of inflammation in MPN initiation, whereas constitutive expression of MPN mutations and subsequent inducible deletion of a COX-2/PGE2 axis gene will test for the requirement of an inflammatory milieu in maintaining the MPN phenotype.

## 8. The Role of Specific Immune Cells
During the past decade, various mouse strains lacking specific immune cells have been developed. These mice can be used to test the requirement of specific cell types for disease development. For example, crossing APCΔ716 mice with op/op mice, which are devoid of functional macrophages, led to a suppression of polyp formation, as did the generation of APC-mutant, kitW/W mice, which lack mast cells [62]. Hence, both macrophages and mast cells are required to elaborate the microenvironment in which mutant APC can induce polyp formation. A recent paper by Ramos and colleagues provides compelling evidence that similar but distinct mechanisms operate in MPN [63]. In mice with an established JAK2V617F-driven erythrocytosis, depletion of macrophages with clodronate normalized hematocrit and RBC counts as well as reducing reticulocytosis. Since these authors used a Vav-Cre/JAK2V617F BMT model, it is likely that the macrophages also carried the JAK2V617F mutation and were therefore part of the malignant clone. The molecular mechanism is thus slightly different from that in gastric cancer, where macrophages appear necessary for paracrine stimulation of the neoplastic epithelial cells. In MPN, macrophages that are part of the malignant clone would be perpetuating the neoplasia in an autocrine manner. However, if the op/op mice are used in models similar to those detailed above, a role for healthy macrophages in MPN initiation from healthy HSCs may be revealed.

## 9. The Role of Cytokines

The requirement for macrophages and mast cells points to a rather obvious role for cytokines in tumor formation. While the essential role of cytokines in various physiological processes makes the construction of knockout mice deficient in these signaling molecules challenging, several strains have been generated and examined for cytokine contributions to gastric cancer development. Deletion of IL-17, IL-6, CCR2, or TNF-receptor p55 [39–42] led to a suppression of intestinal polyp development or CAC development in both the APC-mutant and the AOM/DSS models.

A very similar study points to an important role for TNF-α in promoting JAK2V617F-driven MPN [28]. Deletion of TNF-α limited the expansion of JAK2V617F-positive cells and attenuated disease development, pointing to a disease-promoting role for this cytokine. Analogous investigations of other inflammatory cytokines are required, especially addressing the question whether they are necessary for successful disease initiation. Candidates that should be investigated with priority include those factors for which elevated levels have been documented in MPN patients and which have been shown to play a role in the genesis of other entities with an inflammatory component.

In this light, IL-11 stands out: its levels are elevated in PV patients, and it has been shown to induce healthy bone marrow to form endogenous erythroid colonies [22, 64]. EEC constitute a characteristic abnormality of PV, one that may be used diagnostically because of its high sensitivity and specificity. Antibodies to IL-11 inhibit EEC formation in PV cells [64]. IL-11 has been shown to promote gastric tumor development, while, conversely, deletion of the IL-11 coreceptor alpha ablated the development of gastric tumors [65].

IL-8 has likewise been shown to induce EEC formation from healthy bone marrow cells [64]. As detailed above, IL-8 is a direct target of NFE2, and both are overexpressed in MPN patients.
Furthermore, Hermouet and colleagues have shown that IL-8 promotes hematopoietic progenitor survival [66]. Conversely, inactivation of the IL-8 pathway inhibited CD34+ cell proliferation and colony formation [66]. As IL-8 levels constitute an independent predictor of survival in PMF patients, this cytokine is highly likely to contribute to MPN pathophysiology, perhaps as one of the pivotal inflammatory mediators that initiate hyperproliferation of healthy HSCs in the bone marrow [26].

The role of TGF-beta in the dysmegakaryopoiesis and fibrosis characteristic of PMF has been investigated in a murine model of myelofibrosis due to low Gata-1 expression (Gata-1lo) [29, 30]. While the mutation decreasing Gata-1 levels in this model is not found in PMF patients, Gata-1 levels are specifically downregulated in a subset of PMF megakaryocytes [67]. In Gata-1lo mice, inhibition of TGF-beta signaling restored hematopoiesis, normalized megakaryocyte development, and reduced fibrosis [30]. Similar results were obtained by Dr. Vainchenker’s group in mice overexpressing thrombopoietin (TPO). Mice displaying high TPO levels develop an MPN phenotype with fibrosis. In the absence of TGF-beta, these mice still show a myeloproliferative syndrome, yet no fibrosis [31]. Interestingly, while they express normal TGF-beta levels, untreated Gata-1lo mice nonetheless show specific TGF-beta signaling alterations in bone marrow and spleen, such as overexpression of EVI1. This signaling abnormality is comparable to the abnormal TGF-beta profile observed in PMF patients, which includes overexpression of STAT1 and IL-6, factors directly related to autoimmune fibrosis [68].

These data clearly indicate that TGF-beta plays a pivotal role in propagating the PMF phenotype and the development of fibrosis, which contributes to the cytopenias that constitute the leading cause of morbidity and mortality in this patient population. Targeted deletion or tissue-specific overexpression of TGF-beta is now required to determine whether the cytokine is required or sufficient for disease initiation. Observations in other organs suggest that the latter is likely: liver-specific overexpression of TGF-beta results in hepatic fibrosis [69].

Another novel, autocrine inflammatory pathway has recently been described. Dr. Hoffman’s laboratory showed that MPN myeloid cells secrete elevated levels of lipocalin-2, an inflammatory cytokine, and that lipocalin-2 levels are elevated in PMF patients [70]. Lipocalin secretion is known to be stimulated by IL-1, IL-6, and IL-17, all of which are elevated in MPN [23, 24, 71–73]. Lipocalin induces the formation of reactive oxygen species (ROS) with subsequent induction of double-stranded DNA breaks, leading to apoptosis of healthy HSCs but not of PMF HSCs [70]. Hence, protection of PMF cells from lipocalin action, by an as yet unknown mechanism, could constitute one way in which the microenvironment or the MPN clone itself uses inflammatory mediators to create an environment that provides a selective advantage to the MPN clone.

## 10. The Inflammatory Hypothesis of MPN: Awaiting Proof from Murine Models

While the evidence presented above supports a change in perspective, in which inflammation may induce and promote MPN rather than simply being a consequence of it, several aspects of this hypothesis remain to be experimentally proven. A murine model which does not carry a specific MPN mutation but rather models prolonged, chronic inflammation would constitute a valuable tool.
If, in such a model, the inflammatory milieu alone were sufficient to induce malignant myeloproliferation or even leukemic transformation, this would constitute a proof of principle.

Proving the inflammatory hypothesis directly in MPN patients may, however, not be feasible. Diagnosing the underlying inflammatory process, postulated to be present even prior to the clinical MPN presentation, will not be possible in most cases. However, this will not be required. If the inflammatory hypothesis can be proven experimentally, this provides sufficient evidence for the initiation of clinical trials examining the effectiveness of early therapeutic intervention with the goal of suppressing chronic inflammation, thereby interrupting the vicious cycle that promotes MPN progression. Again, epidemiological data from the field of gastric cancers may point the way. Two landmark studies, published over 20 years ago, demonstrated that regular use of nonsteroidal anti-inflammatory drugs (NSAIDs) reduces the risk of colon cancer [74]. NSAIDs, including aspirin, are well known to function as COX-1/2 inhibitors and therefore inhibit the production of PGE2. The Efficacy and Safety of Low-Dose Aspirin study (ECLAP), nomen est omen, proved both the safety and efficacy of aspirin in PV patients [75]. While overall survival was not increased during the observation period in patients treated with low-dose aspirin, longer follow-up is required to observe a beneficial effect if aspirin use prevents leukemic transformation by suppressing a chronic inflammatory stimulus. As mentioned above, mouse strains carrying MPN mutations in the context of COX-2 deficiency may reveal the contribution of the COX-2/PGE2 inflammatory axis to MPN disease initiation and maintenance as well as leukemic progression.
101987-2015-10-12_101987-2015-10-12.md
26,421
The Hen or the Egg: Inflammatory Aspects of Murine MPN Models
Jonas S. Jutzi; Heike L. Pahl
Mediators of Inflammation (2015)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101987
101987-2015-10-12.xml
--- ## Abstract It has been known for some time that solid tumors, especially gastrointestinal tumors, can arise on the basis of chronic inflammation. However, the role of inflammation in the genesis of hematological malignancies has not been extensively studied. Recent evidence clearly shows that changes in the bone marrow niche can suffice to induce myeloid diseases. Nonetheless, while it has been demonstrated that myeloproliferative neoplasms (MPN) are associated with a proinflammatory state, it is not clear whether inflammatory processes contribute to the induction or maintenance of MPN. More provocatively stated: which comes first, the hen or the egg, inflammation or MPN? In other words, can chronic inflammation itself trigger an MPN? In this review, we will describe the evidence supporting a role for inflammation in initiating and promoting MPN development. Furthermore, we will compare and contrast the data obtained in gastrointestinal tumors with observations in MPN patients and models, pointing out the opportunities provided by novel murine MPN models to address fundamental questions regarding the role of inflammatory stimuli in the molecular pathogenesis of MPN. --- ## Body ## 1. Introduction “Dass Carcinome nicht selten auf einfach entzündliche Reize, wie Traumen, entstehen, ist bekannt” (that carcinomas arise, not seldom, at the site of inflammatory stimuli, such as traumas, is known) wrote Virchow in 1869 [1]. This far-sighted statement, worded as a fact rather than a hypothesis, was validated almost 150 years later when Hanahan and Weinberg named “inflammation” as an underlying principle that contributes to and fosters the newly named “hallmarks of cancer” [2]. ## 2. Inflammatory Etiology of Solid Tumors In the interval between these two pivotal publications, a large collection of data was accrued that supports the postulated role for inflammation in carcinogenesis. It is now known that solid tumors can arise on the basis of chronic inflammation, most notably Gastrointestinal Stromal Tumor (GIST) followingHelicobacter pylori infection. Additional examples include enteropathy-associated T cell lymphoma and adenocarcinomas in patients with coeliac disease as well as the increased risk of colorectal carcinoma in patients with inflammatory bowel disease [3, 4].The model for neoplastic transformation in these disorders implies a multistep process (Figure1). Initially, chronic inflammation causes epithelial cells as well as stromal macrophages to release cytokines and other stimulatory molecules that promote proliferation of surrounding cells, for example, the interstitial cells of Cajal in the stomach during activeH. pylori infection [5]. In a second series of steps, enhanced proliferation increases the chance of stochastic mutations, leading first to hyperplasia and subsequently, with the accumulation of additional aberrations, to neoplasia. While this model has been validated experimentally for several solid tumor entities, the role of inflammation in the genesis of hematological malignancies has not been extensively studied.Figure 1 Multistep process for inflammatory driven neoplastic transformation. Stress, induced by various intrinsic and extrinsic factors, causes epithelial cells as well as stromal macrophages to release cytokines and other proliferation-promoting molecules, which lead to enhanced proliferation of surrounding cells. 
In a second step, enhanced proliferation increases the chance of stochastic mutations, leading first to hyperplasia and subsequently, with the accumulation of additional aberrations, to neoplasia. ## 3. Cell Extrinsic Influences on the Development of Myeloid Malignancies The microenvironment and stromal tissue that surround solid tumors can be seen as analogous in function and in cell-cell interactions to the bone marrow niche cells that surround hematopoietic stem cells. During the past years, several observations have strengthened the hypothesis that the bone marrow niche can contribute to the development of myeloid malignancies. In one seminal study, Raaijmakers and colleagues demonstrated that altering gene expression by deletion ofDicer1 specifically in osteoprogenitor cells, but not in the bone marrow, led first to the development of myelodysplasia and, subsequently, to the emergence of acute myeloid leukemia [6]. Leukemia arose in hematopoietic cells that expressedDicer1 but had acquired other genetic abnormalities. Importantly, transplantation of BM from anemic, thrombocytopenic mice, in whichDicer1 has been deleted in the osteoprogenitors, into lethally irradiated wild-type recipient mice led to complete resolution of the cytopenias, demonstrating that they were niche-induced and not attributed to cell autonomous changes in hematopoietic stem cells themselves [6]. Conversely, transplanting wild-type bone marrow cells into mice which carried theDicer1 deletion in osteoprogenitors resulted in an MDS phenotype and induction of AML. These data clearly demonstrate that changes in the bone marrow niche can be sufficient to induce leukemia. Interestingly, deletingDicer1 in mature osteoblasts did not induce either MDS or leukemia, demonstrating that very specific alterations in the bone marrow are required for niche-induced oncogenesis. The precise nature of these changes is currently being investigated and it is not known whether inflammatory mechanisms contribute to leukemia induction in this model. ## 4. Association of MPN with Inflammatory and Autoimmune Diseases While the data by Raaijmakers and colleagues thus constitute a proof of principle that leukemia can be induced by changes in the bone marrow microenvironment, the question remains whether inflammatory processes in particular contribute to the induction or maintenance of myeloid malignancies, specifically to myeloproliferative neoplasms (MPN). Several studies have recently suggested an inflammatory etiology for MDS, AML, and MPN [7–10], most notably a large epidemiological study in Sweden, which demonstrated a significantly increased risk of AML or MDS in patients with a history of any infectious disease [9]. Esplin et al. have shown that continuous TLR activation by chronic exposure to Lipopolysaccharides (LPS) alters the self-renewal capacity of HSCs in mice. Prolonged TLR activation occurs in various bacterial infections, for example, during oral infections such as Gram-negative periodontitis and during subacute bacterial endocarditis [11]. In their mice, Esplin and colleagues were able to show a myeloid bias and, conversely, a selective loss of lymphopoietic potential as well as an increased proportion of C D 150 h i C D 48 - long-term HSCs [12]. The emergence of a myeloid bias has been witnessed during normal aging of HSC [13–15]. Signer et al. point out that the risk of developing myeloid and lymphoid leukemias increases with age [16]. 
It seems likely that HSCs acquire random genetic hits either under chronic TLR activation induced by LPS or during normal aging. These parallels strengthen the hypothesis of inflammatory driven myeloid malignancies, in some cases perhaps induced directly by an infectious cause.While inflammatory processes involve various factors, including cytokines, reactive oxygen species, and immune cells like macrophages, autoimmune phenomena are characterized by activation of T and B cells including the production of autoantibodies. Autoimmune diseases thus mainly involve changed T and B cell function but might share aspects of inflammatory processes resulting from altered cytokine release, such as increased IL-6 levels [17].MPN patients with an antecedent autoimmune disorder carried a 1.7- and 2.1-fold increased risk to develop an AML or an MDS, respectively [9, 18]. In particular patients with MPN-associated myelofibrosis may show various autoimmune phenomena, including antibodies against red blood cells or anti-nuclear [9, 18] or anti-mitochondrial antibodies. To some extent, this might explain the pathogenesis of anemia and the accompanied compensatory reticulocytosis in this cohort of patients [19, 20]. The resulting increased malignant and nonmalignant myeloproliferation themselves thereby increase the risk for stochastic secondary (epi-)genetic hits and disease progression. However, neither the inflammatory nor the autoimmune hypotheses regarding MPN etiology have yet been directly confirmed by experimental studies. ## 5. The Inflammatory Hypothesis of MPN MPN patients show elevated serum levels of various proinflammatory cytokines including IL-1, IL-6, IL-8, IL-11, IL-17, TNF-α, and TGF-β, as well as of the anti-inflammatory IL-10 [21–26]. Treatment with Ruxolitinib, JAK1 and JAK2 inhibitor, significantly decreased the level of circulating cytokines [27]. While these data demonstrate that MPN is accompanied by inflammatory changes, the causal order of events has not been determined. Does the malignant clone trigger an inflammatory response or—and this would constitute a change in perspective—can chronic inflammation itself trigger a MPN? In the latter model, sustained low-level, probably subclinical inflammation initially increases the proliferation of healthy, polyclonal hematopoietic stem and progenitor cells. Since each cell division carries the risk of acquiring a mutation, a malignant MPN clone arises and evolves on the basis of chronic, inflammation-induced proliferation.Is there evidence supporting such a change in perspective or can it be procured using recently established, novel murine MPN models? ## 6. Murine Models to Test the Inflammatory Hypothesis of MPN The field of gastrointestinal tumors has made use of sophisticated mouse models to detail the role of inflammation for the initiation and promotion of carcinomas. Multiple tissue specific knockout and transgenic lines have been generated to study the underlying molecular mechanisms and signal transduction pathways [34]. During the past five years, various mouse models with a myeloproliferative neoplasm- (MPN-) like phenotype have also been reported [32, 33, 45–52]. In this review, we will describe the evidence supporting a role for inflammation in initiating and promoting MPN development. 
Furthermore, we will compare and contrast the data from GI tumors with observations in MPN patients and models, pointing out the opportunities provided by the novel murine MPN models to address fundamental questions regarding the role of inflammatory stimuli in the molecular pathogenesis of MPN.Various murine MPN models based on the most commonly occurring mutations have been developed. The alleles, which were introduced either in bone marrow transplant models, as transgenes, or as constitutively or inducibly active knock-ins, includeJ A K 2 V 617 F, J A K 2 E x o n 12, c M p l W 515 L, TET2, ASXL1, and NFE2 (see Table 1) [32, 33, 45–52]. Of these, the NFE2 mice consistently show spontaneous transformation to acute leukemia, suggesting that elevated NFE2 activity promotes not only MPN development but also a sustained acquisition of additional aberrations leading to leukemic transformation [32, 33]. The transcription factor NFE2 is overexpressed in the majority of MPN patients, irrespective of the underlying driver mutation [53, 54]. NFE2 is central to the inflammatory process. On the one hand, it is induced by inflammatory cytokines, such as IL1β [55]. Elevated NFE2 activity in turn increases cell proliferation by increasing transcription of cell cycle regulators and promoting G1/S transition [33]. On the other hand, NFE2 itself promotes inflammation as it has been shown to directly regulate transcription of IL-8, a proinflammatory cytokine [56]. Interestingly, inhibition of NFE2, by shRNA, abrogates endogenous erythroid colonies (EEC) formation [57], a pathognomonic hallmark of PV, supporting a central role for this inflammatory axis in promoting growth of the neoplastic clone.Table 1 Disease models involving inflammation. Affected compartment Cause Intervention Phenotype Reference Genetic alteration Hematopoiesis J A K 2 V 617 F TNF-α deletion Attenuation of MPN development [28] Hematopoiesis Gata- 1 lo Myelofibrosis [29] Hematopoiesis Gata- 1 lo TGF-β inhibition Restored hematopoiesis, reduced fibrosis [30] Hematopoiesis T P O h i with TGF-β inhibition Restored hematopoiesis [31] Hematopoiesis NFE2 overexpression/mutations MPN, sAML [32, 33] Gastrointestinal mucosa APC mutations Colorectal cancer Reviewed in [34] Gastrointestinal mucosa APCΔ716 COX-2 knockout Suppression of intestinal polyposis [35] Gastrointestinal mucosa APCΔ716 PGE2-receptor-2 knockout Suppression of intestinal polyposis [36] Gastrointestinal mucosa APCΔ716 Prostaglandin synthaseknockout Suppression of intestinal polyposis [37] Gastrointestinal mucosa APCΔ716 15-prostaglandin dehydrogenase (15-PDGH)knockout Disease exacerbation [38] Gastrointestinal mucosa APCΔ716 Deletion of either IL-17, IL-6, CCR2, TNFR, or p55 Suppression of intestinal polyposis [39–42] Infectious cause Hematopoiesis cell intrinsic and extrinsic TLR activation by bacterial infection HSC exhaustion [12] Chemical cause Gastrointestinal mucosa Azoxymethane (AOM)Dextran Sodium Sulfate (DSS) Colitis associated colon cancer (CAC) Reviewed in [34] Gastrointestinal mucosa Azoxymethane (AOM) COX-2 transgene Increased development of tumors [43] Gastrointestinal mucosa Azoxymethane (AOM)Dextran Sodium Sulfate (DSS) COX-2 deletion Increased development of tumors [44] Gastrointestinal mucosa AOM or DSS plus deletion of either IL-17, IL-6, CCR2, TNFR, or p55 Suppression of CAC [39–42]Two distinct groups of murine models are used to study the role of inflammation in GI cancers (reviewed in [34]). 
The first are genetically altered mice, either transgenic or knock-in strains, that carry mutations in the “adenomatous polyposis coli” (APC) gene or in genes affecting the Wnt signaling pathway. The APC gene is mutated in 80% of human colorectal cancers, while a further 10% carry mutations in beta-catenin, a central regulator of the Wnt-signaling pathway [58, 59]. In the second type of models, chemical carcinogens and promoters of inflammation, frequently azoxymethane (AOM) and dextran sodium sulfate (DSS), are used to induce the development of colitis associated colon cancer (CAC) [34]. ## 7. The Role of the COX2/PGE2 Axis By generating double or triple mutant mice, for example, strains that carry APC mutations in addition to tissue specific knockouts of critical signal transducing molecules, the role of various molecular pathways was investigated. The data reveal a critical role for the cyclooxygenase-2 (COX-2)/prostaglandin-E2 (PGE2) pathway even in mice that carry APC mutations [35–37, 43]. COX-2 is a central mediator of inflammation. It oxidizes arachidonic acid to prostaglandin H2, which is subsequently converted to PGE2. PGE2 promotes inflammation by affecting a variety of cellular functions. In contrast to COX-1, which is constitutively expressed, COX-2 is specifically induced by proinflammatory stimuli and mitogens.Knockout of COX-2 in mice carrying theA P C Δ 716 mutation drastically suppressed the development of intestinal polyposis as did treatment of mice with COX-2 inhibitors [35]. Conversely, transgenic overexpression of COX-2 in colon epithelium increased the development of intestinal tumors [43]. A similar strategy could easily be used to test the importance of the COX-2/PGE2 axis in MPN models. The COX-2 knockout is not tissue specific, so that development of the MPN phenotype in the presence or absence of systemic COX-2 could be investigated. In this context, the use of inducible models appears especially interesting, as the role of inflammatory processes in disease initiation could be investigated [48, 50, 60].The logic described above was applied to various other genes in the COX-2/PGE2 axis, and the results consistently underwrite an essential role for an inflammatory response in the development of APC-driven cancers. For example, knockout of the gene for either the PGE2-receptor-2 or the microsomal PGE synthase resulted in the suppression of intestinal polyp formation [37]. Conversely, deletion of the gene for 15-prostaglandin dehydrogenase (15-PDGH), an enzyme that catabolizes and inactivates prostaglandins, resulted in disease exacerbation, animals carrying mutant APC but lacking 15-PDGH developing significantly more polyps than their control littermates [38]. In addition, and perhaps less surprisingly, the COX-2/PGE2 axis was also shown to be essential in the AOM/DSS inflammation-associated colon tumor model, as deletion of COX-2 exacerbates CAC development [44, 61].Equivalent mouse strains could be generated in the context of various MPN mutations to investigate the contribution of the COX-2/PGE2 inflammatory axis to MPN disease initiation or maintenance. Inducible expression of MPN alleles in the background of a constitutive COX-2/PGE2 knockout will test the role of inflammation in MPN initiation, whereas constitutive expression of MPN mutations and subsequent inducible deletion of a COX-2/PGE2 axis gene will test for the requirement of an inflammatory milieu in maintaining the MPN phenotype. ## 8. 
During the past decade, various mouse strains lacking specific immune cells have been developed. These mice can be used to test the requirement of specific cell types for disease development. For example, crossing APC^Δ716 mice with op/op mice, which are devoid of functional macrophages, led to a suppression of polyp formation, as did the generation of APC-mutant Kit^W/W mice, which lack mast cells [62]. Hence, both macrophages and mast cells are required to elaborate the microenvironment in which mutant APC can induce polyp formation. A recent paper by Ramos and colleagues provides compelling evidence that similar but distinct mechanisms operate in MPN [63]. In mice with an established JAK2^V617F-driven erythrocytosis, depletion of macrophages with clodronate normalized hematocrit and RBC counts and reduced reticulocytosis. Since these authors used a Vav-Cre/JAK2^V617F BMT model, it is likely that the macrophages also carried the JAK2^V617F mutation and were therefore part of the malignant clone. The molecular mechanism is thus slightly different from that in gastric cancer, where macrophages appear necessary for paracrine stimulation of the neoplastic epithelial cells. In MPN, macrophages that are part of the malignant clone would be perpetuating the neoplasia in an autocrine manner. However, if the op/op mice are used in models similar to those detailed above, a role for healthy macrophages in MPN initiation from healthy HSCs may be revealed.

## 9. The Role of Cytokines

The requirement for macrophages and mast cells points to a rather obvious role for cytokines in tumor formation. While the essential role of cytokines in various physiological processes makes the construction of knockout mice deficient in these signaling molecules challenging, several strains have been generated and examined for cytokine contribution to gastric cancer development. Deletion of IL-17, IL-6, CCR2, or TNF-receptor p55 [39–42] led to a suppression of intestinal polyp development or CAC development in both the APC-mutant and the AOM/DSS models.

A very similar study points to an important role for TNF-α in promoting JAK2^V617F-driven MPN [28]. Deletion of TNF-α limited expansion of JAK2^V617F-positive cells and attenuated disease development, pointing to a disease-promoting role for this cytokine. Analogous investigations of other inflammatory cytokines are required, especially addressing the question of whether they are necessary for successful disease initiation. Candidates that should be investigated with priority include those factors for which elevated levels have been documented in MPN patients and which have been shown to play a role in the genesis of other entities with an inflammatory component.

In this light, IL-11 stands out: its levels are elevated in PV patients, and it has been shown to induce healthy bone marrow to form endogenous erythroid colonies [22, 64]. EECs constitute a characteristic abnormality of PV, one that may be used diagnostically because of its high sensitivity and specificity. Antibodies to IL-11 inhibit EEC formation in PV cells [64]. IL-11 has been shown to promote gastric tumor development, while, conversely, deletion of the IL-11 coreceptor alpha ablated the development of gastric tumors [65].

IL-8 has likewise been shown to induce EEC formation from healthy bone marrow cells [64]. As detailed above, IL-8 is a direct target of NFE2, and both are overexpressed in MPN patients.
Furthermore, Hermouet and colleagues have shown that IL-8 promotes hematopoietic progenitor survival [66]. Conversely, inactivation of the IL-8 pathway inhibited CD34+ cell proliferation and colony formation [66]. As IL-8 levels constitute an independent predictor of survival in PMF patients, this cytokine is highly likely to contribute to MPN pathophysiology, perhaps as one of the pivotal inflammatory mediators that initiate hyperproliferation of healthy HSCs in the bone marrow [26].

The role of TGF-beta in the dysmegakaryopoiesis and fibrosis characteristic of PMF has been investigated in a murine model of myelofibrosis due to low Gata-1 expression (Gata-1^lo) [29, 30]. While the mutation decreasing Gata-1 levels in this model is not found in PMF patients, Gata-1 levels are specifically downregulated in a subset of PMF megakaryocytes [67]. In Gata-1^lo mice, inhibition of TGF-beta signaling restored hematopoiesis, normalized megakaryocyte development, and reduced fibrosis [30]. Similar results were obtained by Dr. Vainchenker’s group in mice overexpressing thrombopoietin (TPO). Mice displaying high TPO levels develop an MPN phenotype with fibrosis. In the absence of TGF-beta, these mice still show a myeloproliferative syndrome, yet no fibrosis [31]. Interestingly, while they express normal TGF-beta levels, untreated Gata-1^lo mice nonetheless show specific TGF-beta signaling alterations in bone marrow and spleen, such as overexpression of EVI1. This signaling abnormality is comparable to the abnormal TGF-beta profile observed in PMF patients, which includes overexpression of STAT1 and IL-6, factors directly related to autoimmune fibrosis [68].

These data clearly indicate that TGF-beta plays a pivotal role in propagating the PMF phenotype and the development of fibrosis, which contributes to the cytopenias that constitute the leading cause of morbidity and mortality in this patient population. Targeted deletion or tissue-specific overexpression of TGF-beta is now required to determine whether the cytokine is required or sufficient for disease initiation. Observations in other organs suggest that the latter is likely: liver-specific overexpression of TGF-beta results in hepatic fibrosis [69].

Another novel, autocrine inflammatory pathway has recently been described. Dr. Hoffman’s laboratory showed that MPN myeloid cells secrete elevated levels of lipocalin-2, an inflammatory cytokine, and that lipocalin-2 levels are elevated in PMF patients [70]. Lipocalin secretion is known to be stimulated by IL-1, IL-6, and IL-17, all of which are elevated in MPN [23, 24, 71–73]. Lipocalin induces the formation of reactive oxygen species (ROS) with subsequent induction of double-stranded DNA breaks, leading to apoptosis of healthy HSCs but not PMF HSCs [70]. Hence, protection of PMF cells from lipocalin action, by an as yet unknown mechanism, could constitute one way in which the microenvironment or the MPN clone itself uses inflammatory mediators to create an environment that provides a selective advantage to the MPN clone.

## 10. The Inflammatory Hypothesis of MPN: Awaiting Proof from Murine Models

While the evidence presented above supports a change in perspective, in which inflammation may induce and promote MPN rather than simply being a consequence of it, several aspects of this hypothesis remain to be proven experimentally. A murine model that does not carry a specific MPN mutation but rather models prolonged, chronic inflammation would constitute a valuable tool.
If, in such a model, the inflammatory milieu alone were sufficient to induce malignant myeloproliferation or even leukemic transformation, this would constitute a proof of principle.

Proving the inflammatory hypothesis in MPN patients directly may, however, not be feasible. Diagnosing the underlying inflammatory process, postulated to be present even prior to the clinical MPN presentation, will not be possible in most cases. However, this will not be required. If the inflammatory hypothesis can be proven experimentally, this would provide sufficient evidence for the initiation of clinical trials examining the effectiveness of early therapeutic intervention with the goal of suppressing chronic inflammation, thereby interrupting the vicious cycle that promotes MPN progression. Again, epidemiological data from the field of gastric cancers may point the way. Two landmark studies, published over 20 years ago, demonstrated that regular use of nonsteroidal anti-inflammatory drugs (NSAIDs) reduces the risk of colon cancer [74]. NSAIDs, including aspirin, are well known to function as COX-1/2 inhibitors and therefore to inhibit the production of PGE2. The Efficacy and Safety of Low-Dose Aspirin study (ECLAP), nomen est omen, proved both the safety and efficacy of aspirin in PV patients [75]. While overall survival was not increased during the observation period in patients treated with low-dose aspirin, longer follow-up is required to observe a beneficial effect if aspirin use prevents leukemic transformation by suppressing a chronic inflammatory stimulus. As mentioned above, mouse strains carrying MPN mutations in the context of COX-2 deficiency may reveal the contribution of the COX-2/PGE2 inflammatory axis to MPN disease initiation and maintenance, as well as to leukemic progression.

---

*Source: 101987-2015-10-12.xml*
# Assessment of the Effectiveness of Pelvic Floor Muscle Training (PFMT) and Extracorporeal Magnetic Innervation (ExMI) in Treatment of Stress Urinary Incontinence in Women: A Randomized Controlled Trial

**Authors:** Magdalena Weber-Rajek; Agnieszka Strączyńska; Katarzyna Strojek; Zuzanna Piekorz; Beata Pilarska; Marta Podhorecka; Kinga Sobieralska-Michalak; Aleksander Goch; Agnieszka Radzimińska

**Journal:** BioMed Research International (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1019872

---

## Abstract

Objective. The purpose of this study is to assess the effectiveness of pelvic floor muscle training and extracorporeal magnetic innervation in the treatment of stress urinary incontinence in women. Methods. The randomized controlled trial enrolled 128 women with stress urinary incontinence who were randomly allocated to one of two experimental groups (EG1 or EG2) or the control group (CG). Subjects in the experimental group 1 (EG1) received 12 sessions of pelvic floor muscle training, whereas subjects in the experimental group 2 (EG2) received 12 sessions of extracorporeal magnetic innervation. Subjects in the control group (CG) did not receive any therapeutic intervention. The following instruments were used to measure results in all study groups at the initial and final assessments: Revised Urinary Incontinence Scale (RUIS), Beck Depression Inventory (BDI-II), General Self-Efficacy Scale (GSES), and King’s Health Questionnaire (KHQ). Results. In both experimental groups, a statistically significant decline in depressive symptoms (BDI-II) and an improvement in urinary incontinence severity (RUIS) and quality of life (KHQ) were found in the following domains: “social limitations,” “emotions,” “severity measures,” and “symptom severity scale.” Moreover, self-efficacy beliefs (GSES) improved in the experimental group that received ExMI (EG2). No statistically significant changes were found in any of the measured variables in the control group. Comparative analysis of the three study groups showed statistically significant differences at the final assessment in the quality of life in the following domains: “physical limitations,” “social limitations,” “personal relationships,” and “emotions.” Conclusion. Pelvic floor muscle training and extracorporeal magnetic innervation proved to be effective treatment methods for stress urinary incontinence in women. The authors observed an improvement in both the physical and psychosocial aspects.

---

## Body

## 1. Introduction

The World Health Organization (WHO) and the International Continence Society (ICS) define urinary incontinence (UI) as an involuntary leakage of urine through the urethra and consider it a health, social, and hygienic concern [1].

The Standardisation Steering Committee (SSC) recognizes three main types of UI: stress urinary incontinence (SUI), urge urinary incontinence (UUI), and mixed urinary incontinence (MUI). While SUI is the most common type of UI, data on the frequency of urinary incontinence in the general population are inconclusive. During the Global Forum on Incontinence, held on April 17–18, 2018, in Rome, it was assumed that UI affects 6–10% of the general population [2]. It is worth mentioning that, according to the WHO, any condition affecting at least 5% of the population is recognized as a social disease.
The above-mentioned data confirm the gravity of urinary incontinence and the importance of disease prevention and treatment. Nowadays, increasing attention is given to physiotherapy as a conservative treatment for urinary incontinence. The most important physiotherapy treatments for UI include pelvic floor muscle training, electrostimulation, biofeedback, and magnetotherapy. In the following series of studies, the authors assessed the effectiveness of pelvic floor muscle training (PFMT) and extracorporeal magnetic innervation (ExMI). The European Association of Urology (EAU) recommends the use of pelvic floor muscle training as a basic nonsurgical treatment for UI [3], whereas extracorporeal magnetic innervation (ExMI) is a relatively new physiotherapy method used in the treatment of urinary incontinence. ExMI uses a strong magnetic field (induction of 2 Tesla) with a frequency of 10–50 Hz, which is adjusted depending on the type of urinary incontinence. During an ExMI treatment session, patients are seated in a special chair with a magnetic field generator in the seat. The magnetic field emitted by the generator penetrates the pelvis minor organs and acts on motor fibers of the pudendal and visceral nerves. Once the sodium-potassium pump is activated and motor neuron depolarization begins, nerve impulses reach the neuromuscular junction, which consequently initiates muscle contraction [4–6]. Nevertheless, there are relatively few scientific reports that assess the effectiveness of ExMI. The EAU pointed this out as well in their 2017 guidelines on the nonsurgical treatment of urinary incontinence [3].

Urinary incontinence is a multifaceted issue that impairs patients’ physical and psychosocial functioning. In light of the above, improving patients’ general quality of life, that is, physical, mental, and social well-being, should be paramount in the treatment of urinary incontinence.

### 1.1. Study Purpose

This study aims to compare the effectiveness of pelvic floor muscle training and extracorporeal magnetic innervation in the treatment of stress urinary incontinence.

## 2. Methods

### 2.1. Study Design

In the period between February 2017 and June 2018, 148 participants affected by urinary incontinence were enrolled in a randomized controlled trial. The study was conducted in accordance with the Declaration of Helsinki guidelines and with the approval of a local Bioethics Committee. Moreover, all participants provided a statement confirming that written informed consent was given. All the deidentified data are presented in the report. The authors stratified randomization by allocating participants to one of the three study groups. The allocation method was simple: each subject picked a sealed envelope with a computer-generated group allocation number. Furthermore, the main investigator was blinded to the study group allocation. Among the 20 subjects excluded from the study, 15 failed to meet the inclusion criteria and 5 refused to participate. The 128 subjects who met the inclusion criteria were then randomly allocated to one of the two experimental groups (EG1 or EG2) or the control group (CG), as illustrated by the sketch below.
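The paper does not specify how the computer-generated allocation numbers were produced, so the following Python sketch is only a minimal, hypothetical reconstruction of such a scheme: a roughly balanced list of group labels is shuffled once and then handed out envelope by envelope. The function name, the balanced block, and the seed are illustrative assumptions, not details reported by the authors.

```python
import random

# Hypothetical sketch of a computer-generated allocation list for the 128
# eligible subjects, assigned to EG1 (PFMT), EG2 (ExMI), or CG. A shuffled,
# roughly balanced list stands in for the numbers sealed in the envelopes.
def make_allocation_list(n_subjects=128, groups=("EG1", "EG2", "CG"), seed=7):
    base = [groups[i % len(groups)] for i in range(n_subjects)]
    rng = random.Random(seed)  # fixed seed keeps the envelope order reproducible
    rng.shuffle(base)
    return base

envelopes = make_allocation_list()
# Each enrolled subject "picks an envelope", i.e., takes the next allocation.
print(envelopes[:5])
print({g: envelopes.count(g) for g in ("EG1", "EG2", "CG")})  # roughly 43/43/42
```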
Of the 17 women who failed to complete the study, 9 withdrew from EG1 and EG2 during the 4-week treatment program, 6 from the CG missed the final study visit, and 2 from the EG2 submitted incomplete questionnaires. Consequently, 111 women aged 45 to 78 (mean 68.77) successfully completed the study (PFMT (EG1): n = 40, aged 60 to 78, mean 70.12; ExMI (EG2): n = 37, aged 45 to 76, mean 66.71; CG: n = 34, aged 60 to 78, mean 69.79). Applying the CONSORT statement (Consolidated Standards of Reporting Trials) (Figure 1) allowed the authors to improve the RCT reporting quality [7]. Prior to the treatment, subjects provided information on the contraindications to the treatment, circumstances of urine loss, and any comorbid conditions. It should be noted that the type of urinary incontinence was diagnosed by a urology specialist. Study inclusion criteria were as follows: diagnosed SUI and no contraindications to the PFMT or ExMI treatment.

Figure 1: The study flow diagram. RUIS: Revised Urinary Incontinence Scale; GSES: General Self-Efficacy Scale; BDI-II: Beck Depression Inventory; KHQ: King’s Health Questionnaire; UI: urinary incontinence; PFMT: pelvic floor muscle training; ExMI: extracorporeal magnetic innervation.

Contraindications to PFMT were as follows: active malignancy, recent surgeries, recent pelvic fractures, fever, acute inflammations, uterine tumors and myomas, urinary or genital tract infections, grade 3 or 4 hemorrhoids, and stage 3 uterine prolapse (downward displacement of the uterus into the vagina) [8].

Contraindications to ExMI were as follows: pregnancy, recent pelvic fractures, fever, acute inflammations, active malignancy, uterine tumors and myomas, stage 3 uterine prolapse, hemorrhoids, urinary or genital tract infections, suspected urethral and/or vesical fistula, severe urethral sphincter weakness and/or defect, deep vein thrombosis, acute infections, cardiac arrhythmia, cardiac pacemaker, and neurological diseases [6].

Study exclusion criteria were as follows: presence of contraindications to the PFMT or ExMI treatment, diagnosed MUI or UUI, and therapeutic interventions for UI within the 3 months prior to the study (PFMT, ExMI, electrostimulation, or biofeedback).

### 2.2. Measurements

#### 2.2.1. The Revised Urinary Incontinence Scale (RUIS)

The RUIS is a valid 5-item scale used to determine UI symptoms and to monitor treatment outcomes. A score of 3 or less indicates no urinary incontinence; a score of 4–8 is considered mild urinary incontinence; a score of 9–12 is considered moderate urinary incontinence; a score of 13 or above indicates severe urinary incontinence [9].

#### 2.2.2. The General Self-Efficacy Scale (GSES)

The scale was developed by Matthias Jerusalem and Ralf Schwarzer to assess a single psychometric characteristic: one’s general self-efficacy belief, understood as a belief in one’s capability to deal with various situations. The obtained raw scores were afterwards transformed into standardised sten scores, with higher scores indicating stronger self-efficacy beliefs. Score ranges are defined as follows: sten scores of 1–4 indicate low scores; sten scores of 5–6 indicate average scores; sten scores of 7–10 indicate high scores [10].

#### 2.2.3. Beck Depression Inventory-II (BDI-II)

The BDI-II questionnaire is a widely used self-report depression scale suitable for evaluating the mood of urological, gynecological, oncological, and neurological patients.
The questionnaire is a 21-item scale with individual item scores ranging from 0 (no symptoms) to 3 (severe symptoms). Score ranges are defined as follows: a score of 0–8 indicates no depression, a score of 9–18 indicates moderate depression, and a score above 18 indicates severe depression [11].

#### 2.2.4. King’s Health Questionnaire (KHQ)

The KHQ is a 21-item patient self-report scale that comprises 3 parts [12, 13].

Part 1 (KHQ–1) focuses on the general health perception (one item) and the incontinence impact (one item).

Part 2 (KHQ–2) focuses on the following:

(i) Role limitations (two items) – 2A
(ii) Physical limitations (two items) – 2B
(iii) Social limitations (two items) – 2C
(iv) Personal relationships (three items) – 2D
(v) Emotions (three items) – 2E
(vi) Sleep/energy (two items) – 2F
(vii) Severity measures (four items) – 2G

Part 3 (KHQ–3) is a single-item section that covers ten bladder-related symptoms (frequency, nocturia, urgency, urge, stress, intercourse incontinence, nocturnal enuresis, infections, pain, and difficulty in voiding), which are answered on a 4-point scale. The subscales in parts 1 and 2 are scored from 0 (the best quality of life) to 100 points (the worst quality of life), whereas the scale in part 3 is scored from 0 (the best quality of life) to 30 points (the worst quality of life). The lower the KHQ score, the better the quality of life.

### 2.3. Intervention

#### 2.3.1. Pelvic Floor Muscle Training

Women in the EG1 received 12 PFMT therapy sessions (45-minute sessions, 3 times a week, for 4 weeks) that followed a specific training regimen. The exercises were supervised by a physiotherapist and completed in five- or six-person groups. Prior to the training session, participants’ posture was assessed and corrected, and they learned how to mobilize the sacroiliac joints and how to improve the range of movement of the lumbosacral spine and of the hip and knee joints. The subjects also performed abdominal breathing exercises engaging the diaphragm. The PFMT regimen focused on training the fast- and slow-twitch muscle fibers of the pelvic floor using the transversus abdominis muscle tension technique. The exercises were performed with and without changing position, with relaxed gluteal muscles, and muscle activity was synchronized with breathing. The PFMT exercises were performed in standing, sitting, and lying positions. The number of exercises and repetitions was individually determined and matched to the subjects’ functional abilities [8].

#### 2.3.2. Extracorporeal Magnetic Innervation

Women in the EG2 received 12 ExMI therapy sessions (15-minute sessions, 3 times a week, for 4 weeks), which were delivered using the NeoControl chair (Neotonus Inc., Marietta, GA, USA). The applied magnetic field parameters were as follows: 2.0 Tesla at 50 Hz, delivered for 8 seconds with a dwell time of 4 seconds. During consecutive treatment sessions, the field intensity was increased from 20% to 100%, and the electromagnetic stimulation strength corresponded to the highest level tolerated by the patient [6].
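As a quick back-of-the-envelope check of what these parameters imply (our arithmetic, not the authors’; it assumes the 4-second dwell time is the off interval between 8-second stimulation bursts):

```python
# One ExMI session as parameterized above: 8 s of stimulation followed by a
# 4 s pause, repeated over a 15-minute session.
session_s = 15 * 60            # session length: 900 s
cycle_s = 8 + 4                # one on/off cycle: 12 s
cycles = session_s // cycle_s  # complete cycles per session
active_s = cycles * 8          # total stimulation time
print(f"{cycles} cycles, {active_s} s active "
      f"({100 * active_s / session_s:.0f}% duty cycle)")
# -> 75 cycles, 600 s active (67% duty cycle)
```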
### 2.4. Statistical Analyses

The authors analyzed the collected data using the PQ Stat 1.6.8 software. The Shapiro–Wilk test was used to check the normality of the distribution of the measured variables. Results at the initial and final assessments within each study group were compared using the Wilcoxon test. The Kruskal–Wallis ANOVA test was applied to determine the differences between the three study groups, and the authors then performed a post hoc Conover-Iman test. The statistical significance level was defined as P < 0.05.
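The named tests all have standard implementations, so the analysis pipeline can be sketched in a few lines of Python. The data below are random placeholders (the individual-level trial data are not published), and the Conover-Iman step relies on the third-party scikit-posthocs package rather than SciPy itself, so treat this as an illustration of the workflow, not of the authors’ PQ Stat session.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for one outcome variable (e.g., RUIS scores).
rng = np.random.default_rng(0)
eg1_initial = rng.integers(0, 19, 40).astype(float)
eg1_final = rng.integers(0, 19, 40).astype(float)
eg2_final = rng.integers(0, 19, 37).astype(float)
cg_final = rng.integers(0, 19, 34).astype(float)

# 1) Shapiro-Wilk normality check (motivates the nonparametric tests below).
w, p_norm = stats.shapiro(eg1_final)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p_norm:.3f}")

# 2) Wilcoxon signed-rank test: paired initial vs. final scores within a group.
w_stat, p_within = stats.wilcoxon(eg1_initial, eg1_final)
print(f"Wilcoxon (EG1 initial vs. final): p = {p_within:.3f}")

# 3) Kruskal-Wallis ANOVA: final scores compared across the three groups.
h, p_between = stats.kruskal(eg1_final, eg2_final, cg_final)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p_between:.3f}")

# 4) Post hoc pairwise Conover-Iman comparisons, e.g., via scikit-posthocs:
# import scikit_posthocs as sp
# sp.posthoc_conover([eg1_final, eg2_final, cg_final])
```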
## 3. Results

Table 1 presents Wilcoxon test results and descriptive statistics for all measured variables across both experimental groups and the control group at the initial and final assessments.

Table 1: Comparative analysis of all measured variables for the EG1, EG2, and CG at the initial and final assessments.
| Parameter | Statistic | EG1 (n = 40) initial | EG1 final | EG1 P value | EG2 (n = 37) initial | EG2 final | EG2 P value | CG (n = 34) initial | CG final | CG P value |
|---|---|---|---|---|---|---|---|---|---|---|
| RUIS | Med | 8.00 | 6.00 | <0.001* | 9.00 | 7.00 | 0.001* | 8.00 | 9.00 | 0.190 |
| | IQR | 6.00 | 5.00 | | 6.00 | 4.00 | | 7.00 | 5.00 | |
| GSES | Med | 7.00 | 7.00 | 0.231 | 7.00 | 7.00 | 0.019* | 7.00 | 7.00 | 0.147 |
| | IQR | 4.00 | 3.00 | | 3.00 | 3.00 | | 3.00 | 1.00 | |
| BDI-II | Med | 6.00 | 5.00 | <0.001* | 7.00 | 5.00 | <0.001* | 6.50 | 7.50 | 0.856 |
| | IQR | 9.00 | 7.50 | | 11.00 | 7.00 | | 11.00 | 9.00 | |
| KHQ–1 | Med | 16.66 | 16.66 | 0.146 | 16.66 | 33.33 | 0.113 | 16.66 | 33.33 | 8.817 |
| | IQR | 50.00 | 33.33 | | 50.00 | 33.33 | | 50.00 | 50.00 | |
| KHQ–2A | Med | 33.33 | 24.99 | 0.073 | 33.33 | 33.33 | 0.463 | 33.33 | 33.33 | 0.932 |
| | IQR | 33.33 | 50.00 | | 33.33 | 50.00 | | 33.33 | 40.97 | |
| KHQ–2B | Med | 5.55 | 0.00 | 0.089 | 33.33 | 0.00 | 0.322 | 11.11 | 22.22 | 0.297 |
| | IQR | 33.33 | 22.22 | | 0.00 | 22.22 | | 44.44 | 48.61 | |
| KHQ–2C | Med | 33.33 | 11.11 | 0.001* | 33.33 | 22.22 | 0.001* | 33.33 | 22.22 | 0.536 |
| | IQR | 50.00 | 22.22 | | 50.00 | 22.22 | | 35.41 | 31.94 | |
| KHQ–2D | Med | 22.22 | 16.66 | 0.444 | 22.22 | 33.33 | 0.327 | 22.22 | 44.44 | 0.255 |
| | IQR | 13.88 | 41.66 | | 22.22 | 50.00 | | 33.33 | 45.83 | |
| KHQ–2E | Med | 33.33 | 16.66 | 0.035* | 41.66 | 33.33 | 0.004* | 37.50 | 50.00 | 0.789 |
| | IQR | 41.66 | 50.00 | | 50.00 | 50.00 | | 47.91 | 31.25 | |
| KHQ–2F | Med | 33.33 | 33.33 | 0.699 | 33.33 | 41.66 | 0.405 | 33.33 | 41.66 | 0.663 |
| | IQR | 50.00 | 41.66 | | 33.33 | 41.66 | | 29.16 | 38.19 | |
| KHQ–2G | Med | 41.66 | 16.66 | <0.001* | 33.33 | 16.66 | <0.001* | 33.33 | 33.33 | 0.190 |
| | IQR | 41.66 | 33.33 | | 41.66 | 25.00 | | 41.66 | 31.25 | |
| KHQ–3 | Med | 6.00 | 2.00 | <0.001* | 6.00 | 2.00 | <0.001* | 6.75 | 4.50 | 0.609 |
| | IQR | 11.25 | 7.00 | | 6.00 | 6.00 | | 5.00 | 7.50 | |

*Statistical significance. EG1: experimental group 1; EG2: experimental group 2; CG: control group; Med: median; IQR: interquartile range; P: significance level.

Table 2 presents the Kruskal–Wallis ANOVA test applied to determine the differences between the three study groups at the final assessment.

Table 2: Comparison of all measured variables across the three study groups (EG1, EG2, and CG) at the final assessment.

| Parameter | H statistic | P value |
|---|---|---|
| RUIS | 5.066 | 0.079 |
| GSES | 4.120 | 0.127 |
| BDI-II | 0.166 | 0.920 |
| KHQ–1 | 1.479 | 0.473 |
| KHQ–2A | 1.318 | 0.517 |
| KHQ–2B | 8.211 | 0.016* |
| KHQ–2C | 7.785 | 0.020* |
| KHQ–2D | 7.762 | 0.020* |
| KHQ–2E | 9.046 | 0.010* |
| KHQ–2F | 1.331 | 0.513 |
| KHQ–2G | 10.457 | 0.066 |
| KHQ–3 | 3.957 | 0.138 |

*Statistical significance.

A statistically significant difference was found in the quality of life results obtained from the three study groups in the following domains: physical limitations (KHQ–2B), social limitations (KHQ–2C), personal relationships (KHQ–2D), and emotions (KHQ–2E).

In the next stage of the study, the authors performed a post hoc Conover-Iman test. This is illustrated by the results in Table 3.

Table 3: Post hoc Conover-Iman test.

| Parameter | Comparison | Statistic | P value |
|---|---|---|---|
| KHQ–2B | EG1 vs. EG2 | 0.192 | 0.847 |
| KHQ–2B | EG1 vs. CG | 2.670 | 0.008* |
| KHQ–2B | EG2 vs. CG | 2.440 | 0.016* |
| KHQ–2C | EG1 vs. EG2 | 0.430 | 0.667 |
| KHQ–2C | EG1 vs. CG | 2.694 | 0.008* |
| KHQ–2C | EG2 vs. CG | 2.237 | 0.027* |
| KHQ–2D | EG1 vs. EG2 | 0.547 | 0.584 |
| KHQ–2D | EG1 vs. CG | 2.737 | 0.007* |
| KHQ–2D | EG2 vs. CG | 2.155 | 0.033* |
| KHQ–2E | EG1 vs. EG2 | 0.014 | 0.988 |
| KHQ–2E | EG1 vs. CG | 2.715 | 0.007* |
| KHQ–2E | EG2 vs. CG | 2.689 | 0.008* |

*Statistical significance.

Post hoc Conover-Iman test results showed no statistically significant differences between the experimental groups and showed statistically significant differences between the experimental groups and the control group in all analyzed variables.
## 4. Discussion

The authors assessed the physical and psychosocial functioning of women with stress urinary incontinence following different physiotherapy treatment methods for UI.

The physical aspects were assessed using the Revised Urinary Incontinence Scale (RUIS), which is a reliable and valid instrument used to determine urinary incontinence symptoms and monitor patient response to treatment [9]. The RUIS scores showed a statistically significant improvement in the urinary incontinence severity in both the PFMT group (EG1) and the ExMI group (EG2). Similar results were obtained in our previous studies [6, 8, 14].

The “severity measures” domain in Part 2 of the King’s Health Questionnaire (KHQ–2G) was also used to assess incontinence severity. The authors observed a statistically significant improvement in this domain following both PFMT and ExMI.

In our previous studies, we assessed the level of myostatin concentration after using PFMT and ExMI in a group of older women with UI [6, 8]. The level of myostatin increases in periods of skeletal muscle inactivity, and the inhibition of serum myostatin increases muscle strength and mass. Therapeutic interventions such as physical activity can suppress myostatin signalling and ameliorate the effects of advancing age on skeletal muscle mass and function. Some studies suggest that myostatin inhibits human urethral rhabdosphincter satellite cell proliferation; therefore, inhibition of myostatin function might be a useful strategy for the treatment of stress UI. The results of these studies showed that effective PFMT and ExMI cause downregulation of myostatin concentration and an improvement in the severity of urinary incontinence in elderly women with stress UI [6, 8].

The International Continence Society guidelines acknowledge that comprehensive patient care needs to take both physical and psychosocial perspectives into consideration [15]. Other parameters measured during this study were self-efficacy beliefs (GSES), depression symptoms (BDI-II), and quality of life (KHQ).

General self-efficacy is an essential mental resource that may influence behavioral determinants and consequently impact health-related behaviors, either directly or indirectly. People with high self-efficacy beliefs are more likely to practice health-related behaviors because they strongly believe in their ability to overcome obstacles and achieve their goals [16]. The analyses of self-efficacy beliefs among UI patients may provide essential information on self-motivation and belief in planned intervention effectiveness [17, 18]. The study results demonstrated that there was a statistically significant increase in self-efficacy beliefs in the ExMI group (EG2), while no changes in self-efficacy beliefs were observed in the PFMT group (EG1). More importantly, relatively high self-efficacy beliefs were recorded in both experimental groups before the treatment. Therefore, it can be assumed that high self-efficacy beliefs motivated the patients to face the problem of urinary incontinence and participate in our study. We have also assessed self-efficacy after using ExMI in women with UI in previous studies, which led to similar conclusions [6].

Among comorbid conditions associated with urinary incontinence and frequently reported in the literature on the subject, depression emerges as the most debilitating mental health condition [19–22].
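For concreteness, the short sketch below applies the BDI-II cutoffs defined in Section 2.2.3 and recomputes the percentages quoted in the next paragraph from the published counts; the helper function is ours, not part of the original analysis.

```python
# Map a BDI-II total onto the categories from Section 2.2.3:
# 0-8 no depression, 9-18 moderate depression, above 18 severe depression.
def bdi_category(score: int) -> str:
    if score <= 8:
        return "no depression"
    if score <= 18:
        return "moderate depression"
    return "severe depression"

print(bdi_category(6), "|", bdi_category(12), "|", bdi_category(25))

# Published counts for the PFMT group (EG1, n = 40) at the initial assessment.
counts = {"no depression": 22, "moderate depression": 14, "severe depression": 4}
n = sum(counts.values())
for label, k in counts.items():
    print(f"{label}: {k}/{n} = {100 * k / n:.0f}%")  # 55%, 35%, 10%
```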
The Beck Depression Inventory administered to the PFMT group at the initial assessment yielded the following results: no depression in 22 patients (55%), moderate depression in 14 patients (35%), and severe depression in 4 patients (10%), whereas the following results were reported after the treatment: no depression in 29 patients (73%), moderate depression in 9 patients (22%), and severe depression in 2 patients (5%).

The Beck Depression Inventory administered to the ExMI group at the initial assessment yielded the following results: no depression in 20 patients (54%), moderate depression in 14 patients (38%), and severe depression in 3 patients (8%), whereas the following results were reported after the treatment: no depression in 27 patients (73%), moderate depression in 8 patients (22%), and severe depression in 2 patients (5%). We observed a statistically significant decline in depressive symptoms in both experimental groups after the treatment.

Moreover, the King’s Health Questionnaire was used to evaluate patients’ quality of life. The tool was designed in 1997 by Dr. C. J. Kelleher and his colleagues from the Department of Urogynaecology at King’s College Hospital, London. Once the questionnaire’s reliability and validity were confirmed with standard psychometric techniques during six pilot studies, the final version of the tool was published in the British Journal of Obstetrics and Gynaecology in December 1997 [12]. The KHQ proved to be an essential and reliable tool for the evaluation of the quality of life in female patients with UI, and the instrument is also recommended by the European Clinical Practice Guidelines [13].

In this study, a statistically significant improvement in the quality of life was reported in both experimental groups in the following domains: “social limitations” (KHQ–2C), “emotions” (KHQ–2E), “severity measures” (KHQ–2G), and “symptom severity scale” (KHQ–3).

No statistically significant changes emerged in any of the measured variables in the control group. Comparative analysis of the three study groups showed statistically significant differences in the quality of life at the final assessment in the following domains: “physical limitations” (KHQ–2B), “social limitations” (KHQ–2C), “personal relationships” (KHQ–2D), and “emotions” (KHQ–2E). Additionally, post hoc Conover-Iman test results showed no statistically significant differences between the experimental groups and showed statistically significant differences between the experimental groups and the control group in all analyzed variables.

The study results demonstrated that both physiotherapy treatment methods are effective in the treatment of urinary incontinence. The authors observed an improvement in both the physical and psychosocial parameters. However, the authors want to highlight that there are more contraindications to the ExMI therapy than to PFMT. It is worth mentioning that two subjects withdrew from the ExMI therapy during our study due to the discomfort experienced during the treatment.

## 5. Conclusions

(1) Pelvic floor muscle training and extracorporeal magnetic innervation proved to be effective treatment methods for stress urinary incontinence in women. The authors observed an improvement in both the physical and psychosocial aspects.
(2) Following the treatment, both experimental groups showed a statistically significant decline in depressive symptoms and an improvement in urinary incontinence severity and in quality of life in the following domains: “social limitations,” “emotions,” “severity measures,” and “symptom severity scale.”

(3) Moreover, higher self-efficacy beliefs were observed in the experimental group that received ExMI.

(4) Comparative analysis of the three study groups showed statistically significant differences in the quality of life at the final assessment in the following domains: “physical limitations,” “social limitations,” “personal relationships,” and “emotions.”

---

*Source: 1019872-2020-01-17.xml*
--- ## Abstract Objective. The purpose of this study is to assess the effectiveness of pelvic floor muscle training and extracorporeal magnetic innervation in treatment of urinary incontinence in women with stress urinary incontinence. Methods. The randomized controlled trial enrolled 128 women with stress urinary incontinence who were randomly allocated to either one out of two experimental groups (EG1 or EG2) or the control group (CG). Subjects in the experimental group 1 (EG1) received 12 sessions of pelvic floor muscle training, whereas subjects in the experimental group 2 (EG2) received 12 sessions of extracorporeal magnetic innervation. Subjects in the control group (CG) did not receive any therapeutic intervention. The following instruments were used to measure results in all study groups at the initial and final assessments: Revised Urinary Incontinence Scale (RUIS), Beck Depression Inventory (BDI-II), General Self-Efficacy Scale (GSES), and King’s Health Questionnaire (KHQ). Results. In both experimental groups, a statistically significant decline in depressive symptoms (BDI-II) and an improvement in urinary incontinence severity (RUIS) and quality of life (KHQ) were found in the following domains: “social limitations,” “emotions,” “severity measures,” and “symptom severity scale.” Moreover, self-efficacy beliefs (GSES) improved in the experimental group that received ExMI (EG2). No statistically significant differences were found between all measured variables in the control group. Comparative analysis of the three study groups showed statistically significant differences at the final assessment in the quality of life in the following domains: “physical limitations,” “social limitations,” “personal relationships,” and “emotions.” Conclusion. Pelvic floor muscle training and extracorporeal magnetic innervation proved to be effective treatment methods for stress urinary incontinence in women. The authors observed an improvement in both the physical and psychosocial aspects. --- ## Body ## 1. Introduction The World Health Organization (WHO) and the International Continence Society (ICS) define urinary incontinence (UI) as an involuntary leakage of urine through the urethra and consider it a health, social, and hygienic concern [1].The Standardisation Steering Committee (SSC) recognizes three main types of UI: stress urinary incontinence (SUI), urge urinary incontinence (UUI), and mixed urinary incontinence (MUI). While SUI is the most common type of UI, there is inconclusive data on the frequency of urinary incontinence in the general population. During the Global Forum on Incontinence, held on April 17-18th, 2018, in Rome, it was assumed that UI affects 6–10% of the general population [2]. It is worth mentioning that, according to the WHO data, any condition affecting at least 5% of the population is recognized as a social disease. The above-mentioned data confirm the gravity of urinary incontinence and the importance of disease prevention and treatment. Nowadays, increasing attention is given to physiotherapy as a conservative treatment for urinary incontinence. The most important physiotherapy treatments for UI include pelvic floor muscle training, electrostimulation, biofeedback, and magnetotherapy. In the following series of studies, the authors assessed the effectiveness of pelvic floor muscle training (PFMT) and extracorporeal magnetic innervation (ExMI). 
The European Association of Urology (EAU) recommends the use of pelvic floor muscle training as a basic nonsurgical treatment for UI [3], whereas extracorporeal magnetic innervation (ExMI) is a rather new physiotherapy method used in the treatment of urinary incontinence. ExMI uses high electromagnetic induction values (2 Tesla) with a frequency of 10–50 Hz, which is adjusted depending on the type of urinary incontinence. During an ExMI treatment session, patients are seated in a special chair with a magnetic field generator in the seat. The magnetic field emitted by the generator penetrates pelvis minor organs and acts on motor fibers of pudendal and visceral nerves. Once the sodium-potassium pump is activated and the motor neuron depolarization begins, nerve impulses reach the neuromuscular junction which consequently initiates muscle contraction [4–6]. Nevertheless, there are relatively few scientific reports that assess the effectiveness of ExMI. The EAU pointed this out as well in their 2017 guidelines on the nonsurgical treatment for urinary incontinence [3].Urinary incontinence is a multifaceted issue that impairs patients’ physical and psychosocial functioning. In light of the above, improving patients’ general quality of life, that is, physical, mental, and social well-being, should be paramount in the treatment of urinary incontinence. ### 1.1. Study Purpose This study aims to compare the effectiveness of pelvic floor muscle training and extracorporeal magnetic innervation in the treatment of stress urinary incontinence. ## 1.1. Study Purpose This study aims to compare the effectiveness of pelvic floor muscle training and extracorporeal magnetic innervation in the treatment of stress urinary incontinence. ## 2. Methods ### 2.1. Study Design In the period between February 2017 and June 2018, 148 participants affected by urinary incontinence were enrolled in a randomized controlled trial. The study was conducted in accordance with the Declaration of Helsinki guidelines and with the approval of a local Bioethics Committee. Moreover, all participants provided a statement confirming that written informed consent was given. All the deidentified data is presented in the report. The authors stratified randomization by allocating participants to one of the three study groups. The allocation method was simple—each subject picked a sealed envelope with a computer-generated group allocation number. Furthermore, the main investigator was blinded to the study group allocation. Among the 20 subjects excluded from the study, 15 subjects failed to meet the inclusion criteria, and 5 subjects refused to participate. 128 subjects who met the inclusion criteria were then randomly allocated to one out of the two experimental groups (EG1 or EG2) or the control group (CG). Of the 17 women who failed to complete the study, 9 subjects withdrew from EG1 and EG2 during the 4-week treatment program, 6 subjects from the CG missed the final study visit, and 2 subjects from the EG2 submitted incomplete questionnaires. Consequently, 111 women aged 45 to 78 (mean 68.77) successfully completed the study (PFMT (EG1) wasn = 40 aged 60 to 78 (mean 70.12), ExMI (EG2) was n = 37 aged 45 to 76 (mean 66.71), and CG was n = 34 aged 60 to 78 (mean 69.79)). Applying the CONSORT statement (Consolidated Standards of Reporting Trials) (Figure 1) allowed the authors to improve the RCT reporting quality [7]. 
Prior to the treatment, subjects provided information on the contraindications to the treatment, circumstances of urine loss, and any comorbid conditions. It should be noted that the type of urinary incontinence was diagnosed by a urology specialist. Study inclusion criteria were as follows: diagnosed SUI, and no contradictions to the PFMT or ExMI treatment.Figure 1 The study flow diagram. RUIS: Revised Urinary Incontinence Scale; GSES: General Self-Efficacy Scale; BDI-II: Beck Depression Inventory; KHQ: King’s Health Questionnaire; UI: urinary incontinence.; PFMT: pelvic floor muscle training; ExMI: extracorporeal magnetic innervation.Contradictions to PFMT were as follows: active malignancy, recent surgeries, recent pelvic fractures, fever, acute inflammations, uterine tumors and myomas, urinary or genital tract infections, grade 3 or 4 hemorrhoids, stage 3 uterine prolapse (downward displacement of the uterus into the vagina) [8].Contradictions to ExMI were as follows: pregnancy, recent pelvic fractures, fever, acute inflammations, active malignancy, uterine tumors and myomas, stage 3 uterine prolapse, hemorrhoids, urinary or genital tract infections, suspected urethral and/or vesical fistula, severe urethral sphincter weakness and/or defect, deep vein thrombosis, acute infections, cardiac arrhythmia, cardiac pacemaker, and neurological diseases [6].Study exclusion criteria were as follows: presence of contraindications to the PFMT or ExMI treatment, diagnosed MUI or UUI, and recent therapeutic interventions in UI 3 months prior to the study (PFMT, ExMI, electrostimulation, or biofeedback). ### 2.2. Measurements #### 2.2.1. The Revised Urinary Incontinence Scale (RUIS) The RUIS is a valid 5-item scale used to determine UI symptoms and to monitor treatment outcomes. A score of less than 3 indicates no urinary incontinence; a score of 4–8 is considered mild urinary incontinence; a score of 9–12 is considered moderate urinary incontinence; a score of 13 or above indicates severe urinary incontinence [9]. #### 2.2.2. The General Self-Efficacy Scale (GSES) The scale was developed by Matthias Jerusalem and Ralf Schwarzer to assess merely one psychometric characteristic—one’s general self-efficacy belief, which is understood as a belief in one’s capabilities to deal with various situations. The obtained raw scores were afterwards transformed into standardised sten scores, and higher scores indicate higher self-efficacy beliefs. Score ranges are defined as follows: sten scores of 1–4 indicate low scores; sten scores of 5–6 indicate average scores; sten scores of 7–10 indicate high scores [10]. #### 2.2.3. Beck Depression Inventory-II (BDI-II) The BDI-II questionnaire is a self-report depression scale widely used in research on mental disorders, and it is still as popular as ever. The tool allows to evaluate the mood of urological, gynecological, oncological, and neurological patients. The questionnaire is a 21-item scale with individual item scores ranging from 0 (no symptoms) to 3 (severe symptoms). Score ranges are defined as follows: a score of 0–8 indicates no depression, and a score of 9–18 indicates moderate depression, whereas a score of 18 or above indicates severe depression [11]. #### 2.2.4. 
#### 2.2.4. King's Health Questionnaire (KHQ)

The KHQ is a 21-item patient self-report scale that comprises 3 parts [12, 13].

Part 1 (KHQ–1) covers general health perception (one item) and incontinence impact (one item).

Part 2 (KHQ–2) covers the following:
(i) Role limitations (two items)–2A
(ii) Physical limitations (two items)–2B
(iii) Social limitations (two items)–2C
(iv) Personal relationships (three items)–2D
(v) Emotions (three items)–2E
(vi) Sleep/energy (two items)–2F
(vii) Severity measures (four items)–2G

Part 3 (KHQ–3) is a single-item section that contains ten bladder-related symptoms (frequency, nocturia, urgency, urge, stress, intercourse incontinence, nocturnal enuresis, infections, pain, and difficulty in voiding), each answered on a 4-point scale.

The subscales in parts 1 and 2 are scored from 0 (the best quality of life) to 100 points (the worst quality of life), whereas the scale in part 3 is scored from 0 (the best quality of life) to 30 points (the worst quality of life). The lower the KHQ score, the better the quality of life.

### 2.3. Intervention

#### 2.3.1. Pelvic Floor Muscle Training

Women in EG1 received 12 PFMT therapy sessions (45-minute sessions, 3 times a week, for 4 weeks) that followed a specific training regimen. The exercises were supervised by a physiotherapist and completed in groups of five or six. Prior to the training sessions, study participants were examined for posture and body correction and learned how to mobilize the sacroiliac joints and improve the range of movement of the lumbosacral spine and the hip and knee joints. The subjects also participated in abdominal breathing exercises engaging the thoracic diaphragm. The PFMT regimen focused on training the fast- and slow-twitch muscle fibers of the pelvic floor using the transversus abdominis muscle tension technique. The exercises were performed with and without position changes and with relaxed gluteal muscles, and muscle activation was synchronized with breathing. The PFMT exercises were performed in standing, sitting, and lying positions. The number of exercises and repetitions was individually determined and matched to the subjects' functional abilities [8].

#### 2.3.2. Extracorporeal Magnetic Innervation

Women in EG2 received 12 ExMI therapy sessions (15-minute sessions, 3 times a week, for 4 weeks), which were delivered using the NeoControl chair (Neotonus Inc., Marietta, GA, USA). The applied magnetic field parameters were as follows: 2.0 Tesla at 50 Hz, delivered for 8 seconds with a dwell time of 4 seconds. Over consecutive treatment sessions, the field intensity was increased from 20% to 100%, and the electromagnetic stimulation strength corresponded to the highest level tolerated by the patient [6].

### 2.4. Statistical Analyses

The authors analyzed the collected data using the PQStat 1.6.8 software. The Shapiro–Wilk test was used to check the measured variables for normality of distribution. The initial and final assessment results within each study group were compared using the Wilcoxon signed-rank test, and the Kruskal–Wallis test was applied to determine the differences between the three study groups, followed by a post hoc Conover–Iman test. The statistical significance level was set at P < 0.05.
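To make the pipeline concrete, the sketch below runs the same sequence of tests on synthetic data. It is illustrative only and is not the authors' PQStat workflow: the arrays are hypothetical RUIS scores, SciPy supplies the Shapiro–Wilk, Wilcoxon, and Kruskal–Wallis tests, and the Conover post hoc test is assumed to come from the third-party scikit-posthocs package.

```python
# Illustrative sketch of the statistical pipeline on synthetic data.
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party package with a Conover post hoc test

rng = np.random.default_rng(0)
# Hypothetical initial and final RUIS scores per group (not study data).
eg1_pre, eg1_post = rng.integers(4, 14, 40), rng.integers(2, 12, 40)
eg2_pre, eg2_post = rng.integers(4, 14, 37), rng.integers(2, 12, 37)
cg_pre, cg_post = rng.integers(4, 14, 34), rng.integers(4, 14, 34)

# 1. Shapiro-Wilk normality check (non-normal data motivates rank-based tests).
print("Shapiro-Wilk p:", stats.shapiro(eg1_pre).pvalue)

# 2. Wilcoxon signed-rank test: initial vs. final assessment within each group.
for name, pre, post in [("EG1", eg1_pre, eg1_post),
                        ("EG2", eg2_pre, eg2_post),
                        ("CG", cg_pre, cg_post)]:
    w = stats.wilcoxon(pre, post)
    print(f"{name}: Wilcoxon p = {w.pvalue:.3f}")

# 3. Kruskal-Wallis H test: final scores across the three groups.
h = stats.kruskal(eg1_post, eg2_post, cg_post)
print(f"Kruskal-Wallis H = {h.statistic:.3f}, p = {h.pvalue:.3f}")

# 4. Post hoc Conover pairwise comparisons if the omnibus test is significant.
if h.pvalue < 0.05:
    print(sp.posthoc_conover([eg1_post, eg2_post, cg_post]))
```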
## 3. Results

Table 1 presents Wilcoxon test results and descriptive statistics for all measured variables across both experimental groups and the control group at the initial and final assessments.

Table 1 Comparative analysis of all measured variables for EG1 (n = 40), EG2 (n = 37), and CG (n = 34) at the initial and final assessments.
| Parameter | Statistic | EG1 initial | EG1 final | EG1 P value | EG2 initial | EG2 final | EG2 P value | CG initial | CG final | CG P value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RUIS | Med | 8.00 | 6.00 | <0.001* | 9.00 | 7.00 | 0.001* | 8.00 | 9.00 | 0.190 |
| | IQR | 6.00 | 5.00 | | 6.00 | 4.00 | | 7.00 | 5.00 | |
| GSES | Med | 7.00 | 7.00 | 0.231 | 7.00 | 7.00 | 0.019* | 7.00 | 7.00 | 0.147 |
| | IQR | 4.00 | 3.00 | | 3.00 | 3.00 | | 3.00 | 1.00 | |
| BDI-II | Med | 6.00 | 5.00 | <0.001* | 7.00 | 5.00 | <0.001* | 6.50 | 7.50 | 0.856 |
| | IQR | 9.00 | 7.50 | | 11.00 | 7.00 | | 11.00 | 9.00 | |
| KHQ–1 | Med | 16.66 | 16.66 | 0.146 | 16.66 | 33.33 | 0.113 | 16.66 | 33.33 | 8.817 |
| | IQR | 50.00 | 33.33 | | 50.00 | 33.33 | | 50.00 | 50.00 | |
| KHQ–2A | Med | 33.33 | 24.99 | 0.073 | 33.33 | 33.33 | 0.463 | 33.33 | 33.33 | 0.932 |
| | IQR | 33.33 | 50.00 | | 33.33 | 50.00 | | 33.33 | 40.97 | |
| KHQ–2B | Med | 5.55 | 0.00 | 0.089 | 33.33 | 0.00 | 0.322 | 11.11 | 22.22 | 0.297 |
| | IQR | 33.33 | 22.22 | | 0.00 | 22.22 | | 44.44 | 48.61 | |
| KHQ–2C | Med | 33.33 | 11.11 | 0.001* | 33.33 | 22.22 | 0.001* | 33.33 | 22.22 | 0.536 |
| | IQR | 50.00 | 22.22 | | 50.00 | 22.22 | | 35.41 | 31.94 | |
| KHQ–2D | Med | 22.22 | 16.66 | 0.444 | 22.22 | 33.33 | 0.327 | 22.22 | 44.44 | 0.255 |
| | IQR | 13.88 | 41.66 | | 22.22 | 50.00 | | 33.33 | 45.83 | |
| KHQ–2E | Med | 33.33 | 16.66 | 0.035* | 41.66 | 33.33 | 0.004* | 37.50 | 50.00 | 0.789 |
| | IQR | 41.66 | 50.00 | | 50.00 | 50.00 | | 47.91 | 31.25 | |
| KHQ–2F | Med | 33.33 | 33.33 | 0.699 | 33.33 | 41.66 | 0.405 | 33.33 | 41.66 | 0.663 |
| | IQR | 50.00 | 41.66 | | 33.33 | 41.66 | | 29.16 | 38.19 | |
| KHQ–2G | Med | 41.66 | 16.66 | <0.001* | 33.33 | 16.66 | <0.001* | 33.33 | 33.33 | 0.190 |
| | IQR | 41.66 | 33.33 | | 41.66 | 25.00 | | 41.66 | 31.25 | |
| KHQ–3 | Med | 6.00 | 2.00 | <0.001* | 6.00 | 2.00 | <0.001* | 6.75 | 4.50 | 0.609 |
| | IQR | 11.25 | 7.00 | | 6.00 | 6.00 | | 5.00 | 7.50 | |

*Statistical significance. EG1: experimental group 1; EG2: experimental group 2; CG: control group; Med: median; IQR: interquartile range; P: significance level (Wilcoxon test, initial vs. final assessment).

Table 2 presents the Kruskal–Wallis test results used to determine the differences between the three study groups at the final assessment.

Table 2 Comparison of all measured variables across the three study groups (EG1, EG2, CG) at the final assessment.

| Parameter | H statistic | P value |
| --- | --- | --- |
| RUIS | 5.066 | 0.079 |
| GSES | 4.120 | 0.127 |
| BDI-II | 0.166 | 0.920 |
| KHQ–1 | 1.479 | 0.473 |
| KHQ–2A | 1.318 | 0.517 |
| KHQ–2B | 8.211 | 0.016* |
| KHQ–2C | 7.785 | 0.020* |
| KHQ–2D | 7.762 | 0.020* |
| KHQ–2E | 9.046 | 0.010* |
| KHQ–2F | 1.331 | 0.513 |
| KHQ–2G | 10.457 | 0.066 |
| KHQ–3 | 3.957 | 0.138 |

*Statistical significance.

A statistically significant difference was found in the quality of life results obtained from the three study groups in the following domains: physical limitations (KHQ–2B), social limitations (KHQ–2C), personal relationships (KHQ–2D), and emotions (KHQ–2E).

In the next stage of the study, the authors performed a post hoc Conover–Iman test; the results are shown in Table 3.

Table 3 Post hoc Conover–Iman test.

| Domain | Comparison | Statistic | P value |
| --- | --- | --- | --- |
| KHQ–2B | EG1 vs. EG2 | 0.192 | 0.847 |
| KHQ–2B | EG1 vs. CG | 2.670 | 0.008* |
| KHQ–2B | EG2 vs. CG | 2.440 | 0.016* |
| KHQ–2C | EG1 vs. EG2 | 0.430 | 0.667 |
| KHQ–2C | EG1 vs. CG | 2.694 | 0.008* |
| KHQ–2C | EG2 vs. CG | 2.237 | 0.027* |
| KHQ–2D | EG1 vs. EG2 | 0.547 | 0.584 |
| KHQ–2D | EG1 vs. CG | 2.737 | 0.007* |
| KHQ–2D | EG2 vs. CG | 2.155 | 0.033* |
| KHQ–2E | EG1 vs. EG2 | 0.014 | 0.988 |
| KHQ–2E | EG1 vs. CG | 2.715 | 0.007* |
| KHQ–2E | EG2 vs. CG | 2.689 | 0.008* |

*Statistical significance.

The post hoc Conover–Iman test results showed no statistically significant differences between the experimental groups but did show statistically significant differences between each experimental group and the control group in all analyzed variables.

## 4. Discussion
The authors assessed the physical and psychosocial functioning of women with stress urinary incontinence following different physiotherapy treatment methods for UI.

The physical aspects were assessed using the Revised Urinary Incontinence Scale (RUIS), a reliable and valid instrument used to determine urinary incontinence symptoms and monitor patient response to treatment [9]. The RUIS scores showed a statistically significant improvement in urinary incontinence severity in both the PFMT group (EG1) and the ExMI group (EG2). Similar results were obtained in our previous studies [6, 8, 14]. The "severity measures" domain in Part 2 of the King's Health Questionnaire (KHQ–2G) was also used to assess incontinence severity, and the authors observed a statistically significant improvement in this domain following both PFMT and ExMI.

In our previous studies, we assessed myostatin concentrations after PFMT and ExMI in a group of older women with UI [6, 8]. Myostatin levels increase during periods of skeletal muscle inactivity, and inhibition of serum myostatin increases muscle strength and mass. Therapeutic interventions such as physical activity can suppress myostatin signalling and ameliorate the effects of advancing age on skeletal muscle mass and function. Some studies suggest that myostatin inhibits human urethral rhabdosphincter satellite cell proliferation; inhibition of myostatin function might therefore be a useful strategy for the treatment of stress UI. The results of these studies showed that effective PFMT and ExMI cause downregulation of myostatin concentration and an improvement in the severity of urinary incontinence in elderly women with stress UI [6, 8].

International Continence Society guidelines acknowledge that comprehensive patient care needs to take both physical and psychosocial perspectives into consideration [15]. The other parameters measured during this study were self-efficacy beliefs (GSES), depression symptoms (BDI-II), and quality of life (KHQ).

General self-efficacy is an essential mental resource that may influence behavioral determinants and consequently impact, either directly or indirectly, health-related behaviors. People with high self-efficacy beliefs are more likely to practice health-related behaviors because they strongly believe in their ability to overcome obstacles and achieve their goals [16]. Analyses of self-efficacy beliefs among UI patients may provide essential information on self-motivation and belief in the effectiveness of a planned intervention [17, 18]. The study results demonstrated a statistically significant increase in self-efficacy beliefs in the ExMI group (EG2), while no changes in self-efficacy beliefs were observed in the PFMT group (EG1). More importantly, relatively high self-efficacy beliefs were recorded in both experimental groups before the treatment. It can therefore be assumed that high self-efficacy beliefs motivated the patients to face the problem of urinary incontinence and participate in our study. We have also assessed self-efficacy after ExMI in women with UI in previous studies, which led to similar conclusions [6].

Among the comorbid conditions associated with urinary incontinence and frequently reported in the literature, depression emerges as the most debilitating mental health condition [19–22].
The Beck Depression Inventory administered to the PFMT group at the initial assessment yielded the following results: no depression in 22 patients (55%), moderate depression in 14 patients (35%), and severe depression in 4 patients (10%), whereas after the treatment the results were as follows: no depression in 29 patients (73%), moderate depression in 9 patients (22%), and severe depression in 2 patients (5%). The Beck Depression Inventory administered to the ExMI group at the initial assessment yielded the following results: no depression in 20 patients (54%), moderate depression in 14 patients (38%), and severe depression in 3 patients (8%), whereas after the treatment the results were as follows: no depression in 27 patients (73%), moderate depression in 8 patients (22%), and severe depression in 2 patients (5%). We observed a statistically significant decline in depressive symptoms in both experimental groups after the treatment.

Moreover, the King's Health Questionnaire was used to evaluate patients' quality of life. The tool was designed in 1997 by Dr. C. J. Kelleher and colleagues from the Department of Urogynaecology at King's College Hospital, London. Once the questionnaire's reliability and validity had been confirmed with standard psychometric techniques during six pilot studies, the final version of the tool was published in the British Journal of Obstetrics and Gynaecology in December 1997 [12]. The KHQ has proved to be an essential and reliable tool for evaluating the quality of life of female UI patients, and the instrument is also recommended by the European Clinical Practice Guidelines [13].

In this study, a statistically significant improvement in the quality of life was reported in both experimental groups in the following domains: "social limitations" (KHQ–2C), "emotions" (KHQ–2E), "severity measures" (KHQ–2G), and the "symptom severity scale" (KHQ–3). No statistically significant changes emerged in any of the measured variables in the control group. Comparative analysis of the three study groups showed statistically significant differences in the quality of life at the final assessment in the following domains: "physical limitations" (KHQ–2B), "social limitations" (KHQ–2C), "personal relationships" (KHQ–2D), and "emotions" (KHQ–2E). Additionally, the post hoc Conover–Iman test results showed no statistically significant differences between the experimental groups but did show statistically significant differences between each experimental group and the control group in all analyzed variables.

The study results demonstrated that both physiotherapy treatment methods are effective in the treatment of urinary incontinence; the authors observed an improvement in both the physical and psychosocial parameters. However, the authors want to highlight that there are more contraindications to ExMI therapy than to PFMT. It is also worth mentioning that two subjects withdrew from the ExMI therapy during our study due to discomfort experienced during the treatment.

## 5. Conclusions

(1) Pelvic floor muscle training and extracorporeal magnetic innervation proved to be effective treatment methods for stress urinary incontinence in women. The authors observed an improvement in both the physical and psychosocial aspects.
(2) In both experimental groups, there was a statistically significant decline in depressive symptoms following the treatment, together with an improvement in urinary incontinence severity and in quality of life in the following domains: "social limitations," "emotions," "severity measures," and "symptom severity scale."

(3) Moreover, higher self-efficacy beliefs were observed in the experimental group that received ExMI.

(4) Comparative analysis of the three study groups showed statistically significant differences in the quality of life at the final assessment in the following domains: "physical limitations," "social limitations," "personal relationships," and "emotions."

---
*Source: 1019872-2020-01-17.xml*
2020
# Coleopteran Antimicrobial Peptides: Prospects for Clinical Applications

**Authors:** Monde Ntwasa; Akira Goto; Shoichiro Kurata
**Journal:** International Journal of Microbiology (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101989

---

## Abstract

Antimicrobial peptides (AMPs) are activated in response to septic injury and have important roles in vertebrate and invertebrate immune systems. AMPs act directly against pathogens and have both wound healing and antitumor activities. Although coleopterans comprise the largest and most diverse order of eukaryotes and occupy an earlier branch than Drosophila in the holometabolous lineage of insects, their immune system has not been studied extensively. Initial research reports, however, indicate that coleopterans possess unique immune response mechanisms, and studies of these novel mechanisms may help to further elucidate innate immunity. Recently, the complete genome sequence of Tribolium was published, boosting research on coleopteran immunity and leading to the identification of Tribolium AMPs that are shared by Drosophila and mammals, as well as other AMPs that are unique. AMPs have potential applicability in the development of vaccines. Here, we review coleopteran AMPs, their potential impact on clinical medicine, and the molecular basis of immune defense.

---

## Body

## 1. Overview

Research on innate immunity has led to an accumulation of information that offers prospects for the development of antimicrobial therapeutic drugs and vaccines. The low rate of discovery of new antibiotics, the emergence of multiple-drug resistance, and the alarming death rate due to infection indicate a clear need for the development of alternative means to combat infections. A highlight of the 20th century was the discovery of vaccines that led to the eradication of diseases such as polio, smallpox, and others. Even after more than two decades, however, a vaccine against the highly mutable human immunodeficiency virus remains to be developed, illustrating the need for new strategies to produce vaccines. A better understanding of innate immunity has revealed important links between innate and adaptive immune systems that could lead to effective approaches in vaccine development.

Coleopterans comprise 40% of the 360,000 currently known insect species and are therefore the largest and most diverse order of eukaryotic organisms [1]. Tribolium, the coleopteran model, is proposed to be a better model than Drosophila, especially for evolutionary studies, as it is acknowledged to be the most evolutionarily successful metazoan and to be more representative of insects in general than Drosophila [1, 2]. Coleopterans, with no adaptive immunity, thrive on this planet. Studies of the molecular basis of coleopteran immunity could therefore lead to a better understanding of the evolution of the innate immune system. Much of the work on innate immunity and on the functional aspects of antimicrobial peptides (AMPs) has been performed using Drosophila, which represents the dipterans, while studies on coleopterans lag behind. Insects and humans share innate immunity, but humans also have adaptive immunity.
Some of the conserved molecular signaling pathways that are used by insects and humans for immune defense are also used for early embryonic development in insects, but there are notable differences, probably because the innate immune systems of invertebrates and vertebrates diverged some 800 million years ago, and adaptive immunity appeared in the vertebrate branch only about 500 million years ago [3, 4]. The divergence of dipterans and coleopterans occurred some 284 million years ago, and Drosophila, in the dipteran branch, exhibits a remarkably accelerated protein evolution [5]. Furthermore, despite these separate evolutionary paths, molecular coevolution could have occurred between coleopterans and mammals due to interdependence, that is, sharing common habitats and resources.

While the majority of the work on immunity has been conducted using Drosophila as a model, there is evidence that coleopterans have retained many ancestral vertebrate genes, suggesting that studies of coleopterans could provide more insight into the properties and evolution of innate immunity. For example, Tribolium has many ancestral genes that are present in vertebrates and absent in Drosophila [6]. Similarly, the sequenced Tribolium genome revealed that ancestral genes involved in cell-cell communication and development are retained in Tribolium, but not in Drosophila [2]. Furthermore, in homology searches, human genes compare significantly better with Tribolium than with Drosophila [5].

AMPs are small peptides characterized by an overall positive charge (cationic), hydrophobicity, and amphipathicity. Structurally, they fall into two broad groups: linear α-helical forms, and cysteine-containing forms with one or more disulfide bridges and β-hairpin-like, β-sheet, or mixed α-helical/β-sheet structures. The peptides assume these conformations upon contact with the target membranes [7–9]. Their characteristic physicochemical properties facilitate interactions with the phospholipid bilayer in the cell membranes of pathogens [10–12]. AMPs have been shown to kill pathogens directly by disrupting their membranes using mechanisms that are not fully understood; several models, however, have been proposed. First, in the "barrel-stave" model, amphipathic α-helical peptides create a transmembrane pore that disrupts the cell membrane of the pathogen. Second, the "carpet" model proposes that the peptides solubilize the membrane by interacting with the lipid head groups on the pathogen cell surface; this model has also been proposed for viral killing [13]. Another is the aggregation model, exhibited by sapecin from Sarcophaga peregrina, which is based on the existence of hydrophobic and hydrophilic domains on the AMPs. These structural features allow the peptides to form pores with hydrophilic walls and hydrophobic regions facing the acyl side chains of pathogen membrane phospholipids, thus facilitating movement of hydrophilic molecules through the pore [14]. Finally, the toroidal model, a subtle variation of the aggregation model, involves the formation of a dynamic lipid-layer core by hydrophilic regions of the peptide and lipid head groups and is induced by magainins, melittin, and protegrins [15–17]. While the indispensability of the structural features of cationic peptides in pathogen killing is under debate, charge differences between cationic peptides and lipids on the membrane are considered crucial.
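As a rough, self-contained illustration of these descriptors, the sketch below computes a peptide's formal net charge at neutral pH and its mean Kyte-Doolittle hydropathy. The example sequence is made up for demonstration, and real analyses use more refined pKa-based charge models.

```python
# Rough physicochemical profile of a peptide: net charge and mean hydropathy.
KD = {  # Kyte-Doolittle hydropathy index per residue
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def net_charge(seq: str) -> int:
    """Formal charge at ~pH 7: +1 per K/R, -1 per D/E (His treated as neutral)."""
    return sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")

def mean_hydropathy(seq: str) -> float:
    """Average Kyte-Doolittle hydropathy; positive values are more hydrophobic."""
    return sum(KD[r] for r in seq) / len(seq)

peptide = "KWKLFKKIGAVLKVL"  # hypothetical cationic example, not a published AMP
print(net_charge(peptide), round(mean_hydropathy(peptide), 2))  # 5, i.e. cationic
```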
This charge difference may be the basis for their selective activity: nonhemolytic peptides have a high net positive charge distributed along the peptide length, whereas hemolytic peptides have a low negative charge [10, 11]. Evidence suggests that AMPs have intracellular targets. This is exemplified by elafin, a cationic and α-helical human innate defense AMP that does not lyse the bacterial membrane and is translocated into the cytoplasm; in vitro analysis using a mobility shift assay revealed that elafin binds DNA [18]. The histone-derived peptide buforin II binds nucleic acids in gel retardation assays and rapidly kills Escherichia coli by translocating into the cytoplasm of the pathogen and probably interfering with the functions of DNA or RNA, whereas the structurally similar magainin 2 also kills E. coli but does not enter the cytoplasm [19]. Similarly, cationic antibacterial peptides enter the cytoplasm of Aspergillus nidulans and kill the fungus by targeting intracellular molecules whose identity has not been verified [20]. An excellent review of the intracellular targets of AMPs was recently published [21]. More studies are required, however, to confirm the existence and actual mode of action of AMPs with intracellular targets.

Insects produce AMPs constitutively at local sites, or the AMPs are released systemically upon pathogenic infection to initiate pathogen-killing activities. In addition to the well-characterized Drosophila and mouse innate immune signaling pathways, the sequencing of the Tribolium genome has boosted research progress because bioinformatics analyses revealed putative immune-related genes based on comparisons with the genomes of other species [22].

AMPs are multifunctional molecules that, in addition to their well-known role as effectors of the innate immune system, are involved in several biologic processes and pathologic conditions, such as immune modulation, angiogenesis, and cytokine and histamine release [23–27]. Probably due to the negative charge in the plasma membrane of many cancer cells, some cationic peptides also have anticancer activity [28, 29]. These properties can potentially be exploited for clinical purposes [12, 30]. Cecropins are selectively cytotoxic to cancer cells, preventing their proliferation in bladder cancer, and are therefore likely candidates in strategies for the development of anticancer drugs [31]. In addition to their antimicrobial activity, defensins facilitate the induction of adaptive immunity and promote cell proliferation and wound healing. Defensins show chemotactic activity whereby dendritic cells, monocytes, and T cells are recruited to the site of infection. Moreover, human β-defensins and the cathelicidin LL-37 stimulate the production of pruritogenic cytokines, such as interleukin-31, leukotrienes, prostaglandin E2, and others, suggesting an important role in allergic reactions [32–34]. AMPs also form the basis of the potentially lucrative commercial area of "cosmeceuticals": products with beneficial topical activities that are delivered by rubbing, sprinkling, spraying, and so forth [35].

Here, we review the progress made in the discovery of coleopteran AMPs, the molecular basis of Tribolium innate immunity, and prospects for the application of antimicrobial peptides in medicine.

## 2. The Discovery Process

### 2.1. Antimicrobial Peptides in Tribolium

The first wide-scale study of Tribolium immunity was conducted by Zou et al. in 2007 [22].
Taking advantage of the fully sequenced Tribolium genome to predict putative immune genes using bioinformatics techniques and real-time polymerase chain reaction (PCR), Zou et al. [22] predicted 12 AMPs in Tribolium, compared to 20 in Drosophila, the most studied invertebrate. Another study using suppression subtractive hybridization added a few more AMPs to this list [36] (see Table 1). Both studies identified four defensins in Tribolium, and phylogenetic analysis indicated that three of these are found in the evolutionary branch comprising only coleopterans. The fourth defensin (Def4) is found in a mixed branch that includes hymenopterans. A search of the Defensins Knowledgebase [37] revealed that the sequence information for this defensin is not available in the public domain, although its existence has been reported [22]. Attacins, which were first identified in lepidopterans, were found in a cluster of three genes. Attacins are rich in glycine and proline, are structurally similar to coleoptericins, and are inducible by bacteria. Furthermore, Drosophila studies demonstrated that the induction of attacin is reduced in both imd and Tl− mutants [38]. Coleoptericins were first isolated from the larvae of Allomyrina dichotoma beetles immunized with E. coli. Coleoptericins also show activity against Staphylococcus aureus, methicillin-resistant S. aureus, and Bacillus subtilis. Like attacins, but unlike cecropins, coleoptericins do not form pores in the bacterial membrane but do cause defects in cell division: liposomes containing E. coli or S. aureus membrane constituents do not leak upon treatment with recombinant coleoptericin, but instead form chains [39].

Table 1 Antimicrobial peptides currently predicted or identified in Tribolium.

| Antimicrobial peptide | Accession number | Reference | Target | Method of identification |
| --- | --- | --- | --- | --- |
| Attacin1 | GLEAN_07737 | [22] | | Homology searches |
| Attacin2 | GLEAN_07738 | [22] | | Homology searches |
| Attacin3 | GLEAN_07739 | [22] | | Homology searches |
| Cecropin1 | GLEAN_00499 | [22, 31] | Antibacterial, antitumor | Homology searches |
| Cecropin2 | Cec2 | [22, 31] | Antibacterial, antitumor | Homology searches |
| Cecropin3 | GLEAN_00500 | [22, 31] | Antibacterial, antitumor | Homology searches |
| Defensin1 | GLEAN_06250; XM_962101 | [22, 36] | Antibacterial | Homology searches and suppression subtractive hybridization |
| Defensin2 | GLEAN_10517; XM_963144 | [22, 36] | Antibacterial | Homology searches and suppression subtractive hybridization |
| Defensin3 | GLEAN_12469; XM_968482 | [22, 36] | Antibacterial | Homology searches and suppression subtractive hybridization |
| Defensin4 | Def4 | [22] | | Homology searches |
| Coleoptericin1 | GLEAN_05093 | [22] | Antibacterial | Homology searches |
| Coleoptericin2 | GLEAN_05096 | [22] | Antibacterial | Homology searches |
| Similar to thaumatin family | XM_963631 | [36] | Antifungal | Suppression subtractive hybridization |
| Probable antimicrobial peptide | Tc11324 | [22] | | Homology searches |
| Putative antimicrobial peptide | AM712902 | [36] | | Suppression subtractive hybridization |

Tribolium cecropins are predicted to be pseudogenes because of a shift in the open reading frame; some cecropin-related proteins with an unusual structure, however, have been reported [22]. A cecropin has been reported in at least one coleopteran, Acalolepta luxuriosa [44].

Four thaumatin-like genes were found in Tribolium using suppression subtractive hybridization and a genome search. Experimentally, septic injury induces thaumatin-1 and defensins in Tribolium [36]. Sterile wounding also induces thaumatin-1 and defensin-2. Furthermore, recombinant thaumatin-1 heterologously overexpressed in E. coli is active against fungi [36].
Coleopteran cationic peptides might be remarkably different from other known peptides and are therefore not readily identified by homology searches. A clear homolog of the Drosophila antifungal drosomycin could not be found in the Tribolium genome, but a weakly homologous protein with a cysteine-rich sequence was detected [22]. An overview of Tribolium AMPs indicates similarities with other coleopterans, but some differences with Drosophila. The work reported by these groups provides a good basis for advancing research on coleopteran AMPs.

### 2.2. Other Antimicrobial Peptides Identified in Coleopterans

A number of AMPs present in certain coleopterans have not yet been identified in Tribolium (see Table 2). One of these is an interesting class of insect peptides that adopts the knottin fold, first identified in 2003 in the harlequin beetle, Acrocinus longimanus. Members of this class include Alo-1, Alo-2, and Alo-3 [42]. Psacotheasin from the yellow star longhorn beetle Psacothea hilaris has also been identified as a member of this class [43, 45]. Alo-3 is active against fungi, while psacotheasin is active against both bacteria and fungi. The knottin fold is characterized by a disulfide topology of the "abcabc" type, in which disulfide bridges are formed between the first and fourth, the second and fifth, and the third and sixth cysteines [46]. Disulfide bridge formation may confer important properties on the peptides, such as stability and resistance to protease cleavage. Members of the knottin family in general have low sequence similarity, reducing their chances of identification by homology searches [46]. In contrast, however, the coleopteran knottin fold AMPs share sequence similarities with several plant antifungal peptides [42]. Although the mechanism by which these peptides function is not fully understood, psacotheasin kills Candida albicans by inducing apoptosis [47]. This has clinical significance, as C. albicans can cause mild superficial to severe infections in immunocompromised patients. A better understanding of the molecular events that are critical to the induction of apoptosis by cationic peptides could lead to new targets for antifungal drug development. Alarmingly, candidemia, a systemic Candida infection, is on the increase and is accompanied by the reemergence of resistance against common drugs, pointing to the urgency of finding alternative means of treating fungal infections [48, 49].

Table 2 Antimicrobial peptides expressed in other coleopterans not yet identified in Tribolium.

| Antimicrobial peptide | Organism | Accession no. | Reference |
| --- | --- | --- | --- |
| Diptericin A | S. zeamis (G. morsitans) | Q8WTD5 | [40] |
| Acaloleptin A | S. zeamis (A. luxuriosa) | Q76K70 | [40] |
| Sarcotoxin II-1 | S. zeamis (S. peregrina) | P24491 | [40] |
| Tenecin-1 | S. zeamis (T. molitor) | Q27023 | [40, 41] |
| Tenecin-2 | T. molitor | | [41] |
| Luxuriosin | S. zeamis (A. luxuriosa) | Q60FC9 | [40] |
| Alo-3 (knottin type) | A. longimanus | P83653 | [42] |
| Psacotheasin (knottin type) | P. hilaris | | [43] |

### 2.3. Databases

The Antimicrobial Peptides database, a comprehensive and searchable database for AMPs, was established based on information from literature surveys [50, 51]. Currently, an updated version of the website indicates that there are 1773 cationic peptides in the database, including antiviral (5.8%), antibacterial (78.56%), antifungal (31.19%), and antitumor (6.14%) peptides; some of these peptides are active against more than one type of pathogen. The structures of 231 of these peptides have been determined by nuclear magnetic resonance and X-ray diffraction studies.
Another useful database is the Defensins Knowledgebase, which allows text-based searches for information on this large family of AMPs [37]. It is a manually curated and specialized database, similar to the shrimp penaeidin database, PenBase [52]. We have also started molecular studies of another coleopteran, the dung beetle Euoniticellus intermedius, and have sequenced the adult transcriptome with a view to studying its immune system [53]. These databases serve as useful tools for the discovery and design of new peptides. Indeed, key features upon which antimicrobial activity is based have been studied using the Antimicrobial Peptides database [54, 55]. Such analyses generate an important information pool for drug design.
## 3. Regulation of AMP Expression by Coleopterans

The signaling pathways that mediate the immune response in Tribolium castaneum were initially predicted from a combination of in silico studies and experimental work by Zou et al. [22] and, more recently, from another study involving the burying beetle Nicrophorus vespilloides [56]. In addition, studies using adult beetles exposed to E. coli, M. luteus, C. albicans, and S. cerevisiae have provided information on the signaling pathways. Accordingly, large-scale studies using real-time PCR revealed the presence of innate immune genes, such as PGRP-LA, PGRP-LE, PGRP-SA, PGRP-SB, several Toll proteins, and the immune deficiency (IMD) protein. Notably, some of the PGRPs had no orthologs in Drosophila, indicating a diversity of specificity. Recent biochemical studies using the large beetles Tenebrio molitor and Holotrichia diomphalia further elucidated the extracellular signaling network involved in responses to fungal and bacterial infections [41, 57].
Overall, coleopteran signaling appears to occur via the Toll and IMD pathways (Figure 1).

Figure 1 Activation mechanisms in the coleopteran immune system. Immune response pathways activated by bacteria and fungi, showing a pathogen-associated molecular pattern (PAMP), pattern recognition receptors (PRRs), and downstream signaling molecules. The protease cascade in the Toll pathway involves the apical modular serine protease (MSP), the Spätzle-processing-enzyme-activating enzyme (SAE), and the Spätzle-processing enzyme (SPE). GNBP3: glucan binding protein 3; PGRP: peptidoglycan recognition protein.

The Toll pathway is activated by PAMPs such as β-1,3-glucans, found in fungi, and Lys-type peptidoglycans (PGN), found primarily in Gram-positive bacteria. A complex of the PAMPs and pattern recognition receptors (PRRs) activates an apical protease, leading to a three-step serine protease cascade that culminates in the generation of active Spätzle, the ligand of the transmembrane receptor Toll. Subsequent intracellular signaling leads to the transcriptional activation of genes that encode antimicrobial peptides.

Activation of the immune response by DAP-type PGN, found primarily in Gram-negative bacteria and Gram-positive bacilli, is still poorly understood in flies and beetles. Generally, it is understood that Gram-negative bacteria require the IMD pathway, because imd− mutants cannot express antimicrobial peptides against Gram-negative bacteria. In Drosophila, the candidate receptors for the signal transduction activated by Gram-negative bacteria are the transmembrane receptor PGRP-LC and PGRP-LE; both molecules can activate the IMD pathway [3, 58]. Because these molecules are present in beetles and PGRP-LE is orthologous to the Drosophila protein, it is likely that the corresponding pathways are conserved. In Tribolium, PGRP-LA and PGRP-LE are activated by bacterial infection, but poorly activated by C. albicans and M. luteus [22]. Other Tribolium studies show that the IMD pathway is activated by two Gram-negative bacteria, Xenorhabdus nematophila and E. coli, inducing 12 AMPs, of which 5 are significantly dependent on the IMD pathway, as demonstrated by RNA interference studies [59]. The same study, however, demonstrated that two Gram-positive bacteria with different peptidoglycans induced the same AMPs, with only defensin-1 being dependent on Toll. Taken together, these studies show that while the pathways may be conserved, differences in PAMP recognition and signal transduction exist between Tribolium and Drosophila.

The discovery of another PRR known as the LPS recognition protein (LRP), based on its E. coli-agglutinating properties, suggests the existence of an LPS pathway. LRP circulates in the hemolymph and does not agglutinate S. aureus or C. albicans. Interestingly, LRP comprises six repeats of an epidermal-growth-factor- (EGF-) like domain, an unusual structural feature for PRRs [60]. The downstream events in this pathway remain unclear.

## 4. Antimicrobial Peptides in Clinical Medicine

Cationic peptides have emerged as important candidates for the development of therapeutics against bacteria, fungi, viruses, and parasites. They are key effector molecules in host defense through direct and indirect antimicrobial activity. Furthermore, in vertebrates, these peptides mediate a variety of cellular processes such as immunomodulation, wound healing, and tumorigenesis. These roles provide opportunities for the development of therapeutic products and vaccines.
AMPs are attractive molecules for the development of clinical and veterinary therapeutics because they are fast acting and effective against susceptible pathogens, are less likely than traditional antibiotics to provoke the emergence of resistance, have low toxicity to mammalian cells, and act through a mode that is physical rather than targeted at metabolic pathways. A search of the FreePatentsOnline database using the term “antimicrobial peptide” produced more than 66,000 hits, and a number of AMPs have undergone clinical development [30]. A recent review of cationic peptides lists the peptides that are in various stages of clinical trials [29].

As mentioned above, the predicted Tribolium AMPs include defensins, attacin, coleoptericin, thaumatin, and cecropin. Defensins exhibit a broad spectrum of antimicrobial activity directed at bacteria, fungi, and viruses and are probably the most studied class of AMPs; many therapeutic products have been modeled on them. The different types of defensins are either expressed constitutively or induced by infection to control the composition of microorganisms on surfaces such as the small and large intestines [61].

Many challenges remain that hamper the development of commercially viable peptides. The most pressing issues concern pharmacokinetics (how the body handles peptide drugs). When peptides are administered orally, they may be degraded in the gastrointestinal tract, preventing their absorption into the systemic circulation. Furthermore, peptides may elicit an antigenic response when injected directly into the blood. This leaves topical application as the most feasible formulation while research to address the remaining obstacles continues. Despite these obstacles, the prospects for AMPs are not bleak, because some have proceeded to clinical application. There is some optimism that these obstacles may soon be overcome by new strategies that combine natural cationic peptides and stable synthetic immunomodulatory peptides [29]. In this regard, peptide drugs such as polymyxin B and gramicidin, used for the treatment of Gram-negative bacterial infections, are reported to be safe and effective, and peptides such as the indolicidin-derived CLS001 (previously known as MX594AN) have reached phase III clinical trials with promising prospects [62–64]. Because of their evolutionary distance from vertebrates, over which their survival against microbes has depended solely on innate immunity, insects provide interesting models for novel AMP drug design [65, 66].

## 5. Conclusions

The emergence of multidrug-resistant pathogens threatens human health globally and presents an urgent need for antimicrobials with a reduced chance of inducing resistance. Cationic peptides, whose mechanism of action involves nonspecific targeting of the plasma membrane rather than specific proteins, offer good prospects. Admittedly, more work is needed to elucidate the mechanism of action of these peptides, as there is some evidence for intracellular targets. The importance of cationic peptides is further highlighted by their emerging prospects in other areas of medicine, such as cancer treatment and vaccine development. Coleopterans are the most evolutionarily successful group of insects and are more representative of insects than Drosophila. In addition, human genes are more comparable to those of Tribolium than to those of Drosophila.
Thus, coleopterans are emerging as an important group for study because, like vertebrates, they have retained ancestral genes that are not present in Drosophila. Indeed, there is overwhelming evidence that coleopterans are more suitable than the commonly used dipterans for comparative studies between phyla. Here, we suggest that the outstanding evolutionary success of coleopterans is consistent with a robust immune system that warrants more attention than it has received to date.

---
*Source: 101989-2012-03-01.xml*
# Study on the Application of Chinese Traditional Visual Elements in Visual Communication Design

**Authors:** Yuming Zheng
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1020033

---

## Abstract

This study analyzed data collected from field research and semantic differential (SD) questionnaires on the application of traditional elements in Aba Prefecture. It adopted a multidisciplinary research method integrating linguistics, architecture, art, and statistics to discuss issues related to the cultural industry park and to analyze the expression of traditional art elements in the park's visual communication. First, based on the photographs taken, we analyzed the planar layout of typical scenes in which traditional elements are applied, including the types, colors, and materials of those elements. Second, the SD questionnaire results are presented and analyzed. The respondents' composite scores show that the application of traditional elements affects the architectural landscape in different ways. The main results are as follows: (1) the public has a positive attitude toward the expression of traditional culture in the planning and construction of the park, and scenes rich in traditional elements play a positive role in transmitting traditional culture and creating a traditional atmosphere; (2) displaying traditional architectural styles and landscape layouts in a reproducible way can set off the traditional cultural atmosphere more intuitively; (3) the texture of a stone wall follows certain rules and principles: the arrangement and combination of stones is a plane composition, so the visual characteristics of a wall can be analyzed in terms of the elements of plane composition. Analyzed in this way, a stone masonry wall can be divided into visual elements and relational elements; the visual elements are the size, shape, color, and texture of the image, while the relational elements describe position and arrangement, including orientation, location, and center of gravity.

---

## Body

## 1. Introduction

The protection and utilization of traditional culture have far-reaching significance for the development of China's cultural industry and society [1]. The cultural industry park not only undertakes the economic function of industrial agglomeration and development but also serves as a display window for urban regional culture, making it important for carrying out cultural activities, attracting consumer groups, and creating a cultural atmosphere. In the construction of cultural industry parks, how to apply traditional art elements reasonably and skillfully in park design, and how to fully display traditional material and nonmaterial culture, have gradually become a focus for researchers [2]. Therefore, to better protect and rationally utilize traditional culture, it is necessary to study the effect of applying traditional art elements in the planning of cultural industry parks [3]. The role of visual communication design in the design of cultural industrial parks is becoming more and more obvious [4].
Studying the visual effects of traditional art elements can reveal the development trends and focus of traditional art elements in the planning of cultural industrial parks, and thus provide a basis for formulating reasonable planning and management guidance schemes for such parks. In this study, traditional visual elements in visual communication design were examined mainly by analyzing the color, composition, type, location, and spatial relationships of visual communication elements in the park, and then discussing the necessity of using visual communication elements in the design of cultural industrial parks, providing references for parks to explore cultural connotations in depth and to integrate traditional elements into modern design.

Traditional art elements are the unique precious wealth and cultural heritage of Chinese traditional culture, which cannot be replaced by other art forms [5]. Traditional art elements come from traditional culture and can directly reflect it. Therefore, how to apply traditional art elements to contemporary design and create a diversified design trend has become a new topic for designers. In this study, traditional art elements mainly refer to content that can be applied to the visual communication design of cultural industrial parks [6]. A cultural industry park is a specific carrier of the cultural industry, which usually develops in the form of parks or blocks. China's cultural industry parks tend to form cultural industry clusters to enhance their competitiveness. At present, there are three main types of cultural industrial parks in China: cultural theme parks, cultural and creative blocks, and cultural heritage gathering areas [7].

Visual communication is considered the most natural way for people to convey information to each other [8]. According to Heller and Philip, viewed from its historical development, visual communication design is to a large extent the printing art design (graphic design) that emerged in Europe and the United States in the mid-19th century [9]. Visual perception is a subjective way of expressing and interpreting thinking, which interprets information obtained through the human eye. As to its purpose, visual communication design takes into account the acceptance of the target audience and combines words, symbols, colors, images, and so on to transmit information to that audience through appropriate visual media [10]. There are four main factors and basic principles of visual communication design: (1) text and its design principles — writing is a system of symbols that humans use to communicate, the written form that records thoughts and events; (2) color and its design principles — colors can symbolize specific emotions and connotations, and the meaning of a color changes across cultures; (3) logo and its design principles — logo design is an important part of visual communication design; (4) placement and its design principles.
The placement of visual communication media (e.g., signage and billboards) plays an important role in attracting attention.

The expression of traditional art elements in architectural practice takes three forms: the expression of traditional elements' "shape," the expression of traditional elements' "environment," and the expression of traditional elements' "meaning" [11]. In contemporary architectural design, "shape" refers to obtaining a visual experience identical or similar to that of traditional buildings by drawing on traditional architectural forms and compositions, building materials, and architectural colors [12]. "Environment" refers to constructing modern architectural spaces and circulation by drawing on traditional architectural space combinations and circulation organization to create a traditional spatial atmosphere and situation. "Meaning" does not pursue similarity to the external form of traditional architecture but focuses on expressing its creative concept and artistic conception [13].

In landscape design, Chinese classical gardens simultaneously show the essence of traditional philosophy and traditional Chinese culture. The long-term development of Chinese culture is accompanied by its ideological foundation, the ideological spirit of Chinese culture: implicit, introverted, broad, and profound. At the same time, traditional cultural elements are carriers of humanistic thought, and people's pursuit of harmony, moderation, and simplicity is consistent with their understanding of culture, nature, and philosophy [14].

The Japanese architect Kisho Kurokawa believes that each country's culture has its own uniqueness and vitality, and he applies this philosophy to urban planning and design [15]. According to the Indian architect Kriya, third-world countries have a good environment, which includes a balanced ecology, recycling of used products, a proper way of life, and indigenous construction techniques. As far as landscape design is concerned, design should help an area form a specific urban connotation, and the religious beliefs and the understanding of time and space formed by India's unique local culture are distinctive features of Kriya's architectural design [16]. Weeks and Grimmer describe the U.S. Department of the Interior's basic conservation strategy for historic relics. There are many types of historical relics, for example, historical buildings (residential houses, courts, town halls, commercial buildings, etc.), landscape designs, and natural scenery [17]. English Heritage is the body that manages most of England's famous ancient and modern cultural buildings. It provides advice and opinions on the protection of historical buildings and landscapes to local planning departments and governments, and it advocates that any restoration and protection of historical buildings should minimize damage to the historical context and texture of the city [18].

Chinese traditional culture can be intuitively expressed through the application of traditional art elements. In contemporary China, the cultural economy has the potential to promote urban development. As a specific medium for cultural industries, cultural industry parks can play an important role in conveying traditional culture and urban connotation. Visual communication design is a combination of ideas and media [19].
It continues to evolve with the development of society. Visual communication design itself carries social significance, economic benefits, and cultural connotations; its humanistic value is manifested in satisfying people's living needs. National cultural symbols are the crystallization of human consciousness and traditional cultural spirit [20]. The re-innovation and reuse of national symbols can better reflect and convey Chinese folk culture. For example, the abstraction and redesign of traditional elements such as New Year pictures, paper-cuts, wood carvings, and shadow puppets can be better combined with modern design [21].

Graphics are one of the elements of visual communication design, and researchers can study visual communication design through the perceptible conceptual elements in graphics. The most basic elements of the graphic language are points, lines, and planes [22]. A point, from a geometric point of view, has only position, but in the concept of graphic language a point has not only position but also size and shape. The smaller the point, the stronger the feeling of a point; conversely, the larger the point, the more it reads as a plane [23]. Points cover elements with location, such as pavilions, sculptures, trees and stones, benches, and steps, and they serve two functions, functional and decorative. Exploiting the focal character of a point, a point scene can be placed in the middle of an intersection, at the corner of a green space, at the end of a road axis, or at the center of a square, and its character can be highlighted through symmetry, contrast, repetition, and so on [24]. Curves are often used in entertainment and leisure landscape design because their free and smooth form creates an elegant and romantic atmosphere. Planes usually include buildings, squares, lakes, trees, lawns, and the like. Geometric planes have a strong sense of order and are generally used in monumental landscapes, such as Tiananmen Square, to create a solemn atmosphere. Irregular planes coordinate easily with buildings and roads and can bring a feeling of liveliness and freedom [25].

The Tibetan blockhouses in Aba Prefecture have strong regional characteristics, and the core of the blockhouse is its stone masonry craftsmanship, which is unique among traditional Chinese construction techniques. The materials, tools, and construction techniques of stonework in the Tibetan watchtowers of Aba Prefecture all reflect the construction wisdom of the Tibetan people, and the use of stone in these houses follows the constructional logic of the material. The architectural cultural landscape is the carrier of various regional cultures, a natural fusion of environmental, humanistic, and historical-cultural elements; conversely, local culture is the gene of the architectural cultural landscape. The Tibetan watchtower in Aba Prefecture is a beautiful cultural landscape. Under the impact of modern craftsmanship, materials, and lifestyles, however, the cultural landscape of the Kangzang area of Aba Prefecture is gradually disappearing: traditional buildings are being demolished and remodeled, and modern materials are being integrated with, and even replacing, the materials used in traditional buildings.
As for inheritance, stone craftsmanship is passed down largely through oral instruction and hands-on demonstration, but such methods of transmission are disappearing, and many excellent stone craftsmen face various difficulties.

In this study, taking cases in Aba Prefecture that display regional cultural characteristics as the research object, the layout of traditional elements in the plane is analyzed graphically, the public's satisfaction with the application of traditional elements in the study area is analyzed through SD questionnaires, and the application of traditional art elements is summarized. Section 2 introduces the main methods used. Section 3 carries out the graphical expression and SD questionnaire analysis of the Aba Prefecture case to evaluate the public's satisfaction with the application of traditional art elements in the cultural industry park. First, according to plane composition theory, the planar layout of traditional elements in the area is analyzed in terms of points, lines, and planes. Second, 8 scenes in the case area were selected for photography, evaluation factors and an evaluation scale were established, and the SD questionnaire was administered; a total of 81 valid questionnaires were collected. Through data entry, analysis, and graphic display, we learned that the public evaluates the application scenarios of different traditional elements differently; generally speaking, the public tends to affirm scenes with traditional architectural styles and classical garden landscape layouts. Finally, we summarize the main conclusions.

## 2. Study Area and Methods

Aba Prefecture is located in the northwest of Sichuan Province (Figure 1), between 30°35′–34°19′ N and 100°31′–104°27′ E, with a total area of about 84,200 square kilometers, accounting for about 17% of the total area of Sichuan Province. The site lies on the southeastern edge of the Qinghai-Tibet Plateau, is part of the Hengduan Mountains [26], and sits between the Qinghai-Tibet Plateau and the Chengdu Plain. Aba Prefecture is one of the most important water sources in the upper reaches of the Yangtze and Yellow Rivers; it is the only place where the Yellow River flows through Sichuan Province, and the Minjiang River, one of the most important tributaries of the upper Yangtze, also originates here. It borders Qinghai and Gansu in the north; Mianyang, Deyang, and Chengdu in the east; Ya'an in the south; and Ganzi in the west, as shown in Figure 1.

Figure 1: Location of Aba Prefecture.

According to the traditional Tibetan division, the stone-built watchtowers of Aba Prefecture are located in Amdo and Kham, two of the three major Tibetan areas (Amdo, Kham, and Ü-Tsang). Tibetan dwellings are in the majority, with some Qiang dwellings distributed in Li County, Mao County, and Wenchuan County. There are also branches of Tibetans in this area: the Amdo Tibetan area is mostly Baima Tibetan, while the Kham Tibetan area is mostly populated by the Jiarong Tibetans.
At the same time, some areas of Rangtang County belong to the Jiarong Tibetan area and other areas belong to the Amdo Tibetan area; apart from the areas in the Kham Tibetan area, some parts of Li County belong to the Qiang area [27].

The prefecture has 1 city and 12 counties, namely, Malkang City and Jiuzhaigou, Xiaojin, Aba, Ruoergai, Hongyuan, Rangtang, Wenchuan, Li, Mao, Songpan, Jinchuan, and Heishui Counties, with a total of 223 towns and 1354 administrative villages; the administrative center is Malkang City [28].

Aba Prefecture has a vast land area and a total population of only 919,500, so population density is low and the distribution is extremely uneven. Of this total, the agricultural population is 710,000, the nonagricultural population is 209,500, and the urbanization rate is 37.86%. Tibetans account for 58.36% of the total population, followed by Han (19.69%) and Qiang (18.55%); other ethnic minorities account for about 3.4%. The continuous optimization of the industrial structure promotes the sound development of the regional economy [29]. The per capita income of residents in Aba continues to increase, government finances are moving further toward a virtuous circle, the regional economy is developing well, and comprehensive economic strength continues to rise. According to the 2017 statistical bulletin on national economic and social development issued by the Aba government, the prefecture's GDP reached 29.516 billion yuan, of which the primary industry contributed 7.248 billion yuan of output value and 12.0% of economic growth; the secondary industry contributed 11.434 billion yuan of output value and 80.8% of economic growth; and the tertiary industry contributed 10.738 billion yuan of output value and 7.2% of economic growth. Per capita GDP was 31,487 yuan, about 70% of the Sichuan per capita figure. In terms of transportation construction, by the end of 2017 the total mileage of highways in Aba Prefecture had reached 13,454 kilometers, and total government investment in transportation during the "13th Five-Year Plan" period was as high as 10.1 billion yuan [30]. At present, there are 2357.313 kilometers of trunk highways in the prefecture, including 11 national highways and 2 provincial highways. As far as overall traffic construction is concerned, there are still shortcomings, such as a small number of high-grade highways and poor traffic conditions. In 2017, although road freight traffic in Aba Prefecture increased, passenger turnover and cargo turnover decreased by 22.9% and 5.8%, respectively, compared with the previous year [31].

The SD questionnaire method was used to study the public's views on, and satisfaction with, the application of traditional art elements in the cultural industry park.
The SD (semantic differential) method is a method for evaluating the connotation and meaning of research objects, and it can be used to assess the views, attitudes, and thoughts of respondents [32].

Referring to adjective pairs (such as "open-closed") often used in the architectural literature, 19 pairs of adjectives were selected for the investigation of Aba Prefecture. The 19 pairs were divided into five aspects: spatial layout characteristics (4), architectural style (4), landscape design (2), psychological experience (7), and visual experience (2) (the number of adjective pairs per aspect in parentheses). In the questionnaire design, the researchers randomly arranged and combined the adjective pairs to obtain the respondents' subjective evaluation of the research content. The adjective pairs and corresponding evaluation factors are shown in Table 1. Regarding the choice of evaluation scale: generally speaking, a scale with fewer than 5 levels is too coarse, which easily biases the evaluation results. To make the grades easy for respondents to understand and identify while ensuring evaluation accuracy, the SD questionnaire in this study uses a 5-level scale, with scores −2, −1, 0, 1, and 2, symmetric about the center 0 (Table 2).

Table 1 Adjective pairs and evaluation factors of SD analysis.

| Adjective pair | Evaluation indices | Evaluation object |
|---|---|---|
| Open-closed | Sense of space | Spatial layout features |
| Orderly-messy | Sense of order | Spatial layout features |
| Attractive-resistant | Attractiveness | Psychological feeling |
| Vivid-rigid | Vitality | Psychological feeling |
| Staggered-flush | Staggered degree | Architectural style |
| Vegetation rich-vegetation monotonous | Vegetation richness | Landscape design |
| Quiet-noisy | Quietness | Psychological feeling |
| Novel-ordinary | Novelty | Architectural style |
| Traditional-modern | Traditionality | Architectural style |
| Public-secret | Publicity | Psychological feeling |
| Relaxed-tense | Relaxation | Psychological feeling |
| Safe-dangerous | Security | Psychological feeling |
| Pleasant-unpleasant | Pleasure | Psychological feeling |
| Bright-dim | Brightness | Visual feeling |
| Diverse-singular | Diversity | Architectural style |
| Colorful-monotonous | Color richness | Visual feeling |
| Coordinated-unbalanced | Coordination | Spatial layout features |
| Clean-dirty | Cleanliness | Landscape design |
| Easily identifiable-not easily identifiable | Recognizability | Spatial layout features |

Table 2 Rating scale.

| Adjective (positive) | Very much | Well | Moderate | Well | Very much | Adjective (negative) |
|---|---|---|---|---|---|---|
| Score | 2 | 1 | 0 | −1 | −2 | |

The collected questionnaire data were entered into Microsoft Excel for calculation, and the average score of each evaluation factor was computed from the respondents' scores. Let the total number of respondents be $R$, the respondents be $r_1, r_2, r_3, \ldots, r_R$, and let $Q_j^{r_i}$ denote the score given by respondent $r_i$ to question $j$; the individual scores are

$$
\begin{aligned}
&Q_1^{r_1}, Q_2^{r_1}, Q_3^{r_1}, \ldots, Q_{19}^{r_1},\\
&Q_1^{r_2}, Q_2^{r_2}, Q_3^{r_2}, \ldots, Q_{19}^{r_2},\\
&Q_1^{r_3}, Q_2^{r_3}, Q_3^{r_3}, \ldots, Q_{19}^{r_3},\\
&\qquad\vdots
\end{aligned}
\tag{1}
$$

The average scores for each question are as follows:

$$
\bar{Q}_1 = \frac{Q_1^{r_1} + Q_1^{r_2} + Q_1^{r_3} + \cdots + Q_1^{r_R}}{R}, \quad
\bar{Q}_2 = \frac{Q_2^{r_1} + Q_2^{r_2} + Q_2^{r_3} + \cdots + Q_2^{r_R}}{R}, \quad \ldots, \quad
\bar{Q}_{19} = \frac{Q_{19}^{r_1} + Q_{19}^{r_2} + Q_{19}^{r_3} + \cdots + Q_{19}^{r_R}}{R}.
\tag{2}
$$

By comparing the average scores of the evaluation factors, the public's views and opinions on the evaluation objects can be reflected.
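To make the scoring scheme of equations (1) and (2) concrete, the sketch below computes the per-factor average scores from a matrix of respondent ratings. This is an illustrative Python reconstruction rather than the study's actual workflow (the data were processed in Microsoft Excel); the array shape follows the study (81 respondents, 19 adjective pairs), but the score values and variable names are placeholders.

```python
import numpy as np

# Illustration of equations (1)-(2). scores[i, j] is the rating given by
# respondent r_{i+1} to adjective pair Q_{j+1}, an integer in {-2, ..., 2}
# on the 5-level scale of Table 2. R = 81 respondents and 19 pairs match
# the study; the values themselves are random placeholders.
rng = np.random.default_rng(0)
R, n_pairs = 81, 19
scores = rng.integers(-2, 3, size=(R, n_pairs))

# Equation (2): the average score of each evaluation factor,
# Q_j = (Q_j^{r_1} + Q_j^{r_2} + ... + Q_j^{r_R}) / R.
factor_means = scores.sum(axis=0) / R  # equivalent to scores.mean(axis=0)

for j, avg in enumerate(factor_means, start=1):
    print(f"Q{j:2d}: {avg:+.2f}")
```

The sign of each average then directly indicates toward which adjective of the pair the public's evaluation leans.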
This study adopts a multidisciplinary research method integrating linguistics, architecture, art, and statistics to discuss issues related to the cultural industry park and to analyze the expression of traditional art elements in its visual communication. To explore the relationship between the application of traditional art elements and the design level of cultural industry parks, both qualitative and quantitative research methods are used. Qualitative methods are mainly used to obtain, analyze, and interpret data through observation; quantitative methods are used to analyze quantitative relationships between the attributes and phenomena of the research objects, often involving a large number of respondents. The research objects of this paper cover the online literature and an actual case study. Applying qualitative and quantitative methods together can comprehensively reflect the relationship between traditional elements and cultural industry parks.

## 3. Results and Discussion

### 3.1. The Expression of the Art of Building Stone Works

According to China's current administrative divisions, Tibetans are mainly distributed in the Tibet Autonomous Region and parts of Sichuan, Qinghai, Gansu, and Yunnan Provinces. As an area covered by Tibetan architectural culture, the Tibetan area of Sichuan was necessarily influenced by Tibetan culture; however, before the introduction of Tibetan culture, the area also had its own cultural accumulation, which merged with Tibetan culture to form a unique and diverse Sichuan Tibetan architecture. The Tibetan watchtowers in Aba Prefecture have a long history spanning thousands of years, and their development can be summarized into an initial germination period, an embryonic period, and a mature period.

Stone walls give people a special feeling, and this special texture follows certain rules and principles. The arrangement and combination of stones form a plane composition, so the visual effect of a wall can be analyzed according to the elements and features of plane composition. Analyzed in this way, a stone masonry wall involves visual elements and relational elements: the visual elements are the size, shape, color, and texture of the image, while the relational elements refer to position and arrangement, including orientation, location, and center of gravity.

#### 3.1.1. Shape

Any visible object has a shape, and a wall made with stone craftsmanship has its own unique shape: a rectangular plane or a curved surface. Walls of different shapes bring people different visual experiences. Rectangular, neat walls give a sense of stability, while curved walls guide the line of sight and have a certain "extensibility" (Figure 2).

Figure 2 Different shapes of walls of Aba Prefecture.
#### 3.1.2. Size

Size relative to the volume of the building gives people an intuitive feeling. Size here includes the proportions of the building's length, width, and height as well as the influence of the stone sizes on visual perception. Owing to different building volumes and choices of stone size, walls give people different feelings (Figure 3).

Figure 3 Combinations between different sizes of stones.

#### 3.1.3. Color

A wall composed of naturally colored stones forms a unique color matching, and the visual image of the wall is perceived through the color and brightness of the stone combination. There is a certain relationship between the color of the stone and the local soil. The main stone colors in Aba are red, gray, cyan, and brown, among a wide variety of colors. Different color combinations bring different visual feelings; the complex color splicing of natural stone gives a grand feeling and a strong visual impact (Figure 4).

Figure 4 Different colors of stone bring different visual experience.

#### 3.1.4. Texture

Stone, the most traditional textural material, yields visual experiences ranging from flat and even to rough and rugged. As a natural product, stone has its own unique texture and beauty. Because the units of a stone masonry wall differ, the wall surface becomes uneven, which enriches its texture and forms a unique, regular geometric pattern with a strong tactile quality. At the same time, the stone walls in Aba Prefecture tend toward a kind of "weaving," and it is this woven texture that gives people a very strong visual impact.

Among the relational elements of wall art, direction and position show no special features, but the center of gravity gives people a strong feeling. Owing to the retraction of the wall and the prestressing technique, such a wall gives a sense of stability and psychologically conveys balance and heaviness (Figure 5).

Figure 5 The arc formed by the retraction of the wall and the improvement of the corner brings a sense of stability.

### 3.2. Aba Prefecture SD Method Evaluation Survey

We selected 8 representative scenes in the park (entrances, squares, buildings, etc., as shown in Figure 6) for shooting, made photographs of the same size as the samples for the SD evaluation, and then conducted the SD questionnaire survey. The questionnaire was distributed by e-mail to the public in different regions of China; considering the feasibility of the study, the respondents were mainly concentrated in the regional center where the case is located. All respondents were over 16 years of age, were able to express their views independently and clearly, and were allowed to withdraw from the survey at any time. In this SD evaluation study, the author distributed 110 questionnaires and recovered 81 valid ones. The information of the respondents is shown in Table 3: the ratio of men to women is nearly 1 : 1, and respondents in occupations related to architectural planning accounted for 44.0% of the total, so the sample has a certain objectivity and reference value.

Figure 6 Samples' evaluation curve. Red line is the comprehensive evaluation curve (baseline).
Table 3 Statistics of the interviewees.

| Type | | Proportion (%) |
|---|---|---|
| Gender | Woman | 46 |
| | Man | 54 |
| Job occupation | Architectural planning related | 44 |
| | Architectural planning unrelated | 56 |
| Age | 18–28 | 56 |
| | 29–40 | |
| | 41–65 | |
| | Above 65 | |

We completed the selection of adjective pairs and evaluation scales and finally completed the design and production of the SD questionnaire (Table 4). After collecting all 81 questionnaires, we first entered the scores for the case samples into Microsoft Excel and then calculated the average score of each evaluation factor for each sample. Then the comprehensive average over the 8 sample pictures was obtained; that is, for each factor, the comprehensive average is the sum of the 8 sample averages divided by 8 (the number of samples). The comprehensive SD evaluation scores drawn from these comprehensive averages are given in Table 5.

Table 4 SD method questionnaire sample.

| Adjective (positive) | Very much | Well | Moderate | Well | Very much | Adjective (negative) |
|---|---|---|---|---|---|---|
| Open | 2 | 1 | 0 | −1 | −2 | Closed |
| Ordered | 2 | 1 | 0 | −1 | −2 | Messy |
| Attractive | 2 | 1 | 0 | −1 | −2 | Resistant |
| Vivid | 2 | 1 | 0 | −1 | −2 | Boring |
| Scattered | 2 | 1 | 0 | −1 | −2 | Flush |
| Rich in vegetation | 2 | 1 | 0 | −1 | −2 | Monotonous in vegetation |
| Quiet | 2 | 1 | 0 | −1 | −2 | Noisy |
| Novel | 2 | 1 | 0 | −1 | −2 | Ordinary |
| Traditional | 2 | 1 | 0 | −1 | −2 | Modern |
| Public | 2 | 1 | 0 | −1 | −2 | Hidden |
| Relaxed | 2 | 1 | 0 | −1 | −2 | Nervous |
| Safe | 2 | 1 | 0 | −1 | −2 | Dangerous |
| Happy | 2 | 1 | 0 | −1 | −2 | Sad |
| Bright | 2 | 1 | 0 | −1 | −2 | Dim |
| Diverse | 2 | 1 | 0 | −1 | −2 | Monotonous |
| Colorful | 2 | 1 | 0 | −1 | −2 | Monotonous |
| Coordinated | 2 | 1 | 0 | −1 | −2 | Imbalanced |
| Clean | 2 | 1 | 0 | −1 | −2 | Dirty |
| Identifiable | 2 | 1 | 0 | −1 | −2 | Unidentifiable |

Then, with the adjective pairs as the abscissa and the comprehensive averages of the samples as the ordinate, the average evaluation curve of each sample can be drawn (e.g., Figure 6). On such a curve, each point represents the average score of one adjective pair, and the side toward which a score leans indicates that the evaluation is biased toward the adjective on that side; that is, the curve expresses the public's evaluation of the scene.
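As a sketch of how the comprehensive average (the per-factor mean over the 8 samples) and an evaluation curve like the one in Figure 6 could be produced, assuming the per-sample factor averages of Table 5 are already available. The plotting layout is an assumption for illustration, and `sample_means` is filled with placeholder values rather than the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

# sample_means[s, j]: average score of adjective pair j for scene sample s,
# i.e., Table 5 with samples as rows. Placeholder values for illustration.
rng = np.random.default_rng(1)
n_samples, n_pairs = 8, 19
sample_means = rng.uniform(-0.5, 1.5, size=(n_samples, n_pairs))

# Comprehensive average: for each factor, the mean over the 8 samples (sum/8).
baseline = sample_means.mean(axis=0)

# SD evaluation curve: adjective pairs on the abscissa, average scores on the
# ordinate; one sample's curve superimposed on the baseline (cf. Figure 7).
x = np.arange(1, n_pairs + 1)
plt.plot(x, baseline, "r-", linewidth=2, label="comprehensive average (baseline)")
plt.plot(x, sample_means[0], "g--", marker="o", label="sample 1")
plt.axhline(0.0, color="gray", linewidth=0.5)  # neutral evaluation level
plt.xticks(x)
plt.xlabel("adjective pair (Q1–Q19)")
plt.ylabel("average score")
plt.legend()
plt.tight_layout()
plt.show()
```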
It can be seen from Table 5 that different scenes give respondents different visual feelings and thus receive different scores. Sample 7 is the scene with the best overall evaluation among the 8 samples in Aba Prefecture, obtaining the highest score on 11 evaluation factors; in contrast, sample 8 did not receive the highest score on any factor. In the comprehensive evaluation of Aba Prefecture, the scores of all 19 evaluation factors lie between 0 and 1; that is, the subjective evaluation of Aba Prefecture falls between neutral and slightly positive. Ten factors scored higher than 0.50: sense of space, sense of order, attractiveness, publicity, relaxation, security, pleasure, brightness, cleanliness, and recognizability. The sense of space received a high score of 0.96, indicating that respondents perceived the park's space as mainly open. The comprehensive average scores for traditionality and color richness were 0.19 and 0.30, respectively. The public's comprehensive evaluation of Aba Prefecture can therefore be summarized as a public open space with a strong sense of space and order; in terms of visual perception, its traditionality and color richness were affirmed to a certain extent.

Table 5 Comprehensive average scores of Aba Prefecture.

| Adjective pair | Evaluation indices | Sample 1 | Sample 2 | Sample 3 | Sample 4 | Sample 5 | Sample 6 | Sample 7 | Sample 8 |
|---|---|---|---|---|---|---|---|---|---|
| Open-closed | Sense of space | 1.15 | 1.22 | 0.09 | 1.07 | 0.78 | 1.37 | 1.44 | 0.58 |
| Orderly-messy | Sense of order | 1.07 | 0.90 | 0.52 | 0.49 | 0.60 | 1.09 | 0.52 | 0.36 |
| Attractive-resistant | Attractiveness | 0.57 | 0.57 | 0.75 | 0.68 | 0.40 | 0.49 | 0.84 | 0.20 |
| Vivid-rigid | Vitality | 0.11 | 0.52 | 0.31 | 0.67 | 0.57 | 0.44 | 0.89 | 0.06 |
| Staggered-flush | Staggered degree | −0.53 | 0.42 | 0.48 | 0.67 | 0.25 | −0.01 | 0.67 | 0.48 |
| Vegetation rich-vegetation monotonous | Vegetation richness | −0.07 | 0.49 | 0.06 | 0.52 | −0.22 | 0.33 | 1.22 | 0.58 |
| Quiet-noisy | Quietness | 0.04 | 0.20 | 0.09 | 0.42 | 0.25 | 0.44 | 0.57 | 0.12 |
| Novel-ordinary | Novelty | 0.36 | −0.09 | −0.10 | 0.07 | 0.44 | −0.07 | −0.15 | −0.09 |
| Traditional-modern | Traditionality | −0.40 | −0.23 | 1.23 | 0.72 | −0.11 | 0.10 | 0.46 | −0.23 |
| Public-secret | Publicity | 1.04 | 1.11 | 0.64 | 0.75 | 0.68 | 1.07 | 0.95 | 0.73 |
| Relaxed-tense | Relaxation | 0.44 | 0.86 | 0.42 | 0.83 | 0.41 | 0.73 | 1.06 | 0.44 |
| Safe-dangerous | Security | 0.47 | 0.70 | 0.65 | 0.98 | 0.80 | 0.70 | 0.15 | 0.60 |
| Pleasant-unpleasant | Pleasure | 0.35 | 0.54 | 0.48 | 0.72 | 0.42 | 0.48 | 1.00 | 0.11 |
| Bright-dim | Brightness | 0.80 | 0.98 | 0.58 | 1.02 | 0.42 | 0.68 | 0.90 | 0.67 |
| Diverse-singular | Diversity | 0.04 | 0.47 | 0.44 | 0.33 | 0.11 | 0.20 | 0.57 | 0.41 |
| Colorful-monotonous | Color richness | −0.25 | 0.35 | 0.49 | 0.64 | 0.04 | 0.00 | 0.74 | 0.42 |
| Coordinated-unbalanced | Coordination | 0.53 | 0.64 | 0.46 | 0.43 | 0.27 | 0.64 | 0.67 | 0.28 |
| Clean-dirty | Cleanliness | 1.02 | 0.81 | 0.73 | 0.90 | 0.74 | 0.86 | 0.53 | 0.81 |
| Easily identifiable-not easily identifiable | Recognizability | 1.12 | 0.23 | 0.81 | 0.91 | 0.96 | 0.68 | 0.49 | 0.35 |
| Times with high scores | | 2 | 1 | 1 | 3 | 1 | 1 | 11 | 0 |
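The last row of Table 5 ("Times with high scores") counts, for each sample, how many evaluation factors that sample leads. A minimal sketch of this counting follows, using two factor rows actually taken from Table 5; note that when samples tie for the maximum (as samples 4 and 7 do on staggered degree), each tied sample is credited, which is why the row can sum to more than 19.

```python
import numpy as np

# Two factor rows taken from Table 5 (columns are samples 1..8):
# "staggered degree" has a tie between samples 4 and 7 (both 0.67);
# "sense of space" has a single leader, sample 7 (1.44).
table5 = np.array([
    [-0.53, 0.42, 0.48, 0.67, 0.25, -0.01, 0.67, 0.48],  # staggered degree
    [ 1.15, 1.22, 0.09, 1.07, 0.78,  1.37, 1.44, 0.58],  # sense of space
])

# Credit every sample that matches the per-factor maximum (ties count for all).
is_highest = table5 == table5.max(axis=1, keepdims=True)
times_high = is_highest.sum(axis=0)

print(times_high)  # -> [0 0 0 1 0 0 2 0]: samples 4 and 7 share one lead
```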
### 3.3. SD Questionnaire Sample Curve Analysis

We draw the evaluation curve for each selected sample in Aba Prefecture and superimpose it on the comprehensive evaluation curve (baseline curve) of Aba Prefecture (red line in Figure 6), so as to compare and analyze the differences in the psychological evaluation scores of the 8 samples (Figures 7(a)–7(h)).

Figure 7 Eight samples in Aba Prefecture. (a)–(h)

The green line in Figure 6 shows that the sense of space, order, novelty, publicity, cleanliness, and recognizability of sample 1 are all higher than the comprehensive average scores, whereas its scores for staggered degree, quietness, traditionality, and color richness are lower. Sample 1 has an open space and is easily recognizable because of its novel and unique pillars, but its colors and vegetation types are relatively simple. Most interviewees thought that the pillars mark the entrance character of the scene, but for a park entrance themed on traditional culture it seems too modern, and the classical atmosphere is not strong.

The trend of the evaluation curve of sample 2 is almost the same as that of the baseline curve (blue line in Figure 6). Its scores for sense of space, sense of order, vegetation richness, publicity, relaxation, brightness, and color richness are slightly higher than the comprehensive evaluation scores; conversely, sample 2 received lower scores for traditionality and recognizability. The public sees the scene as a spacious place for public events, with diverse vegetation and landscape pieces. Although sample 2 has high vegetation and color richness, most respondents believe that the scene lacks its own characteristics and is therefore difficult to identify.

For sample 3, except for the two factors of sense of space and traditionality, the scores of the remaining factors are almost the same as the comprehensive evaluation scores of Aba Prefecture (yellow line in Figure 6). The low score for sense of space indicates that the scene has a strong sense of enclosure, while traditionality obtains the highest score among the 19 evaluation factors. The application of traditional elements, such as traditional architectural styles and Chinese characters, makes the scene highly recognizable.

The evaluation scores of the sample 4 scene are slightly higher than the overall evaluation scores of Aba Prefecture, and only the sense of order receives a lower score (dark blue line in Figure 6). Brightness and color richness, which are closely related to visual communication, are well received by the public, and traditionality also receives a higher score. The public believed that the scene interprets traditional meaning well: not only do the quaint buildings and sculptures convey a traditional feeling, but the diverse vegetation and colors also enrich the visual experience of the respondents.

Sample 5 is a typical combination of traditional utensil modeling and modern architectural design (Figure 6), and its evaluation curve floats around the baseline curve. Except for the four factors of vitality, novelty, security, and recognizability, the scores of the remaining factors are all lower than the comprehensive evaluation scores of Aba Prefecture. The interviewees pointed out that the "ding" (ancient bronze tripod) shape of the building is quite novel, and the use of traditional patterns and local bronze materials also conveys the cultural atmosphere well.

Compared with the baseline curve, sample 6 receives high evaluations for sense of space, sense of order, quietness, publicity, and coordination, while its scores for staggered degree and color richness are slightly lower than the Aba Prefecture composite scores. Most respondents consider the square space relatively ordinary and lacking in visual attraction, but the traditional pattern reliefs reflect certain traditional features that highlight the cultural characteristics.

The public's overall evaluation of sample 7 is high; only for sense of order, novelty, security, cleanliness, and recognizability does it score slightly lower than the comprehensive evaluation of Aba Prefecture. Respondents clearly liked the scene because of its rich vegetation, vast waters, and ancient pagodas serving as landscape nodes.

The comprehensive evaluation of sample 8 is lower than the overall evaluation of Aba Prefecture, although its staggered degree and vegetation richness achieved better evaluations. The sign is located in the green space near the building and has a certain legibility. At the same time, the abstraction of the horse-head-wall elements of ancient Huizhou buildings and the use of ancient Chinese characters convey a certain traditional cultural connotation.
## 4. Conclusions

This study takes the application of Chinese traditional art elements in the visual communication design of cultural industry parks as its perspective. Taking Aba Prefecture, Sichuan Province, as an example, graphic expression and SD questionnaire analyses were carried out to evaluate the public's satisfaction with the application of traditional art elements in the cultural industry park. First, according to plane composition theory, the distribution of traditional elements in the park's plane layout was analyzed in terms of points, lines, and planes.
Second, 8 scenes in the case area were selected and photographed, the evaluation factors and evaluation scales were established, and the SD questionnaire was conducted; a total of 81 valid questionnaires were recovered. Through data entry, analysis, and graphic display, it was learned that the public evaluated the application scenarios of different traditional elements differently; generally speaking, the public tended to affirm traditional architectural styles.

In cultural industry parks, scenes with traditional art elements are more likely to be well received by the public than scenes without them, indicating that the public has a positive attitude toward the expression of traditional culture in the planning and construction of the park. Scenes rich in traditional elements play a positive role in transmitting traditional culture and creating a traditional atmosphere.

As far as the application of traditional elements is concerned, reproducing traditional architectural styles and landscape layouts is an easier way to convey the traditional cultural atmosphere. This approach attracts public attention and brings a rich visual experience more readily than applying abstracted and simplified traditional elements to modern design.

In view of the public's acceptance and affirmation of cultural elements, in a society with a strong cultural atmosphere, the use of traditional art elements can produce positive effects in visual communication. At the same time, the rational and effective use of traditional art elements can help improve the quality and operational effect of the cultural industry park. In addition, cultural leisure and shopping districts have gradually become the main way to provide recreation and entertainment spaces for the public, and the combination of traditional cultural elements with modern design has a certain significance in conveying Chinese traditional culture.

The rational and effective application of traditional art elements to the visual communication design of the park can achieve positive results. Traditional elements (for example, ancient Chinese architectural forms and traditional patterns) are figurative expressions of Chinese traditional culture, and visual images with a traditional artistic conception can directly convey cultural characteristics to the public.

However, the number and scale of China's cultural industry parks are increasing, and the previous literature shows that Aba Prefecture usually lacks a unified plan and that its regional cultural connotations have not been excavated deeply enough. In terms of visual communication, traditional architectural styles, ancient figures, traditional patterns, and the like are often used in the design of the study areas. The application of traditional elements should not be an undialectical, wholesale borrowing; attention should be paid to combining traditional elements with local culture. With the development of modern technology, digital media has influenced visual communication design to a certain extent, and the interactive participation of the public can be considered an important factor for improving the design quality of the area.

Regarding the SD questionnaire, the selected samples were all scenes shot during the day, showing the visual image of the area under natural light.
Therefore, the public's evaluation of the sample spaces cannot reflect their visual experience at night, such as the influence of the area's lighting design on the expression of traditional culture. Analysis of the role of area lighting design in visual communication can be supplemented in future research.
1020033-2022-08-27_1020033-2022-08-27.md
61,291
Study on the Application of Chinese Traditional Visual Elements in Visual Communication Design
Yuming Zheng
Mathematical Problems in Engineering (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1020033
1020033-2022-08-27.xml
--- ## Abstract This study analyzed the data collected from field research and SD questionnaires on the application of traditional elements in Aba Prefecture. This study adopted the multidisciplinary research method integrating linguistics, architecture, art, and statistics of science and technology to discuss the related issues of the cultural industry park, and analyze the expression of traditional art elements in the visual communication of the cultural industry park. First, based on the photographs taken, we analyzed the layout of typical scenes of traditional element applications on the plane, including the analysis of traditional element types, colors, and materials. Second, the SD questionnaire results are displayed and analyzed. From the comprehensive scores of the respondents, we learned that the application of traditional elements has different effects on the architectural landscape. The main results show that (1) the public has a positive attitude toward the expression of traditional culture in the planning and construction of the park. Scenes rich in traditional elements play a positive role in the transmission of traditional culture and the creation of a traditional atmosphere; (2) showing traditional architectural styles and landscape layouts in a reproducible way can more intuitively set off the traditional cultural atmosphere; (3) the texture of the stone wall has certain rules and principles. The arrangement and combination of stones are a plane composition, and the visual characteristics brought by the wall can be specifically analyzed according to the elements of the plane composition. The stone masonry wall is analyzed according to the elements of plane composition, which can be divided into visual elements and relational elements. The visual elements are divided into the size, shape, color, and texture of the image, and the relational elements refer to the relationship between position and arrangement, including orientation, location, and center of gravity. --- ## Body ## 1. Introduction The protection and utilization of traditional culture have far-reaching significance for the development of Chinese cultural industry and society [1]. The cultural industry park not only undertakes the economic function of industrial agglomeration and development but also serves as a display window for urban regional culture. It is of great significance to carry out cultural activities, attract consumer groups, and create a cultural atmosphere. In the construction of cultural industry parks, how to apply traditional art elements reasonably and skillfully to the park design, and how to fully display traditional material and nonmaterial cultures have gradually become the focus of researchers [2]. Therefore, in order to better protect and rationally utilize traditional culture, it is very necessary to study the application effect of traditional art elements in the planning of cultural industry parks [3]. The role of visual communication design in the design of cultural industrial parks is becoming more and more obvious [4]. Studying the visual effects of traditional art elements can grasp the development trend and focus of traditional art elements in the planning of cultural industrial parks, and then can provide a basis for formulating reasonable planning and management guidance schemes for cultural industrial parks. 
Traditional visual elements were applied in visual communication design mainly through analyzing the color, composition, type, location, and spatial relationship of visual communication elements in the park, and then discussing the necessity of using visual communication elements in the design of cultural industrial parks, providing references for the park to deeply explore cultural connotations, and filling in traditional elements and modern design.Traditional art elements are the unique precious wealth and cultural heritage of Chinese traditional culture, which cannot be replaced by other art forms [5]. Traditional art elements come from traditional culture and can directly reflect traditional culture. Therefore, how to apply traditional art elements to contemporary design and create a diversified design trend has become a new topic for designers. In this study, traditional art elements mainly refer to the relevant content that can be applied to the visual communication design of cultural industrial parks [6]. A cultural industry park is a specific carrier of cultural industry, which usually carries out the development of cultural industry in the form of parks or blocks. China’s cultural industry parks have a tendency to form cultural industry clusters to enhance their competitiveness. At present, there are three main types of cultural industrial parks in China: cultural theme parks, cultural and creative blocks, and cultural heritage gathering areas [7].Visual communication is considered to be the most natural way for people to communicate various information to each other [8]. In Heller and Philip, from the perspective of the development process of visual communication design, to a large extent, it is a printing art design (graphic design) that emerged in Europe and the United States in the mid−19th century, also translated as graphic design or graphic design [9]. Visual perception is a subjective way of expressing and interpreting thinking, which can interpret information obtained from the human eye. As far as the purpose of visual communication design is concerned, it takes into account the acceptance of the target object, and combines words, symbols, colors, images, etc. to transmit information to the target object through appropriate visual media [10]. There are 4 main factors and basic principles of visual communication design: (1) text and its design principles. Writing is a system of symbols that humans use to communicate, and it is the form of writing that records thoughts and events. (2) Color and its design principles. Colors can often be used to symbolize specific emotions and connotations, and at the same time, the meaning of colors also changes with different cultures. (3) Logo and design principles. Logo design is an important part of visual communication design. (4) Placements and their design principles. The placement of visual communication media (e.g., signage and billboards) plays an important role in attracting attention.The expression forms of traditional art elements in architectural practice are divided into three types, namely, the expression of traditional elements’ “shape,” the expression of traditional elements’ “environment,” and the expression of traditional elements’ “meaning” [11]. In contemporary architectural design, “shape” refers to obtaining a visual experience similar or similar to traditional buildings in appearance and form by drawing on traditional architectural shapes and compositions, building materials, and architectural colors [12]. 
“Environment” refers to the construction of modern architectural space and streamlines by drawing on the traditional architectural space combination and streamline organization to create a traditional spatial atmosphere and situation. “Meaning” means that it does not pursue the similarity of the external form of traditional architecture, but focuses on the expression of the creative concept and artistic conception of traditional architecture [13].In landscape design, Chinese classical gardens simultaneously show the essence of traditional philosophy and traditional Chinese culture. The long-term development of Chinese culture is accompanied by its inevitable ideological foundation, which is the ideological spirit of Chinese culture—implicit, introverted, broad, and profound. At the same time, traditional cultural elements are the carriers of humanistic thoughts, and people’s pursuit of harmony, moderation, and simplicity is consistent with their understanding of culture, nature, and philosophy [14].Japanese architect Kisho Kurokawa believes that each country’s culture has its own uniqueness and vitality, and he applies this philosophy to urban planning and design [15]. According to Indian architect Kriya, third-world countries have a good environment, which includes a balanced ecology, recycling of used products, a proper way of life, and indigenous construction techniques. As far as landscape design is concerned, the design is to help the area form a specific urban connotation, and the religious belief formed by the unique local culture of India and the understanding of time and space is a unique feature in the architectural design of Koriya [16]. Weeks and Grimmer describe the U.S. Department of the Interior’s basic conservation strategy for historic relics. There are many types of historical relics, for example, historical buildings (residential houses, courts, town halls, commercial buildings, etc.), and landscape design and natural scenery [17]. English Heritage is the body that manages most of England’s ancient and modern famous cultural buildings. It can provide advice and opinions on the protection of historical buildings and historical landscapes for local planning management departments and governments. They advocate that any restoration and protection of historical buildings should reduce the damage to the historical context and texture of the city [18].Chinese traditional culture can be intuitively expressed through the application of traditional art elements. In contemporary China, cultural economy has the potential to promote urban development. As a specific medium to undertake cultural industries, cultural industry parks can play an important role in conveying traditional culture and urban connotation. Visual communication design is a combination of ideas and media [19]. It continues to evolve with the development of society. Visual communication design itself contains social significance, economic benefits, and cultural connotations. Its humanistic value is manifested in the satisfaction of people’s living needs. National cultural symbols are the crystallization of human consciousness and traditional cultural spirit [20]. The reinnovation and reuse of national symbols can better reflect and convey Chinese folk culture. 
For example, the abstraction and redesign of traditional elements such as New Year pictures, paper-cuts, wood carvings, and shadow puppets can be better combined with modern design [21].Graphics are one of the elements of visual communication design, and researchers can study visual communication design through perceptible conceptual elements in graphics. The most basic elements of the form of graphic language are points, lines, and planes [22]. A point, from a geometrical point of view, has only a position. But in the concept of graphic language, a point not only has position, but also size and shape. The smaller the point, the stronger the feeling of the point; on the contrary, the larger the point, the more the face [23]. The content of the points covers the elements with location, such as pavilions, sculptures, trees and stones, benches, and steps, and has two functions: functional and decorative. According to the focal characteristics of the point, the point scene can be placed in the middle of the intersection, the corner of the green space, the end of the road axis, or the center of the square, and the characteristics of the point can be highlighted through symmetry, contrast, repetition, etc [24]. Curves are often used in entertainment and leisure landscape design because of their free and smooth form to create an elegant and romantic atmosphere. The surface usually includes buildings, squares, lakes, trees, lawns, etc. The geometrical plane has a strong order and is generally used in monumental landscapes, such as Tiananmen Square, to create a solemn and solemn atmosphere. Irregular planes are easy to coordinate with buildings and roads, and can bring a feeling of liveliness and freedom [25].The Tibetan blockhouses in Aba Prefecture have strong regional characteristics, and the core part of the blockhouses is their stone-making skills. Therefore, the stone-making skills of Tibetan blockhouses are unique among traditional Chinese construction techniques. The uses of stone crafting materials, tools, and construction techniques in the Tibetan watchtowers in Aba Prefecture all reflect the construction wisdom of the Tibetan people. The use of stone in the house is more in line with the material construction logic. The architectural cultural landscape is the carrier of various regional cultures, such as the natural fusion of environment, humanities, and historical and cultural elements. On the contrary, local culture is the gene of architectural cultural landscape. The Tibetan watchtower in Aba Prefecture is a beautiful cultural landscape. Under the impact of modern craftsmanship, materials, and lifestyles, the cultural landscape of the Kangzang area in Aba Prefecture is gradually disappearing. Traditional buildings are gradually being demolished and remodeled. Modern materials are integrated with traditional buildings and even replace the materials used in traditional buildings. For the inheritance part, the inheritance of stone craftsmanship basically relies on words and deeds, but such inheritance methods are disappearing, and many excellent stone craftsmen are also facing various difficulties.In this study, taking the case of Aba Prefecture showing regional cultural characteristics as the research object, the layout of traditional elements in the plane is analyzed by means of graphical methods, and the public’s satisfaction with the application of traditional elements in the study area is analyzed through SD questionnaires, and traditional art is summarized. 
In the second section, we introduced the main approaches we have used; in the third section, graphical expression and SD method questionnaire analysis of the case of Aba Prefecture were carried out to evaluate the public’s satisfaction with the application of traditional art elements to the cultural industry park. First, according to the plane composition theory, the distribution of the plane layout of traditional elements in the area is analyzed by points, lines, and planes. Second, 8 scenes were selected to take photographs in the case, the evaluation factors and evaluation scales were established, and the SD questionnaire was conducted. A total of 81 valid questionnaires were collected. Through data entry analysis and graphic display, the author learned that the public has different evaluations on the application scenarios of different traditional elements. Generally speaking, the public tends to affirm the scenarios of traditional architectural styles and classical garden landscape layouts; finally, we summarized the main conclusions. ## 2. Study Area and Methods Aba Prefecture is located in the northwest of Sichuan Province (Figure1). The latitude and longitude range is as follows: 30°35′–34°19′ N and 100°31′–104°27′ E, with a total area of about 84,200 square kilometers, accounting for about 17% of the total area of Sichuan Province. The site is located on the southeastern edge of the Qinghai-Tibet Plateau and is part of the Hengduan Mountains [26]. It is located between the Qinghai-Tibet Plateau and the Chengdu Plain as a whole. Aba Prefecture is one of the most important water sources in the upper reaches of the Yangtze River and the Yellow River. It is also the only place where the Yellow River flows in Sichuan Province. At the same time, the Minjiang River, one of the most important tributaries of the upper reaches of the Yangtze River, also originates here. It borders Qinghai and Gansu in the north, Mianyang, Deyang, and Chengdu in the east, Ya’an in the south, and Ganzi in the west, as shown in Figure 1.Figure 1 Location of Aba Prefecture.According to the traditional Tibetan division, the stone-built watchtowers in Aba Prefecture are located in Amdo and Khamba in the three major Tibetan areas (Amdo, Kham, and Uizang), among which Tibetan dwellings are the majority, and some Qiang dwellings are distributed in Li County, Mao County, and Wenchuan County. There are also branches of Tibetans in this area. The Amdo Tibetan area is mostly the Baima Tibetan area, while the Kangba Tibetan area is mostly populated by the Jiarong Tibetans. At the same time, some areas of Rangtang County belong to the Jiarong Tibetan area, and other areas belong to the Amdo Tibetan area; except for the areas distributed in the Kangba Tibetan area, some areas of Li County belong to the Qiang area [27].The district has 1 city and 12 counties, namely, Malkang City, Jiuzhaigou County, Xiaojin County, Aba County, Ruoergai County, Hongyuan County, Rangtang County, Wenchuan County, Li County, Mao County, Songpan County, Jinchuan County, and Heishui County, a total of 223 towns, and 1354 administrative villages, and the administrative center is located in Malkang City [28].Aba Prefecture has a vast land area and a total population of only 919,500 people. The density is small, and the distribution is extremely uneven. Among them, the agricultural population is 710,000, the nonagricultural population is 209,500, and the urbanization rate is 37.86%. 
Among the total population, Tibetans account for as much as 58.36%, followed by Han and Qiang, accounting for 19.69% and 18.55%, respectively; other ethnic minorities account for about 3.4%. The continuous optimization of the industrial structure promotes the sound development of the regional economy [29]. The per capita income of residents in Aba continues to increase, government finances are moving further toward a virtuous circle, the regional economy is developing well, and comprehensive economic strength continues to rise. According to the 2017 Statistical Bulletin of National Economic and Social Development of the Aba government, the prefecture's GDP reached 29.516 billion yuan, of which the primary industry's output value was 7.248 billion yuan, contributing 12.0% to economic growth; the secondary industry's output value was 11.434 billion yuan, contributing 80.8% to economic growth; and the tertiary industry's output value was 10.738 billion yuan, contributing 7.2% to economic growth. The per capita GDP was 31,487 yuan, about 70% of the Sichuan per capita figure. In terms of transportation construction, by the end of 2017 the total mileage of highways in Aba Prefecture had reached 13,454 kilometers, and total government investment in transportation during the "13th Five-Year Plan" period was as high as 10.1 billion yuan [30]. At present there are 2,357.313 kilometers of trunk highways in the prefecture, including 11 national highways and 2 provincial highways. As far as overall traffic construction is concerned, shortcomings remain, such as the small number of high-grade highways and poor traffic conditions. In 2017, although road freight traffic in Aba Prefecture increased, passenger turnover and cargo turnover decreased by 22.9% and 5.8%, respectively, compared with the previous year [31].

The SD questionnaire method was used to study the public's views on, and satisfaction with, the application of traditional art elements in the cultural industry park. The SD (semantic differential) method evaluates the connotation and meaning of research objects and can be used to assess the views, attitudes, and thoughts of respondents [32]. Referring to adjective pairs (such as "open-closed") often used in the architectural literature, 19 pairs of adjectives were selected for the investigation of Aba Prefecture and divided into five aspects: spatial layout characteristics (4), architectural style (4), landscape design (2), psychological experience (7), and visual experience (2), where the numbers in parentheses give the count of adjective pairs per aspect. In the questionnaire design, the adjective pairs were randomly arranged and combined to obtain the respondents' subjective evaluation of the research content. The adjective pairs and corresponding evaluation factors are shown in Table 1. Regarding the evaluation scale, a scale with fewer than 5 levels is generally too coarse, which easily biases the evaluation results. To help respondents understand and identify the evaluation grades while ensuring evaluation accuracy, the SD questionnaire in this study adopted a 5-level scale, with scores of −2, −1, 0, 1, and 2 arranged symmetrically around 0 as the center (Table 2).
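As a hedged illustration of the scoring and averaging workflow that the next paragraphs formalize, here is a minimal Python sketch; the response matrix is a randomly generated stand-in for the actual questionnaire data.

```python
import numpy as np

# Stand-in for the questionnaire data: R respondents x 19 evaluation factors,
# each score drawn from the symmetric 5-level scale {-2, -1, 0, 1, 2}.
R, N_FACTORS = 81, 19
rng = np.random.default_rng(0)
scores = rng.integers(-2, 3, size=(R, N_FACTORS))  # rows r1..rR, columns Q1..Q19

# Average score of each evaluation factor: Q_j = (Q_j^r1 + ... + Q_j^rR) / R.
factor_means = scores.mean(axis=0)
for j, mean in enumerate(factor_means, start=1):
    print(f"Q{j}: {mean:+.2f}")
```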
The collected questionnaire data were entered into Microsoft Excel for calculation, and the average score of each evaluation factor was computed from the respondents' scores. Let the total number of respondents be $R$, denote the respondents by $r_1, r_2, r_3, \ldots, r_R$, and write the scores of each respondent on the 19 questions as

$$Q_1^{r_1}, Q_2^{r_1}, Q_3^{r_1}, \ldots, Q_{19}^{r_1}; \quad Q_1^{r_2}, Q_2^{r_2}, Q_3^{r_2}, \ldots, Q_{19}^{r_2}; \quad Q_1^{r_3}, Q_2^{r_3}, Q_3^{r_3}, \ldots, Q_{19}^{r_3}; \quad \ldots \tag{1}$$

Table 1: Adjective pairs and evaluation factors of the SD analysis.

| Adjective pair | Evaluation index | Evaluation object |
| --- | --- | --- |
| Open-closed | Sense of space | Spatial layout features |
| Orderly-messy | Sense of order | Spatial layout features |
| Attractive-resistant | Attractiveness | Psychological feeling |
| Vivid-rigid | Vitality | Psychological feeling |
| Staggered-flush | Staggered degree | Architectural style |
| Vegetation rich-vegetation monotonous | Vegetation richness | Landscape design |
| Quiet-noisy | Quietness | Psychological feeling |
| Novel-ordinary | Novelty | Architectural style |
| Traditional-modern | Traditionality | Architectural style |
| Public-secret | Publicity | Psychological feeling |
| Relaxed-tense | Relaxation | Psychological feeling |
| Safe-dangerous | Security | Psychological feeling |
| Pleasant-unpleasant | Pleasure | Psychological feeling |
| Bright-dim | Brightness | Visual feeling |
| Diverse-singular | Diversity | Architectural style |
| Colorful-monotonous | Color richness | Visual feeling |
| Coordinated-unbalanced | Coordination | Spatial layout features |
| Clean-dirty | Cleanliness | Landscape design |
| Easily identifiable-not easily identifiable | Recognizability | Spatial layout features |

Table 2: Rating scale.

|  | Very much | Well | Moderate | Well | Very much |
| --- | --- | --- | --- | --- | --- |
| Score (positive adjective to negative adjective) | 2 | 1 | 0 | −1 | −2 |

The average score for each question is

$$Q_j = \frac{Q_j^{r_1} + Q_j^{r_2} + Q_j^{r_3} + \cdots + Q_j^{r_R}}{R}, \quad j = 1, 2, \ldots, 19. \tag{2}$$

By comparing the average scores of the evaluation factors, the public's views and opinions on the evaluation objects can be reflected.

This study adopts a multidisciplinary approach integrating linguistics, architecture, art, and statistics to discuss issues related to the cultural industry park and to analyze the expression of traditional art elements in its visual communication. To explore the relationship between the application of traditional art elements and the design level of cultural industry parks, both qualitative and quantitative research methods are used to analyze the research objects. The qualitative methods are mainly used to obtain, analyze, and interpret data through observation; the quantitative methods are used to analyze quantitative relationships between the attributes and phenomena of the research objects, often involving a large number of respondents. The research material of this paper covers the online literature and an actual case study, and the combination of qualitative and quantitative methods can comprehensively reflect the relationship between traditional elements and cultural industry parks.

## 3. Results and Discussion

### 3.1. The Expression of the Art of Building Stone Works

According to China's current administrative divisions, Tibetans are mainly distributed in the Tibet Autonomous Region and in parts of Sichuan, Qinghai, Gansu, and Yunnan Provinces. As an area covered by Tibetan architectural culture, the Tibetan area of Sichuan has inevitably been influenced by Tibetan culture; however, before the introduction of Tibetan culture, the area already had its own cultural accumulation.
This local culture merged with Tibetan culture to form a unique and diverse Sichuan Tibetan architecture. The Tibetan watchtowers in Aba Prefecture have a long history and have weathered thousands of years of wind and rain; their development can be summarized as an initial germination period, an embryonic period, and a mature period.

Stone walls give people a special feeling, and this special texture follows certain rules and principles. The arrangement and combination of stones is a plane composition, so it follows the visual elements of plane composition, and the visual effect brought by the wall can therefore be analyzed according to the elements and features of plane composition. Analyzed in this way, the stone masonry wall can be divided into visual elements and relational elements. The visual elements are the size, shape, color, and texture of the image, and the relational elements refer to position and arrangement, including orientation, location, and center of gravity.

#### 3.1.1. Shape

Any visible object has a shape, and a wall made with stone craftsmanship has its own unique shape. The shape of a wall is a rectangular plane or a curved surface, and walls of different shapes bring people different visual experiences: rectangular, neat walls give a sense of stability, while curved walls guide the line of sight and have a certain "extensibility" (Figure 2).

Figure 2: Different shapes of walls in Aba Prefecture.

#### 3.1.2. Size

The size of a wall relative to the volume of the building gives people an intuitive feeling. Size here includes the ratio of the length, width, and height of the building, as well as the influence of the size of the stones on people's visual perception. Because of differences in building volume and in the choice of stone sizes, walls give people different feelings (Figure 3).

Figure 3: Combinations of stones of different sizes.

#### 3.1.3. Color

A wall composed of natural stone colors forms a unique color scheme, and the visual image of the wall is perceived through the color and brightness of the stone combination. There is a certain relationship between the color of the stone and the local soil. The main stone colors in Aba are red, gray, cyan, and brown, among a wide variety. Combinations of different colors bring people different visual feelings; the complex color splicing of natural stone gives a grand feeling and a strong visual impact (Figure 4).

Figure 4: Different colors of stone bring different visual experiences.

#### 3.1.4. Texture

Stone, the most traditional texture material, forms a visual experience ranging from flat and even to rough and rugged. As a natural product, stone has its own unique texture and beauty. The differing units of the stone masonry wall produce an unevenness that enriches the wall's texture, forming a unique, regular geometric texture with a strong tactile quality. At the same time, the stone walls in Aba Prefecture lean toward a kind of "weaving," and it is this woven texture that gives people a very strong visual impact.

Among the relational elements of wall art, direction and position show no special features, but the center of gravity gives people a strong feeling. Owing to the retraction of the wall and prestressing techniques, such a wall gives people a sense of stability.
It can psychologically give people a sense of balance and heaviness (Figure 5).

Figure 5: The arc formed by the retraction of the wall and the refinement of the corner brings a sense of stability.

### 3.2. Aba Prefecture SD Method Evaluation Survey

We selected 8 representative scenes (entrances, squares, buildings, etc., as shown in Figure 6) in the park for shooting, produced photographs of the same size as the samples for the SD method evaluation, and then conducted the SD questionnaire survey. The questionnaire was distributed by e-mail to the public in different regions of China. Considering the feasibility of the study, the respondents were mainly concentrated in the regional center where the case is located. All respondents were over 16 years of age, were able to express their views independently and clearly, and were allowed to withdraw from the survey at any time. In this SD evaluation study, we distributed 110 questionnaires and recovered 81 valid ones. Information on the respondents is shown in Table 3; the ratio of men to women is nearly 1:1. In terms of occupation, respondents in fields related to architectural planning accounted for 44.0% of the total, so the sample has a certain objectivity and reference value.

Figure 6: Samples' evaluation curves. The red line is the comprehensive evaluation curve (baseline).

Table 3: Statistics of the interviewees.

| Type | Category | Proportion (%) |
| --- | --- | --- |
| Gender | Woman | 46 |
| Gender | Man | 54 |
| Occupation | Architectural planning related | 44 |
| Occupation | Architectural planning unrelated | 56 |
| Age | 18–28 | 56 |
| Age | 29–40 |  |
| Age | 41–65 |  |
| Age | Above 65 |  |

Having completed the selection of adjective pairs and evaluation scales, we finalized the design and production of the SD method questionnaire (Table 4). After collecting all 81 questionnaires, we first entered the scores for the case samples into Microsoft Excel and then calculated the average score of each evaluation factor for each sample. Next, the comprehensive average over the 8 sample pictures was obtained, that is, for each factor the sum of the 8 sample averages divided by 8 (the number of samples), and the comprehensive SD evaluation curve was drawn from the comprehensive averages obtained (Table 5).

Table 4: SD method questionnaire sample. Each pair is scored on the symmetric scale 2 (very much, positive side), 1 (well), 0 (moderate), −1 (well), −2 (very much, negative side).

| Adjective (positive) | Adjective (negative) |
| --- | --- |
| Open | Close |
| Ordered | Messy |
| Attractive | Resisting |
| Vivid | Boring |
| Scattered | Flush |
| Rich in vegetation | Monotonous in vegetation |
| Quiet | Noisy |
| Novel | Ordinary |
| Traditional | Modern |
| Public | Hidden |
| Relaxed | Nervous |
| Safe | Dangerous |
| Happy | Sad |
| Bright | Dim |
| Diverse | Monotonous |
| Colorful | Monotonous |
| Coordinated | Imbalanced |
| Clean | Dirty |
| Identifiable | Unidentifiable |

Then, with the adjective pairs as the abscissa and the comprehensive averages of the samples as the ordinate, the average evaluation curve of the samples can be drawn (for example, Figure 6). On such a curve, each point represents the average score of one adjective pair, and the side toward which a score leans indicates that the evaluation is biased toward the adjective on that side, that is, it reflects the public's evaluation of the scene.

Table 5 shows that different scenes bring different visual feelings to the respondents and thus obtain different scores.
Sample 7 is the scene with the best overall evaluation among the 8 samples in Aba Prefecture, receiving the highest score on 11 evaluation factors. In contrast, sample 8 did not receive the highest score on any evaluation factor. In the comprehensive evaluation of Aba Prefecture, the scores of all 19 evaluation factors lie between 0 and 1; that is, the subjective evaluation of Aba Prefecture lies between neutral and slightly positive. Ten of the evaluation factors scored higher than 0.50: sense of space, sense of order, attractiveness, publicity, relaxation, security, pleasure, brightness, cleanliness, and recognizability. The sense of space received a high score of 0.96, indicating that the respondents regarded the park's space as mainly open. The comprehensive average scores for traditionality and color richness were 0.19 and 0.30, respectively. The public's comprehensive evaluation of Aba Prefecture can thus be summarized as a public open space with a strong sense of space and order, whose traditionality and color richness are affirmed to a certain extent in terms of visual perception.

Table 5: Comprehensive average scores of the Aba Prefecture samples.

| Adjective pair | Evaluation index | Sample 1 | Sample 2 | Sample 3 | Sample 4 | Sample 5 | Sample 6 | Sample 7 | Sample 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Open-closed | Sense of space | 1.15 | 1.22 | 0.09 | 1.07 | 0.78 | 1.37 | 1.44 | 0.58 |
| Orderly-messy | Sense of order | 1.07 | 0.9 | 0.52 | 0.49 | 0.6 | 1.09 | 0.52 | 0.36 |
| Attractive-resistant | Attractiveness | 0.57 | 0.57 | 0.75 | 0.68 | 0.4 | 0.49 | 0.84 | 0.2 |
| Vivid-rigid | Vitality | 0.11 | 0.52 | 0.31 | 0.67 | 0.57 | 0.44 | 0.89 | 0.06 |
| Staggered-flush | Staggered degree | −0.53 | 0.42 | 0.48 | 0.67 | 0.25 | −0.01 | 0.67 | 0.48 |
| Vegetation rich-vegetation monotonous | Vegetation richness | −0.07 | 0.49 | 0.06 | 0.52 | −0.22 | 0.33 | 1.22 | 0.58 |
| Quiet-noisy | Quietness | 0.04 | 0.2 | 0.09 | 0.42 | 0.25 | 0.44 | 0.57 | 0.12 |
| Novel-ordinary | Novelty | 0.36 | −0.09 | −0.1 | 0.07 | 0.44 | −0.07 | −0.15 | −0.09 |
| Traditional-modern | Traditionality | −0.40 | −0.23 | 1.23 | 0.72 | −0.11 | 0.1 | 0.46 | −0.23 |
| Public-secret | Publicity | 1.04 | 1.11 | 0.64 | 0.75 | 0.68 | 1.07 | 0.95 | 0.73 |
| Relaxed-tense | Relaxation | 0.44 | 0.86 | 0.42 | 0.83 | 0.41 | 0.73 | 1.06 | 0.44 |
| Safe-dangerous | Security | 0.47 | 0.7 | 0.65 | 0.98 | 0.8 | 0.7 | 0.15 | 0.6 |
| Pleasant-unpleasant | Pleasure | 0.35 | 0.54 | 0.48 | 0.72 | 0.42 | 0.48 | 1.00 | 0.11 |
| Bright-dim | Brightness | 0.8 | 0.98 | 0.58 | 1.02 | 0.42 | 0.68 | 0.9 | 0.67 |
| Diverse-singular | Diversity | 0.04 | 0.47 | 0.44 | 0.33 | 0.11 | 0.2 | 0.57 | 0.41 |
| Colorful-monotonous | Color richness | −0.25 | 0.35 | 0.49 | 0.64 | 0.04 | 0.00 | 0.74 | 0.42 |
| Coordinated-unbalanced | Coordination | 0.53 | 0.64 | 0.46 | 0.43 | 0.27 | 0.64 | 0.67 | 0.28 |
| Clean-dirty | Cleanliness | 1.02 | 0.81 | 0.73 | 0.9 | 0.74 | 0.86 | 0.53 | 0.81 |
| Easily identifiable-not easily identifiable | Recognizability | 1.12 | 0.23 | 0.81 | 0.91 | 0.96 | 0.68 | 0.49 | 0.35 |
| Times with the highest score |  | 2 | 1 | 1 | 3 | 1 | 1 | 11 | 0 |

### 3.3. SD Questionnaire Sample Curve Analysis

We draw the evaluation curve for each selected sample in Aba Prefecture and superimpose it on the comprehensive evaluation curve (baseline) of Aba Prefecture (the red line in Figure 6), so as to compare the differences in the psychological evaluation scores of the 8 samples (Figures 7(a)–7(h)).

Figure 7: The eight samples in Aba Prefecture, panels (a)–(h).

The green line in Figure 6 shows that the sense of space, sense of order, novelty, publicity, cleanliness, and recognizability of sample 1 are all higher than the comprehensive average scores, whereas its scores for staggered degree, quietness, traditionality, and color richness are lower than the overall evaluation scores. Sample 1 has an open space and is easily recognizable because of its novel and unique pillars; however, its colors and vegetation types are relatively simple.
Most of the interviewees thought that the pillars identify the entrance character of the scene, but for a park entrance themed on traditional culture, it seems too modern, and the classical atmosphere is not strong.

The trend of the evaluation curve of sample 2 is almost the same as that of the benchmark curve (the blue line in Figure 6). Its scores for factors such as sense of space, sense of order, vegetation richness, publicity, relaxation, brightness, and color richness are slightly higher than the comprehensive evaluation scores; conversely, sample 2 received lower scores for traditionality and recognizability. The public sees the scene as a spacious space for public events, with diverse vegetation and landscape pieces. Although sample 2 has high vegetation and color richness, most respondents believe the scene lacks its own characteristics and is therefore difficult to identify.

For sample 3, except for the two factors of sense of space and traditionality, the scores of the remaining factors are almost the same as the comprehensive evaluation scores of Aba Prefecture (the yellow line in Figure 6). The low score for sense of space indicates that the scene has a strong sense of enclosure, while traditionality obtains the highest score among the 19 evaluation factors. The application of traditional elements, such as traditional architectural styles and Chinese characters, makes the scene highly recognizable.

The evaluation scores of the sample 4 scene are slightly higher than the overall evaluation scores of Aba Prefecture, and only the sense of order scores lower (the dark blue line in Figure 6). Brightness and color richness, which are closely related to visual communication, are well received by the public, and traditionality also receives a higher score. The public believed that the scene interpreted traditional meaning well: not only did the quaint buildings and sculptures convey a traditional feeling, but the diverse vegetation and colors also enriched the respondents' visual experience.

Sample 5 is a typical combination of traditional utensil modeling and modern architectural design (Figure 6). Its evaluation curve floats around the benchmark curve: except for the four factors of vitality, novelty, security, and recognizability, the scores of the remaining factors are all lower than the comprehensive evaluation scores of Aba Prefecture. The interviewees pointed out that the "ding" (ancient tripod vessel) shape of the building is quite novel, and the use of traditional patterns and local bronze materials also conveys the cultural atmosphere well.

Compared with the benchmark curve, sample 6 gets high evaluations in terms of sense of space, sense of order, quietness, publicity, and coordination, while its scores for staggered degree and color richness are slightly lower than the Aba Prefecture composite scores. Most respondents consider the square space relatively ordinary and lacking in visual attraction, but the traditional pattern reliefs reflect certain traditional characteristics and highlight cultural features.

The public's overall evaluation of sample 7 is high; only in sense of order, novelty, security, cleanliness, and recognizability does it score slightly lower than the comprehensive evaluation of Aba Prefecture.
Respondents clearly liked the scene because of the rich vegetation, vast waters, and ancient pagodas serving as landscape nodes.

The comprehensive evaluation of the last scene, sample 8, is lower than the overall evaluation of Aba Prefecture; however, its staggered degree and vegetation richness achieved better evaluations. The sign is located in the green space near the building and has a certain legibility. At the same time, the abstraction of the horse-head-wall elements of ancient Huizhou buildings and the use of ancient Chinese characters convey a certain traditional cultural connotation.
## 4. Conclusions

Our study examines the application of Chinese traditional art elements in the visual communication design of cultural industry parks. Taking Aba Prefecture, Sichuan Province, as an example, graphic analysis and an SD method questionnaire survey were carried out to evaluate the public's satisfaction with the application of traditional art elements in the cultural industry park. First, according to plane composition theory, the distribution of traditional elements in the park's plane layout was analyzed in terms of points, lines, and planes. Second, 8 scenes in the case area were selected and photographed, the evaluation factors and evaluation scales were established, and the SD questionnaire was conducted; a total of 81 valid questionnaires were recovered. Through data analysis and graphic display, we found that the public evaluated the application scenarios of different traditional elements differently. Generally speaking, the public tended to affirm traditional architectural styles.

In cultural industry parks, scenes with traditional art elements are more likely to be well received by the public than scenes without them, indicating that the public has a positive attitude toward the expression of traditional culture in the planning and construction of a park. Scenes rich in traditional elements play a positive role in transmitting traditional culture and creating a traditional atmosphere.

As for the application of traditional elements, reproducing the traditional cultural atmosphere through traditional architectural styles and landscape layouts is the easier route: it attracts public attention and brings a rich visual experience more readily than applying abstract, simplified traditional elements to modern design.

In view of the public's acceptance and affirmation of cultural elements, in a society with a strong cultural atmosphere, the use of traditional art elements can produce positive effects in visual communication. At the same time, the rational and effective use of traditional art elements can help improve the quality and operating effect of a cultural industry park. In addition, cultural leisure and shopping districts have gradually become the main spaces providing recreation and entertainment for the public.
The combination of traditional cultural elements and modern design has a certain significance in conveying Chinese traditional culture. Applying traditional art elements rationally and effectively to the visual communication design of a park can achieve positive results. Traditional elements (for example, ancient Chinese architectural forms and traditional patterns) are figurative expressions of Chinese traditional culture, and visual images with a traditional artistic conception can directly convey cultural characteristics to the public.

However, the number and scale of China's cultural industry parks are increasing, and the previous literature shows that Aba Prefecture usually lacks a unified plan and that regional cultural connotations are insufficiently excavated. In terms of visual communication, traditional architectural styles, ancient figures, traditional patterns, and the like are often used in the design of the study areas. Traditional elements should not be applied blindly and without dialectical thinking; attention should be paid to combining them with local culture. With the development of modern technology, digital media has influenced visual communication design to a certain extent; to improve the design quality of an area, the interactive participation of the public can be considered an important factor.

A limitation of this study is that the samples selected for the SD questionnaire were all scenes shot during the day, showing the visual image of the area under natural light. The public's evaluation of the sample spaces therefore cannot reflect the visual experience at night, such as the influence of the area's lighting design on the expression of traditional culture. The role of lighting design in visual communication can be analyzed in future research.

---

*Source: 1020033-2022-08-27.xml*
2022
# Video Scene Information Detection Based on Entity Recognition

**Authors:** Hui Qian; Mengxuan Dai; Yong Ma; Jiale Zhao; Qinghua Liu; Tao Tao; Shugang Yin; Haipeng Li; Youcheng Zhang

**Journal:** Wireless Communications and Mobile Computing (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1020044

---

## Abstract

Video scene information detection is widely used in the fields of video query, character anomaly detection, surveillance analysis, and so on. However, most existing research pays much attention to the subject or the video background but little attention to the recognition of scene information. Moreover, because there is no strong relation between the pixel information and the scene information of video data, it is difficult for computers to obtain the corresponding high-level scene information from the low-level pixel information of video data. Video scene information detection mainly detects and analyzes multiple features in the video and labels the scenes in the video. It aims at automatically extracting video scene information from all kinds of original video data and realizing the recognition of scene information through "comprehensive consideration of pixel information and spatiotemporal continuity." To solve the problem of transforming pixel information into scene information, this paper proposes a video scene information detection method based on entity recognition. This model integrates the spatiotemporal relationship between the video subject and objects on the basis of entity recognition, so as to realize the recognition of scene information by establishing a mapping relation. The effectiveness and accuracy of the model are verified by simulation experiments with TV series as experimental data. The accuracy of the model in the simulation experiments reaches more than 85%.

---

## Body

## 1. Introduction

With the development of computer networks and multimedia technology, the way people receive information has shifted from traditional text and pictures to video streams. Taking China as an example, in the first half of 2020 the number of online audiovisual users reached 901 million, a year-on-year growth of 4.87% (https://new.qq.com/omn/20201014/20201014A05GLY00.html), which has led to a sharp increase in video data. With the development of 5G technology, video's share of worldwide mobile data traffic is expected to climb from 60% in 2018 to 74% in 2024 (https://blogs.cisco.com/sp/mobile-vni-forecast-2017-2022-5g-emerges). In such an environment with a large amount of video data, understanding video content is an important step for intelligent systems to approach human understanding ability, and it has great application value in social services, national security, and industrial development. However, video data are characterized by a lack of structure, strong redundancy, high dimensionality, deeply hidden information, and difficulty of understanding. How to map complex video information into a semantic space in line with human cognitive habits is a challenge for video information extraction.

In recent years, the extraction and analysis of video information has become an important research topic in video processing, with great significance for video semantic extraction, video query, and other tasks. Character detection and background detection, which are similar to scene information detection, have been studied in depth and widely applied [1–9].
However, there is not much in-depth research on video scene information. At present, most proposed models target the recognition of faces [10], characters [11, 12], or background content [14] in video: they extract key frames and recognize the character or scene information in those frames to extract relationships between characters [13–15] or to classify video scenes [16, 17]. Zheng and Yu [10] combined the squeeze-and-excitation network (SEN) and residual network (ResNet) to accurately detect the face information in each frame and extract the position of the target face, and then extracted face features from adjacent frames through the RNFT model to predict the position of the target face in the next frame. Gong and Wang [16] extracted background audio signals from match shots and recognized the sounds of cheering and hitting in the audio signal of each match shot; by combining background audio signals with shot image information, their method achieves more accurate video classification. Ding and Yilmaz [14] analyzed whether characters appear in the same video scene to extract the network of relationships among the characters in a video. Tran and Jung [15] counted the cooccurrence of characters in video images to extract their relationships. However, most of these methods only take the global character/scene features at the camera level into consideration, ignoring the more informative local features and the relations that exist among them.

Scene detection is also widely used in real life. For example, during the novel coronavirus epidemic that started in 2020, online meetings and online teaching became more and more popular, and video data of meetings and courses increased accordingly. When processing such video data, we find a common application scenario: in a video, we usually only pay attention to the state of a target person/object in a specific situation. For instance, suppose a student attends two consecutive classes in the same classroom, and the surveillance camera in the classroom records a video spanning both classes. We would like to analyze the student's attendance in one of the classes to determine whether he was late, left early, or left and returned after a period of time. When using the video information processing models mentioned above to analyze this, we find the following problems: (1) without more information, it is difficult for the computer to judge directly whether the student is in a changed course or not; (2) the computer is able to recognize all the parts of the whole video in which the student was absent, but determining whether an absence occurred in the course we care about usually has to be done manually.

Lei et al. [21] proposed the SSCD method, which recognizes changing objects in a fixed scene and judges changes in street scenes. However, it cannot solve the above problems: in the case of lens movement or a large number of personnel changes, the error rate of the model increases greatly, and it is difficult for it to handle human-centered video. Similarly, the method proposed by Santana et al. [22] can rapidly recognize moving objects from a fixed perspective and judge scene changes based on the results, but it can only obtain the contour map of moving objects and still cannot solve the above problems well.
The method proposed by Huang and Liao [23] can perform scene detection from the perspective of motion, but it has certain requirements on the consistency of the video. Moreover, the method compares frame by frame, which places high demands on machine performance and yields insufficient processing speed.

To solve the above problems, a video scene information detection model based on entity recognition is proposed in this paper. This model makes use of both global information at the video level and local information at the entity level to obtain more accurate results. Beyond the classroom example, there are many similar application scenarios, such as judging the situation of a meeting in progress or judging anomalies in security video, which existing video processing models do not handle well.

According to the spatiotemporal features of a video scene, this paper selects the state of the video objects as the characteristic that helps analyze and understand the video scene, combines it with the state feature of the video subject, and determines the scene feature of the video subject. The innovations of this paper can be summarized in the following three points: (1) this paper proposes a new scene information detection model, which can recognize changes of video scene information with high efficiency; (2) this paper establishes scene features by combining the spatiotemporal continuity between the subject and the objects in the video content, which enables the model to recognize scene information without semantic information about the video objects and achieves good results; (3) the accuracy of the model proposed in this paper reaches 80%.

In this paper, we explain and verify the above research contents. Section 2 briefly introduces existing entity recognition models, such as Yolo, and some mature face recognition models, such as face_recognition; at the present stage, these models are prerequisites for the experiments in this study. Section 3 introduces the model, including its establishment, mathematical basis, and parts of the pseudocode. Section 4 presents our experimental results and summarizes the failure cases, which need to be examined further in our subsequent research. Section 5 summarizes the research content and briefly introduces the main research directions for the future.

## 2. Relevant Work

### 2.1. Yolo

Yolo is a target detection method [18] characterized by rapid detection and high accuracy. Redmon regarded the target detection task as a regression problem of target region prediction and category prediction: a single neural network directly predicts item boundaries and category probabilities to achieve end-to-end item detection. Yolo is widely used in target detection [19], target tracking [20], and other applications. Zhang et al. [19] used depthwise separable convolution to optimize the convolution layers of the tiny Yolo model, dividing a complete convolution operation into depthwise convolution and pointwise convolution, thus reducing the parameters of the CNN and improving operation speed. Mohammed et al. [20] combined neural networks, image-based tracking, and Yolo V3 to solve the problem of intelligent vehicle tracking.

In this paper, Yolo V4 can be used as the target detection network in the entity detection stage.
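As a hedged illustration of how a pretrained Yolo V4 network could serve as the target detection network in this entity detection stage, the following sketch uses OpenCV's DNN module; the configuration and weight file names are hypothetical placeholders, and the simple confidence thresholding is a simplification rather than the paper's full pipeline.

```python
import cv2
import numpy as np

# Hypothetical file names; the Darknet config and weights must be obtained separately.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
output_layers = net.getUnconnectedOutLayersNames()

frame = cv2.imread("frame_0001.jpg")  # hypothetical video frame
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(output_layers)

# Each detection row: [cx, cy, w, h, objectness, per-class scores...]
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:  # keep only confident entity detections
            print(f"entity class {class_id}, confidence {confidence:.2f}")
```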
On the basis of Yolo V3, Yolo V4 introduced many innovations. At the input end, the main improvements concern training, including Mosaic data augmentation, CmBN, and SAT self-adversarial training. The backbone network combines several new techniques, including CSPDarknet53, the Mish activation function, and DropBlock. The neck of the target detection network inserts additional layers between the backbone and the final output layer, such as the SPP module and the FPN+PAN structure in Yolo V4. The anchor frame mechanism of the output layer is the same as that of Yolo V3; the main improvements are the CIoU loss function used during training and the replacement of the NMS used for prediction-box screening with DIoU-NMS. Yolo V4 is a major update of the Yolo series, with average precision (AP) and frames per second (FPS) on the COCO dataset improved by 10% and 12%, respectively.

### 2.2. Face Recognition Algorithm

In the model proposed in this paper, it is also feasible to use a face recognition algorithm directly in place of the target detection network. This reduces the accuracy of the model to some extent, but the computational efficiency is better than that of a complete target detection network. When only the face recognition algorithm is used for scene information detection, the target objects are replaced by face recognition results, which greatly reduces the computational load of the model.

face_recognition is a powerful, simple, and easy-to-use open-source face recognition project, equipped with complete development documentation and application cases, and compatible with the Raspberry Pi. Faces can be extracted, recognized, and manipulated using Python and command-line tools. face_recognition is a deep learning model based on the C++ open-source library dlib; tested on the face dataset Labeled Faces in the Wild, it reaches 99.38% accuracy, although its recognition accuracy for children and Asian faces has yet to be improved.

SeetaFace2 is a face recognition project written in C++ that supports the Windows, Linux, and ARM platforms and does not rely on third-party libraries. The project includes the face detection module FaceDetector, the face key point locating module FaceLandmarker, and the face feature extraction and comparison module FaceRecognizer. FaceDetector achieves a recall rate of over 92% at 100 false detections on FDDB; the project also supports 5-point and 81-point localization of face key points, and its 1-to-N module supports face recognition applications with a gallery of thousands of people.
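As a sketch of how such a library could stand in for the target detection network, the following uses the open-source face_recognition package; the image file names are hypothetical, and this simplified per-frame check is an illustration rather than the paper's full model.

```python
import face_recognition

# Hypothetical inputs: a reference photo of the subject and one extracted video frame.
subject_image = face_recognition.load_image_file("subject.jpg")
frame_image = face_recognition.load_image_file("frame_0001.jpg")

# Encode the subject's face (assume exactly one face in the reference photo).
subject_encoding = face_recognition.face_encodings(subject_image)[0]

# Detect and encode every face entity present in the frame.
frame_encodings = face_recognition.face_encodings(frame_image)

# Mark the frame as containing the subject if any detected face matches.
matches = face_recognition.compare_faces(frame_encodings, subject_encoding)
print(f"Subject present in frame: {any(matches)}")
```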
## 3. Model

### 3.1. Model Description

The steps of video scene information extraction are as follows. Firstly, the input video is analyzed and preprocessed to obtain the entity targets in each frame of the video; the main purpose of this step is to lay a good foundation for the subsequent subject-object labeling and the establishment of the spatiotemporal relationship. Secondly, according to the input subject picture, the entity targets are compared and labeled, and the remaining entity targets are labeled as objects. Then, the video subject labeling results are used as scene nodes to extract and analyze the spatiotemporal relationship between the objects and the subjects in the video, so as to judge whether the scene is continuous.
Finally, the attributes of the scene nodes, namely, the scene information of the subject, are determined in the continuous scene.

This paper mainly focuses on the following questions:

(1) How to establish the relationship between subjects and objects?

(2) How to judge the attributes of scene nodes?

The model in this paper addresses these questions through three stages of information processing.

#### 3.1.1. The First Stage: Establish the Spatiotemporal Relationship between the Subject and the Object

In this stage, we lock onto the current situational information by establishing the relationship between the subject and the object, which is also the situational feature introduced into the model. The spatiotemporal relationship rests mainly on the randomness of object selection: under the same conditions, the mathematical probability that a certain number of entities selected at random in the initial image of the scene will all appear abnormal simultaneously within this period of time and space is very small.

Following the Bayesian probability formula, let the subject be $X$ and the object set be $Y = \{y_1, y_2, \cdots, y_n\}$, where $y_1, y_2, \cdots, y_n$ are mutually independent and random. Let $P_y$ denote the anomaly probability of an object in $Y$; then the probability $P$ that all $n$ objects and the subject are anomalous at the same time is

$$P = P_y^{\,n} \cdot P_{\mathrm{subject}}, \tag{1}$$

where $P_{\mathrm{subject}}$ denotes the anomaly probability of the subject. As shown in Figure 1, when $P_y = 0.3$, the probability of misrecognition is less than 5% when the value of $n$ is greater than 3.

Figure 1: Influence curve of the $n$ value on model accuracy when $P_y = 0.3$.

The spatiotemporal relationship is also reflected in the spatiotemporal continuity of the object: in the same scene, the mathematical probability that an entity selected at random in the initial image of the scene is abnormal several times in a row within this period of time and space is also very small.

Since the occurrences of an anomaly in the same entity are independent events, letting the object anomaly probability be $P_y$, the probability $P_{\mathrm{object}}$ that the object is abnormal $n$ times in a row is

$$P_{\mathrm{object}} = P_y^{\,n}. \tag{2}$$

As shown in Figure 2, when $P_y = 0.2$ and $n = 2$, the probability of misrecognition is 4%.

Figure 2: Influence curve of the $n$ value on model accuracy when $P_y = 0.2$.
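To make the effect of $n$ concrete, here is a minimal sketch (plain Python, no external dependencies) that evaluates Formulas (1) and (2); note that Formula (1) also multiplies by the subject's anomaly probability, which the sketch exposes as a parameter.

```python
def misrecognition_prob(p_y: float, n: int, p_subject: float = 1.0) -> float:
    """Formula (1): probability that n independent objects and the subject
    are all anomalous at once; p_subject = 1 reduces it to Formula (2)."""
    return (p_y ** n) * p_subject

# Formula (2) for P_y = 0.3: the probability shrinks geometrically with n.
for n in range(1, 6):
    print(n, misrecognition_prob(0.3, n))
# n = 1..5 gives roughly 0.3, 0.09, 0.027, 0.0081, 0.0024

# Figure 2's example: P_y = 0.2 and n = 2 gives 0.2**2 ≈ 0.04, i.e., 4%.
print(misrecognition_prob(0.2, 2))
```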
#### 3.1.2. The Second Stage: Recognize whether the Subject and the Object Are Abnormal

Having established the subject-object relationship in the previous stage, we can mark the clips of the same scene in the video. The main work of this stage is divided into three steps. Firstly, each frame image is named according to the video frame order, and the fragments belonging to the same scene are split one by one. Secondly, the features of the partial images of each entity in each frame of the same scene are extracted and compared with the target image features, to recognize whether the subject is present in each video frame of the continuous scene; the file names of the images in which the subject exists are collected as the subject recognition set. Finally, the features of the partial images of each entity are compared with the recognized object image features, to recognize whether each object is present in each video frame of the continuous scene; the file names of the images in which the object exists are collected as the object recognition set.

The scene feature of a video $V$ in a continuous scene is defined as

$$\hat{V}(X,Y,t) = \frac{V(X,Y,t) - V(X,Y,t_l)}{\sigma(X,Y,t_l)}, \tag{3}$$

where $X \in \{1,2,\cdots,M\}$ and $Y \in \{0,1,\cdots,N\}$ are scene indices, $M$ and $N$ are the numbers of subjects and objects in the video frame, respectively, $t \in \{1,2,\cdots,T\}$ is the temporal index, $T$ is the number of frames in the video, $V(X,Y,t)$ is the quantity of subjects and objects at time $t$, and $t_l$ is the last time the situation changed. Here

$$\sigma(X,Y,t_l) = \frac{\sum_{\tau=t_l}^{t} V(X,Y,\tau)}{t - t_l} \tag{4}$$

is the average value of $V$ over the frames from the last scene change to the present.

Since feature extraction and comparison are independent, the tasks of this stage can be parallelized to improve detection efficiency. Moreover, natural video has high correlation among neighboring pixels in both space and time, so to further improve processing efficiency we can also choose to extract one picture every few frames for comparison.
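The frame-skipping idea at the end of this stage can be realized directly with OpenCV; the following is a minimal sketch, with the video path and sampling step as hypothetical parameters.

```python
import cv2

def sample_frames(video_path: str, step: int = 5):
    """Yield (frame_index, frame) for every `step`-th frame of the video."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if index % step == 0:
            yield index, frame
        index += 1
    cap.release()

for i, frame in sample_frames("meeting.mp4", step=5):
    pass  # run entity recognition / feature comparison on this frame
```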
#### 3.1.3. The Third Stage: Calculate the Results of Scene Detection

Once the subject and object recognition sets of the previous stage are obtained, we can integrate them to obtain the scene detection results. In the preceding work, renaming each frame image and using the image file names as the result sets was chosen precisely to reduce the computational load of this stage and thus improve the efficiency of result integration.

The work of this stage is divided into two steps. Firstly, the intersection of the object recognition sets is taken, and then its intersection with the subject recognition set. The images in this intersection have two properties: (1) the scene information does not change within the same scene, and (2) there is no anomaly in the subject. From these two properties, we can obtain the video clips corresponding to the frames in the intersection and be assured that the scene of this part of the video is unchanged and the subject is not abnormal:

$$R_{\mathrm{normal}} = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n), \tag{5}$$

where $A$ is the set of frames in which the subject is recognized and $B_i$ is the set of frames in which the $i$th object is recognized.

Then, the file names in the subject recognition set are compared with those in the object recognition sets to obtain the frames that contain the subject but not the objects. From this comparison we obtain the corresponding video clips and determine that scene changes have taken place in this part of the video:

$$R_{\mathrm{abnormal}} = \bigcup \,-\, R_{\mathrm{normal}}, \tag{6}$$

where $\bigcup$ denotes the union of the subject and object recognition sets.
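Because the recognition sets are just sets of frame file names, Formulas (5) and (6) map directly onto Python set operations; the following is a minimal sketch under that assumption, with hypothetical example data.

```python
# A: frames in which the subject was recognized;
# Bs: one frame set per reference object (hypothetical example data).
A = {"f001", "f002", "f003", "f004", "f005"}
Bs = [{"f001", "f002"}, {"f001", "f002", "f003"}]

# Formula (5): frames where the subject co-occurs with at least one object.
R_normal = set().union(*(A & B for B in Bs))

# Formula (6): the rest of the recognized frames, i.e., candidate scene changes.
R_abnormal = (A | set().union(*Bs)) - R_normal

print(sorted(R_normal))    # ['f001', 'f002', 'f003']
print(sorted(R_abnormal))  # ['f004', 'f005'] -- subject present, objects gone
```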
### 3.2. Establishment of the Spatiotemporal Relationship

Many existing models, such as Yolo and face_recognition, can already perform fast entity recognition on image information. In this paper, such entity recognition results are taken directly as the entity targets of video scenes. After the entity target result set of the video is obtained with these models, the entity relationship algorithm is used to establish the relationship between the subject and the object targets and to build the scene features.

```
Algorithm 1: Entity relationship algorithm
Input:  entity target set, number of reference objects
Output: object target set

encodings <- encodings of all entity targets in the first frame
temp <- empty list
for i in 0 .. length(encodings) - 1:
    if the subject encoding equals encodings[i]:
        add i to temp                      # exclude the subject from sampling
num <- the number of reference objects
for i in 0 .. num - 1:
    rand <- random integer in [0, length(encodings) - 1]
    while rand is in temp:
        rand <- random integer in [0, length(encodings) - 1]
    add rand to temp
    add encodings[rand] to the object target list
```

The entity targets of the frame are recognized by the entity recognition method and used as the input of the relationship algorithm. The number of reference objects is determined by the user, and the corresponding number of objects is selected arbitrarily from the entity targets as the reference objects of the current scene. According to naive Bayes theory, there is great similarity between an arbitrarily selected object and the subject in the same scene, and the more objects are selected, the stronger the relationship in space and time.
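A minimal Python sketch of Algorithm 1; the `same_entity` predicate (e.g., an encoding distance threshold) is an assumption standing in for whatever comparison the chosen entity recognizer provides.

```python
import random

def entity_relationship(encodings, subject_encoding, num_objects, same_entity):
    """Algorithm 1: pick `num_objects` random reference objects from the
    first frame, excluding every entity that matches the subject.
    Assumes enough non-subject entities exist in the frame."""
    excluded = [i for i, enc in enumerate(encodings)
                if same_entity(enc, subject_encoding)]
    objects = []
    for _ in range(num_objects):
        rand = random.randrange(len(encodings))
        while rand in excluded:
            rand = random.randrange(len(encodings))
        excluded.append(rand)   # do not pick the same object twice
        objects.append(encodings[rand])
    return objects
```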
### 3.3. Judgment of Scene Node Attributes

After the spatiotemporal relationship is established, the continuity of the scene information is detected first, since the judgment of scene node attributes has practical value only within continuous scenes. Once the continuous video clips of a scene are obtained, the attributes of the scene nodes are determined according to the state of the subject of the video.

```
Algorithm 2: Scene attribute judgment algorithm
Input:  subject encoding, object target set
Output: scene node attributes

i <- 0
encodings <- encodings of all entity targets in the current frame
for each encoding of the scene's entity targets:
    if the encoding is not in encodings:
        i <- i + 1
if i equals the number of scene entities:
    the scene has changed
else:
    the scene has not changed
encodings <- encodings of all object targets in the current frame
if the subject encoding is in encodings:
    the subject of the video is in the particular situation
else:
    the subject of the video is not in the particular situation
```

The subject target and the output of the relationship algorithm are taken as the input of the judgment algorithm. Following the relationship between the subject and the objects, the entity targets in the current frame are traversed, and the scene attributes are determined from the subject state and the object states.
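A minimal Python sketch of Algorithm 2 for a single frame; as above, `same_entity` is an assumed comparison predicate, and the exact membership semantics follow the reconstruction of the pseudocode.

```python
def judge_scene(frame_entities, scene_entities, frame_objects,
                subject_encoding, same_entity):
    """Algorithm 2: decide whether the scene has changed and whether the
    subject is still in the situation, for one frame."""
    # Count scene entities that no longer appear anywhere in the frame.
    missing = sum(
        1 for scene_enc in scene_entities
        if not any(same_entity(scene_enc, f) for f in frame_entities)
    )
    # The scene is considered changed when every scene entity is missing.
    scene_changed = (missing == len(scene_entities))

    # The subject is "in the situation" if it appears among the frame targets.
    subject_present = any(same_entity(subject_encoding, f)
                          for f in frame_objects)
    return scene_changed, subject_present
```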
### 3.4. Video Scene Detection Model

After entity relationship establishment and scene node attribute judgment are completed, the information of one scene can be detected. In general, however, a video contains multiple pieces of scene information. Therefore, on the basis of Algorithms 1 and 2, this paper proposes Algorithm 3 to detect all the scene information in a video.

```
Algorithm 3: Video scene detection model
Input:  video data
Output: scene detection results

for i in 0 .. total video frames - 1:
    ic <- i + 1
    image <- the ic-th frame of the video
    if image is the first frame:
        determine whether the subject of the video is in the first frame
    else if the scene has not changed:
        determine whether the subject of the video is in the frame
        if the result is True:
            add ic to timeImage
            set nextframe to False
        else if the time lag between now and the last scene is more than 3 s:
            add ic to timeImage
            set nextframe to False
        else:
            clear the last record in timeImage
            set nextframe to True
    else:
        is_end <- True
        for each frame in the scene change window:
            if the scene is back to the original:
                is_end <- False
        if is_end is True:
            add ic to timeImage
```

The content of the first frame of the video is taken as the initial scene information. Algorithms 2 and 3 traverse the video data; whenever a change of the video scene information is detected, the time sequence of the scene change frame is recorded, and the content of the scene change frame is taken as the initial scene information for the subsequent video data, cycling until the end of the video.
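A deliberately simplified Python sketch of the overall detection loop, reduced to the recoverable core of Algorithm 3: it assumes a per-frame helper in the spirit of the Algorithm 2 sketch above, and the 3-second debounce mirrors the time-lag check in the pseudocode.

```python
def detect_scenes(frames, fps, judge_frame, debounce_s=3.0):
    """Walk the video and record frame indices where the situation changes.
    `judge_frame(frame) -> (scene_changed, subject_present)` stands in for
    Algorithm 2 applied with the current scene's entities."""
    change_frames = []
    last_change = 0
    for ic, frame in enumerate(frames, start=1):
        scene_changed, subject_present = judge_frame(frame)
        if scene_changed or not subject_present:
            # Debounce: keep the change only if enough time has passed
            # since the last recorded change, discarding brief flickers.
            if (ic - last_change) / fps > debounce_s:
                change_frames.append(ic)
                last_change = ic
    return change_frames
```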
## 4. Experiments

### 4.1. Experimental Data

The experimental datasets adopted in this paper are public video datasets whose main content comes from the TV plays Ten Miles of Peach Blossom (the data comes from Tencent Video, is used only for academic research in this paper, and the copyright belongs to the Tencent company), Hospital Playlist (the data comes from Netflix, is used only for academic research in this paper, and the copyright belongs to the Netflix company), Nirvana in Fire (the data comes from Tencent Video, is used only for academic research in this paper, and the copyright belongs to the Tencent company), and It Started with a Kiss.
The average scene switching time in each dataset is 7-10 seconds.

To analyze the performance of the proposed algorithm in this experimental environment, the evaluation index used in this study is precision:

$$\mathrm{precision} = \frac{\text{correctly recognized number}}{\text{total}}. \tag{7}$$

Its value lies between 0 and 1, and the closer it is to 1, the better the model performs.

The hardware and software configuration used in the experiments is as follows: a Ryzen 5 3600 CPU, a GTX 1660 graphics card, 16 GB of RAM, the Windows 10 operating system, and Python 3 as the development language.

### 4.2. Experimental Results and Analysis

As described in Section 3, the model proposed in this study processes and computes over the entity recognition results of the video, so mature existing entity recognition algorithms are used in the experiments. Two existing character recognition algorithms are used to meet the needs of the model: first, face_recognition is used to extract environmental features and human face features; second, SeetaFace2 is used to extract environmental features and human face features. The evaluation criteria are whether the target disappears and whether the scene changes. Scene changes were marked manually for this experiment. The precision of the manual marks is seconds, while the precision of model detection is video frames; because of this mismatch, a detection is counted as correct when the time axis position of the video frames contained in the detection result coincides with the manual mark.

#### 4.2.1. Experiments Based on face_recognition

In this experiment, we use face_recognition as the entity recognition part of the model: it recognizes the characters in the video images, which serve as the entities used to establish the scene features. After establishing the association between the entities in the video (Algorithm 1), the model computes the scene feature of each frame with Formula (3), records the frame numbers whose $\hat{V}(X,Y,t)$ is abnormal, and determines the corresponding time points on the time axis. These are then compared with the manually marked change times to obtain the test results.

The five datasets were tested separately, and the experimental results are shown in Table 1.

Table 1: The experimental results based on face_recognition.

| Dataset  | Correctly recognized | Total | Wrongly recognized |
|----------|----------------------|-------|--------------------|
| Dataset1 | 7                    | 7     | 0                  |
| Dataset2 | 9                    | 10    | 1                  |
| Dataset3 | 16                   | 18    | 0                  |
| Dataset4 | 15                   | 17    | 0                  |
| Dataset5 | 12                   | 14    | 0                  |

Some of the unrecognized images are shown in Figure 3.

Figure 3: Incorrect results with face_recognition.

We processed these images separately and found that in some frames the model could not recognize the face features, leading to recognition errors during scene detection; this may be caused by decorations on the face. The experimental results show that face_recognition works well on these datasets. However, because face features are currently used as the scene features, face_recognition sometimes fails to extract character features, which has a serious impact on the establishment of scene features in the proposed model: due to face recognition errors, a frame missing a certain entity is wrongly recognized as a change of the scene features. On the whole, the model proposed in this paper does well in scene detection when the number of entities is limited.
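As a small illustration, Formula (7) can be evaluated per dataset directly from the counts in Table 1:

```python
# (correctly recognized, total) per dataset, copied from Table 1.
table1 = {"Dataset1": (7, 7), "Dataset2": (9, 10), "Dataset3": (16, 18),
          "Dataset4": (15, 17), "Dataset5": (12, 14)}

for name, (correct, total) in table1.items():
    print(f"{name}: precision = {correct / total:.2f}")
# Dataset1: 1.00, Dataset2: 0.90, Dataset3: 0.89, Dataset4: 0.88, Dataset5: 0.86
```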
#### 4.2.2. Experiments Based on SeetaFace2

This experiment is similar to the previous one, except that SeetaFace2 is used instead of face_recognition as the entity recognition part of the model: SeetaFace2 recognizes the faces in the video images, which serve as the entities used to establish the scene features, and the same datasets are tested. The experimental results are shown in Table 2.

Table 2: The experimental results based on SeetaFace2.

| Dataset  | Correctly recognized | Total | Wrongly recognized |
|----------|----------------------|-------|--------------------|
| Dataset1 | 7                    | 7     | 0                  |
| Dataset2 | 9                    | 10    | 0                  |
| Dataset3 | 14                   | 18    | 0                  |
| Dataset4 | 16                   | 17    | 1                  |
| Dataset5 | 12                   | 14    | 0                  |

Some of the unrecognized images are shown in Figure 4.

Figure 4: Incorrect results with SeetaFace2.

We found that the SeetaFace2 model recognizes faces very sensitively and can even recognize supporting characters in the background of shots; as the camera moves, the number of supporting characters changes dramatically, leading to the misjudgment of a scene switch. The experimental results show that SeetaFace2's recognition is very sensitive: SeetaFace2 uses a ResNet50 model structure in which multiple residual learning blocks are connected in series and deep image representations are exploited, so recognition is very sensitive and recognition errors occur. Unlike the problems mentioned above, when SeetaFace2 is combined with the model proposed in this paper, the number of entities increases abnormally in some frames because background characters are recognized; this causes errors in establishing the scene features and thus recognition errors.

#### 4.2.3. Experiment Summary

In addition to the experiments on the open datasets above, we also ran experiments on 20 self-made datasets consisting of meeting recording videos. To produce situational changes, the videos contain situations such as characters leaving midway, characters joining midway, and the meeting pausing. The results are as follows:

(i) face_recognition: 87%

(ii) SeetaFace2: 85%

The test results meet expectations: the model proposed in this paper achieves fairly accurate scene change detection, and it can detect video scene changes using face recognition results as the main entities. The feasibility and universality of the model are demonstrated by the experiments. We believe that the accuracy can be further improved if object recognition results are also introduced as entities. In some special cases, however, such as too many characters in the background, characters turning their backs, or decorations on the face, scene recognition fails. In the future, we plan to increase the correct rate of scene change recognition through judgment logic, model recognition, the addition of a background object feature recognition module, and other measures.
## 5. Conclusion

This paper proposes a video scene information detection model based on entity recognition, which can accomplish the task of video scene information detection on the premise that entity recognition has been performed on the video pixel data. The proposed model is robust, and its precision can reach more than 85%. At the same time, entity recognition can be replaced by a face recognition algorithm as the input of scene information detection without much impact on the detection results.

We take the spatiotemporal relationship of video entities as the basis of situational information detection and put forward the concept of situational features to ensure the logical soundness of the model. In the experiments, we found some remaining problems, such as overreliance on the accuracy of entity recognition and difficulty in screening out noisy information effectively. In future research, we will focus on how to better combine entity recognition models with the model proposed in this paper, so as to improve the detection efficiency of the proposed video scene information detection model.
1020044-2021-10-31_1020044-2021-10-31.md
65,454
Video Scene Information Detection Based on Entity Recognition
Hui Qian; Mengxuan Dai; Yong Ma; Jiale Zhao; Qinghua Liu; Tao Tao; Shugang Yin; Haipeng Li; Youcheng Zhang
Wireless Communications and Mobile Computing (2021)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1020044
1020044-2021-10-31.xml
--- ## Abstract Video situational information detection is widely used in the fields of video query, character anomaly detection, surveillance analysis, and so on. However, most of the existing researches pay much attention to the subject or video backgrounds, but little attention to the recognition of situational information. What is more, because there is no strong relation between the pixel information and the scene information of video data, it is difficult for computers to obtain corresponding high-level scene information through the low-level pixel information of video data. Video scene information detection is mainly to detect and analyze the multiple features in the video and mark the scenes in the video. It is aimed at automatically extracting video scene information from all kinds of original video data and realizing the recognition of scene information through “comprehensive consideration of pixel information and spatiotemporal continuity.” In order to solve the problem of transforming pixel information into scene information, this paper proposes a video scene information detection method based on entity recognition. This model integrates the spatiotemporal relationship between the video subject and object on the basis of entity recognition, so as to realize the recognition of scene information by establishing mapping relation. The effectiveness and accuracy of the model are verified by simulation experiments with the TV series as experimental data. The accuracy of this model in the simulation experiment can reach more than 85%. --- ## Body ## 1. Introduction With the development of computer network and multimedia technology, the way characters receive information has shifted from traditional words and pictures to video stream. Taking China as an example, in the first half of 2020, the number of online audiovisual users has reached 901 million, with a year-on-year growth of 4.87% (https://new.qq.com/omn/20201014/20201014A05GLY00.html), which also leads to a sharp increase in video data. With the development of 5G technology, video’s share of worldwide mobile data traffic will climb from 60% in 2018 to 74% in 2024 (https://blogs.cisco.com/sp/mobile-vni-forecast-2017-2022-5g-emerges). In such an environment with a large amount of video data, understanding video content is an important step for the intelligent system to approach human’s understanding ability. It also has a great application value in social services, national security, and industrial development. However, video data is characterized by nonstructuration, strong redundancy, high dimension, deep information hiding, and understanding difficulties. How to map the complex video information into the semantic space in line with human cognitive habits is a challenge for video information extraction.In recent years, the extraction and analysis of video information has become an important research content in video processing, which is of great significance in video semantic extraction, video query, and other aspects. Character detection and background detection, which are similar to scene information detection, have been deeply studied and widely applied [1–9]. However, there are not many in-depth researches on video situational information. 
At present, most of the proposed model is targeted to the recognition of face [10], character [11, 12], or background content [14] of the video, by extracting key frames and recognizing the character information or the scene information in the frames to realize the extraction of the relationship between characters [13–15] and video scene classification [16, 17]. Zheng and Yu [10] combined the squeeze-and-excitation network (SEN) and residual network (ResNet) to accurately detect the face information in each frame, extract the position of the target face, and then extract face features from adjacent frames through the RNFT model to predict the position of the target face in the next frame. Gong and Wang [16] extracted background audio signals from match shots and recognized the sound of cheering and hitting from the audio signals of each match shot. By combining background audio signals and shot image information, this method realizes a more accurate video classification. Ding and Yilmaz [14] used to analyze whether characters appear in the same video scene, so as to extract the relationship network of the characters in the video. Tran and Jung [15] counted the cooccurrence of characters in video images to extract their relationship. However, most of these methods only take the global character/scene features at the camera level into consideration, ignoring the local features with more information and the relations that exist among them.Scene detection is also widely used in real life. For example, in the novel coronavirus epidemic which started from 2020, the mode of online meeting and online teaching has become more and more popular, and the video data of meeting and course have also increased. When we process these video data, we find that there is a kind of application condition, that is, in a video, we usually only pay attention to the state of a target person/object under a specific situation. For instance, if a student participates in two consecutive classes in the same classroom, and the surveillance camera in the classroom will shot a video of these two classes. And we would like to analyze the student’s attendance in one of the classes to ensure whether he was late or left early or returned after leaving for a period of time. When using the video information processing model mentioned above to analyze it, we found the following problems:(1) Without more information, it is difficult for the computer to directly judge whether the student is in a changed course or not(2) The computer is able to recognize all the parts when the student was absent in the whole video, but the process of determining whether the absence occurred in the course we are concerned about usually needs to be done manuallyLei et al. [21] proposed the SSCD method. It realizes the recognition of changing objects in a fixed scene and judges the change of street scene. However, it can not solve the above problems. In the case of lens movement or a large number of personnel changes, the error rate of the model will increase greatly, and it is difficult to deal with the processing of human-centered video. Similarly, there is the method proposed by Santana et al. [22], which can realize the rapid recognition of moving objects from a fixed perspective and judge the scene changes based on the results. However, this method can only obtain the contour map of moving objects and still can not well solve the above problems. 
The method proposed by Huang and Liao [23] can perform the scene detection task from the perspective of motion, but it places certain requirements on the consistency of the video. Moreover, the method compares frame by frame, which demands high machine performance and yields insufficient processing speed.

To solve the above problems, this paper proposes a video scene information detection model based on entity recognition. The model makes use of both global information at the video level and local information at the entity level to obtain more accurate results. Beyond the example above, there are many similar application conditions, such as situational judgment of a meeting process and anomaly judgment in security video, but existing video processing models cannot handle such application conditions well.

According to the spatiotemporal features of the video scene, this paper selects the state of the video objects as the characteristic used to analyze and understand the video scene, combines it with the state feature of the video subject, and thereby determines the scene feature of the video subject. The innovations of this paper can be summarized in the following three points:

(1) This paper proposes a new situational information detection model, which can recognize changes of video situational information with high efficiency
(2) This paper establishes situational features by combining the spatiotemporal continuity between the subject and the objects in the video content, which enables the model to recognize situational information without semantic information about the video objects and achieves good results
(3) The accuracy of the proposed model reaches more than 85% in our experiments

The rest of this paper explains and verifies the above research contents. Section 2 briefly introduces the existing entity recognition models, such as Yolo, and some mature face recognition models, such as face_recognition; at the present stage, these models are the premise for the tests in this study. Section 3 introduces the models, including their establishment, mathematical basis, and part of the pseudocode. Section 4 presents our experimental results and summarizes the failure cases, which need to be further addressed in our subsequent research. Section 5 summarizes the research content and briefly introduces the main research directions for the future.

## 2. Relevant Work

### 2.1. Yolo

Yolo is a target detection method [18] characterized by rapid detection and high accuracy. Redmon treated the target detection task as a regression problem of target region prediction and category prediction: a single neural network directly predicts object boundaries and category probabilities, achieving end-to-end object detection. Yolo is widely used in target detection [19], target tracking [20], and other applications. Zhang et al. [19] used depthwise separable convolutions to optimize the convolution layers of the tiny Yolo model, dividing a complete convolution operation into a depthwise convolution and a pointwise convolution, thus reducing the parameters of the CNN and improving its speed. Mohammed et al. [20] combined neural networks, image-based tracking, and Yolo V3 to solve the problem of intelligent vehicle tracking.

In this paper, Yolo V4 can be used as the target detection network in the entity detection stage. On the basis of Yolo V3, Yolo V4 introduces many innovations. On the input side, the main improvements concern training, including Mosaic data augmentation, CmBN, and self-adversarial training (SAT). The backbone combines several new techniques, including CSPDarknet53, the Mish activation function, and DropBlock. The neck of the detection network inserts additional layers between the backbone and the final output layer, such as the SPP module and the FPN+PAN structure in Yolo V4. The anchor mechanism of the output layer is the same as in Yolo V3; the main improvement is the CIoU loss used during training, and the NMS used for prediction-box filtering is changed to DIoU-NMS. Yolo V4 is a major update of the Yolo series, improving average precision (AP) and frames per second (FPS) on the COCO dataset by 10% and 12%, respectively.
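To make the entity detection stage concrete, the sketch below shows one common way to run a pretrained Yolo V4 model through OpenCV's DNN module and collect per-frame detections. The file names, input size, and confidence threshold are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

# A minimal sketch of the entity detection stage, assuming pretrained
# Yolo V4 files ("yolov4.cfg"/"yolov4.weights") are available locally.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_entities(frame, conf_threshold=0.5):
    """Return [(class_id, confidence, (cx, cy, w, h))] for one frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for out in net.forward(out_names):
        for row in out:  # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf >= conf_threshold:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                detections.append((class_id, conf, (cx, cy, bw, bh)))
    return detections
```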
### 2.2. Face Recognition Algorithm

In the model proposed in this paper, it is also feasible to use a face recognition algorithm directly in place of the target detection network. This reduces the accuracy of the model to some extent, but the computing efficiency is better than with a complete target detection network. When only face recognition is used for scene information detection, the target objects are replaced by face recognition results, which greatly reduces the computational load of the model.

face_recognition is a powerful, simple, easy-to-use open-source face recognition project, equipped with integrated development documents and application cases and compatible with the Raspberry Pi. It provides Python and command-line tools to extract, recognize, and manipulate faces. face_recognition is a deep learning model built on the C++ open-source library dlib; tested on the Labeled Faces in the Wild dataset, it achieves 99.38% accuracy, although its accuracy on children and Asian faces has yet to be improved.

SeetaFace2 is a face recognition project written in C++ that supports the Windows, Linux, and ARM platforms and does not rely on third-party libraries. The project includes the face detection module FaceDetector, the face key point locating module FaceLandmarker, and the face feature extraction and comparison module FaceRecognizer. FaceDetector achieves a recall rate of over 92% at 100 false detections on FDDB; the project also supports 5-point and 81-point localization of face key points, and its 1-to-N module supports face recognition applications with galleries of thousands of people.
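As a sketch of how such a library could slot into the entity recognition stage, the snippet below encodes a known subject with the face_recognition package and tests whether that subject appears in a frame; the image file name and tolerance value are illustrative assumptions.

```python
import face_recognition

# A minimal sketch: encode a known subject once, then test whether the
# subject appears in a given (RGB) video frame.
subject_image = face_recognition.load_image_file("subject.jpg")
subject_encoding = face_recognition.face_encodings(subject_image)[0]

def subject_in_frame(frame_rgb, tolerance=0.6):
    """Return True if any face in the frame matches the subject encoding."""
    for enc in face_recognition.face_encodings(frame_rgb):
        if face_recognition.compare_faces([subject_encoding], enc,
                                          tolerance=tolerance)[0]:
            return True
    return False
```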
## 3. Model

### 3.1. Model Description

The steps of video scene information extraction are as follows. First, the input video is analyzed and preprocessed to obtain the entity targets in each frame; the main purpose of this step is to lay the foundation for the subsequent subject-object labeling and the establishment of the spatiotemporal relationship. Second, according to the input subject picture, the entity targets are compared and labeled, and the remaining entity targets are labeled as objects. Then, the video subject labeling results are used as scene nodes, and the spatiotemporal relationship between the objects and the subject in the video is extracted and analyzed to judge whether the scene is continuous.
Finally, the attributes of the scene nodes, namely, the scene information of the subject, are determined within the continuous scene.

This paper mainly focuses on the following questions: (1) How can the relationship between the subject and the objects be established? (2) How can the attributes of scene nodes be judged? The model in this paper addresses these questions through three stages of information processing.

#### 3.1.1. The First Stage: Establish the Spatiotemporal Relationship between the Subject and the Object

In this stage, we lock the current situational information by establishing the relationship between the subject and the objects, which is also the situational feature introduced by the model. The spatiotemporal relationship rests mainly on the randomness of object selection: under the same conditions, the probability that a certain number of randomly selected entities in the initial image of the scene will simultaneously appear abnormal in the same period of time and space is very small.

Following the Bayesian probability formula, let the subject be $X$ and the object set be $Y = \{y_1, y_2, \dots, y_n\}$, where $y_1, y_2, \dots, y_n$ are mutually independent and randomly selected. Denote the anomaly probability of an object by $P_y$; then the probability $P$ of all $n$ objects and the subject being abnormal at the same time is

$$P = P_y^{\,n} \cdot P_{\text{subject anomaly}}. \tag{1}$$

As shown in Figure 1, when $P_y = 0.3$, the probability of misrecognition is less than 5% once the value of $n$ is greater than 3.

Figure 1: Influence curve of the value of $n$ on model accuracy when $P_{\text{object}} = 0.3$.

The spatiotemporal relationship is also reflected in the spatiotemporal continuity of the objects: in the same scene, the probability that an entity randomly selected in the initial image of the scene is continuously abnormal over this period of time and space is also very small.

Since occurrences of an anomaly in the same entity are independent events, by the principle of event independence, with object anomaly probability $P_y$, the probability $P_{\text{object}}$ of an object being abnormal $n$ consecutive times is

$$P_{\text{object}} = P_y^{\,n}. \tag{2}$$

As shown in Figure 2, when $P_y = 0.2$ and $n = 2$, the probability of misrecognition is $0.2^2 = 4\%$.

Figure 2: Influence curve of the value of $n$ on model accuracy when $P_{\text{object}} = 0.2$.
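A small numerical check of Equations (1) and (2) may help: under the independence assumption, the misrecognition probability decays geometrically in the number of reference objects $n$. The function below simply evaluates $P_{\text{object}} = P_y^n$ at the operating points quoted in the text.

```python
# A worked check of Equations (1)-(2), assuming an object anomaly
# probability p_y and n independently chosen reference objects.
def misrecognition_prob(p_y: float, n: int) -> float:
    """P_object = p_y ** n: probability that n independent objects
    (or one object across n consecutive checks) are all abnormal at once."""
    return p_y ** n

# Reproducing the operating points quoted around Figures 1 and 2:
print(misrecognition_prob(0.3, 3))  # 0.027 -> below 5% once n >= 3
print(misrecognition_prob(0.2, 2))  # 0.04  -> the 4% quoted for n = 2
```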
#### 3.1.2. The Second Stage: Recognize whether the Subject and the Object Are Abnormal

After establishing the subject-object relationship in the previous stage, we can mark the clips of the same scene in the video. The main work in this stage is divided into three steps. First, each frame image is named according to the video frame order, and the fragments belonging to the same scene are split one by one. Second, the features of the partial images of each entity in each frame of the same scene are extracted and compared with the target image features, to recognize whether the subject is present in each video frame of the continuous scene; the file names of the images in which the subject is present are collected as the subject recognition set. Finally, the features of the partial images of each entity are compared with the recognized object image features, to recognize whether each object is present in each video frame of the continuous scene; the file names of the images in which the object is present are collected as the object recognition set.

The scene feature of a video $V$ in a continuous scene is defined as

$$\hat{V}(X, Y, t) = \frac{V(X, Y, t) - V(X, Y, t_l)}{\sigma(X, Y, t_l)}, \tag{3}$$

where $X \in \{1, 2, \dots, M\}$ and $Y \in \{0, 1, \dots, N\}$ are scene indices, $M$ and $N$ are the numbers of subjects and objects in the video frame, respectively, $t \in \{1, 2, \dots, T\}$ is the temporal index, $T$ is the number of frames in the video, $V(X, Y, t)$ is the quantity of subjects and objects at time $t$, and $t_l$ is the last time the situation changed. Here

$$\sigma(X, Y, t_l) = \frac{\sum_{\tau = t_l}^{t} V(X, Y, \tau)}{t - t_l} \tag{4}$$

is the average value of $V$ over the frames from the last scene change to the present.

Since feature extraction and comparison are independent, the tasks at this stage can be parallelized to improve detection efficiency. Likewise, natural video has high correlation among neighboring pixels in both space and time, so to further improve processing efficiency we can also choose to extract one picture every few frames for comparison.

#### 3.1.3. The Third Stage: Calculate the Results of Scene Detection

After the subject and object recognition sets of the previous stage are obtained, we integrate them to obtain the scene detection results. In the work above, renaming each frame image and using the image file names as the result sets reduces the computational load at this stage and improves the efficiency of result integration.

The work in this stage is divided into two steps. First, the intersection of the object recognition sets is taken, and then its intersection with the subject recognition set. The images in this intersection have two properties: (1) the scene information does not change within the same scene, and (2) the subject shows no anomaly. From these two properties, we can obtain the video clips of the frames corresponding to the intersection images and be sure that the scene of this video is unchanged and the subject is not abnormal:

$$R_{\text{normal}} = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n), \tag{5}$$

where $A$ is the set of frames in which the subject is recognized and $B_i$ is the set of frames in which the $i$th object is recognized.

Then, the file names of the subject recognition set and of the object recognition sets are compared to obtain the images containing the subject but not the objects. From the comparison results we obtain the video clips of the corresponding frames and determine that scene changes have taken place in this part of the video:

$$R_{\text{abnormal}} = U \setminus R_{\text{normal}}, \tag{6}$$

where $U$ denotes the set of all frames of the video.
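The following sketch illustrates Equations (3)–(6) under assumed data structures: per-frame entity counts for the scene feature, and per-entity sets of frame names for the recognition sets. The function and variable names are ours, introduced for illustration, not the paper's.

```python
# A minimal sketch of the second and third stages, assuming recognition
# has already produced, per frame, the count of recognized entities
# (for Eq. (3)-(4)) and per-entity frame-name sets (for Eq. (5)-(6)).

def scene_feature(counts, t, t_l):
    """Eq. (3): normalized deviation of the entity count at frame t from
    the count at the last scene change t_l, where sigma (Eq. (4)) is the
    running mean of the counts since t_l."""
    sigma = sum(counts[t_l : t + 1]) / max(t - t_l, 1)
    return (counts[t] - counts[t_l]) / sigma if sigma else 0.0

def split_frames(all_frames, subject_frames, object_frame_sets):
    """Eq. (5)-(6): R_normal collects frames where the subject co-occurs
    with some reference object; R_abnormal is everything else."""
    r_normal = set()
    for b_i in object_frame_sets:
        r_normal |= subject_frames & b_i
    r_abnormal = set(all_frames) - r_normal
    return r_normal, r_abnormal
```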
### 3.2. Establishment of the Spatiotemporal Relationship

Many existing models, such as Yolo and face_recognition, can already perform fast entity recognition on image information. In this paper, such entity recognition results are taken directly as the entity targets of the video scenes. After the entity target result set of the video is obtained with the above models, the entity relationship algorithm establishes the relationship between the subject and the object targets and builds the scene features.

Algorithm 1: Entity relationship algorithm.
Input: entity target set, number of reference objects
Output: object target set

    encodings = all entity target codes of the first frame
    for i in 0 to length(encodings) - 1:
        if the target code equals the i-th entity code of encodings:
            add i to the temp list
    num = number of reference objects
    for i in 0 to num - 1:
        rand = random integer in [0, length(encodings) - 1]
        while rand is in temp:
            rand = random integer in [0, length(encodings) - 1]
        add rand to the temp list
        add the rand-th element of encodings to the output list

The entity targets of the frame are produced by the entity recognition method and used as the input of the relationship algorithm. The number of reference objects is determined by the user, and the corresponding number of objects is arbitrarily selected from the entity targets as the reference objects of the current scene. According to naive Bayes theory, an arbitrarily selected object bears a strong relation to the subject within the same scene, and the more objects are selected, the stronger the relationship in space and time.

### 3.3. Judgment of Scene Node Attributes

After the spatiotemporal relationship has been established, the continuity of the scene information is detected first; only in continuous scenes does the judgment of scene node attributes have practical application value. After the continuous video clips of scene information are obtained, the attributes of the scene nodes are determined according to the state of the subject of the video.

Algorithm 2: Scene attribute judgment algorithm.
Input: subject coding, object target set
Output: scene node attributes

    i = 0
    encodings = all entity target codes of the frame
    for each code of the scene's entity targets:
        if code is not in encodings:
            i = i + 1
    if i equals scene_entity:
        the scene has been changed
    else:
        the scene has not been changed
    encodings = all object target codes of the frame
    if target_encode is in encodings:
        the subject of the video is in the particular situation
    else:
        the subject of the video is not in the particular situation

The subject target and the output of the relationship algorithm are taken as the input of the judgment algorithm. Using the relationship between the subject and the objects, the entity targets in the current frame are traversed, and the scene attributes are determined from the subject state and the object states.

### 3.4. Video Scene Detection Model

Once entity relationship establishment and scene node attribute judgment are in place, the information of one scene can be detected. In general, however, a video contains multiple scenes. Therefore, on the basis of Algorithms 1 and 2, this paper proposes Algorithm 3 to detect all scene information in a video.

Algorithm 3: Video scene detection model.
Input: video data
Output: scene detection results

    for i in 0 to total video frames - 1:
        ic = i + 1
        image = the ic-th frame of the video
        if image is the first frame:
            determine whether the subject of the video is in the first frame
        else if the scene has not been changed:
            determine whether the subject of the video is in the frame
            if the result is True:
                add ic to timeImage
                set nextframe to False
            else if the time lag between now and the last scene is more than 3 s:
                add ic to timeImage
                set nextframe to False
            else:
                clear the last record in timeImage
                set nextframe to True
        else:
            is_end = True
            for each frame in the scene change:
                if the scene is back to the original:
                    is_end = False
            if is_end is True:
                add ic to timeImage

The content of the first frame of the video is taken as the initial scene information. Algorithms 2 and 3 traverse the video data; when a change of video scene information is detected, the time of the scene change frame is recorded, and the content of the scene change frame is taken as the initial scene information for the subsequent video data, cycling until the end of the video.
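To show how Algorithms 1 and 2 could look in Python, here is a hedged sketch in which `matches(a, b)` stands in for whatever encoding comparison the chosen recognizer provides (for example, a wrapper around face_recognition.compare_faces); the selection and judgment logic follows the pseudocode above, but the names are ours.

```python
import random

# A minimal sketch of Algorithms 1 and 2, assuming each frame has already
# been reduced to a list of entity encodings and that `matches` abstracts
# the encoding comparison supplied by the recognizer.
def pick_reference_objects(encodings, subject_encoding, num, matches):
    """Algorithm 1: randomly pick `num` non-subject entities from the
    first frame as reference objects of the current scene."""
    candidates = [e for e in encodings if not matches(subject_encoding, e)]
    return random.sample(candidates, min(num, len(candidates)))

def judge_scene(frame_encodings, subject_encoding, reference_objects, matches):
    """Algorithm 2: the scene is judged 'changed' when none of the
    reference objects are found; otherwise report whether the subject
    is present in the frame."""
    missing = sum(1 for ref in reference_objects
                  if not any(matches(ref, e) for e in frame_encodings))
    scene_changed = missing == len(reference_objects)
    subject_present = any(matches(subject_encoding, e)
                          for e in frame_encodings)
    return scene_changed, subject_present
```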
## 4. Experiments

### 4.1. Experimental Data

The experimental datasets adopted in this paper are public video datasets, the main content of which comprises the TV series Ten Miles of Peach Blossom (the data comes from Tencent Video, is used only for academic research in this paper, and the copyright belongs to the Tencent company), Hospital Playlist (the data comes from Netflix, is used only for academic research in this paper, and the copyright belongs to the Netflix company), Nirvana in Fire (the data comes from Tencent Video, is used only for academic research in this paper, and the copyright belongs to the Tencent company), and It Started with a Kiss.
The average scene switching time in each dataset is 7-10 seconds.

To analyze the performance of the proposed algorithm in this experimental environment, the evaluation index used in this study is precision:

$$\text{precision} = \frac{\text{correctly recognized number}}{\text{total}}. \tag{7}$$

Its value lies between 0 and 1, and the closer it is to 1, the better the model performs.

The hardware and software configuration used in the experiments is as follows: CPU AMD Ryzen 5 3600, graphics card GTX 1660, 16 GB of memory, operating system Windows 10, and development language Python 3.

### 4.2. Experimental Results and Analysis

As described in Section 3, the model proposed in this study processes and computes over entity recognition results in the video, so existing mature entity recognition algorithms are used in the experiments. Two existing character recognition algorithms are used to meet the needs of model operation: first, face_recognition is used to extract environmental features and human face features; second, SeetaFace2 is used for the same purpose. The evaluation criteria are whether the target disappears and whether the scene changes. In this experiment, scene changes were marked manually. The precision of the manual marks is in seconds, while the precision of model detection is in video frames; because of this mismatch, a detection is counted as correct when the point on the time axis corresponding to the detected video frame coincides with a manual mark.

#### 4.2.1. Experiments Based on Face Recognition

In this experiment, we use face_recognition as the entity recognition part of the model: it recognizes the characters in the video images, which serve as the entities from which the model's scene features are built. After establishing the association between the entities in the video (Algorithm 1), the model computes the scene feature of each frame according to Formula (3), records the frame numbers at which the $\hat{V}(X, Y, t)$ difference is abnormal, and determines the corresponding points on the time axis. These are then compared with the manually marked change times to obtain the test results.

The five datasets were tested separately, and the experimental results are shown in Table 1.

Table 1: The experimental results based on face_recognition.

| Dataset  | Correctly recognized | Total | Wrongly recognized |
|----------|----------------------|-------|--------------------|
| Dataset1 | 7                    | 7     | 0                  |
| Dataset2 | 9                    | 10    | 1                  |
| Dataset3 | 16                   | 18    | 0                  |
| Dataset4 | 15                   | 17    | 0                  |
| Dataset5 | 12                   | 14    | 0                  |

Figure 3: Incorrect results with face_recognition.

We examined some of the unrecognized images, as shown in Figure 3, and processed them separately; in some frames the model could not recognize the face features, leading to recognition errors during scene detection, possibly caused by decorations on the face.

The experimental results show that face_recognition works well on these datasets. However, face_recognition sometimes fails to extract character features, and face features are currently what the scene features are built on. These failures have serious impacts on the establishment of scene features in the proposed model: owing to face recognition errors, a frame missing a certain entity is wrongly recognized as a change of scene features. On the whole, the model proposed in this paper performs well in scene detection when the number of entities is limited.
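As a quick arithmetic check of Equation (7) against the Table 1 counts, the snippet below computes per-dataset and pooled precision; the pooled value of roughly 0.89 is straightforward arithmetic on the table and is consistent with the paper's claim of more than 85%.

```python
# A worked check of Eq. (7) on the Table 1 counts (face_recognition):
# (correctly recognized, total) per dataset.
table1 = {
    "Dataset1": (7, 7),
    "Dataset2": (9, 10),
    "Dataset3": (16, 18),
    "Dataset4": (15, 17),
    "Dataset5": (12, 14),
}

for name, (correct, total) in table1.items():
    print(name, round(correct / total, 3))

pooled = sum(c for c, _ in table1.values()) / sum(t for _, t in table1.values())
print("pooled precision:", round(pooled, 3))  # 59/66 ~= 0.894
```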
#### 4.2.2. Experiments Based on SeetaFace2

This experiment is similar to the previous one, except that SeetaFace2 is used instead of face_recognition as the entity recognition part of the model. SeetaFace2 recognizes the faces in the video images, which serve as the entities from which the model's scene features are built, and the same datasets are tested. The experimental results are shown in Table 2.

Table 2: The experimental results based on SeetaFace2.

| Dataset  | Correctly recognized | Total | Wrongly recognized |
|----------|----------------------|-------|--------------------|
| Dataset1 | 7                    | 7     | 0                  |
| Dataset2 | 9                    | 10    | 0                  |
| Dataset3 | 14                   | 18    | 0                  |
| Dataset4 | 16                   | 17    | 1                  |
| Dataset5 | 12                   | 14    | 0                  |

Figure 4: Incorrect results with SeetaFace2.

We examined some of the misrecognized images, as shown in Figure 4, and found that SeetaFace2 recognizes faces very sensitively, even detecting supporting characters in the background. As the camera moves, the number of such background characters changes dramatically, leading to the misjudgment of a scene switch.

The experimental results show that SeetaFace2's recognition is very sensitive. SeetaFace2 uses the ResNet50 model structure, in which multiple residual learning blocks are connected in series and deep image representations are exploited, so its recognition is highly sensitive, and this very sensitivity is what causes the recognition errors. Unlike the problems described above, when SeetaFace2 is combined with the model proposed in this paper, entities increase abnormally in some frames owing to the recognition of background characters; this disturbs the establishment of the scene features of the proposed model, resulting in recognition errors.

#### 4.2.3. Experiment Summary

In addition to the above experiments on open datasets, we also ran experiments on 20 self-made datasets consisting of meeting recording videos. To produce situational changes, the videos contain situations such as characters leaving midway, characters joining midway, and the meeting pausing. The results are as follows:

(i) face_recognition: 87%
(ii) SeetaFace2: 85%

The test results meet our expectations. The model proposed in this paper achieves accurate scene change detection, and it can realize video scene change detection using face recognition results as the main entities. The experiments demonstrate the feasibility and generality of the model. We believe that the accuracy can be improved further if object recognition results are also introduced as entities. In some special cases, however, such as too many characters in the background, characters turning their backs, or decorations on the face, scene recognition fails. In the future, we plan to increase the correct rate of scene change recognition through judgment logic, model recognition, a background object feature recognition module, and other measures.
## 5. Conclusion

This paper proposes a video scene information detection model based on entity recognition, which accomplishes the task of video scene information detection on the premise that entity recognition has been performed on the video pixel data. The proposed model is robust, and its precision reaches more than 85%. At the same time, a face recognition algorithm can replace full entity recognition as the input of scene information detection without much impact on the detection results.

We take the spatiotemporal relationship of video entities as the basis of situational information detection and introduce the concept of situational features to ensure the logical soundness of the model. In the experiments we found some remaining problems, such as over-reliance on the accuracy of entity recognition and difficulty in screening out noisy information effectively. In future research, we will focus on how to better combine entity recognition models with the model proposed in this paper, so as to improve the detection efficiency of the proposed video scene information detection model.

---
*Source: 1020044-2021-10-31.xml*
# Huangqin-Tang Ameliorates TNBS-Induced Colitis by Regulating Effector and Regulatory CD4+ T Cells

**Authors:** Ying Zou; Wen-Yang Li; Zheng Wan; Bing Zhao; Zhi-Wei He; Zhu-Guo Wu; Guo-Liang Huang; Jian Wang; Bin-Bin Li; Yang-Jia Lu; Cong-Cong Ding; Hong-Gang Chi; Xue-Bao Zheng

**Journal:** BioMed Research International (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102021

---

## Abstract

Huangqin-Tang decoction (HQT) is a classic traditional Chinese herbal formulation that is widely used to ameliorate the symptoms of gastrointestinal disorders, including inflammatory bowel disease (IBD). This study was designed to investigate the therapeutic potential and immunological regulatory activity of HQT in experimental colitis in rats. Using an animal model of colitis induced by intrarectally administering 2,4,6-trinitrobenzenesulfonic acid (TNBS), we found that administration of HQT significantly reduced the severity of TNBS-induced colitis in a dose-dependent manner. In addition, treatment with HQT produced better results than treatment with mesalazine, as shown by improved weight loss, bleeding and diarrhoea scores, colon length, and intestinal inflammation. As for the potential immunological regulation underlying HQT action, the percentages of Th1 and Th17 cells were reduced, while those of Th2 and Treg cells were enhanced, in LPMCs after HQT treatment. Additionally, HQT lowered the levels of Th1/Th17-associated cytokines but increased the production of Th2/Treg-associated cytokines in the colon and MLNs. Furthermore, we observed a remarkable suppression of the Th1/Th17-associated transcription factors T-bet and ROR-γt, whereas the expression levels of the Th2/Treg-associated transcription factors GATA-3 and Foxp3 were enhanced during treatment with HQT. Our results suggest that HQT has the therapeutic potential to ameliorate TNBS-induced colitis symptoms; this protective effect is possibly mediated by its effects on CD4+ T cell subsets.

---

## Body

## 1. Introduction

Human inflammatory bowel disease (IBD) comprises two related chronic, relapsing inflammatory disorders, Crohn's disease (CD) and ulcerative colitis (UC) [1]. Although the detailed etiology and pathogenesis of IBD remain uncertain, recent experimental and clinical studies have suggested that the dysregulation of mucosal CD4+ T cells, which contributes to intestinal inflammation and mucosal barrier destruction, is one of the most important aspects of the pathogenesis [2, 3].

Among the variety of inflammatory cells in the gut, both effector CD4+ T helper (Th) cells and regulatory CD4+ T (Treg) cells are important in IBD, as they regulate pro- and anti-inflammatory cytokine production [4]. In general, naive CD4+ T cells can differentiate into one of several lineages of Th cells, including Th1, Th2, and Th17, which vary in their cytokine production and function [5]. Classically, CD is thought to be caused by a deregulated Th1 inflammatory response, while UC has historically been considered a Th2-mediated disease. Th17 is a more recently described subtype of effector Th cells that has been reported to play a key pathogenic role in chronic inflammatory conditions, including IBD [6]. Treg cells are a specialized population of CD4+ T cells that act as dedicated mediators to dampen inflammatory responses and prevent autoimmunity.
Several studies have demonstrated inadequate Treg cell responses in the face of overly exuberant Th1, Th2, and Th17 cell responses, resulting in the breakdown of intestinal homeostasis and profound acceleration of the perpetuation of IBD [7]. Given the key role of CD4+ T cell subsets in intestinal inflammation, therapeutics targeting these aberrant CD4+ T cell responses are already under development and are promising treatments for IBD and other inflammatory diseases.

The mainstays of current IBD treatment involve the use of corticosteroids, immunomodulators, and biologic agents targeting specific cytokines. Although these drugs are conventional therapeutics, most of these treatments are still used with reluctance due to their high cost, toxic side effects, and uncertainty about long-term safety [8–10]. Consequently, many patients turn to alternative strategies, including traditional plant-based remedies.

Huangqin-Tang decoction (HQT) is a classic traditional Chinese herbal formulation consisting of 4 components: the roots of Scutellaria baicalensis Georgi (scute), Glycyrrhiza uralensis Fisch. (licorice), and Paeonia lactiflora Pall. (peony), and the fruit of Ziziphus jujuba Mill. (Chinese date). HQT has been used for nearly 1800 years in traditional Chinese medicine to treat common gastrointestinal distress, such as diarrhoea, abdominal spasms, fever, headache, vomiting, nausea, extreme thirst, and subcardiac distention [11]. Although HQT is also significantly protective in the treatment of IBD in Chinese clinical practice, further clinical evidence and definitive mechanisms of action that demonstrate the role of HQT in gastrointestinal diseases are still lacking. Therefore, the aim of this study was to investigate the contribution of HQT to the amelioration of colitis and CD4+ T cell immune homeostasis in 2,4,6-trinitrobenzenesulfonic acid- (TNBS-) induced acute colitis.

## 2. Materials and Methods

### 2.1. Rats

Sprague-Dawley rats, weighing 200–250 g, were purchased from the Experimental Animal Center of Guangdong Province (Guangzhou, China). Rats were provided standard rat chow and water in a controlled room (temperature, 22–24°C; humidity, 70–75%; and a 12 h/12 h light and dark cycle). The animal studies were conducted under protocols approved by the Ethics Committee for Animal Experiments of Southern Medical University. The rats were paired with age-matched controls.

### 2.2. Induction of TNBS-Induced Colitis and Treatment

Colitis was induced with a single intracolonic application of TNBS, as described previously [12]. Briefly, overnight-fasted rats were treated under anesthesia with 30 mg/kg TNBS (Sigma-Aldrich) dissolved in 0.25 mL of 50% alcohol via intrarectal injection using a polyethylene catheter (2 mm in outer diameter), with 0.9% saline treatment as a control. The ingredients of HQT included 9 g of Scutellaria baicalensis Georgi (scute), 6 g of Paeonia lactiflora Pall. (peony), 6 g of Glycyrrhiza uralensis Fisch. (licorice), and 6 g of Ziziphus jujuba Mill. (Chinese date). All herb formula granules (1 g extract = 10 g crude herb) were provided by E-Fong Pharmaceutical Co., Ltd. (Guangzhou, GD, China) and administered at doses of 30 mg/kg, 60 mg/kg, and 120 mg/kg of body weight, dissolved in distilled water. Mesalazine (500 mg/pack), purchased from Ethypharm (Houdan, France), was used at a dose of 100 mg/kg as a reference drug. HQT and mesalazine were administered by oral gavage twice daily for one week starting from 24 h after colitis induction; a worked dosing example follows.
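To make the dosing concrete, here is a back-of-the-envelope sketch (our own illustrative arithmetic from the stated figures, assuming the 1 g extract = 10 g crude herb conversion and a rat at the upper end of the 200–250 g weight range):

```python
# Per-dose amounts implied by the stated mg/kg doses for a 250 g rat.
# This is illustrative arithmetic only, not a protocol from the study.
body_weight_kg = 0.250
for dose_mg_per_kg in (30, 60, 120):
    extract_mg = dose_mg_per_kg * body_weight_kg  # granule extract per dose
    crude_herb_mg = extract_mg * 10               # crude-herb equivalent (1 g = 10 g)
    print(f"{dose_mg_per_kg:>3} mg/kg -> {extract_mg:4.1f} mg extract "
          f"(~{crude_herb_mg:3.0f} mg crude herb) per dose")
# e.g. 120 mg/kg -> 30.0 mg extract (~300 mg crude herb) per dose
```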
Control rats had free access to tap water.

### 2.3. Clinical Assessment of Colitis

Body weight, diarrhoea scores, and bleeding scores were assessed daily as described [13]. Weight changes were calculated as the percent difference relative to original body weight. Stool consistency was scored as follows: 0, well-formed pellets; 2, pasty and semiformed stools that do not adhere to the anus; and 4, diarrhoea that remained adhesive to the anus. Fecal blood was scored as follows: 0, no blood by hemoccult test; 2, positive hemoccult; and 4, gross bleeding.

### 2.4. Macroscopic Evaluation

The colon was removed and opened longitudinally, and the colon length and macroscopic damage were assessed immediately by an independent observer blinded to the identity of treatments. The macroscopic score was assigned by examining an 8 cm distal portion of the rat colon and utilizing a 0–4 scale with some modifications from that used previously [14]: 0, no macroscopic changes; 1, hyperemia and edema without ulcers; 2, hyperemia and edema with small linear ulcers or petechiae; 3, hyperemia and edema with wide ulcers and necrosis and/or adhesions; 4, hyperemia and edema with megacolon, stenosis, and/or perforation.

### 2.5. Histology Scoring

For histological examination, colonic tissue was fixed in 10% formalin, dehydrated, paraffin embedded, processed, sliced into 4 μm thick sections, and stained with hematoxylin and eosin (H&E). All the slides were read and scored by a blinded pathologist. The microscopic damage in the colon was graded as described by Dieleman et al. [15] on the following scales: (1) severity of inflammation (0, none; 1, mild; 2, moderate; and 3, severe); (2) extent of inflammation (0, none; 1, mucosal; 2, mucosal and submucosal; and 3, transmural); (3) crypt damage (0, none; 1, basal third damaged; 2, basal two-thirds damaged; 3, crypt loss with surface epithelium present; and 4, crypt and surface epithelium loss). The average of the three histology scores was used for statistical analysis; a small sketch of these scoring computations follows.
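To make the clinical and histological scoring concrete, here is a minimal illustrative sketch (the helper names and example values are ours; the formulas follow the rubrics stated above):

```python
# Illustrative helpers for the scoring described above. Function names and
# the example numbers are ours; the formulas follow the stated rubrics.

def weight_change_percent(original_g: float, current_g: float) -> float:
    """Weight change as percent difference relative to original body weight."""
    return (current_g - original_g) / original_g * 100.0

def histology_score(severity: int, extent: int, crypt_damage: int) -> float:
    """Average of the three histology subscores (Dieleman et al. [15])."""
    assert 0 <= severity <= 3 and 0 <= extent <= 3 and 0 <= crypt_damage <= 4
    return (severity + extent + crypt_damage) / 3.0

# Example: a rat that dropped from 250 g to 230 g, with moderate mucosal
# inflammation and basal-third crypt damage:
print(weight_change_percent(250, 230))  # -8.0 (%)
print(histology_score(2, 1, 1))         # ~1.33
```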
### 2.6. Myeloperoxidase (MPO) Activity Assay

MPO activity was measured according to the method described previously [16]. Each segment was weighed, chopped, and then homogenized in a potassium phosphate buffer (50 mM, pH 6.0) containing 5% hexadecyl trimethyl ammonium bromide (HTAB) and 0.336% EDTA (9 mL/mg tissue) for 30 s. The colon homogenates were subjected to 3 cycles of freezing/thawing, 30 s of sonication, and centrifugation at 13,000 ×g for 15 min at 4°C. Then, 0.167 mg/mL o-dianisidine dihydrochloride (Sigma-Aldrich) and 0.0005% H2O2 in 200 μL of phosphate buffer (pH 6.0) were added to the supernatant, and the change in absorbance was monitored at 490 nm.

### 2.7. Preparation of Lamina Propria Mononuclear Cells (LPMCs)

LPMCs were isolated using a modified method as previously described [17]. Briefly, the intestinal mucosa was washed in complete Hank's balanced salt solution (HBSS) without Ca2+ and Mg2+, cut into 5 mm pieces, and incubated in medium containing 5 mM EDTA (Sigma-Aldrich, St. Louis, Missouri, USA) and 1 mM DTT (Sigma-Aldrich) at 37°C for 30 min until all crypts and individual epithelial cells were removed. The tissues were digested further in RPMI 1640 (GIBCO Laboratories, Grand Island, NY, USA) containing 10% fetal calf serum (FCS, HyClone, Logan, UT, USA), 0.15 g of collagenase type IV (Sigma-Aldrich), and 0.025 g of DNase I (Sigma-Aldrich) in a shaking incubator at 37°C. The tissue slurry was then passed through a 70 μm cell strainer to remove undigested tissue pieces, centrifuged, and resuspended in a 40–60% Percoll (Amersham Biosciences, Piscataway, NJ, USA) density gradient. Cells were frozen in liquid nitrogen for storage until analysis at a concentration of 1 × 10^6 cells/mL.

### 2.8. Isolation and Culture of Mesenteric Lymph Node (MLN) Cells

MLNs were removed and transferred to ice-cold sterile Hank's balanced salt solution. The nodes were disrupted and passed through a nylon mesh (70 μm pore size). Cells were then incubated in RPMI 1640 with 10% FCS and 100 IU/mL penicillin/streptomycin at a concentration of 1 × 10^6 cells/mL for 48 h in the presence of anti-CD3 and anti-CD28 antibodies (eBioscience, San Diego, CA). Cytokine production in culture supernatants was determined by enzyme-linked immunosorbent assay (ELISA).

### 2.9. ELISA

Frozen colonic samples were homogenized mechanically in lysis buffer. Homogenized tissue samples were centrifuged at 18,300 ×g at 4°C for 30 min. Homogenized tissue or cell culture supernatants were collected, and the levels of TNF-α, IL-1β, IL-12, IFN-γ, IL-4, IL-13, IL-5, IL-6, IL-17, and IL-10 in the supernatant were determined using ELISA kits, according to the manufacturer's instructions (R&D Systems, Minneapolis, MN).

### 2.10. Real-Time Polymerase Chain Reaction (PCR)

Total RNA was extracted using TRIzol reagent (Invitrogen), and first-strand cDNA was synthesized from 1 μg of total RNA with M-MLV reverse transcriptase. Primer sequences are listed as follows: IFN-γ forward 5′-AGGATGCATTCATGAGCATCGCC-3′ and reverse 5′-TCAGCACCGACTCCTTTTCCGCT-3′; IL-12 forward 5′-AGTGTAACCAGAAAGGTGCGTTC-3′ and reverse 5′-CCTGCAGGGTACACATGTCCATT-3′; IL-4 forward 5′-CGGCAACAAGGAACACCACGGA-3′ and reverse 5′-AGCGTGGACTCATTCACGGTGC-3′; IL-13 forward 5′-CTGCAGTCCTGGCTCTCGC-3′ and reverse 5′-CTTTTCCGCTATGGCCACTG-3′; IL-5 forward 5′-ACGATGAGGCTTCCTGTTCC-3′ and reverse 5′-TTCCATTGCCCACTCTGTAC-3′; IL-6 forward 5′-GTGCAATGGCAATTCTGATTGTA-3′ and reverse 5′-CTAGGGTTTCAGTATTGCTCTGA-3′; IL-17 forward 5′-AGCTCCAGAAGGCCCTCAGACTA-3′ and reverse 5′-CAGGACCAGGATCTCTTGCTGGA-3′; IL-10 forward 5′-CCAAGCCTTGTCAGAAATGATCA-3′ and reverse 5′-CTCATTCATGGCCTTGTAGACAC-3′; T-bet forward 5′-AACCAGTATCCTGTTCCCAGC-3′ and reverse 5′-TGTCGCCACTGGAAGGATAG-3′; GATA-3 forward 5′-CAGTCCGCATCTCTTCAC-3′ and reverse 5′-TAGTGCCCAGTACCATCTC-3′; RORγt forward 5′-AGTAGGCCACATTACACTGCT-3′ and reverse 5′-GACCCACACCTCACAAATTGA-3′; Foxp3 forward 5′-GTACAGCCGGACACACTGC-3′ and reverse 5′-GCTGACTTCCAAGTCTCGTGT-3′. Mean relative gene expression was calculated using the 2^(−ΔΔCt) method, as described previously [18].
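As a worked illustration of the 2^(−ΔΔCt) calculation, here is a minimal sketch (the Ct values are made up for demonstration, and the text does not name the reference gene, so the normalization target below is our assumption):

```python
# Relative expression by the 2^(-ddCt) method, with illustrative Ct values.
# The reference gene and the numbers here are hypothetical.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # relative to control
    return 2 ** (-dd_ct)

# e.g. a target gene amplifying 2 cycles earlier (relative to the reference)
# in treated tissue than in control tissue:
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0-fold increase
```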
### 2.11. Western Blotting

Proteins were extracted in complete radioimmunoprecipitation assay (RIPA) lysis buffer supplemented with protease inhibitor cocktails (Roche Diagnostics). Protein concentrations of the samples were determined using a bicinchoninic acid assay kit (Thermo Scientific, Bremen, Germany), and the samples were then boiled for 10 minutes. Equal amounts of protein (50 μg) were separated by SDS-PAGE and electrophoretically transferred onto polyvinylidene fluoride (PVDF) membranes (Millipore Corp., Bedford, MA). After blocking by incubation in 5% nonfat dry milk for 1 h at room temperature, the membranes were incubated overnight with primary antibodies recognizing β-actin, T-bet, GATA-3, ROR-γt, and Foxp3 (Santa Cruz, CA, USA) at 4°C with gentle shaking. After washing three times with TBST, the blots were incubated with horseradish peroxidase- (HRP-) conjugated secondary antibodies (Cell Signaling Technology) for 1 h. The blots were washed three times, visualized using the enhanced chemiluminescence (ECL) detection system (Amersham Biosciences, Buckinghamshire, UK), and quantified with the Quantity One system (Bio-Rad).

### 2.12. Flow Cytometry and Intracellular Staining

All antibodies used for cell labeling were purchased from eBioscience (San Diego, CA, USA). For measurements of intracellular cytokines, cells were stimulated with PMA (1 μg/mL) and ionomycin (50 μg/mL) in the presence of monensin (0.1 mg/mL) at 37°C and 5% CO2 for 5 h. Cells were then washed in PBS and surface-labeled with fluorescein isothiocyanate- (FITC-) conjugated anti-CD4. After fixation and permeabilization, the cells were stained with phycoerythrin-cyanin 7- (PE-Cy7-) conjugated anti-IL-17, PerCP-Cy5-conjugated anti-IFN-γ, and allophycocyanin- (APC-) conjugated anti-IL-4. For analysis of Treg cells, cells were aliquoted into tubes without PMA and ionomycin stimulation, and surface staining was performed with FITC-conjugated anti-CD4 and PE-conjugated anti-CD25 antibodies. The cells were then fixed and permeabilized with Fix/Perm solution, and intracellular staining was performed with APC-conjugated anti-Foxp3. The stained cells were analyzed using a FACSCanto cytometer (BD Bioscience), and the data were analyzed with FlowJo (TreeStar).

### 2.13. Statistics

Results are presented as the mean ± SD. Differences between two groups were examined using unpaired Student's t-tests. For analyzing multiple groups, a one-way ANOVA was used. The clinical activity score of colitis as well as the macroscopic and histological scores was statistically analyzed using the Kruskal-Wallis nonparametric test, followed by the Mann-Whitney U-test, to compare the results of the different groups. P values < 0.05 were considered significant.
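This statistical workflow maps directly onto standard routines; here is a minimal sketch (SciPy; the example score arrays are placeholders, not study data):

```python
# Minimal sketch of the statistical tests described above.
# The example arrays are placeholders, not data from this study.
from scipy import stats

control = [0.5, 0.7, 0.6, 0.8, 0.5, 0.6, 0.7, 0.6]
tnbs    = [2.8, 3.1, 2.9, 3.3, 3.0, 2.7, 3.2, 3.1]
hqt     = [1.4, 1.6, 1.2, 1.8, 1.5, 1.3, 1.7, 1.5]

# Two-group comparison: unpaired Student's t-test.
t, p_t = stats.ttest_ind(tnbs, hqt)

# Multiple groups: one-way ANOVA.
f, p_anova = stats.f_oneway(control, tnbs, hqt)

# Ordinal scores (clinical, macroscopic, histological): Kruskal-Wallis,
# followed by pairwise Mann-Whitney U tests.
h, p_kw = stats.kruskal(control, tnbs, hqt)
u, p_mw = stats.mannwhitneyu(tnbs, hqt, alternative="two-sided")

print(p_t, p_anova, p_kw, p_mw)  # significance threshold: P < 0.05
```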
## 3. Results

### 3.1. HQT Ameliorates TNBS-Induced Colitis in a Dose-Dependent Manner

Intrarectal administration of TNBS has long been used as an alternative model for the induction of acute colitis [19]. To assess whether HQT exerts a protective role during colitis, HQT was first evaluated in a dose-response study of TNBS-induced colitis at doses of 30 mg/kg, 60 mg/kg, and 120 mg/kg. Rats were treated with HQT for 7 consecutive days starting on day 2 after induction of TNBS colitis. As expected, rats given TNBS developed severe colitis, characterized by profound and sustained weight loss, bleeding, and diarrhoea. HQT treatment rapidly reversed the loss of body weight and decreased the bleeding and diarrhoea scores in a dose-dependent manner, with both parameters reaching significance in the 60 and 120 mg/kg treatment groups (Figures 1(a)–1(c)). Consistently, this treatment also significantly prevented colon shortening and decreased MPO activity (Figures 1(d) and 1(e)). Furthermore, TNF-α in colon culture supernatants was significantly lower in HQT-treated rats than in rats treated with TNBS alone (Figure 1(f)). Since treatment with 120 mg/kg HQT proved to be most effective in the amelioration of colitis, this dose was used in further experiments. Collectively, these results demonstrate that HQT is effective in protecting against acute TNBS-induced colitis in a dose-dependent manner.

Figure 1 HQT ameliorates TNBS-induced colitis in a dose-dependent manner.
Various doses of Huangqin-Tang decoction (HQT) (30–120 mg/kg) were administered following the 2,4,6-trinitrobenzenesulfonic acid (TNBS) enema and on the next 2 days. (a) Body weight changes (percentage of original body weight), (b) bleeding score, and (c) diarrhoea score were scored daily. (d) Rats were sacrificed on day 7 to measure colon length. (e) Myeloperoxidase (MPO) activity was assessed in colon homogenates as described in Section 2. (f) The production of tumor necrosis factor-α (TNF-α) in the colon was determined by enzyme-linked immunosorbent assay (ELISA). Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b) (c) (d) (e) (f)

### 3.2. The Anti-Inflammatory Potency of HQT Is Superior to Mesalazine in the TNBS-Induced Colitis Model

Oral administration of mesalazine is the first-line approach to induce and maintain clinical remission in patients with mild-to-moderate UC or CD [20]. To assess the anti-inflammatory potency of HQT in TNBS-induced colitis, the effects of HQT (120 mg/kg) and mesalazine (100 mg/kg) were directly compared. When studying the clinical course of the disease, TNBS-treated rats suffered the most body weight loss from day 3 onward (Figure 2(a)). Starting at day 4, treatment with HQT or mesalazine resulted in higher weight gain compared to animals treated with TNBS alone (Figure 2(a)). Simultaneously, the bleeding and diarrhoea scores of TNBS-treated rats became significantly worse compared to those of controls. In contrast, such changes were markedly improved by HQT or mesalazine treatment (Figures 2(b) and 2(c)).

Figure 2 HQT protects against TNBS-induced colitis in a manner comparable to mesalazine. Rats with TNBS-induced colitis were treated with HQT (120 mg/kg) or mesalazine (100 mg/kg). (a) Body weight changes (percentage of original body weight), (b) bleeding score, and (c) diarrhoea score were scored daily. (d) Rats were sacrificed on day 7 to measure colon length. (e) Macroscopic score was evaluated on day 7. (f) Histological score in colons. (g) Colon sections from each group were stained with H&E (original magnification, 100x). (h) MPO activity was assessed in colon homogenates, as described in Section 2. (i) The production of TNF-α and interleukin- (IL-) 1β in the colon was determined by ELISA. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b) (c) (d) (e) (f) (g) (h) (i)

To further assess the severity of colitis, colon length was measured in each group of rats. The colons of rats treated with TNBS alone were on average 10% shorter than those of rats subjected to additional treatment with HQT or mesalazine (Figure 2(d)). This inflammatory phenotype was further evidenced by the gross and microscopic appearance of the colon. Consistent with the clinical parameters discussed above, treatment with HQT or mesalazine significantly ameliorated the macroscopic scores compared to rats treated with TNBS alone (Figure 2(e)). Histological sections revealed no substantial disease activity in control rats, whereas in TNBS-treated rats severe inflammation could be detected, including more infiltrating inflammatory cells and significantly more ulceration. However, colon sections from the HQT or mesalazine groups showed a marked reduction in tissue disruption, mucosal ulceration, and mononuclear cell infiltration (Figure 2(g)).
Furthermore, histological scoring revealed that HQT or mesalazine treatment reduced the severity of TNBS-induced colitis (Figure 2(f)). Consistent with these histological changes, TNBS significantly increased colonic MPO activity. In contrast, all HQT-treated rats as well as mesalazine-treated rats presented decreased colonic MPO activity compared to rats treated with TNBS alone (Figure 2(h)).

Furthermore, inflammatory cytokine expression levels, including those of TNF-α and IL-1β, were also clearly induced in TNBS-treated rats compared to control rats. Administration of HQT or mesalazine prevented the induction of these inflammatory cytokines (Figure 2(i)), suggesting that HQT treatment might have broad anti-inflammatory activity. Together, these results clearly indicate that HQT plays a therapeutic role and is superior to mesalazine in resolving the inflammatory response following TNBS-induced injury of the colon.

### 3.3. Distinct Effects of HQT on the Frequencies of Th1, Th2, Th17, and Treg Cells in the TNBS-Induced Colitis Model

As studies have demonstrated that the Th1, Th2, Th17, and Treg CD4+ T cell subsets play distinct roles in the control and development of IBD [2], we hypothesized that HQT may differentially contribute to the development of these CD4+ T cell subsets. Using flow cytometry, we determined the proportions of Th1, Th2, Th17, and Treg cells among LPMCs in the TNBS-induced colitis model. LPMCs were stimulated with phorbol myristate acetate and ionomycin, stained for cell-surface CD4, and intracellularly stained for interferon- (IFN-) γ, IL-4, and IL-17 to detect Th1, Th2, and Th17 cells, respectively. As shown in Figures 3(a) and 3(b), we observed marked increases in the numbers of IFN-γ+ and IL-17+ CD4+ T cells after TNBS challenge, while IL-4+ CD4+ T cells decreased. The proportions of Th1 and Th17 cells in the HQT-treated TNBS group were significantly lower than those in the TNBS group. In contrast to the Th1 and Th17 cells, we observed a significantly higher frequency of IL-4-producing Th2 cells in LPMCs of rats that were treated with HQT, compared with rats treated with TNBS alone.

Figure 3 HQT regulates the frequencies of Th1, Th2, Th17, and Treg cells in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg) and analyzed 7 days after treatment. Lamina propria mononuclear cells (LPMCs) were isolated from each group and subjected to intracellular IFN-γ, IL-4, IL-17, and Foxp3 staining. (a) The frequencies of T helper 1 (Th1) (CD4+IFN-γ+), Th2 (CD4+IL-4+), Th17 (CD4+IL-17+), and regulatory T (Treg) (CD4+CD25+Foxp3+) cells were determined by flow cytometry. Numbers represent the percentages of IFN-γ-, IL-4-, and IL-17A-expressing CD4+ T cells and of Foxp3-expressing CD4+CD25+ T cells in each quadrant. (b) Quantitative analysis of the frequency and total number of Th1, Th2, Th17, and Treg cells in LPMCs. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b)

For analysis of Treg cells, LPMCs were surface-labeled with CD4 and CD25 antibodies, followed by intracellular staining for Foxp3. The results show that HQT treatment increased CD4+CD25+Foxp3+ Treg levels among LPMCs. Thus, our results clearly indicate that the ability of HQT to ameliorate colitis was associated with an expansion of Th2 and Treg cells and a reduction of Th1 and Th17 cells among LPMCs.

### 3.4.
HQT Regulates Th1-, Th2-, Th17-, and Treg-Related Cytokine Production in the TNBS-Induced Colitis Model

To determine the effect of HQT on driving Th cell responses in rats with TNBS-induced colitis, we further measured the production of signature cytokines that are critical for the differentiation of Th subsets in MLNs and colonic tissue. Our results revealed that TNBS-treated rats exhibited an aberrant cytokine pattern, characterized by mRNA overexpression of Th1 and Th17 signature cytokines, including IFN-γ, IL-12, IL-17, and IL-6, and this overexpression was significantly reduced by administration of HQT (Figure 4). Moreover, total protein extracted from MLNs was analyzed by ELISA. Similarly, HQT significantly downregulated the levels of IFN-γ, IL-12, IL-17, and IL-6 in TNBS-treated rats (Figure 5). In contrast to the decreased Th1- and Th17-associated cytokines, protein and mRNA expression in MLNs and colonic tissue showed increased production of the Th2- and Treg-associated cytokines IL-4, IL-5, IL-13, and IL-10 in HQT-treated rats (Figures 4 and 5). Taken together, these results indicate that HQT administration inhibits Th1 and Th17 responses but promotes Th2 and Treg responses in TNBS-induced colitis.

Figure 4 HQT regulates mRNA expression of Th1-, Th2-, Th17-, and Treg-related cytokines in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). Total mRNA was extracted from colonic tissue to analyze the expression of the Th1-related cytokines (a) IFN-γ and (b) IL-12; the Th2-related cytokines (c) IL-4, (d) IL-13, and (e) IL-5; the Th17-related cytokines (f) IL-17A and (g) IL-6; and the Treg-related cytokine (h) IL-10 by real-time polymerase chain reaction (PCR). Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b) (c) (d) (e) (f) (g) (h)

Figure 5 HQT regulates protein levels of Th1-, Th2-, Th17-, and Treg-related cytokines in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). Mesenteric lymph node (MLN) cells from each group were stimulated with anti-CD3/CD28 antibodies, and the culture supernatants were harvested, followed by ELISA analysis of the cytokines indicated above (pg/mL). The Th1-related cytokines (a) IFN-γ and (b) IL-12; the Th2-related cytokines (c) IL-4, (d) IL-13, and (e) IL-5; the Th17-related cytokines (f) IL-17A and (g) IL-6; and the Treg-related cytokine (h) IL-10 were measured. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b) (c) (d) (e) (f) (g) (h)

### 3.5. Effect of HQT on Th1, Th2, Th17, and Treg Transcription Factors in TNBS-Induced Colitis

To understand the molecular mechanism by which HQT affects CD4+ T cell subsets, we determined the expression levels of the nuclear transcription factors of these subsets using western blot and real-time PCR. We found that HQT treatment enhanced the expression of Foxp3 and GATA-3 but reduced the expression of T-bet and ROR-γt, at both the protein and the mRNA level (Figures 6(a) and 6(b)). Taken together, these data demonstrate a crucial role for HQT in preventing the development of intestinal inflammation and maintaining intestinal immune homeostasis.

Figure 6 HQT regulates protein and mRNA levels of T-bet, GATA-3, ROR-γt, and Foxp3 in the TNBS-induced colitis model.
Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). (a) Whole colon tissue homogenates collected 7 days after HQT treatment were examined for T-bet, GATA-3, ROR-γt, and Foxp3 by western blot analysis. Each lane corresponds to an individual rat. (b) Distal colons collected on day 7 after HQT treatment were used to isolate RNA for expression analysis of T-bet, GATA-3, ROR-γt, and Foxp3 by real-time PCR. Results represent the mean ± SD from eight rats per group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. (a) (b)
## 4. Discussion

In the present study, we illustrated an important role for HQT in inhibiting TNBS-mediated intestinal inflammation. We first demonstrated that administration of HQT at doses of 30–120 mg/kg significantly attenuated colitis in a dose-dependent manner. In addition, administration of 120 mg/kg HQT was significantly more potent than mesalazine (100 mg/kg) in ameliorating TNBS-induced colitis. Moreover, mechanistic studies indicate that the anti-inflammatory effects of HQT involve restraining Th1 and Th17 responses while enhancing Th2 and Treg responses to TNBS challenge in this rat colitis model. Therefore, our report unveils, for the first time to our knowledge, an important anti-inflammatory and immunomodulatory role for HQT in IBD.

To evaluate the effect of HQT, we used the well-established model of TNBS-induced colitis in rats, which resembles CD [21].
In the present study, HQT efficiently and dose-dependently improved TNBS-induced colitis. It attenuated weight loss, diarrhoea, and bleeding scores while preserving colonic length and reducing MPO activity, a marker of tissue neutrophil activation [22]. As expected, TNBS treatment markedly increased TNF-α protein expression in the colon, and that increase was reduced significantly and dose-dependently by HQT treatment. Consequently, treatment with 120 mg/kg HQT was the most effective with respect to the amelioration of colitis and was used in our subsequent experiments.

Mesalazine is one of the most commonly prescribed anti-inflammatory drugs used to treat IBD [23]. Studies have demonstrated the significant and comparable protection of mesalazine in experimental colitis induced by TNBS [24]. Here, the effects of HQT (120 mg/kg) and mesalazine (100 mg/kg) were directly compared in experimental colitis. We showed that HQT as well as mesalazine dramatically inhibited weight loss and bleeding and diarrhoea scores while preserving colonic length. In addition to exerting such beneficial clinical effects, treatment with HQT and mesalazine also resulted in macroscopic and microscopic amelioration of intestinal inflammation, consistent with reduced MPO activity. Elevated levels of proinflammatory cytokines, such as TNF-α and IL-1β, have been demonstrated during the development of IBD and experimental colitis. TNF-α monoclonal antibodies have been shown to dramatically decrease the signs and symptoms of IBD and subsequently have become key therapeutic agents [25, 26]. In the present study, we further demonstrate that local TNF-α and IL-1β expression is decreased after HQT or mesalazine treatment in rats with TNBS-induced colitis. Our results suggest that daily HQT administration significantly inhibited the progression of colitis, yielding a protective effect equal to or even greater than that of mesalazine.

Furthermore, our work highlights the fact that HQT interacts with the host immune system to exert its immunoregulatory potency. More recently, studies have highlighted the roles of T cell subsets in IBD [27]. Classical Th1/Th2 pathways are thought to play a critical role in IBD pathogenesis. It is widely accepted that TNBS-induced colitis is mediated by a dominant Th1 immune response and a deficiency of Th2 responses [28, 29]. Moreover, recent studies have highlighted a key pathogenic role of Th17 cells, and increased numbers of Th17 cells have been found in IBD patients and animal models [30–32]. On the contrary, Treg cells are key players in maintaining immune homeostasis, and they regulate immune responses to allergens by preventing excessive inflammatory responses [33]. Recent studies demonstrated a decrease in Treg cell numbers in IBD patients and animal models [34–36]. Here, we found that, in the progression of TNBS-induced colitis, treatment with HQT significantly decreased the percentages of Th1 and Th17 cells among LPMCs. Simultaneously, the numbers of Th2 and Treg cells markedly increased compared with the TNBS-induced colitis group. This implies that the restorative effect of HQT in IBD works by rebalancing the CD4+ T cell subsets.

Homeostasis of distinct Th cell subset-derived cytokines plays a crucial role in mediating intestinal inflammation in IBD.
Studies have shown that Th1-related cytokines (IFN-γ and IL-12) and Th17-associated cytokines (IL-17A, IL-21, and IL-23) are markedly increased in CD, while in UC there is increased production of the Th2 cytokines (IL-5, IL-13, and IL-4) [37]. IL-10 is an important anti-inflammatory cytokine that can be secreted by Treg cells, and IL-10 defects cause spontaneous colitis in mice [38]. In addition, numerous studies have shown that a change in the cytokine profile from Th1 and Th17 to Th2 and Treg can ameliorate Th1/Th17-mediated diseases, such as CD and TNBS-induced colitis [29, 39, 40]. ELISA and real-time PCR were used in this study to detect the expression of cytokines related to the different CD4+ T cell subsets. In agreement with the suppression of Th1 and Th17 cell numbers among LPMCs, HQT-treated rats exhibited reduced production of Th1- and Th17-associated cytokines. Conversely, increased production of Th2- and Treg-associated cytokines was observed in HQT-treated rats, suggesting that HQT significantly improved inflammation and ameliorated disease in TNBS-treated rats, in association with a shift from a Th1 and Th17 profile to a Th2 and Treg immunological profile.

Because transcription factors are crucial for T-cell differentiation, we also examined lineage-specific transcription factors. The Th1 transcription factor T-bet plays a critical role in the development of Th1-driven colitis owing to the high expression levels of IFN-γ [41], while GATA-3 is an essential master regulator of Th2 cells for the induction of IL-4, IL-5, and IL-13 [5]. Although Th17 and Treg cells share a common requirement for TGF-β in their differentiation, they require the distinct transcriptional regulators ROR-γt and Foxp3, respectively [31]. ROR-γt directs Th17 differentiation and induces the production of IL-17 [42], whereas Foxp3 dominates Treg formation and the production of regulatory cytokines, such as TGF-β and IL-10 [43]. Our results showed that the colonic protein and mRNA expression levels of T-bet and ROR-γt significantly decreased, whereas GATA-3 and Foxp3 expression was enhanced in the colon after HQT treatment in colitic rats. These data indicate that HQT plays a significant role during IBD development in establishing the homeostasis of distinct Th cell subsets in response to TNBS challenge.

In this study, we presented evidence that HQT treatment elicits a strong Th2 and Treg response in TNBS-induced colitis. It has been reported that Th1 and Th2 cells and the cytokines they release are often mutually antagonistic, and a change in the cytokine profile from Th1 to Th2 can ameliorate Th1-mediated disease [29, 44]. Consistent with these findings, a strong Th2 response successfully counteracts Th1/Th17-mediated colitis [45], suggesting that the role of Th2 cells in intestinal inflammation may be protective in TNBS-induced colitis. Additionally, Treg cells have been reported to repress the activity of other T cell subsets to induce an anti-inflammatory response [46]. It is therefore conceivable that the protective effect of HQT on TNBS-induced colitis might be explained by its capability to induce Treg cells and rebalance CD4+ T cell subsets.
Although there were no significant side effects associated with HQT treatment in our study, more detailed studies are necessary to prove its immunomodulatory effect in different models of colitis.In conclusion, our results indicate that HQT plays an important role in the regulation of intestinal immune responses in TNBS-induced colitis by downregulating effector phenotype of Th1 and Th17 cells, while promoting Th2 and Treg responses. Thus, using HQT, a Chinese medicinal formulation, to regulate immune homeostasis may offer a promising alternative to our current therapeutic strategy for IBD. --- *Source: 102021-2015-08-04.xml*
# Huangqin-Tang Ameliorates TNBS-Induced Colitis by Regulating Effector and Regulatory CD4+ T Cells

**Authors:** Ying Zou; Wen-Yang Li; Zheng Wan; Bing Zhao; Zhi-Wei He; Zhu-Guo Wu; Guo-Liang Huang; Jian Wang; Bin-Bin Li; Yang-Jia Lu; Cong-Cong Ding; Hong-Gang Chi; Xue-Bao Zheng

**Journal:** BioMed Research International (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102021
--- ## Abstract Huangqin-Tang decoction (HQT) is a classic traditional Chinese herbal formulation that is widely used to ameliorate the symptoms of gastrointestinal disorders, including inflammatory bowel disease (IBD). This study was designed to investigate the therapeutic potential and immunological regulatory activity of HQT in experimental colitis in rats. Using an animal model of colitis induced by intrarectally administering 2,4,6-trinitrobenzenesulfonic acid (TNBS), we found that administration of HQT significantly inhibited the severity of TNBS-induced colitis in a dose-dependent manner. In addition, treatment with HQT produced better results than that with mesalazine, as shown by improved weight loss, bleeding, and diarrhoea scores, colon length, and intestinal inflammation. As for the potential immunological regulation underlying HQT action, the percentages of Th1 and Th17 cells were reduced, but those of Th2 and Treg cells were enhanced, in LPMCs after HQT treatment. Additionally, HQT lowered the levels of Th1/Th17-associated cytokines but increased the production of Th2/Treg-associated cytokines in the colon and MLNs. Furthermore, we observed a remarkable suppression of the Th1/Th17-associated transcription factors T-bet and ROR-γt, whereas expression levels of the Th2/Treg-associated transcription factors GATA-3 and Foxp3 were enhanced during treatment with HQT. Our results suggest that HQT has the therapeutic potential to ameliorate TNBS-induced colitis symptoms. This protective effect is possibly mediated by its effects on CD4+ T cell subsets. --- ## Body ## 1. Introduction Human inflammatory bowel disease (IBD) comprises the two related chronic, relapsing inflammatory disorders, Crohn’s disease (CD) and ulcerative colitis (UC) [1]. Although the detailed etiology and pathogenesis of IBD remain uncertain, recent experimental and clinical studies have suggested that the dysregulation of mucosal CD4+ T cells, which contributes to intestinal inflammation and mucosal barrier destruction, is one of the most important aspects of the pathogenesis [2, 3]. Among the variety of inflammatory cells in the gut, both effector CD4+ T helper (Th) cells and regulatory CD4+ T (Treg) cells are important in IBD, as they regulate pro- and anti-inflammatory cytokine production [4]. In general, naive CD4+ T cells can differentiate into one of several lineages of Th cells, including Th1, Th2, and Th17, which vary in their cytokine production and function [5]. Classically, CD is thought to be caused by a dysregulated Th1 inflammatory response, while UC has historically been considered a Th2-mediated disease. Th17 is a newer subtype of effector Th cells that has been reported to play a key pathogenic role in chronic inflammatory conditions, including IBD [6]. Treg cells are a specialized population of CD4+ T cells that act as dedicated mediators to dampen inflammatory responses and prevent autoimmunity. Several studies have demonstrated an inadequate Treg cell response in the face of overly exuberant Th1, Th2, and Th17 cell responses, resulting in the breakdown of intestinal homeostasis and the acceleration and perpetuation of IBD [7]. Given the key role of CD4+ T cell subsets in intestinal inflammation, therapeutics targeting these aberrant CD4+ T cell responses are already under development and are promising treatments for IBD and other inflammatory diseases. The mainstays of current IBD treatment involve the use of corticosteroids, immunomodulators, and biologic agents targeting specific cytokines.
Although these drugs are conventional therapeutics, most of these treatments are still used with reluctance because of their high cost, toxic side effects, and uncertainty about long-term safety [8–10]. Consequently, many patients turn to alternative strategies, including traditional plant-based remedies. Huangqin-Tang decoction (HQT) is a classic traditional Chinese herbal formulation consisting of 4 components: the roots of Scutellaria baicalensis Georgi (scute), Glycyrrhiza uralensis Fisch. (licorice), and Paeonia lactiflora Pall. (peony), and the fruit of Ziziphus jujuba Mill. (Chinese date). HQT has been used for nearly 1800 years in traditional Chinese medicine to treat common gastrointestinal distress, such as diarrhoea, abdominal spasms, fever, headache, vomiting, nausea, extreme thirst, and subcardiac distention [11]. Although HQT is also significantly protective in the treatment of IBD in Chinese clinical practice, further clinical evidence and definitive mechanisms of action that demonstrate the role of HQT in gastrointestinal diseases are still lacking. Therefore, the aim of this study was to investigate the contribution of HQT to the amelioration of colitis and CD4+ T cell immune homeostasis in 2,4,6-trinitrobenzenesulfonic acid- (TNBS-) induced acute colitis. ## 2. Materials and Methods ### 2.1. Rats Sprague-Dawley rats, weighing 200–250 g, were purchased from the Experimental Animal Center of Guangdong Province (Guangzhou, China). Rats were provided standard rat chow and water in a controlled room (temperature, 22–24°C; humidity, 70–75%; and a 12 h/12 h light/dark cycle). The animal studies were conducted under protocols approved by the Ethics Committee for Animal Experiments of Southern Medical University. The rats were paired with age-matched controls. ### 2.2. Induction of TNBS-Induced Colitis and Treatment Colitis was induced with a single intracolonic application of TNBS, as described previously [12]. Briefly, overnight-fasted rats were treated under anesthesia with 30 mg/kg TNBS (Sigma-Aldrich) dissolved in 0.25 mL of 50% alcohol via intrarectal injection using a polyethylene catheter (2 mm in outer diameter), with 0.9% saline treatment as a control. The ingredients of HQT included 9 g of Scutellaria baicalensis Georgi (scute), 6 g of Paeonia lactiflora Pall. (peony), 6 g of Glycyrrhiza uralensis Fisch. (licorice), and 6 g of Ziziphus jujuba Mill. (Chinese date). All herb formula granules (1 g extract = 10 g crude herb) were provided by E-Fong Pharmaceutical Co., Ltd. (Guangzhou, GD, China) and administered at doses of 30 mg/kg, 60 mg/kg, and 120 mg/kg of body weight, dissolved in distilled water. Mesalazine (500 mg/pack), purchased from Ethypharm (Houdan, France), was used at a dose of 100 mg/kg as a reference treatment. HQT and mesalazine were administered by oral gavage twice daily for one week starting 24 h after colitis induction. Control rats had free access to tap water. ### 2.3. Clinical Assessment of Colitis Body weight, diarrhoea scores, and bleeding scores were assessed daily as described [13]. Weight changes were calculated as the percent difference relative to original body weight. Stool consistency was scored as follows: 0, well-formed pellets; 2, pasty and semiformed stools that do not adhere to the anus; and 4, diarrhoea that remained adhesive to the anus. Fecal blood was scored as follows: 0, no blood by hemoccult test; 2, positive hemoccult; and 4, gross bleeding.
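For readers who want to reproduce this bookkeeping, the sketch below shows one way to tabulate the daily clinical readouts per animal (percent weight change plus the 0/2/4 stool and bleeding scores). The function and field names are our own illustrative choices, not part of the published protocol.

```python
# Hypothetical helper for tabulating the daily clinical readouts described
# above: percent weight change plus the 0/2/4 stool and bleeding scores.
from dataclasses import dataclass

@dataclass
class DailyObservation:
    weight_g: float           # today's body weight
    baseline_weight_g: float  # weight on day 0
    stool_score: int          # 0, 2, or 4 per the scale above
    blood_score: int          # 0, 2, or 4 per the scale above

def weight_change_percent(obs: DailyObservation) -> float:
    """Percent difference relative to original body weight."""
    return 100.0 * (obs.weight_g - obs.baseline_weight_g) / obs.baseline_weight_g

def clinical_summary(obs: DailyObservation) -> dict:
    assert obs.stool_score in (0, 2, 4) and obs.blood_score in (0, 2, 4)
    return {
        "weight_change_%": round(weight_change_percent(obs), 1),
        "diarrhoea": obs.stool_score,
        "bleeding": obs.blood_score,
    }

# Example: a rat that dropped from 220 g to 198 g, with pasty stools and a
# positive hemoccult test.
print(clinical_summary(DailyObservation(198, 220, 2, 2)))
# {'weight_change_%': -10.0, 'diarrhoea': 2, 'bleeding': 2}
```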
### 2.4. Macroscopic Evaluation The colon was removed and opened longitudinally, and the colon length and macroscopic damage were assessed immediately by an independent observer blinded to the identity of the treatments. The macroscopic score was assigned by examining an 8 cm distal portion of the rat colon and using a 0–4 scale with some modifications from that used previously [14]: 0, no macroscopic changes; 1, hyperemia and edema without ulcers; 2, hyperemia and edema with small linear ulcers or petechiae; 3, hyperemia and edema with wide ulcers and necrosis and/or adhesions; and 4, hyperemia and edema with megacolon, stenosis, and/or perforation. ### 2.5. Histology Scoring For histological examination, colonic tissue was fixed in 10% formalin, dehydrated, paraffin embedded, processed, sliced into 4 μm thick sections, and stained with hematoxylin and eosin (H&E). All slides were read and scored by a blinded pathologist. The microscopic damage in the colon was assessed as described by Dieleman et al. [15] as follows: (1) severity of inflammation (0, none; 1, mild; 2, moderate; and 3, severe); (2) extent of inflammation (0, none; 1, mucosal; 2, mucosal and submucosal; and 3, transmural); (3) crypt damage (0, none; 1, basal third damaged; 2, basal two-thirds damaged; 3, crypt loss with surface epithelium present; and 4, crypt and surface epithelium loss). The average of the three histology scores was used for statistical analysis. ### 2.6. Myeloperoxidase (MPO) Activity Assay MPO activity was measured according to the method described previously [16]. Each segment was weighed, chopped, and then homogenized in a potassium phosphate buffer (50 mM, pH 6.0) containing 5% hexadecyl trimethyl ammonium bromide (HTAB) and 0.336% EDTA (9 mL/mg tissue) for 30 s. The colon homogenates were subjected to 3 cycles of freezing/thawing, 30 s of sonication, and centrifugation at 13,000 ×g for 15 min at 4°C. Then, 0.167 mg/mL o-dianisidine dihydrochloride (Sigma-Aldrich) and 0.0005% H2O2 in 200 μL of phosphate buffer (pH 6.0) were added to the supernatant, and the rate of change in absorbance was monitored at 490 nm.
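The MPO readout is kinetic: activity is proportional to the initial rate of change of absorbance at 490 nm, normalized to tissue weight. A minimal sketch of that reduction follows; the sample data and the per-gram unit convention are illustrative assumptions, and the exact unit definition of [16] is not reproduced here.

```python
# Hypothetical reduction of the kinetic MPO readout: fit the initial slope of
# A490 versus time, then normalize to the wet weight of the colon segment.
import numpy as np

def mpo_activity(times_s, a490, tissue_mg):
    """Initial rate of A490 change (per min) per gram of wet tissue."""
    slope_per_s = np.polyfit(times_s, a490, 1)[0]  # least-squares slope
    slope_per_min = slope_per_s * 60.0
    return slope_per_min / (tissue_mg / 1000.0)

times = np.array([0, 30, 60, 90, 120])               # seconds
readings = np.array([0.10, 0.16, 0.22, 0.27, 0.33])  # illustrative A490 values
print(round(mpo_activity(times, readings, tissue_mg=50.0), 2))
```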
### 2.7. Preparation of Lamina Propria Mononuclear Cells (LPMCs) LPMCs were isolated using a modified method as previously described [17]. Briefly, the intestinal mucosa was washed in complete Hank’s balanced salt solution (HBSS) without Ca2+ and Mg2+, cut into 5 mm pieces, and incubated in medium containing 5 mM EDTA (Sigma-Aldrich, St. Louis, Missouri, USA) and 1 mM DTT (Sigma-Aldrich) at 37°C for 30 min until all crypts and individual epithelial cells were removed. The tissues were digested further in RPMI 1640 (GIBCO Laboratories, Grand Island, NY, USA) containing 10% fetal calf serum (FCS, HyClone, Logan, UT, USA), 0.15 g of collagenase type IV (Sigma-Aldrich), and 0.025 g of DNase I (Sigma-Aldrich) in a shaking incubator at 37°C. The tissue slurry was then passed through a 70 μm cell strainer to remove undigested tissue pieces, centrifuged, and separated on a 40–60% Percoll (Amersham Biosciences, Piscataway, NJ, USA) density gradient. Cells were frozen in liquid nitrogen for storage until analysis at a concentration of 1 × 106 cells/mL. ### 2.8. Isolation and Culture of Mesenteric Lymph Node (MLN) Cells MLNs were removed and transferred to ice-cold sterile Hank’s balanced salt solution. The nodes were disrupted and passed through a nylon mesh (70 μm pore size). Cells were then incubated in RPMI 1640 with 10% FCS and 100 IU/mL penicillin/streptomycin at a concentration of 1 × 106 cells/mL for 48 h in the presence of anti-CD3 and anti-CD28 antibodies (eBioscience, San Diego, CA). Cytokine production in culture supernatants was determined by enzyme-linked immunosorbent assay (ELISA). ### 2.9. ELISA Frozen colonic samples were homogenized mechanically in lysis buffer. Homogenized tissue samples were centrifuged at 18,300 ×g at 4°C for 30 min. Homogenized tissue or cell culture supernatants were collected, and the levels of TNF-α, IL-1β, IL-12, IFN-γ, IL-4, IL-13, IL-5, IL-6, IL-17, and IL-10 in the supernatant were determined using ELISA kits, according to the manufacturer’s instructions (R&D Systems, Minneapolis, MN). ### 2.10. Real-Time Polymerase Chain Reaction (PCR) Total RNA was extracted using TRIzol reagent (Invitrogen). 1 μg of total RNA was then reverse transcribed into first-strand cDNA with M-MLV reverse transcriptase. Primer sequences are listed as follows: IFN-γ forward 5′-AGGATGCATTCATGAGCATCGCC-3′ and reverse 5′-TCAGCACCGACTCCTTTTCCGCT-3′; IL-12 forward 5′-AGTGTAACCAGAAAGGTGCGTTC-3′ and reverse 5′-CCTGCAGGGTACACATGTCCATT-3′; IL-4 forward 5′-CGGCAACAAGGAACACCACGGA-3′ and reverse 5′-AGCGTGGACTCATTCACGGTGC-3′; IL-13 forward 5′-CTGCAGTCCTGGCTCTCGC-3′ and reverse 5′-CTTTTCCGCTATGGCCACTG-3′; IL-5 forward 5′-ACGATGAGGCTTCCTGTTCC-3′ and reverse 5′-TTCCATTGCCCACTCTGTAC-3′; IL-6 forward 5′-GTGCAATGGCAATTCTGATTGTA-3′ and reverse 5′-CTAGGGTTTCAGTATTGCTCTGA-3′; IL-17 forward 5′-AGCTCCAGAAGGCCCTCAGACTA-3′ and reverse 5′-CAGGACCAGGATCTCTTGCTGGA-3′; IL-10 forward 5′-CCAAGCCTTGTCAGAAATGATCA-3′ and reverse 5′-CTCATTCATGGCCTTGTAGACAC-3′; T-bet forward 5′-AACCAGTATCCTGTTCCCAGC-3′ and reverse 5′-TGTCGCCACTGGAAGGATAG-3′; GATA-3 forward 5′-CAGTCCGCATCTCTTCAC-3′ and reverse 5′-TAGTGCCCAGTACCATCTC-3′; ROR-γt forward 5′-AGTAGGCCACATTACACTGCT-3′ and reverse 5′-GACCCACACCTCACAAATTGA-3′; Foxp3 forward 5′-GTACAGCCGGACACACTGC-3′ and reverse 5′-GCTGACTTCCAAGTCTCGTGT-3′. Mean relative gene expression was calculated using the 2^(−ΔΔCt) method, as described previously [18].
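As a concrete illustration of the 2^(−ΔΔCt) method, the sketch below computes the fold change for one target gene in one treated animal relative to a control group. The Ct values and the choice of reference (housekeeping) gene are illustrative assumptions; the paper does not restate its normalizer here.

```python
# Hypothetical 2^(-ΔΔCt) calculation; Ct values are invented, and the
# reference (housekeeping) gene is assumed for the sake of the example.
from statistics import mean

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_controls, ct_ref_controls):
    """Relative expression of a target gene, treated vs. control group."""
    dct_treated = ct_target_treated - ct_ref_treated
    # Average ΔCt across the control animals to form the calibrator.
    dct_control = mean(t - r for t, r in zip(ct_target_controls, ct_ref_controls))
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: IL-10 in one HQT-treated rat versus three control rats.
print(round(fold_change(24.1, 17.0, [26.0, 26.4, 25.8], [17.1, 17.3, 16.9]), 2))
# ~3.65-fold upregulation relative to the control group
```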
### 2.11. Western Blotting Proteins were extracted in complete radioimmunoprecipitation assay (RIPA) lysis buffer supplemented with protease inhibitor cocktails (Roche Diagnostics). Protein concentrations of the samples were determined using a bicinchoninic acid assay kit (Thermo Scientific, Bremen, Germany), and the samples were then boiled for 10 minutes. Equal amounts of protein (50 μg) were separated by SDS-PAGE and electrophoretically transferred onto polyvinylidene fluoride (PVDF) membranes (Millipore Corp., Bedford, MA). After blocking by incubation in 5% nonfat dry milk for 1 h at room temperature, the membranes were incubated overnight with primary antibodies recognizing β-actin, T-bet, GATA-3, ROR-γt, and Foxp3 (Santa Cruz, CA, USA) at 4°C with gentle shaking. After washing three times with TBST, the blots were incubated with horseradish peroxidase- (HRP-) conjugated secondary antibodies (Cell Signaling Technology) for 1 h. The blots were washed three times, visualized using the enhanced chemiluminescence (ECL) detection system (Amersham Biosciences, Buckinghamshire, UK), and quantified with the Quantity One system (Bio-Rad). ### 2.12. Flow Cytometry and Intracellular Staining All antibodies used for cell labeling were purchased from eBioscience (San Diego, CA, USA). For measurements of intracellular cytokines, cells were stimulated with PMA (1 μg/mL) and ionomycin (50 μg/mL) in the presence of monensin (0.1 mg/mL) at 37°C and 5% CO2 for 5 h. Cells were then washed in PBS and surface-labeled with fluorescein isothiocyanate- (FITC-) conjugated anti-CD4. After fixation and permeabilization, the cells were stained with phycoerythrin-cyanin 7- (PE-Cy7-) conjugated anti-IL-17, PerCP-Cy5-conjugated anti-IFN-γ, and allophycocyanin- (APC-) conjugated anti-IL-4. For analysis of Treg cells, cells were aliquoted into tubes without PMA and ionomycin stimulation, and surface staining was performed with FITC-conjugated anti-CD4 and PE-conjugated anti-CD25 antibodies. The cells were then fixed and permeabilized with Fix/Perm solution, and intracellular staining was performed with APC-conjugated anti-Foxp3. The stained cells were analyzed using a FACSCanto cytometer (BD Biosciences), and the data were analyzed with FlowJo (TreeStar).
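Downstream of gating, the subset readouts reported in Section 3 are simple frequencies. A minimal sketch of that arithmetic follows; the event counts are invented for illustration, and real values would come from the FlowJo gates described above.

```python
# Hypothetical post-gating arithmetic: express each subset as a percentage of
# gated CD4+ events, as done for the Th1/Th2/Th17 frequencies in Section 3.
def subset_frequencies(cd4_events: int, subset_events: dict) -> dict:
    return {name: round(100.0 * n / cd4_events, 2)
            for name, n in subset_events.items()}

counts = {"Th1 (IFN-g+)": 1850, "Th2 (IL-4+)": 420, "Th17 (IL-17+)": 610}
print(subset_frequencies(cd4_events=25_000, subset_events=counts))
# {'Th1 (IFN-g+)': 7.4, 'Th2 (IL-4+)': 1.68, 'Th17 (IL-17+)': 2.44}
```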
### 2.13. Statistics Results are presented as the mean ± SD. Differences between two groups were examined using unpaired Student’s t-tests. For analyzing multiple groups, a one-way ANOVA was used. The clinical activity scores of colitis, as well as the macroscopic and histological scores, were statistically analyzed using the Kruskal-Wallis nonparametric test, followed by the Mann-Whitney U-test, to compare the results of the different groups. P values < 0.05 were considered significant.
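A short sketch of this analysis pipeline with SciPy follows; the group values are invented (in the study each group had eight animals), so only the structure of the test sequence is meaningful.

```python
# Hypothetical analysis mirroring Section 2.13: Kruskal-Wallis across all
# groups, then pairwise Mann-Whitney U-tests for the ordinal colitis scores.
from scipy import stats

control = [0, 0, 1, 0, 1, 0, 0, 1]   # illustrative macroscopic scores
tnbs    = [3, 4, 3, 4, 4, 3, 4, 3]
hqt     = [1, 2, 1, 2, 1, 2, 2, 1]

h, p_kw = stats.kruskal(control, tnbs, hqt)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")

if p_kw < 0.05:  # follow up with pairwise Mann-Whitney U-tests
    for name, (x, y) in [("TNBS vs control", (tnbs, control)),
                         ("HQT vs TNBS", (hqt, tnbs))]:
        u, p = stats.mannwhitneyu(x, y)
        print(f"{name}: U = {u:.1f}, p = {p:.4f}")
```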
## 3. Results ### 3.1. HQT Ameliorates TNBS-Induced Colitis in a Dose-Dependent Manner Intrarectal administration of TNBS has long been used as an alternative model for the induction of acute colitis [19]. To assess whether HQT exerts a protective role during colitis, HQT was first evaluated in a dose-response study of TNBS-induced colitis at doses of 30 mg/kg, 60 mg/kg, and 120 mg/kg. Rats were treated with HQT for 7 consecutive days starting on day 2 after the induction of TNBS colitis. As expected, rats given TNBS developed severe colitis, characterized by profound and sustained weight loss, bleeding, and diarrhoea. HQT treatment rapidly reversed the body weight loss and decreased the bleeding and diarrhoea scores in a dose-dependent manner, with both parameters reaching significance in the 60 and 120 mg/kg treatment groups (Figures 1(a)–1(c)). Consistently, this treatment also significantly prevented colon shortening and decreased MPO activity (Figures 1(d) and 1(e)). Furthermore, TNF-α in colon culture supernatants was significantly lower in HQT-treated rats than in rats treated with TNBS alone (Figure 1(f)). Since treatment with 120 mg/kg HQT proved to be the most effective in the amelioration of colitis, this dose was used in further experiments. Collectively, these results demonstrate that HQT is effective in protecting against acute TNBS-induced colitis in a dose-dependent manner.
Figure 1 HQT ameliorates TNBS-induced colitis in a dose-dependent manner. Various doses of Huangqin-Tang decoction (HQT) (30–120 mg/kg) were administered following the 2,4,6-trinitrobenzenesulfonic acid (TNBS) enema and on the next 2 days. (a) Body weight changes (percentage of original body weight), (b) bleeding score, and (c) diarrhoea score were scored daily. (d) Rats were sacrificed on day 7 to measure colon length. (e) Myeloperoxidase (MPO) activity was assessed in colon homogenates as described in Section 2. (f) The production of tumor necrosis factor-α (TNF-α) in the colon was determined by enzyme-linked immunosorbent assay (ELISA). Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group; ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. ### 3.2. The Anti-Inflammatory Potency of HQT Is Superior to Mesalazine in the TNBS-Induced Colitis Model Oral administration of mesalazine is the first-line approach to induce and maintain clinical remission in patients with mild-to-moderate UC or CD [20]. To assess the anti-inflammatory potency of HQT in TNBS-induced colitis, the effects of HQT (120 mg/kg) and mesalazine (100 mg/kg) were directly compared. When studying the clinical course of the disease, TNBS-treated rats suffered the most body weight loss from day 3 onward (Figure 2(a)). Starting at day 4, treatment with HQT or mesalazine resulted in a higher weight gain compared to animals treated with TNBS alone (Figure 2(a)). Simultaneously, the bleeding and diarrhoea scores of TNBS-treated rats became significantly worse compared to those of controls. In contrast, such changes were markedly improved by HQT or mesalazine treatment (Figures 2(b) and 2(c)). Figure 2 HQT protects against TNBS-induced colitis in a manner equal to mesalazine. Rats with TNBS-induced colitis were treated with HQT (120 mg/kg) or mesalazine (100 mg/kg). (a) Body weight changes (percentage of original body weight), (b) bleeding score, and (c) diarrhoea score were scored daily. (d) Rats were sacrificed on day 7 to measure colon length. (e) Macroscopic score was evaluated on day 7. (f) Histological score in colons. (g) Colon sections from rats in each group were stained with H&E (original magnification, 100x). (h) MPO activity was assessed in colon homogenates, as described in Section 2. (i) The production of TNF-α and interleukin- (IL-) 1β in the colon was determined by ELISA. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group; ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. To further assess the severity of colitis, colon length was measured in each group of rats. Colons of rats treated with TNBS alone were on average 10% shorter than those of rats subjected to additional treatment with HQT or mesalazine (Figure 2(d)). This inflammatory phenotype was further evidenced by the gross and microscopic appearance of the colon. Consistent with the clinical parameters discussed above, treatment with HQT or mesalazine significantly ameliorated the macroscopic scores compared to rats treated with TNBS alone (Figure 2(e)). Histological sections revealed no substantial disease activity in control rats, whereas in TNBS-treated rats severe inflammation could be detected, including more infiltrating inflammatory cells and significantly more ulceration. However, colon sections from the HQT or mesalazine groups showed a marked reduction in tissue disruption, mucosal ulceration, and mononuclear cell infiltration (Figure 2(g)). Furthermore, histological scoring revealed that HQT or mesalazine treatment reduced the severity of TNBS-induced colitis (Figure 2(f)). Consistent with these histological changes, TNBS significantly increased colonic MPO activity. In contrast, HQT-treated as well as mesalazine-treated rats presented decreased colonic MPO activity compared to rats treated with TNBS alone (Figure 2(h)). Furthermore, inflammatory cytokine expression levels, including those of TNF-α and IL-1β, were also clearly induced in TNBS-treated rats compared to control rats. Administration of HQT or mesalazine prevented the induction of these inflammatory cytokines (Figure 2(i)), suggesting that HQT treatment might have broad anti-inflammatory activity. Together, these results indicate that HQT plays a therapeutic role and is superior to mesalazine in resolving the inflammatory response following TNBS-induced injury of the colon. ### 3.3. Distinct Effects of HQT on the Frequencies of Th1, Th2, Th17, and Treg Cells in the TNBS-Induced Colitis Model As studies have demonstrated that the Th1, Th2, Th17, and Treg CD4+ T cell subsets play distinct roles in the control and development of IBD [2], we hypothesized that HQT may differentially affect the development of these CD4+ T cell subsets. Using flow cytometry, we determined the proportions of Th1, Th2, Th17, and Treg cells among LPMCs in the TNBS-induced colitis model. LPMCs were treated with phorbol myristate acetate and ionomycin, stained for cell surface CD4, and intracellularly stained for interferon- (IFN-) γ, IL-4, and IL-17 to detect Th1, Th2, and Th17 cells, respectively. As shown in Figures 3(a) and 3(b), we observed marked increases in the numbers of IFN-γ+ and IL-17+ CD4+ T cells after TNBS challenge, while IL-4+ CD4+ T cells decreased. The proportions of Th1 and Th17 cells in the HQT-treated TNBS group were significantly lower than those in the TNBS group. In contrast to the Th1 and Th17 cells, we observed a significantly higher frequency of IL-4-producing Th2 cells in LPMCs of rats that were treated with HQT, compared with rats treated with TNBS alone. Figure 3 HQT regulates the frequencies of Th1, Th2, Th17, and Treg cells in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg) and analyzed 7 days after treatment. Lamina propria mononuclear cells (LPMCs) were isolated from each group and subjected to intracellular IFN-γ, IL-4, IL-17, and Foxp3 staining. (a) The frequency of T helper 1 (Th1) (CD4+IFN-γ+), Th2 (CD4+IL-4+), Th17 (CD4+IL-17+), and regulatory T (Treg) (CD4+CD25+Foxp3+) cells was determined by flow cytometry. Numbers represent the percentages of IFN-γ-, IL-4-, and IL-17A-expressing CD4+ T cells and of Foxp3-expressing CD4+CD25+ T cells in each quadrant. (b) Quantitative analysis of the frequency and total number of Th1, Th2, Th17, and Treg cells in LPMCs. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group; ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. For analysis of Treg cells, LPMCs were surface-labeled with CD4 and CD25 antibodies, followed by intracellular staining for Foxp3. The results show that HQT treatment increased CD4+CD25+Foxp3+ Treg levels among LPMCs. Thus, our results clearly indicate that the ability of HQT to ameliorate colitis was associated with an expansion of Th2 and Treg cells and a reduction of Th1 and Th17 cells among LPMCs. ### 3.4. HQT Regulates Th1-, Th2-, Th17-, and Treg-Related Cytokine Production in the TNBS-Induced Colitis Model To determine the effect of HQT on Th cell responses in rats with TNBS-induced colitis, we further measured the production of signature cytokines that are critical for the differentiation of Th subsets in MLNs and colonic tissue. Our results revealed that TNBS-treated rats exhibited an aberrant cytokine pattern, characterized by mRNA overexpression of Th1 and Th17 signature cytokines, including IFN-γ, IL-12, IL-17, and IL-6, and this increase was significantly reduced by the administration of HQT (Figure 4). Moreover, total protein extracted from MLNs was analyzed by ELISA. Similarly, HQT significantly downregulated the levels of IFN-γ, IL-12, IL-17, and IL-6 in TNBS-treated rats (Figure 5). In contrast to the decreased Th1- and Th17-associated cytokines, protein and mRNA expression in MLNs and colonic tissue showed increased production of the Th2- and Treg-associated cytokines IL-4, IL-5, IL-13, and IL-10 in HQT-treated rats (Figures 4 and 5). Taken together, these results indicate that HQT administration inhibits Th1 and Th17 responses but promotes Th2 and Treg responses in TNBS-induced colitis. Figure 4 HQT regulates mRNA expression of Th1-, Th2-, Th17-, and Treg-related cytokines in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). Total mRNA was extracted from colonic tissue to analyze the expression of the Th1-related cytokines (a) IFN-γ and (b) IL-12; the Th2-related cytokines (c) IL-4, (d) IL-13, and (e) IL-5; the Th17-related cytokines (f) IL-17A and (g) IL-6; and the Treg-related cytokine (h) IL-10 by real-time polymerase chain reaction (PCR). Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group; ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. Figure 5 HQT regulates protein levels of Th1-, Th2-, Th17-, and Treg-related cytokines in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). Mesenteric lymph node (MLN) cells from each group were stimulated with anti-CD3/CD28 antibodies, and the culture supernatants were harvested, followed by ELISA analysis of the indicated cytokines (pg/mL). The Th1-related cytokines (a) IFN-γ and (b) IL-12; the Th2-related cytokines (c) IL-4, (d) IL-13, and (e) IL-5; the Th17-related cytokines (f) IL-17A and (g) IL-6; and the Treg-related cytokine (h) IL-10 were measured. Results represent the mean ± SD from eight rats per group. *P < 0.05, **P < 0.001 versus the control group; ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. ### 3.5. Effect of HQT on Th1, Th2, Th17, and Treg Transcription Factors in TNBS-Induced Colitis To understand the molecular mechanism by which HQT affects CD4+ T cell subsets, we determined the expression levels of the nuclear transcription factors of these subsets using western blot and real-time PCR. We found that HQT treatment enhanced the expression of Foxp3 and GATA-3 but reduced the expression of T-bet and ROR-γt, at both the protein and mRNA levels (Figures 6(a) and 6(b)). Taken together, these data demonstrate a crucial role for HQT in preventing the development of intestinal inflammation and maintaining intestinal immune homeostasis. Figure 6 HQT regulates protein and mRNA levels of T-bet, GATA-3, ROR-γt, and Foxp3 in the TNBS-induced colitis model. Rats with TNBS-induced colitis were treated with or without HQT (120 mg/kg). (a) Whole colon tissue homogenates collected 7 days after HQT treatment were examined for T-bet, GATA-3, ROR-γt, and Foxp3 by western blot analysis. Each lane corresponds to an individual rat.
(b) Distal colons collected on day 7 after HQT treatment were used to isolate RNA for expression analysis of T-bet, GATA-3, ROR-γt, and Foxp3 by real-time PCR. Results represent the mean ± SD from eight rats per group. ΔP < 0.05, ΔΔP < 0.001 versus TNBS-treated rats. ## 4. Discussion In the present study, we illustrated an important role for HQT in inhibiting TNBS-mediated intestinal inflammation. We first demonstrated that administration of HQT at doses of 30–120 mg/kg significantly attenuated colitis in a dose-dependent manner. In addition, administration of 120 mg/kg HQT was significantly more potent than mesalazine (100 mg/kg) in ameliorating TNBS-induced colitis. Moreover, mechanistic studies indicate that the anti-inflammatory effects of HQT involve restraining Th1 and Th17 responses, while enhancing Th2 and Treg responses, to TNBS challenge in this rat colitis model. Therefore, our report unveils, for the first time to our knowledge, an important anti-inflammatory and immunomodulatory role for HQT in IBD. To evaluate the effect of HQT, we used the well-established model of TNBS-induced colitis in rats, which resembles CD [21].
In the present study, HQT efficiently and dose-dependently improved TNBS-induced colitis. It caused attenuation of weight loss, diarrhoea, and bleeding scores while preserving colonic length and reducing MPO activity, a marker of tissue neutrophil activation [22]. As expected, TNBS treatment markedly increased TNF-α protein expression in the colon, and that increase was reduced significantly and dose-dependently by HQT treatment. Consequently, treatment with 120 mg/kg HQT was the most effective with respect to the amelioration of colitis and was used in our experiments.Mesalazine is one of the most commonly prescribed anti-inflammatory drugs that is used to treat IBD [23]. Studies have demonstrated the significant and comparable protection of mesalazine on experimental colitis induced by TNBS [24]. Here, the effects of HQT (120 mg/kg) and mesalazine (100 mg/kg) were directly compared in experimental colitis. We showed that HQT as well as mesalazine dramatically inhibited weight loss, bleeding, and diarrhoea score while preserving colonic length. In addition to exerting such beneficial clinical effects, treatment with HQT and mesalazine also resulted in macroscopic and microscopic amelioration of intestinal inflammation, consistent with reduced MPO activity. Elevated levels of proinflammatory cytokines, such as TNF-α and IL-1β, were demonstrated during the development of IBD and experimental colitis. TNF-α monoclonal antibodies have been shown to dramatically decrease signs and symptoms of IBD and subsequently are key potential therapeutic agents [25, 26]. In the present study, we further demonstrate that local TNF-α and IL-1β expressions are decreased after HQT or mesalazine treatment in rats with TNBS-induced colitis. Our results suggest that daily HQT administration significantly inhibited the progression of colitis, yielding a protective effect equal to or even greater than that of mesalazine.Furthermore, our work highlights the fact that HQT uniquely interacts with the host immune system to exact its immunoregulatory potency. More recently, studies have highlighted the roles of T cell subsets in IBD [27]. Classical Th1/Th2 pathways are thought to play a critical role in IBD pathogenesis. It is widely accepted that TNBS-induced colitis is mediated by a dominant Th1 immune response and a deficiency of Th2 responses [28, 29]. Moreover, recent studies have highlighted a key pathogenic role of Th17 cells, and increased numbers of Th17 cells have been found in IBD patients and animal models [30–32]. On the contrary, Treg cells are key players in maintaining immune homeostasis, and they regulate immune responses to allergens by preventing excessive inflammatory responses [33]. Recent studies demonstrated a decrease in Treg cells number in IBD patients and animal models [34–36]. Here, we found that, in the progression of TNBS-induced colitis, treatment with HQT significantly decreased the percentage of Th1 and Th17 cells among LPMCs. Simultaneously, the numbers of Th2 and Treg cells markedly increased when compared with the TNBS-induced colitis group. This implies that the rehabilitating effect of HQT in IBD works by restoring the balance between CD4+ T cells subsets.Homeostasis of distinct Th cell subset-derived cytokines plays a crucial role in mediating intestinal inflammation in IBD. 
Studies have shown that Th1-related cytokines (IFN-γ and IL-12) and Th17-associated cytokines (IL-17A, IL-21, and IL-23) are markedly increased in CD, while in UC there is increased production of the Th2 cytokines (IL-5, IL-13, and IL-4) [37]. IL-10 is an important anti-inflammatory cytokine that can be secreted by Treg cells, and IL-10 defects cause spontaneous colitis in mice [38]. In addition, numerous studies have shown that a change in the cytokine profile from Th1 and Th17 to Th2 and Treg could ameliorate Th1/Th17-mediated diseases, such as CD and TNBS-induced colitis [29, 39, 40]. ELISA and real-time PCR methods were used in this study to detect the expression of cytokines related to the different CD4+ T cells subsets. In agreement with a suppression of Th1 and Th17 numbers amongst LPMCs, HQT-treated rats exhibited defective production of Th1- and Th17-associated cytokines. Nevertheless, increased production of Th2- and Treg-associated cytokines were observed in TNBS-treated rats, suggesting that HQT significantly improved inflammation and ameliorated disease in TNBS-treated rats, associated with a shift from a Th1 and Th17 profile to a Th2 and Treg immunological profile.Because transcription factors are crucial for T-cell differentiation, we also examined lineage-specific transcription factors. The Th1 transcription factor T-bet plays a critical role in the development of Th1-driven colitis due to the high expression levels of IFN-γ [41], while GATA3 is an essential master regulator of Th2 cells for the induction of IL-4, IL-5, and IL-13 [5]. Although Th17 and Treg cells share a common requirement for TGF-β in their differentiation, their distinct transcriptional regulators ROR-γt and Foxp3 are necessary, respectively [31]. ROR-γt directs Th17 differentiation and induces the production of IL-17 [42], and Foxp3 dominates Treg formation and production of regulatory cytokines, such as TGF-β and IL-10 [43]. Our results showed that the colonic protein and mRNA expression levels of T-bet and ROR-γt significantly decreased but GATA-3 and Foxp3 expressions were enhanced in colon after HQT treatment in colitis rats. These data indicate that HQT plays a significant role during IBD development in establishing the homeostasis of distinct Th cell subsets in response to TNBS challenge.In this study, we presented evidence that HQT-treatment elicits a strong Th2 and Treg response in TNBS-induced colitis. It has been reported that Th1 and Th2 cells and the cytokines they release are often mutually antagonistic, and a change in the cytokine profile from Th1 to Th2 could ameliorate Th1-mediated disease [29, 44]. Consistent with these findings, a strong Th2 response successfully counteracts Th1/Th17-mediated colitis [45], suggesting that the role of Th2 in intestinal inflammation may be protective in TNBS-induced colitis. Additionally, Treg cells have been reported to repress the activity of other T cell subsets to induce an anti-inflammatory response [46]. It is therefore conceivable that the protective effect of HQT on TNBS-induced colitis might be explained by its capability to induce Treg cells and rebalance CD4+ T cell subsets. 
Although there were no significant side effects associated with HQT treatment in our study, more detailed studies are necessary to confirm its immunomodulatory effects in other models of colitis. In conclusion, our results indicate that HQT plays an important role in the regulation of intestinal immune responses in TNBS-induced colitis by downregulating the effector phenotypes of Th1 and Th17 cells while promoting Th2 and Treg responses. Thus, using HQT, a Chinese medicinal formulation, to regulate immune homeostasis may offer a promising alternative to current therapeutic strategies for IBD. --- *Source: 102021-2015-08-04.xml*
# Acute Disseminated Encephalomyelitis: An Unusual Presentation of Human Immunodeficiency Virus Infection **Authors:** Pedro Martínez-Ayala; Miguel Angel Valle-Murillo; Oscar Chávez-Barba; Rodolfo I. Cabrera-Silva; Luz A. González-Hernández; Fernando Amador-Lara; Moises Ramos-Solano; Sergio Zúñiga-Quiñones; Vida Verónica Ruíz-Herrera; Jaime F. Andrade-Villanueva **Journal:** Case Reports in Infectious Diseases (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1020274 --- ## Abstract Background. Acute disseminated encephalomyelitis (ADEM) is a rare inflammatory and demyelinating disorder of the central nervous system, with a distinct tendency toward a perivenous localization of pathological changes. Children are the most affected population, and cases frequently present after exanthematous viral infections or vaccination. Due to the rarity of this disease, its annual incidence in the population is not precisely known. Case Presentation. Here, we present a 28-year-old male HIV-1 positive patient with an acute confusional state and a diminished alert status characterized by somnolence, hypoprosexia, and complex visual hallucinations. Neuroimaging showed white matter demyelinating lesions, mainly affecting the centrum semiovale, the frontal lobe, and the left parietal lobe; the lesions were hypointense on T1-weighted images and hyperintense on T2-weighted and fluid-attenuated inversion recovery weighted images, with restricted diffusion on DWI and a parietal ring-enhancing lesion after IV gadolinium administration. Discussion. In HIV positive patients, the demyelinating disorders have a broader clinical spectrum, which could be explained by the immunosuppressed state of the patients, the evolution of the disease, the use of medications, opportunistic infections, and the environment. Due to this highly variable clinical spectrum, ADEM is a significant challenge for physicians caring for HIV positive patients, causing delays in diagnosis and treatment. Conclusion. We suggest that ADEM should be considered among the differential diagnoses in HIV-infected patients with focal or multifocal neurological symptoms, particularly in encephalopathies with multifocal central nervous system involvement without severe immunosuppression. --- ## Body ## 1. Introduction Acute disseminated encephalomyelitis (ADEM) is a rare inflammatory and demyelinating disorder (DD) of the central nervous system (CNS). Distinctively, ADEM’s pathological changes tend toward a perivenous localization [1]. Children are the most affected population (mainly younger than 15 years), frequently presenting after exanthematous viral infections or vaccination. Other well-documented associations include HIV, influenza virus, Epstein–Barr virus, Herpes Simplex virus, or Cytomegalovirus infection and postsurgical interventions [2]. Due to its rarity, ADEM’s annual incidence in the population is unknown. A study from 1991 to 2000, in 3 pediatric hospitals in San Diego, California, reported an incidence of 0.4 per 100,000 person-years in persons less than 20 years of age [1]. ADEM is generally self-limited and monophasic, with clinical remission expected within four weeks [3]. In HIV patients, ADEM develops as a multifocal disorder of the CNS, usually monophasic and occurring during seroconversion, even when the immune system remains competent.
However, a study of seven HIV-1 positive patients with ADEM reported an increased frequency of atypical presentations [4, 5]. ADEM’s pathogenesis has an autoimmune origin, either by molecular mimicry (epitopes with structural homology to myelin proteins) or by activation of pre-existent T-cells with antimyelin activity. Either way, the result is a demyelinating process and perivenular inflammation [6]. The diagnosis relies on clinical and radiological findings; Table 1 shows the criteria for ADEM [7, 8].

Table 1: ADEM 2012 criteria from the International Pediatric Multiple Sclerosis Study Group. ADEM is divided into three groups.

Monophasic ADEM:
(i) a first polyfocal clinical neurological event with a presumed inflammatory cause;
(ii) a polysymptomatic clinical picture that includes encephalopathy;
(iii) absence of new or recent signs and symptoms or MRI findings three months after the ADEM diagnosis.

Multiphasic ADEM:
(iv) a new ADEM event three months or more after the initial episode, involving areas unaffected in the previous event;
(v) it can be associated with novel clinical and MRI findings or with previously documented findings;
(vi) it must take place at least one month after completing steroid treatment.

Recurrent ADEM:
(vii) recurrence of the initial signs and symptoms three months or more after the initial episode;
(viii) absence of new lesions based on medical history, physical examination, and neuroimaging;
(ix) MRI shows no new lesions; however, previous lesions may have increased in volume.

Magnetic Resonance Imaging (MRI) findings include large brain lesions of at least 2 cm, either disseminated or confluent, and they can involve the white matter, cortex, and deep grey nuclei. The lesions are generally multiple, but large single lesions can also affect both hemispheres; in addition, involvement of the deep grey matter helps to distinguish ADEM from multiple sclerosis (MS). Lesions are hypointense on T1-weighted images and hyperintense on T2-weighted images and short TI inversion recovery (STIR) weighted sequences. However, lesions with intense gliosis can appear hyperintense on T1-weighted images. On diffusion-weighted magnetic resonance imaging (DWI), restricted diffusion, nodular lesions, and ring enhancement are standard features after intravenous (IV) contrast injection [9]. At the spinal cord level, the radiological findings include focal lesions at the craniocervical junction and longitudinally extensive lesions affecting at least three intervertebral spaces [9]. In this case report, we present a 28-year-old male HIV-1 positive patient with clinical, imaging, serological, and cerebrospinal fluid (CSF) findings consistent with ADEM. ## 2. Case Presentation A 28-year-old male patient admitted to the emergency department presented with a tonic-clonic seizure, left arm paresis, paraparesis, two months of gait disturbance, and fever (38°C). The patient had been diagnosed with HIV-1 three months earlier. His CD4+ T-cell count was 669 cells/μL, with a viral load of 23,800 c/mL, CDC stage A1, and he was naïve to antiretroviral therapy (ART). The patient presented with a confusional state, somnolence, hypoprosexia, and complex visual hallucinations. The left arm’s strength was diminished (3/5), and the lower limbs had symmetric paresis (2/5) and symmetrically augmented myotatic reflexes. Babinski’s plantar reflex was absent.
The rest of the neurological examination was unremarkable: cranial nerves, optic fundus, and papilla showed no alterations; the sensory examination was normal; and there were no meningeal signs, abnormal movements, or ataxia. The CSF was clear and had the following findings: proteins 27 mg/dl, glucose 74 mg/dl (serum glucose 90 mg/dl), and a white blood cell count of 20 cells/mm3 (54% lymphocytes), with no evidence of malignant cells or oligoclonal bands. The cryptococcus antigen test, polymerase chain reaction (PCR) for M. tuberculosis, VDRL, bacterial cultures, and Gram and Ziehl-Neelsen stains in CSF were negative. Serum IgG serology for Toxoplasma gondii was negative. MRI showed white matter demyelinating lesions, mainly affecting the centrum semiovale, the frontal lobe, and the left parietal lobe; the lesions were hypointense on T1-weighted images and hyperintense on T2-weighted and fluid-attenuated inversion recovery (FLAIR) weighted images, with restricted diffusion on DWI and a parietal ring-enhancing lesion after IV gadolinium administration (Figure 1). Figure 1 Case report’s neuroimages. Axial and coronal MRI with T1 (a), FLAIR (b), diffusion (c), and contrast-enhanced T1 (d) weighted images show multiple predominantly white matter (nodular and confluent) lesions, hypointense on T1 and hyperintense on T2 (not shown) and FLAIR. Among the lesions with diffusion restriction, some had lower signal at the center (black arrow). After gadolinium administration, most of the lesions show mild incomplete annular enhancement (white arrowhead). MRI: magnetic resonance imaging; FLAIR: fluid-attenuated inversion recovery. (a)(b)(c)(d) After meningeal cryptococcal and meningeal tuberculous infections were ruled out, we started ART with abacavir/lamivudine/dolutegravir. Because ART was initiated after symptom onset, the criteria for immune reconstitution inflammatory syndrome (IRIS) were not met. By exclusion of other diagnoses, ADEM was diagnosed, and the patient received high doses of IV methylprednisolone (1 g/day) for five consecutive days; subsequently, he showed improvement in neurological function. After three years of follow-up, the patient showed complete neurological remission with no relapses. He continued on ART with good adherence and an undetectable viral load. ## 3. Discussion This report describes a recently diagnosed HIV-infected male with a CD4+ T-cell count of 669 cells/μL and a sub-acute monophasic course characterized by fever, encephalopathy, and multifocal neurological deficits. Opportunistic infections were unlikely given his relatively preserved immune status; regardless, they were considered and ruled out. MRI showed confluent white matter lesions, and the patient had an excellent response to steroid therapy. We established a diagnosis of ADEM after fulfillment of the “ADEM 2012 criteria from the International Pediatric Multiple Sclerosis Study Group,” which state that the patient should have a polyfocal clinical CNS event. In our case, the subject showed multiple signs corresponding to polyfocal involvement of the CNS: triparesis, convulsions, and encephalopathy, the last being an essential element provided fever was not its cause. MRI was compatible with confluent demyelinating lesions involving white matter and deep grey matter; both findings are highly suggestive of ADEM [7].
Because of the high cost in our country, we were not able to perform antibody tests such as anti-aquaporin-4 (AQP4) and anti-myelin oligodendrocyte glycoprotein (anti-MOG) antibodies. However, the clinical picture was not suggestive of an opticospinal demyelinating syndrome, and anti-MOG serostatus in the context of ADEM is clinically helpful to establish the risk of recurrence but is not necessary for diagnosis. In the context of an HIV-infected patient, the differential diagnosis for fever and encephalopathy becomes more complicated. Table 2 summarizes the clinical and imaging characteristics of the principal differential diagnoses. All these pathologies usually manifest as a focal or multifocal neurological syndrome suggesting a space-occupying lesion, except for cryptococcosis, which presents as an intracranial hypertension syndrome and sub-acute meningitis [10].

Table 2: Clinical and imaging comparison with the primary differential diagnoses for ADEM.

| Clinical entity | Clinical findings | Neuroimaging | Comments |
| --- | --- | --- | --- |
| ADEM | Monophasic clinical picture; encephalopathy + focal symptoms + myelopathy | Multiple lesions in WM and cortex or deep grey nuclei | Suspect in light of a compatible clinical picture; responds well to IV steroids |
| MS | Insidious clinical picture with clinical relapses; rarely with encephalopathy | Lesions in WM, rarely affecting cortex; lesions of different ages | Absence of encephalopathy and frequent relapses distinguish it from ADEM |
| NMO | Affects only the optic nerves and the spinal cord; patients do not get encephalopathy | Widely extensive myelitis, not inclined to affect supratentorial regions; optic nerve involvement | Absence of encephalopathy differentiates it from ADEM; highly aggressive progression |
| Toxoplasma encephalitis | Progressive sub-acute clinical picture; focal signs + encephalopathy; not accompanied by myelopathy | Lesions in basal grey nuclei and the cortico-subcortical junction | Most common cause of focal neurologic syndrome in HIV |
| PML | Cognitive impairment prevails + focal signs; visual signs are common | Diffuse lesions, mostly affecting U fibers and parieto-occipital regions; contrast enhancement + | Patients do not respond to immunotherapy; suspect if low CD4 cell count |
| PCNSL | Progressive sub-acute clinical picture; focal signs + encephalopathy | Closely similar to toxoplasma encephalitis; lesions in the corpus callosum and periventricular enhancement | Responds initially to steroids; requires SPECT, PET, and biopsy |
| HIV-associated dementia complex | Progressive cognitive deterioration + gait disturbance | Diffuse lesions restricted to U fibers; no contrast enhancement | Disease with a long evolution and a progressive course |

ADEM: acute disseminated encephalomyelitis; MS: multiple sclerosis; NMO: neuromyelitis optica; PML: progressive multifocal leukoencephalopathy; PCNSL: primary central nervous system lymphoma; HIV: human immunodeficiency virus; WM: white matter; IV: intravenous; SPECT: single photon emission computed tomography; PET: positron emission tomography.

The typical course of ADEM is a neurological event with an acute or sub-acute onset and a monophasic progression. Signs and symptoms are multifocal, including encephalopathy, which suggests multiple lesions involving the ascending reticular activating system. Commonly, it appears with deterioration of alertness and even a state of coma. Like other DDs, it can affect the rest of the CNS, such as the optic nerves and spinal cord.
On neuroimaging, multiple supra- and infratentorial lesions, predominantly in the white matter, can be observed, frequently affecting the cortex and the grey nuclei of the basal ganglia and brain stem. In adults, the differential diagnosis of ADEM is broad, as it is less frequent in adulthood than in childhood. Similarly, the acute DD forms, such as Marburg, Hurst, or Balo disease, can give rise to a neurological condition of acute onset with multifocal symptoms and single or multiple demyelinating lesions on neuroimaging. Also, a clinically isolated syndrome marking a first MS episode can present as a unifocal or multifocal episode with demyelinating lesions in the brain and the spinal cord. Neuromyelitis optica presents with extensive myelitis that can be accompanied by optic neuritis and, rarely, by supratentorial lesions, which can help clinically to distinguish it from ADEM [9]. ## 4. Conclusions We consider that ADEM, more than a single disease, represents a group of pathologies that share demyelinating lesions in the CNS and immunological dysfunction as a common pathway. In HIV, a potential mechanism is the generation of incomplete reverse transcripts (HIV’s DNA), which stimulate an intense inflammatory response and subsequent CD4+ T-cell depletion through damage to stem cells and the thymus [11]. Thus, the presence of “autoreactive” circulating T-cells predisposes to the development of ADEM. Particularly in HIV-infected patients, the DD clinical spectrum becomes reasonably broad, possibly explained by the immunosuppressed state of the patients, the evolution of the disease, the use of medications, opportunistic infections, and the environment. Due to this diverse clinical spectrum, ADEM is a challenge for physicians, often delaying diagnosis and treatment. ADEM’s treatment includes high-dose IV corticosteroids, immunoglobulin, or plasmapheresis, which accelerate the patient’s recovery and reduce the number of active lesions [12, 13]. In conclusion, we suggest that ADEM should be considered as a differential diagnosis in an HIV-infected patient presenting with focal or multifocal neurological symptoms, particularly in patients with encephalopathy without severe immunosuppression (CD4+ T-cell count >200 cells/μL) or when neuroimaging shows focal lesions. --- *Source: 1020274-2020-06-06.xml*
# Combinatorial Methods for Detecting Surface Subgroups in Right-Angled Artin Groups **Authors:** Robert W. Bell **Journal:** ISRN Algebra (2011) **Publisher:** International Scholarly Research Network **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.5402/2011/102029 --- ## Abstract We give a short proof of the following theorem of Sang-hyun Kim: if A(Γ) is a right-angled Artin group with defining graph Γ, then A(Γ) contains a hyperbolic surface subgroup if Γ contains an induced subgraph C¯n for some n≥5, where C¯n denotes the complement graph of an n-cycle. Furthermore, we give a new proof of Kim's cocontraction theorem. --- ## Body ## 1. Introduction and Definitions Suppose that Γ is a simple finite graph with vertex set VΓ and edge set EΓ. We say that Γ is the defining graph of the right-angled Artin group defined by the presentation (1.1) A(Γ)=〈VΓ;[v,w]:=vwv^-1w^-1=1, ∀{v,w}∈EΓ〉. Right-angled Artin groups are also called graph groups or partially commutative groups in the literature. All graphs in this paper are assumed to be simple and finite. Right-angled Artin groups have been studied using both combinatorial and geometric methods. In particular, it is well known that these groups have simple solutions to the word and conjugacy problems. Moreover, each right-angled Artin group can be geometrically represented as the fundamental group of a nonpositively curved cubical complex XΓ called the Salvetti complex. For these and other fundamental results, we refer the reader to the survey article by Charney [1]. Let Γ be a graph, and suppose that W⊂VΓ. The induced subgraph ΓW is the maximal subgraph of Γ on the vertex set W. A subgraph Λ⊂Γ is called an induced subgraph if Λ=ΓVΛ. In this case, the subgroup of A(Γ) generated by VΛ is canonically isomorphic to A(Λ). This follows from the fact that f:A(Γ)→A(Λ) given by f(v)=v if v∈VΛ and f(v)=1 if v∉VΛ defines a retraction. Therefore, we identify A(Λ) with its image in A(Γ). In this paper, we study the following problem: find conditions on a graph Γ which imply or deny the existence of a hyperbolic surface subgroup in A(Γ). Herein, we say that a group is a hyperbolic surface group if it is the fundamental group of a closed orientable surface with negative Euler characteristic. Servatius et al. proved that if Γ contains an induced n-cycle, that is, the underlying graph of a regular n-gon, for some n≥5, then A(Γ) has a hyperbolic surface subgroup [2]. In fact, they construct an isometrically embedded closed surface of genus 1+(n-4)2^(n-3) in the cover of the Salvetti complex XΓ corresponding to the commutator subgroup of A(Γ). Kim and, independently, Crisp et al. gave the first examples of graphs without induced n-cycles, n≥5, which define right-angled Artin groups that, nonetheless, contain hyperbolic surface subgroups [3, 4]. We give one such example here to illustrate the main lemma of this paper. Consider the graphs in Figure 1. The map ϕ:A(P′)→A(P) sending v1 to v1^2, vi to vi, and vi′ to vi for i>1 defines an injective homomorphism onto an index two subgroup of A(P) (see the discussion below). Since P′ contains an induced circuit of length five (the induced subgraph on the vertices v2,…,v5, and v6′), A(P) contains a hyperbolic surface subgroup; however, P does not contain an induced n-cycle for any n≥5. Figure 1 The group given by the graph P′ on (a) injects into the group given by the graph P on (b). The vertex labeled by i is referred to as vi in the discussion below. (a)(b)
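The graph-theoretic hypotheses in play here are finite checks, so they can be verified by machine for small graphs. The following sketch is ours, not from the paper (the function names are invented): it tests for an induced k-cycle by brute force over k-element vertex subsets and, as a sanity check, confirms that the complement of a 6-cycle (the triangular prism) contains no induced 5-cycle.

```python
from itertools import combinations

def is_cycle(W, edges):
    """True if the graph (W, edges) is a single cycle through all of W."""
    adj = {v: {w for e in edges if v in e for w in e if w != v} for v in W}
    if any(len(adj[v]) != 2 for v in W):
        return False
    start = next(iter(W))
    prev, cur, seen = None, start, 1           # walk the unique 2-regular tour
    while True:
        cur, prev = next(w for w in adj[cur] if w != prev), cur
        if cur == start:
            return seen == len(W)              # one cycle covering every vertex
        seen += 1

def has_induced_cycle(V, E, k):
    """Does (V, E) contain an induced k-cycle? Brute force over subsets."""
    for W in map(set, combinations(V, k)):
        sub = {e for e in E if e <= W}         # edge set of the induced subgraph
        if len(sub) == k and is_cycle(W, sub):
            return True
    return False

V = set(range(6))
C6 = {frozenset((i, (i + 1) % 6)) for i in range(6)}
C6bar = {frozenset(p) for p in combinations(V, 2)} - C6   # complement graph
assert has_induced_cycle(V, C6, 6)
assert not has_induced_cycle(V, C6, 5)     # proper subsets of a cycle induce paths
assert not has_induced_cycle(V, C6bar, 5)  # the triangular prism has no induced C5
```

Such a brute-force check is exponential in the number of vertices but entirely adequate for the graphs on fewer than nine vertices discussed below.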
That the map ϕ is injective can be seen from several perspectives. The approach of Kim and Crisp et al. is to use dissection diagrams; these are collections of simple closed curves which are dual to van Kampen diagrams on a surface over the presentation of A(Γ). The method was introduced in this context by Crisp and Wiest [5] and used with much success by Kim [3] and Crisp et al. [4]. The purpose of this paper is to demonstrate that classical methods from combinatorial group theory offer another perspective and simplify some of the arguments in the aforementioned articles. We will use the Reidemeister-Schreier rewriting process to give a direct proof that the map ϕ, above, is injective, and we also indicate how this can be proven using normal forms for splittings of groups. This in turn will lead to a short proof of Kim's theorem on cocontractions (in [3, Theorem 4.2]) alluded to in the abstract; see Theorem 3.5 in this paper. Lemma 1.1. Suppose that A(Γ) is a right-angled Artin group, and let n be a positive integer. Choose a vertex z∈V(Γ), and define ϕ:A(Γ)→〈x;x^n=1〉≅ℤ/nℤ by ϕ(v)=1 if v≠z and ϕ(z)=x. Then kerϕ is a right-angled Artin group with defining graph Γ′ obtained by gluing n copies of Γ∖st(z) to st(z) along lk(z), where st and lk denote the star and link, respectively. Moreover, the vertices of Γ′ naturally correspond to the following generating set: (1.2) {z^n}∪lk(z)∪{u:u∉st(z)}∪{zuz^-1:u∉st(z)}∪⋯∪{z^(n-1)uz^(1-n):u∉st(z)}. The proof is a straightforward computation using the Reidemeister-Schreier method. The details are given in Section 2. Applying Lemma 1.1 to the graphs in Figure 1 proves that A(P′) injects into A(P): if ϕ:A(P)→ℤ/2ℤ maps z=v1 to 1 mod 2, then A(P′)=kerϕ.
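Lemma 1.1's construction of Γ′ is purely combinatorial, so it is easy to realize in code. Here is a minimal sketch under our own naming conventions (nothing below is from the paper): the k-th copy of a vertex u∉st(z) is labeled (u, k), standing for the generator z^k u z^-k, and the pair (z, 'n') stands for z^n. The assertion checks the vertex count 1+|lk(z)|+n·|VΓ∖st(z)| predicted by (1.2) on a 5-cycle.

```python
def kernel_graph(vertices, edges, z, n):
    """Defining graph of ker(phi) as in Lemma 1.1: glue n copies of
    Gamma - st(z) to st(z) along lk(z)."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    lk = adj[z]                              # link of z, shared by all copies
    zn = (z, 'n')                            # stands for the generator z^n

    def name(u, k):                          # copy k of a vertex u != z
        return u if u in lk else (u, k)      # (u, k) stands for z^k u z^-k

    V = {zn} | lk | {name(u, k) for u in vertices
                     if u != z and u not in lk for k in range(n)}
    E = {frozenset((zn, w)) for w in lk}     # z^n still commutes with lk(z)
    for k in range(n):                       # the edges of Gamma - z, per copy;
        for u, w in edges:                   # edges inside lk(z) simply coincide
            if z not in (u, w):
                E.add(frozenset((name(u, k), name(w, k))))
    return V, E

# Toy check on the 5-cycle with z = 0 and n = 2: st(0) = {0,1,4}, lk(0) = {1,4},
# so Gamma' should have 1 + 2 + 2*2 = 7 vertices (z^2, the link, two copies of {2,3}).
V, E = kernel_graph(range(5), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 0, 2)
assert len(V) == 7 and len(E) == 8
```

The shared vertices 1 and 4 and the duplicated vertices 2 and 3 make the gluing along lk(z) visible; this is the same picture used in the proof of Lemma 3.1 below.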
Another way to prove Lemma 1.1 is to take advantage of “visual” splittings of the groups A(Γ) and A(Γ′) as an HNN extension or amalgamated free product. This second approach is stated in the article by Crisp et al. (see [4, Remark 4.1]). We illustrate the utility of Lemma 1.1 by giving a short proof of the following theorem of Kim. Theorem 1.2 (see Kim [3, Corollary 4.3(2)]). Let C¯n denote the complement graph of an n-cycle. For each n≥5, the group A(C¯n) contains a hyperbolic surface subgroup. In fact, we give a new proof of Kim's more general theorem (in [3, Theorem 4.2]) on cocontractions of right-angled Artin groups in Section 3. Kim's proof used the method of dissection diagrams. Kim has also discovered a short proof using visual splittings (personal communication). In preparing this paper, we found that the Reidemeister-Schreier method has been used previously to study certain Bestvina-Brady subgroups of right-angled Artin groups (see [6, 7]). This work was inspired by a desire to better understand Crisp et al.'s very interesting classification of the graphs on fewer than nine vertices which define right-angled Artin groups with hyperbolic surface subgroups. We hope that this paper will help to clarify some aspects of the general problem. ## 2. The Reidemeister-Schreier Method and Proof of Lemma 1.1 The Reidemeister-Schreier method solves the following problem: suppose that G is a group given by the presentation 〈X;R〉, and suppose that H⊂G is a subgroup; find a presentation for H. The treatment below is brisk; see [8] for details and complete proofs. Let F=F(X) be free with basis X, and let π:F→G extend the identity map on X. Consider the preimage P=π^-1(H). Let T⊂F be a right Schreier transversal for P in F, that is, T is a complete set of right coset representatives that is closed under the operation of taking initial subwords (of freely reduced words over X). Given w∈F, let [w] be the unique element of T such that Pw=P[w]. For each t∈T and x∈X, let s(t,x)=tx[tx]^-1. Define S={s(t,x):t∈T, x∈X, and s(t,x)≠1}. Then S is a basis for the free group P. Define a rewriting process τ:F→P on freely reduced words over X by (2.1) τ(y1y2⋯yn)=s(1,y1)s([y1],y2)⋯s([y1⋯yn-1],yn), where each yi∈X∪X^-1. Then τ(w)=w[w]^-1 for every reduced word w∈F, and (2.2) H=〈S;τ(t^-1rt)=1, ∀t∈T, r∈R〉. This rewriting process, together with the resulting presentation for the given subgroup H of G=〈X;R〉, is called the Reidemeister-Schreier method. Proof of Lemma 1.1. Let Γ be a graph, G=A(Γ) the corresponding right-angled Artin group, and z a distinguished vertex of Γ. Let ϕ:G→〈x;x^n〉 be given by ϕ(v)=1 if v≠z and ϕ(z)=x. Let F be free on X=VΓ, and let R be the set of defining relators corresponding to EΓ. Let H=kerϕ, and let P be the inverse image of H in F under the natural map F→G. The set T={1,z,…,z^(n-1)} is a right Schreier transversal for P<F. One verifies (directly) that the following equations hold: (2.3) s(z^k,v)=z^k v z^-k, if v≠z, k=0,…,n-1; s(z^k,z)=1, if k=0,…,n-2; s(z^k,z)=z^n, if k=n-1. Thus, we have a set S of generators for kerϕ; however, many of these generators are redundant. Again, one verifies (using τ(w)=w[w]^-1) that the following equations hold: (2.4) τ(z^k[u,v]z^-k)=[s(z^k,u),s(z^k,v)], if u,v≠z, k=0,…,n-1; τ(z^k[z,v]z^-k)=s(z^(k+1),v)·s(z^k,v)^-1, if v≠z, k=0,…,n-2; τ(z^k[z,v]z^-k)=z^n·s(1,v)·z^-n·s(z^(n-1),v)^-1, if v≠z, k=n-1. Therefore, if [z,v]=1 is a relation in A(Γ), then (2.5) v=s(1,v)=s(z,v)=⋯=s(z^(n-1),v), [z^n,v]=1 hold in kerϕ. It follows that kerϕ is generated by z^n, the vertices adjacent to z in Γ, and n copies (u, zuz^-1,…,z^(n-1)uz^(1-n)) of each vertex u≠z and not adjacent to z in Γ. Moreover, the relations are such that kerϕ is presented as a right-angled Artin group whose defining graph is obtained from Γ by taking the star of z and n copies of the complement of the star of z and gluing these copies along the link of z. This completes the proof of Lemma 1.1. ## 3. A Short Proof of Two Theorems of Kim Suppose that Γ is a graph. The complement graph Γ¯ is the graph having the same vertices as Γ but whose edges are complementary to the edges of Γ. Recall that an n-cycle Cn is the underlying graph of a regular n-gon. Theorem 1.2 follows from Kim's cocontraction theorem (see Theorem 3.5 below); however, we present a short independent proof here. Proof of Theorem 1.2. Suppose that Γ is a graph which contains an induced C5. Then A(Γ) contains a hyperbolic surface subgroup by [2]. Since C5≅C¯5, Theorem 1.2 follows from Lemma 3.1 below. Lemma 3.1 (see Kim [3, Corollary 4.3(1)]). For each n≥4, A(C¯n-1)<A(C¯n). Proof. Let VCn={x1,…,xn}=VC¯n. Define ϕ:A(C¯n)→〈a;a^2=1〉 by ϕ(xn)=a and ϕ(xi)=1 for i≠n. By Lemma 1.1, the defining graph Γ of kerϕ has vertex set VΓ={z^2}∪{x1,…,xn-1}∪{y1,yn-1}, where z=xn and yi=zxiz^-1. Let S={y1,x2,…,xn-1}. Consider the induced subgraph ΓS. The vertices y1 and xi are not adjacent if and only if i∈{2,n-1}. The vertices xi and xj are not adjacent if and only if |i-j|≤1. Therefore, ΓS≅C¯n-1. Kim proved a more general theorem about subgroups of a right-angled Artin group A(Γ) defined by “cocontractions.” Let S⊂VΓ, and let S′=VΓ∖S. If ΓS is connected, then the contraction CO(Γ,S) of Γ relative to S is defined by taking the induced subgraph ΓS′ together with a vertex vS and declaring vS to be adjacent to w∈S′ if w is adjacent in Γ to some vertex in S. The cocontraction CO¯(Γ,S) is defined as follows: (3.1) CO¯(Γ,S)=CO(Γ¯,S)¯.
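Definition (3.1) also lends itself to direct computation: complement, contract, complement. The following small sketch under the definitions above (ours, not the paper's; all names are invented) illustrates the cocontraction in miniature: cocontracting an anticonnected pair of vertices of C¯6 yields C¯5.

```python
from itertools import combinations

def complement(V, E):
    return V, {frozenset(p) for p in combinations(V, 2)} - E

def contraction(V, E, S, vS="vS"):
    """CO(Gamma, S): collapse S to vS; vS ~ w iff w is adjacent to some s in S."""
    keep = V - S
    E2 = {e for e in E if e <= keep}
    E2 |= {frozenset((vS, w)) for w in keep
           if any(frozenset((s, w)) in E for s in S)}
    return keep | {vS}, E2

def cocontraction(V, E, S):              # (3.1): complement, contract, complement
    return complement(*contraction(*complement(V, E), S))

n = 6
V = set(range(n))
Cn = {frozenset((i, (i + 1) % n)) for i in range(n)}
Cnbar = complement(V, Cn)                # the complement graph of a 6-cycle
# {4, 5} is anticonnected in Cnbar, since 4 and 5 are adjacent in Cn
V2, E2 = cocontraction(*Cnbar, {4, 5})
degrees = sorted(sum(1 for e in E2 if v in e) for v in V2)
assert len(V2) == 5 and degrees == [2, 2, 2, 2, 2]   # five vertices of degree 2
```

The degree check certifies that the result is a 5-cycle; since C¯5≅C5, this is the expected C¯5, in line with Lemma 3.1.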
Kim insists that ΓS be connected whenever he considers the contraction CO(Γ,S). This assumption is not necessary. Moreover, the following lemma shows that the structure of ΓS is immaterial; the proof follows directly from the definitions. Lemma 3.2. Suppose that Γ is a graph and S⊂VΓ. Let Γ′ be the graph obtained from Γ by removing any edges joining two elements of S. Then CO(Γ,S)=CO(Γ′,S). Corollary 3.3. Suppose that Γ is a graph and S⊂VΓ. Let Γ′ be a graph obtained from Γ by adding or deleting any collection of edges with both of their vertices belonging to S. Then CO(Γ,S)=CO(Γ′,S) and CO¯(Γ,S)=CO¯(Γ′,S). Lemma 3.4. Suppose that Γ is a graph and n≥2. Suppose that S={s1,…,sn}⊂VΓ. Let Λ=CO¯(Γ,{s1,…,sn-1}) and S′=S∖{sn}. Then CO¯(Γ,S)=CO¯(Λ,{vS′,sn}). Proof. It suffices to compare the collection of vertices which are adjacent to vS in Γ1=CO¯(Γ,S) and Γ2=CO¯(Λ,{vS′,sn}); in the latter case, we are identifying the vertex vS′∪sn with vS. A vertex w in Γ1 not belonging to S is adjacent to vS if and only if w is adjacent to every si in Γ. A vertex w in Γ2 not equal to vS′ nor sn is adjacent to vS if and only if w is adjacent to vS′ and sn in Λ; but this, in turn, means that w is adjacent to every si in Γ. (Note that the case of n=2 is trivial since CO(Γ,{s1})=Γ.) A collection of vertices S⊂VΓ is said to be anticonnected if Γ¯S is connected. (Note that (Γ¯)S=(ΓS)¯.) Theorem 3.5 (see Kim [3, Theorem 4.2]). Suppose that Γ is a graph and S⊂V(Γ) is an anticonnected subset. Then A(CO¯(Γ,S)) embeds in A(Γ). Proof. First consider the case when S consists of two nonadjacent vertices z,z′∈VΓ. Define ϕ:A(Γ)→〈x;x^2〉 by ϕ(z)=x, and ϕ(v)=1 if v≠z. Let A(Γ′)=kerϕ. Let (3.2) T=(V(Γ)∖{z,z′})∪{zz′z^-1}⊂V(Γ′). We claim that ΓT′≅CO¯(Γ,S) via v↦v if v≠zz′z^-1 and zz′z^-1↦vS. If v and w are distinct from z and z′, then v and w are adjacent in CO¯(Γ,S) if and only if they are adjacent in Γ. On the other hand, a vertex w is adjacent to vS in CO¯(Γ,S) if and only if w is adjacent to z and z′, whereas a vertex w is adjacent to zz′z^-1 in ΓT′ if and only if w belongs to the link of z and to the link of z′, that is, w is adjacent to z and z′. Therefore, ΓT′≅CO¯(Γ,S) and, hence, A(CO¯(Γ,S)) embeds in A(Γ). Now we prove the general statement by induction on |S|. Suppose that S={s1,…,sn} is anticonnected, and suppose that we have chosen the ordering so that S′={s1,…,sn-1} is also anticonnected. (This is always possible: choose sn so that it is not a cut point of Γ¯S.) Let Λ=CO¯(Γ,S′). Suppose that A(Λ) embeds in A(Γ). By the case of two vertices above, A(CO¯(Λ,{vS′,sn})) embeds in A(Λ). (Note that vS′ and sn are not adjacent in Λ for, otherwise, sn would be adjacent to every si, i=1,…,n-1, which would contradict the hypothesis that S is anticonnected.) This proves the inductive step. The proof of the theorem is completed by applying Lemma 3.4. --- *Source: 102029-2011-08-23.xml*
102029-2011-08-23_102029-2011-08-23.md
13,977
Combinatorial Methods for Detecting Surface Subgroups in Right-Angled Artin Groups
Robert W. Bell
ISRN Algebra (2011)
Mathematical Sciences
International Scholarly Research Network
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.5402/2011/102029
102029-2011-08-23.xml
--- ## Abstract We give a short proof of the following theorem of Sang-hyun Kim: ifA(Γ) is a right-angled Artin group with defining graph Γ, then A(Γ) contains a hyperbolic surface subgroup if Γ contains an induced subgraph C¯n for some n≥5, where C¯n denotes the complement graph of an n-cycle. Furthermore, we give a new proof of Kim's cocontraction theorem. --- ## Body ## 1. Introduction and Definitions Suppose thatΓ is a simple finite graph with vertex set VΓ and edge set EΓ. We say that Γ is the defining graph of the right-angled Artin group defined by the presentation(1.1)A(Γ)=〈VΓ;[v,w]:=vwv-1w-1=1,∀{v,w}∈EΓ〉. Right-angled Artin groups are also called graph groups or partially commutative groups in the literature. All graphs in this paper are assumed to be simple and finite.Right-angled Artin groups have been studied using both combinatorial and geometric methods. In particular, it is well known that these groups have simple solutions to the word and conjugacy problems. Moreover, each right-angled Artin group can be geometrically represented as the fundamental group of a nonpositively curved cubical complexXΓ called the Salvetti complex. For these and other fundamental results, we refer the reader to the survey article by Charney [1].LetΓ be a graph, and suppose that W⊂VΓ. The induced subgraph ΓW is the maximal subgraph of Γ on the vertex set W. A subgraph Λ⊂Γ is called an induced subgraph if Λ=ΓVΛ. In this case, the subgroup of A(Γ) generated by VΛ is canonically isomorphic to A(Λ). This follows from the fact that f:A(Γ)→A(Λ) given by f(v)=v if v∈VΛ and f(v)=1 if v∉VΛ defines a retraction. Therefore, we identify A(Λ) with its image in A(Γ).In this paper, we study the following problem: find conditions on a graphΓ which imply or deny the existence of hyperbolic surface subgroup in A(Γ). Herein, we say that a group is a hyperbolic surface group if it is the fundamental group of a closed orientable surface with negative Euler characteristic.Servatius et al. proved that ifΓ contains an induced n-cycle, that is, the underlying graph of a regular n-gon, for some n≥5, then A(Γ) has a hyperbolic surface subgroup [2]. In fact, they construct an isometrically embedded closed surface of genus 1+(n-4)2n-3 in the cover of the Salvetti complex XΓ corresponding to the commutator subgroup of A(Γ).Kim and, independently, Crisp et al. gave the first examples of graphs without inducedn-cycles, n≥5, which define a right-angled Artin groups which, nonetheless, contain hyperbolic surface subgroups [3, 4]. We give one such example here to illustrate the main lemma of this paper.Consider the graphs in Figure1. The map ϕ:A(P′)→A(P) sending v1 to v12, vi to vi, and vi′ to vi for i>1 defines an injective homomorphism onto an index two subgroup of A(P) (see the discussion below). Since P′ contains an induced circuit of length five (the induced subgraph on the vertices v2,…,v5, and v6′), A(P) contains a hyperbolic surface subgroup; however, P does not contain an n-cycle for any n≥5.The group given by the graphP on (b) injects into the group given by the graph P′ on (a). The vertex labeled by i is referred to as vi in the discussion below. (a)(b)That the mapϕ is injective can be seen from several perspectives. The approach of Kim and Crisp et al. is to use dissection diagrams; these are collections of simple closed curves which are dual to van Kampen diagrams on a surface over the presentation A(Γ). 
The method was introduced in this context by Crisp and Wiest [5] and used with much success by Kim [3] and Crisp et al. [4].The purpose of this paper is to demonstrate that classical methods from combinatorial group theory offer another perspective and simplify some of the arguments in the aforementioned articles. We will use the Reidemeister-Schreier rewriting process to give a direct proof that the mapϕ, above, is injective, and we also indicate how this can be proven using normal forms for splittings of groups. This in turn will lead to a short proof of Kim's theorem on cocontractions (in [3, Theorem  4.2]) alluded to in the abstract; see Theorem 3.5 in this paper.Lemma 1.1. Suppose thatA(Γ) is a right-angled Artin group, and let n be a positive integer. Choose a vertex z∈V(Γ), and define ϕ:A(Γ)→〈x;xn=1〉≅ℤ/nℤ by ϕ(v)=1 if v≠z and ϕ(z)=x. Then kerϕ is a right-angled Artin group with defining graph Γ′ obtained by gluing n copies of Γ∖st(z) to st(z) along lk(z), where st and lk are the star and link, respectively. Moreover, the vertices of Γ′ naturally correspond to the following generating set: (1.2){zn}∪lk(z)∪{u:u∉st(z)}∪{zuz-1:u∉st(z)}∪⋯∪{zn-1uz1-n:u∉st(z)}.The proof is a straightforward computation using the Reidemeister- Schreier method. The details are given in Section2. Applying Lemma 1.1 to the graphs in Figure 1 proves that A(P′) injects into A(P): if ϕ:A(P)→ℤ/2ℤ maps z=v1 to 1 mod 2, then A(P′)=kerϕ.Another way to prove Lemma1.1 is to take advantage of “visual” splittings of the groups A(Γ) and A(Γ′) as an HNN extension or amalgamated free product. This second approach is stated in the article by Crisp et al. (see [4, Remark  4.1]).We illustrate the utility of Lemma1.1 by giving a short proof of the following theorem of Kim.Theorem 1.2 (see Kim [3, Corollary  4.3(2)]). LetC¯n denote the complement graph of an n-cycle. For each n≥5, the group A(C¯n) contains a hyperbolic surface subgroup.In fact, we give a new proof of Kim's more general theorem (in [3, Theorem  4.2]) on cocontractions of right-angled Artin groups in Section 3. Kim's proof used the method of dissection diagrams. Kim has also discovered a short proof using visual splittings (personal communication).In preparing this paper, we found that the Reidemeister-Schreier method has been used previously to study certain Bestvina-Brady subgroups of right-angled Artin groups (see [6, 7]).This work was inspired by a desire to better understand Crisp et al.'s very interesting classification of the graphs on fewer than nine vertices which define right-angled Artin groups with hyperbolic surface subgroups. We hope that this paper will help to clarify some aspects of the general problem. ## 2. The Reidemeister-Schreier Method and Proof of Lemma1.1 The Reidemeister-Schreier method solves the following problem: suppose thatG is a group given by the presentation 〈X;R〉, and suppose that H⊂G is a subgroup; find a presentation for H. The treatment below is brisk; see [8] for details and complete proofs.LetF=F(X) be free with basis X, and let π:F→G extend the identity map on X. Consider the preimage P=π-1(H). Let T⊂F be a right Schreier transversal for P in F, that is, T is a complete set of right coset representatives that is closed under the operation of taking initial subwords (of freely reduced words over X). Given w∈F, let [w] be the unique element of T such that Pw=P[w]. For each t∈T and x∈X, let s(t,x)=tx[tx]-1. Define S={s(t,x):t∈T,x∈X,ands(t,x)≠1}. Then S is a basis for the free group P. 
Define a rewriting process τ:F→P on freely reduced words over X by(2.1)τ(y1y2⋯yn)=s(1,y1)s([y1],y2)⋯s([y1⋯yn-1],yn), where y∈X∪X-1. Then τ(w)=w[w]-1 for every reduced word w∈F, and(2.2)H=〈S;τ(t-1rt)=1,∀t∈T,r∈R〉. This rewriting process together with the resulting presentation for the given subgroup H of G=〈X;R〉 is called the Reidemeister-Schreier Method.Proof of Lemma1.1. LetΓ be a graph, G=A(Γ) the corresponding right-angled Artin group, and z a distinguished vertex of Γ. Let ϕ:G→〈x;xn〉 be given by ϕ(v)=1 if v≠z and ϕ(z)=x. LetF be free on X=VΓ, and let R be the set of defining relators corresponding to EΓ. Let H=kerϕ, and let P be the inverse image of H in F under the natural map F→G. The set T={1,z,…,zn-1} is a right Schreier transversal for P<F. One verifies (directly) that the following equations hold: (2.3)s(zk,v)=zkvz-k,ifv≠z,k=0,…,n-1,s(zk,z)=1,ifk=0,…,n-2,s(zk,z)=zn,ifk=n-1. Thus, we have a setS of generators for kerϕ; however, many of these generators are redundant. Again, one verifies (using τ(w)=w[w]-1) that the following equations hold: (2.4)τ(zk[u,v]z-k)=[s(zk,u),s(zk,v)],ifu,v≠z,k=0,…,n-1,τ(zk[z,v]z-k)=s(zk+1,v)⋅s(zk,v)-1,ifv≠z,k=0,…,n-2,τ(zk[z,v]z-k)=zn⋅s(1,v)⋅z-n⋅s(zn-1,v)-1,ifv≠z,k=n-1. Therefore, if[z,v]=1 is a relation in A(Γ), then (2.5)v=s(1,v)=s(z,v)=⋯=s(zn-1,v),[zn,v]=1 hold in kerϕ. It follows that kerϕ is generated by zn, the vertices adjacent to z in Γ, and n copies (u,zuz-1,…,zn-1uz1-n) of each vertex u≠z and not adjacent to z in Γ. Moreover, the relations are such that kerϕ is presented as a right-angled Artin group where the defining graph is obtained from Γ by taking the star of z and n copies of the complement of the star of z and gluing these copies along the link of z. This completes the proof of Lemma 1.1. ## 3. A Short Proof of Two Theorems of Kim Suppose thatΓ is a graph. The complement graph Γ¯ is the graph having the same vertices as Γ but which has edges complementary to the edges of Γ. Recall that an n-cycle Cn is the underlying graph of a regular n-gon.Theorem1.2 follows from Kim's cocontraction theorem (see Theorem 3.5 below); however, we present a short independent proof here.Proof of Theorem1.2. Suppose thatΓ is a graph which contains an induced C5. Then A(Γ) contains a hyperbolic surface subgroup by [2]. Since C5≅C¯5, Theorem 1.2 follows from Lemma 3.1 below.Lemma 3.1 (see Kim [3, Corollary  4.3(1)]). For eachn≥4, A(C¯n-1)<A(C¯n).Proof. LetVCn={x1,…,xn}=VC¯n. Define ϕ:A(C¯n)→〈a,a2〉 by ϕ(xn)=a and ϕ(xi)=1 for i≠n. By Lemma 1.1, the defining graph Γ of kerϕ has vertex set VΓ={z2}∪{x1,…,xn-1}∪{y1,yn-1}, where z=xn and yi=zxiz-1. Let S={y1,x2,…,xn-1}. Consider the induced subgraph ΓS. The vertices y1 and xi are not adjacent if and only if i∈{2,n-1}. The vertices xi and xj are not adjacent if and only if |i-j|≤1. Therefore, ΓS≅C¯n-1.Kim proved a more general theorem about subgroups of a right-angled Artin groupA(Γ) defined by “cocontractions.” Let S⊂VΓ, and let S′=VΓ∖S. If ΓS is connected, then the contraction CO(Γ,S) of Γ relative to S is defined by taking the induced subgraph ΓS′ together with a vertex vS and declaring vS to be adjacent to w∈S′ if w is adjacent in Γ to some vertex in S. The cocontraction CO¯(Γ,S) is defined as follows:(3.1)CO¯(Γ,S)=CO(Γ¯,S)¯.Kim insists thatΓS be connected whenever he considers the contraction CO(Γ,S). This assumption is not necessary. Moreover, the following lemma shows that the structure of ΓS is immaterial; the proof follows directly from the definitions.Lemma 3.2. Suppose thatΓ is a graph and S⊂VΓ. 
Let Γ′ be the graph obtained from Γ by removing any edges joining two elements of S. Then CO(Γ,S)=CO(Γ′,S).Corollary 3.3. Suppose thatΓ is a graph and S⊂VΓ. Let Γ′ be a graph obtained from Γ by adding or deleting any collection of edges with both of their vertices belonging to S. Then CO(Γ,S)=CO(Γ′,S) and CO¯(Γ,S)=CO¯(Γ′,S).Lemma 3.4. Suppose thatΓ is a graph and n≥2. Suppose that S={s1,…,sn}⊂VΓ. Let Λ=CO¯(Γ,{s1,…,sn-1}) and S′=S∖{sn}. Then CO¯(Γ,S)=CO¯(Λ,{vS′,sn}).Proof. It suffices to compare the collection of vertices which are adjacent tovS in Γ1=CO¯(Γ,S) and Γ2=CO¯(Λ,{vS′,sn}); in the latter case, we are identifying the vertex vS′∪sn with vS. A vertexw in Γ1 not belonging to S is adjacent to vS if and only if w is adjacent to every si in Γ. A vertex w in Γ2 not equal to vS′ nor sn is adjacent to vS if and only if w is adjacent to vS′ and sn in Λ; but this, in turn, means that w is adjacent to every si in Γ. (Note that the case of n=2 is trivial since CO(Γ,{s1})=Γ.)A collection of verticesS⊂VΓ is said to be anticonnected if Γ¯S is connected. (Note that (Γ¯)S=(ΓS)¯.)Theorem 3.5 (see Kim [3, Theorem  4.2]). Suppose thatΓ is a graph and S⊂V(Γ) is an anticonnected subset. Then A(CO¯(Γ,S)) embeds in A(Γ).Proof. First consider the case whenS consists of two nonadjacent vertices z,z′∈VΓ. Define ϕ:A(Γ)→〈x;x2〉 by ϕ(z)=x, and ϕ(v)=1 if v≠z. Let A(Γ′)=kerϕ. Let (3.2)T=(V(Γ)∖{z2,z′})∪{zz′z-1}⊂V(Γ′). We claim that ΓT′≅CO¯(Γ,S) via v↦v if v≠zz′z-1 and zz′z-1↦vS. Ifv and w are distinct from z and z′, then v and w are adjacent in CO¯(Γ,S) if and only if they are adjacent in Γ. On the other hand, a vertexw is adjacent to vS in CO¯(Γ,S) if and only if w is adjacent to z and z′, whereas a vertex w is adjacent to zz′z-1 in ΓT′ if and only if w belongs to the link of z and to the link of z′, that is, w is adjacent to z and z′. Therefore, ΓT′≅CO¯(Γ,S) and, hence, A(CO¯(Γ,S)) embeds in A(Γ). Now we prove the general statement by induction on|S|. Suppose that S={s1,…,sn} is anticonnected, and suppose that we have chosen the ordering so that S′={s1,…,sn-1} is also anticonnected. (This is always possible: choose sn so that it is not a cut point of Γ¯S.) Let Λ=CO¯(Γ,S′). Suppose that A(Λ) embeds in A(Γ). By the case of the two vertices above, A(CO¯(Λ,{vS′,sn})) embeds in A(Λ). (Note that vS′ and sn are not adjacent in Λ for, otherwise, sn would be adjacent to every si, i=1,…,n-1, which would contradict the hypothesis that S is anticonnected.) This proves the inductive step. The proof of the theorem is completed by applying Lemma 3.4. --- *Source: 102029-2011-08-23.xml*
# Erratum to “Prevalence and Associated Factors of Tuberculosis in Prisons Settings of East Gojjam Zone, Northwest Ethiopia” **Authors:** Emirie Hunegnaw; Moges Tiruneh; Mucheye Gizachew **Journal:** International Journal of Bacteriology (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1020349 --- ## Body --- *Source: 1020349-2018-08-09.xml*
# Mitochondrial Ferritin Deletion Exacerbates β-Amyloid-Induced Neurotoxicity in Mice

**Authors:** Peina Wang; Qiong Wu; Wenyue Wu; Haiyan Li; Yuetong Guo; Peng Yu; Guofen Gao; Zhenhua Shi; Baolu Zhao; Yan-Zhong Chang
**Journal:** Oxidative Medicine and Cellular Longevity (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1020357

---

## Abstract

Mitochondrial ferritin (FtMt) is a mitochondrial iron storage protein which protects mitochondria from iron-induced oxidative damage. Our previous studies indicate that FtMt attenuates β-amyloid- and 6-hydroxydopamine-induced neurotoxicity in SH-SY5Y cells. To explore the protective effects of FtMt on β-amyloid-induced memory impairment and neuronal apoptosis and the mechanisms involved, 10-month-old wild-type and Ftmt knockout mice were infused intracerebroventricularly (ICV) with Aβ25–35 to establish an Alzheimer’s disease model. Knockout of Ftmt significantly exacerbated Aβ25–35-induced learning and memory impairment. The Bcl-2/Bax ratio in mouse hippocampi was decreased and the levels of cleaved caspase-3 and PARP were increased. The number of neuronal cells undergoing apoptosis in the hippocampus was also increased in Ftmt knockout mice. In addition, the levels of L-ferritin and FPN1 in the hippocampus were raised, and the expression of TfR1 was decreased. Increased MDA levels were also detected in Ftmt knockout mice treated with Aβ25–35. In conclusion, this study demonstrated that the neurological impairment induced by Aβ25–35 was exacerbated in Ftmt knockout mice and that this may relate to increased levels of oxidative stress.

---

## Body

## 1. Introduction

Alzheimer’s disease (AD) is a multifaceted neurodegenerative disease of the elderly which is characterized by neuronal loss, neuroinflammation, and progressive memory and cognitive impairment [1]. Many pathogenic factors are involved in the neuropathology, including accumulation of β-amyloid (Aβ), oxidative stress, inflammation, and metal deposition [2]. Aβ is considered to be a major factor in the pathophysiological mechanisms underlying AD and has been shown to directly induce neuronal cell death [3]. Thus, Aβ is a useful tool for establishing AD models and investigating the mechanisms involved in AD pathogenesis [4]. Aβ25–35 is an 11-amino acid fragment located in the hydrophobic functional domain of the C-terminal region of Aβ1–42 [5]. Single administration of Aβ25–35 into the lateral ventricles of mice or rats impairs memory and induces neurodegeneration in the hippocampus [6–10]. We have previously shown that Aβ25–35, like Aβ1–42, exerted neurotoxic effects on SH-SY5Y cells [11]. These data are among numerous studies confirming that Aβ25–35 is a useful tool for investigating AD-related mechanisms in animal models [10].

Oxidative stress has been strongly implicated in the pathophysiology of AD [12]. Increased free radicals can damage proteins, lipids, and nucleic acids. The combination of mitochondrial dysfunction and Aβ accumulation generates reactive oxygen species (ROS) which, in the presence of metal ions such as Fe2+ and Cu2+, may contribute to oxidative damage in AD brains [13]. In addition, dysregulated brain iron homeostasis also accelerates AD progression.
Excessive iron in the brain can directly lead to the generation of free radicals that eventually cause neurodegenerative disease.

Mitochondrial ferritin (FtMt) is a recently identified ferritin that accumulates specifically in the mitochondria and possesses a high homology to H-ferritin [14]. The functions of FtMt include iron storage, regulating iron distribution between the cytosol and mitochondrial compartments, and preventing the production of ROS generated through the Fenton reaction [15, 16]. FtMt has been reported to be present in relatively low abundance in the liver and splenocytes, the major iron storage sites, while FtMt is found at higher levels in the testis, kidney, heart, and brain, tissues with high metabolic activity. Together with the knowledge that H-ferritin expression confers an antioxidant effect, the tissue distribution of FtMt is in line with a protective function of FtMt in mitochondria against iron-dependent oxidative damage [17]. Previous results indicated that FtMt was involved in the pathogenesis of neurodegenerative diseases, including AD, Parkinson’s disease, and Friedreich’s ataxia [18–20]. Increased expression of FtMt has been observed in the brains of AD patients and is associated with the antioxidant role of this protein [2]. Furthermore, our previous studies have shown that FtMt exerted a neuroprotective effect against 6-hydroxydopamine- (6-OHDA-) induced dopaminergic cell damage [20] and FtMt overexpression attenuated Aβ-induced neurotoxicity [11].

Although abnormal iron metabolism and oxidative stress have been reported in AD, little information is available about the role of FtMt in the pathogenesis of AD. In the present study, we investigated memory impairment and neuronal cell death in Aβ25–35-injected Ftmt knockout mice. In addition, we explored the molecular mechanisms responsible for neuronal damage in this model. Our data indicate that FtMt deficiency exacerbated Aβ25–35-induced neuronal cell damage by altering intracellular iron levels in a way that intensifies the oxidative stress caused by Aβ25–35.

## 2. Materials and Methods

### 2.1. Animals

C57BL/6 Ftmt-null mice were obtained from The Jackson Laboratory [21]. Mice were housed under conditions controlled for temperature (22°C) and humidity (40%), using a 12 hr/12 hr light/dark cycle [22]. Mice were fed a standard rodent diet and water ad libitum. Age-matched C57BL/6J wild-type male mice and Ftmt knockout male mice (10 months) were used in this study. All procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the Hebei Science and Technical Bureau in China.

### 2.2. Antibodies and Reagents

The following antibodies and reagents were used: β-actin (Alpha Diagnostic International, USA), TfR1 (Sigma-Aldrich, USA), FPN1, DMT1 (+IRE) and DMT1 (−IRE) (Alpha Diagnostic International, USA), L-ferritin (Abcam Inc., SF, USA), cleaved PARP, caspase-3, phospho-p38 (p-p38) and p38 (Cell Signaling Technology, USA), Bcl-2 and Bax (Santa Cruz Biotechnology, USA), Aβ25–35 peptide (Sigma-Aldrich, USA), and TUNEL in situ Cell Death Detection Kit (Roche Diagnostics GmbH, Mannheim, Germany).

### 2.3. Drug Preparation and Injection

Aβ25–35 was dissolved in sterile saline and aggregated by incubation at 37°C for 4 days before use [23]. The aggregated form of Aβ25–35 (7.5 nmol in 5 μL saline per injection) was injected into the right lateral ventricle as previously described [24].
Mice were randomly divided into four groups: wild-type with saline injection (WT + saline), wild-type with Aβ25–35 injection (WT + Aβ25–35), Ftmt knockout with saline injection (KO + saline), and Ftmt knockout with Aβ25–35 injection (KO + Aβ25–35). After injection, the mice were housed for 15 days under normal conditions and then trained and tested in a Morris water maze (MWM) as described below.

### 2.4. Morris Water Maze Test (MWM Test)

Spatial learning and memory deficits were assessed using the Morris water maze as described previously [25, 26], with minor modification. The experimental apparatus consisted of a circular tank (diameter = 120 cm, height = 50 cm) that was divided into four quadrants, filled with water, and maintained at 22 ± 2°C. At first, a visible platform test was performed, which confirmed that there were no significant differences in sensory, motor, or motivational activities among these four groups. Then, hidden platform tests were performed in succession. For the hidden platform test, a round platform (diameter = 9 cm) was placed at the midpoint of the fourth quadrant, 1 cm below the water surface. The test was conducted four times a day for four days, with four randomized starting points. The position of the escape platform was kept constant. Each trial lasted for 90 s or ended as soon as the mice reached the submerged platform.

### 2.5. Probe Test

To assess memory consolidation, a probe test was performed 24 h after the Morris water maze test [25]. For the probe test, the platform was removed and the mice were allowed to swim freely. The swimming pattern of every mouse was recorded for 90 s with a camera. Consolidated spatial memory was estimated by the time spent in the target quadrant area.

### 2.6. Assessment of Apoptosis

After the behavioral testing, the animals were perfused with 0.9% saline under anesthesia with 0.4% Nembutal. The brains were immediately collected and then postfixed with 4% paraformaldehyde in 0.1 M phosphate buffer. Serial coronal sections were cut at 15 μm on a freezing microtome (Leica CM1950, Leica Microsystems, Shanghai, China) and mounted onto slides covered with APES (Beijing Zhongshan Biotechnology, Beijing, China). The presence of apoptosis in the dentate gyrus of mouse hippocampi was assessed by the terminal deoxynucleotidyl transferase-mediated FITC-dUTP nick-end labeling method (TUNEL) following the manufacturer’s protocol. Nuclei were counterstained with DAPI. The number of TUNEL-DAPI-positive cells was counted as described previously [27]. The counting area was located in the same position in all groups. For each group, quantification was performed in sections from three different mice.

### 2.7. Western Blot Analysis

Protein expression was assessed by western blotting as previously described [28], with minor modifications. Briefly, hippocampi were homogenized and sonicated in RIPA buffer containing 1% NP40 and protease inhibitor cocktail tablets (Roche Diagnostics GmbH, Roche Applied Science, 68298 Mannheim, Germany). After centrifugation at 12,000 ×g for 20 min at 4°C, the supernatant was collected, and the whole cell lysate protein concentration was measured using the BCA Protein Quantification Kit (Yeasen Biotechnology, Shanghai, China). Protein from each sample (40 mg) was resolved by SDS-PAGE on 12% or 10% gels and then transferred to PVDF membranes.
The blots were blocked in 5% nonfat milk in TBS-T (20 mM Tris-HCl, pH 7.6, 137 mM NaCl, and 0.1% Tween-20) for 1.5 h at room temperature, followed by incubation with primary antibody overnight at 4°C. After washing three times with TBS-T, the blots were incubated with horseradish peroxidase- (HRP-) conjugated secondary antibody for 1.5 h at room temperature. Immunoreactive proteins were detected using the enhanced chemiluminescence (ECL) method and quantified by transmittance densitometry using volume integration with Multi Gauge ver. 3.1 software (FUJIFILM Corporation, Tokyo, Japan).

### 2.8. Measurement of MDA and SOD

Malondialdehyde (MDA), a marker of lipid peroxidation, was assessed using the thiobarbituric acid (TBA) method [29] with a kit from the Nanjing Jiancheng Bioengineering Institute (Nanjing, China) according to the manufacturer’s instructions. This method is based on the spectrophotometric measurement of the product of the reaction of TBA with MDA. MDA concentrations were then calculated from the absorbance of TBA reactive substances (TBARS) at 532 nm.

Superoxide dismutases (SODs), which catalyze the dismutation of superoxide into oxygen and hydrogen peroxide, were determined according to the xanthine oxidase method using a commercial kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer’s instructions. The xanthine-xanthine oxidase system produces superoxide ions, which react with 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyltetrazolium chloride to form a red formazan dye, which can be detected by its absorbance at 550 nm [29].

The levels of MDA and the total SOD (T-SOD) activity were determined in each group. The hippocampi of mice were homogenized in ice-cold saline. The homogenate was centrifuged at 3000 ×g at 4°C for 15 min, and the supernatant was used to determine T-SOD activity and MDA levels with a spectrophotometer (Synergy H4, BioTek, USA) at wavelengths of 550 nm and 532 nm, respectively. Each group contained five mice for the MDA and SOD tests, with each test repeated three times.

### 2.9. Statistical Analysis

All data are expressed as the mean ± standard deviation. One-way analysis of variance was used to estimate overall significance and was followed by Tukey’s post hoc test corrected for multiple comparisons. A probability level of p < 0.05 was considered significant. All tests were performed with SPSS 21.0 (IBM, Armonk, New York, United States).
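The pipeline of Section 2.9 (one-way ANOVA followed by Tukey’s post hoc test at p < 0.05) can be mirrored outside SPSS with standard Python tooling. The sketch below is our own illustration, assuming scipy and statsmodels are installed; the four group labels follow the study design, but the escape-latency values are invented placeholders, not data from this study.

```python
# Sketch of the analysis in Section 2.9: one-way ANOVA, then Tukey's post hoc
# test. All numbers are hypothetical placeholders; only the four group labels
# mirror the paper's design (escape latencies in seconds, one value per mouse).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "WT+saline": [32.1, 40.5, 28.9, 35.2, 30.7],
    "WT+Abeta":  [55.4, 61.2, 49.8, 58.0, 52.3],
    "KO+saline": [33.8, 37.6, 31.2, 36.9, 29.5],
    "KO+Abeta":  [70.2, 66.8, 74.5, 68.1, 72.9],
}

# Overall significance across the four groups.
f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Pairwise comparisons corrected for multiple testing (alpha = 0.05),
# as in Tukey's post hoc test.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```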
## 3. Results

### 3.1. Ftmt Ablation Exacerbates Aβ25–35-Induced Spatial Memory Deficits

The MWM test was conducted to assess learning and memory in 10-month-old wild-type mice (WT) and Ftmt knockout mice (KO). All mice were trained with four trials per day for 4 days. “Escape latency” is the time to reach the platform in the water maze and is used as a proxy for mouse memory.
Compared to wild-type mice, Ftmt knockout mice took approximately the same time to reach the platform after training (Figure 1(a)). After the water maze test, we performed a probe test using the metric “time spent in quadrant” to investigate the maintenance of memory. The time spent in the target quadrant was also similar in wild-type and Ftmt knockout mice (Figure 1(b)). After treatment with Aβ25–35, both the WT + Aβ25–35 group and the KO + Aβ25–35 group took a significantly longer time to reach the platform than the groups without Aβ25–35 injection (Figure 1(c)). Furthermore, Aβ25–35-infused Ftmt knockout mice had a significantly greater memory impairment (longer escape latency) than Aβ25–35-infused wild-type mice. In addition, the time spent in the target quadrant in the probe trial was less in both the WT + Aβ25–35 and the KO + Aβ25–35 groups than in the control groups. Importantly, the KO + Aβ25–35 group was in the target quadrant for even less time than the WT + Aβ25–35 group (Figure 1(d)). Overall, our results show that knockout of Ftmt in mice significantly exacerbates memory deficits in the Aβ25–35-induced AD model.

Figure 1: The effect of Ftmt ablation on Aβ25–35-induced spatial memory deficits. (a) Age-matched (10 months old) Ftmt knockout mice (n=20) and wild-type mice (n=23) were administered a 90 s trial four times a day to find the hidden platform. The analysis of the recorded data shows the changes in latency to find the hidden platform over the four consecutive days of training. (b) Ftmt knockout mice and wild-type mice were assessed in the probe test one day after the hidden platform test. The time spent in the target quadrant within the 90 s was recorded. (c) The effect of Aβ25–35 on escape latency. (Wild-type mice and Ftmt knockout mice were randomly divided into four groups and injected with Aβ25–35 or saline. Fifteen days later, the MWM test was conducted.) (d) The time spent in the target quadrant during the probe test after injecting Aβ25–35. The data are presented as the mean ± SD. ∗p<0.05 versus WT + saline group, n=11. #p<0.05, ##p<0.01 versus KO + saline group, n=10. $p<0.05 versus WT + Aβ25–35 group, n=10. KO + Aβ25–35, n=10.

### 3.2. Ftmt Ablation Enhances Aβ25–35-Induced Neuronal Cell Apoptosis

To evaluate the neuronal apoptosis affected by Ftmt gene ablation in the AD model, we used the TUNEL method to detect apoptosis after Aβ25–35 stimulation. Our results indicated that neuronal apoptosis in the hippocampi was increased after injecting Aβ25–35, especially in the dentate gyrus. The number of apoptotic cells in the WT + Aβ25–35 group was approximately four times greater than that observed in the WT + saline group, and there was also a noticeable increase in the KO group. The number of apoptotic cells in the KO + Aβ25–35 group was more than threefold that of the WT + Aβ25–35 group. These results confirmed that Ftmt knockout significantly enhanced neuronal apoptosis compared to the WT + Aβ25–35 group (Figures 2(a) and 2(b)), suggesting that FtMt is protective against Aβ25–35-induced apoptosis.

Figure 2: The effect of Ftmt ablation on Aβ25–35-induced neuronal cell apoptosis. Apoptotic cell death was assessed by DAPI and TUNEL staining, as described in the Materials and Methods section. (a) Representative photographs (original magnification 100x) of the dentate gyrus of the hippocampus of mouse brains. (b) The statistical analysis of relative apoptotic cell levels. Data are presented as the mean ± SD, n=3.
∗∗∗p<0.001 versus WT + saline group, ###p<0.001 versus KO + saline group, and $$$p<0.001 versus WT + Aβ25–35 group.

### 3.3. FtMt Deficiency in the AD Mouse Model Elevates Proapoptotic Signals

We found that the knockout of Ftmt remarkably decreased the ratio of Bcl-2/Bax (Figure 3(a)) and increased the level of cleaved caspase-3 (Figure 3(b)) after Aβ25–35 treatment in mice. In the apoptotic cascade, caspase-3 cleaves poly-ADP-ribose polymerase (PARP), leading to the accumulation of an 89 kDa PARP fragment [30]. Caspase-3-mediated PARP cleavage was enhanced in the KO + Aβ25–35 group compared to the WT + Aβ25–35 group (Figure 3(c)). These results indicate that the lack of FtMt can affect the Bcl-2/Bax ratio, leading to caspase-3 activation and a concomitant increase in PARP cleavage and, ultimately, apoptosis after Aβ25–35 injection.

Figure 3: The effect of Ftmt deficiency on the Bcl-2/Bax ratio, cleaved caspase-3, and p38 MAPK activation in mice. Western blot and subsequent densitometric analysis of (a) the ratio of Bcl-2/Bax, (b) the amount of cleaved caspase-3, (c) the amount of cleaved PARP, and (d) the ratio of p-p38/p38. Data are presented as the mean ± SD, n=3. ∗p<0.05, ∗∗p<0.01 versus WT + saline group, #p<0.05, ##p<0.01 versus KO + saline group, and $p<0.05 versus WT + Aβ25–35 group. MAPK: mitogen-activated protein kinase.

The activation of p38 (MAP kinase) by phosphorylation is implicated in oxidative stress-induced cell death [31]. A high p-p38/p38 ratio can simultaneously promote Bax expression and decrease Bcl-2 levels. Aβ25–35 significantly induced the activation of p38 in the hippocampus. In the KO + Aβ25–35 group, p-p38 levels were elevated (Figure 3(d)). Overall, our data demonstrate that the knockout of Ftmt in mice injected with Aβ25–35 increases p-p38 levels, which alters the amounts of proteins related to cell death, ultimately leading to increased neuronal cell death in the hippocampus.

### 3.4. Ftmt Knockout Increases MDA Levels in AD Mice without Altering SOD

To determine whether increased levels of oxidative stress are responsible for the increased apoptosis in the hippocampus in the AD mouse model, we examined the levels of MDA and the activity of SOD in each group. Free radicals attack polyunsaturated fatty acids, leading to structural damage to membranes and the generation of MDA, which is considered a marker of lipid peroxidation and thus a surrogate for oxidative damage [32]. The level of MDA was increased in AD mice compared with controls, but this increase was significantly greater in Ftmt knockout mice (Figure 4(a)). SOD is a free radical scavenging enzyme that converts superoxide into H2O2. The activity of total SOD was unchanged in the four groups (Figure 4(b)).

Figure 4: The effects of Ftmt ablation on the levels of MDA and total SOD. (a) MDA and (b) total SOD were assayed as described in the Materials and Methods section. Values are presented as the mean ± SD. ∗p<0.05 versus WT + saline group, #p<0.05 versus KO + saline group, and $p<0.05 versus WT + Aβ25–35 group.

### 3.5. The Effects of Ftmt Knockout on the Levels of L-Ferritin, TfR1, DMT1, and FPN1

Iron is an essential cofactor in many proteins, but excess free iron contributes to enhanced generation of ROS and oxidative stress [33]. When treated with Aβ25–35, the levels of L-ferritin were upregulated while those of TfR1 decreased significantly, compared to the control groups.
The highest level of L-ferritin expression was observed in the KO + Aβ25–35 group (Figures 5(a) and 5(b)). In addition, the content of L-ferritin was also increased in the KO + saline group when compared to the WT + saline group (Figure 5(a)). These observations indicated that Aβ25–35 stimulation may lead to alterations in iron homeostasis and that FtMt deficiency may accelerate this process. In addition, alterations in cellular iron distribution (as detected by Perls’ staining) (see Supplementary Figure 1 of the Supplementary Material available online at https://doi.org/10.1155/2017/1020357) support this hypothesis. However, there was no significant difference in the expression of DMT1 (+IRE) or DMT1 (−IRE) in any group (Figures 5(c) and 5(d)), while the expression of FPN1, the iron release protein, was increased in both groups treated with Aβ25–35 (Figure 5(d)). These results suggest that injection of Aβ25–35 into the brain disturbed iron homeostasis, possibly leading to oxidative damage, both of which were exacerbated by the lack of FtMt.

Figure 5: The effects of Ftmt deficiency on the levels of L-ferritin, TfR1, DMT1, and FPN1. Western blotting was used to assay iron metabolism-related proteins in the hippocampus of mice. (a) L-ferritin. (b) TfR1. (c) DMT1 (+IRE). (d) FPN1 and DMT1 (−IRE). The expression levels of these proteins were normalized to β-actin and expressed as the mean ± SD. ∗p<0.05, ∗∗p<0.01 versus WT + saline group, #p<0.05 versus KO + saline group, and $p<0.05 versus WT + Aβ25–35 group.
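For readers who want the arithmetic behind Figures 3–5 spelled out, the sketch below illustrates the normalization described in Section 2.7: each target band is divided by its lane’s β-actin signal and then expressed as fold change over the mean of the WT + saline group. This is our own illustration; all intensity values are invented placeholders, and only the normalization scheme follows the paper.

```python
# Sketch of the densitometric bookkeeping for the western blots: normalize each
# target band to its lane's beta-actin signal, then report fold change relative
# to the WT + saline mean. Intensities are invented placeholders.
import numpy as np

# group -> [(target band intensity, beta-actin intensity), ...] per lane
lanes = {
    "WT+saline": [(1020, 980), (1110, 1005), (990, 950)],
    "WT+Abeta":  [(1650, 1010), (1540, 960), (1720, 1040)],
    "KO+saline": [(1200, 990), (1150, 1020), (1230, 1000)],
    "KO+Abeta":  [(2250, 970), (2100, 1000), (2380, 1030)],
}

ratios = {g: np.array([t / a for t, a in v]) for g, v in lanes.items()}
baseline = ratios["WT+saline"].mean()

for group, r in ratios.items():
    fold = r / baseline
    print(f"{group}: {fold.mean():.2f} ± {fold.std(ddof=1):.2f} fold of WT+saline")
```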
## 4. Discussion

Iron is an essential trace element for human health. The metal participates in many biological processes.
Iron homeostasis is stringently regulated in vivo, as excess iron can catalyze the generation of oxidative damage [20]. Importantly, iron is considered a contributing neurotoxic factor in several neurodegenerative disorders, including AD [34]. Cortical iron elevation has been increasingly reported as a feature of AD [35] and may contribute to the oxidative damage observed in AD brains. In addition, abnormalities in iron-regulatory proteins occur in the brains of AD sufferers [19]. FtMt, a recently identified H-ferritin-like protein expressed only in mitochondria, is thought to function to protect mitochondria from iron-dependent oxidative damage in cells with high metabolic activity and oxygen consumption. Previous studies have already shown an increased FtMt expression in the hippocampus of AD patients [36]. In addition, the downregulation of FtMt causes severe neurodegeneration in the Purkinje cells of the cerebellum [20]. In this study, Ftmt gene knockout mice were used for the first time to study the effects of FtMt on the behavioral changes and mechanisms of Aβ25–35-induced neurotoxicity.

Previous results indicate that FtMt-deficient mice are healthy and do not show any evident phenotype under baseline feeding conditions [21]. Here we have also found that 10-month-old wild-type and Ftmt knockout mice show no behavioral or memory differences, as determined by MWM assays. Thus, FtMt deficiency has no obvious effects in the mouse brain under normal physiological conditions. To further elucidate the role of FtMt in AD pathogenesis, we first showed that intracerebroventricular infusion of Aβ25–35 exacerbates memory impairment in Ftmt knockout mice compared to the Aβ25–35-infused controls. The number of apoptotic cells in the hippocampus was also significantly increased in Aβ25–35-infused Ftmt knockout mice, which may account for their poorer performance in the MWM. Our data suggest that FtMt is not essential in mice under normal conditions. However, when challenged, such as with amyloid beta treatment, there appears to be a need for FtMt in a neuroprotective role.

Bcl-2 and Bax play important roles in oxidative stress-mediated neuronal apoptosis [37]. It has been reported that Bcl-2 protects neurons against oxidant stress and apoptosis in PD [38]. Bcl-2 also maintains mitochondrial integrity by blocking the release of apoptotic factors from mitochondria into the cytoplasm [20]. Bax can promote cell death by activating elements of the caspase pathway [39], especially caspase-3 [40]. As previously described, the activation of caspases, a family of cysteine proteases, is a central mechanism in the apoptotic process. Our results show that knockout of Ftmt decreases the ratio of Bcl-2/Bax and increases the activation of caspase-3 and PARP cleavage, which ultimately leads to cell death.

Accumulating evidence demonstrates that Aβ-induced neuronal injury triggers transcriptional and posttranscriptional processes that regulate neuronal fate, including the activation of the MAPK pathway [41]. In this signaling cascade, p38 MAPK is activated by phosphorylation, and a high p-p38/p38 ratio can simultaneously promote Bax expression and decrease Bcl-2 levels. Our results show a significant elevation of p-p38 levels and its downstream factor Bax, strongly suggesting that this apoptotic signal transduction pathway is enhanced in Ftmt knockout mice treated with Aβ25–35.

An increasing number of studies have suggested that oxidative stress is associated with AD neurodegeneration and caspase-mediated apoptosis [42].
We detected a marked increase in the level of MDA, an indicator of oxidative damage, in the hippocampi of Ftmt knockout mice, indicating that knockout of Ftmt aggravates oxidative stress. Previous studies indicate that, in certain antioxidant systems, there might be a time lag between the synthesis of protein and the expression of mRNA following neurotoxicity, and the activity of SOD is altered in the process of Aβ25–35-induced injury; Cu,Zn-SOD and Mn-SOD activity in the hippocampi of Aβ1–42-treated mice returned to near vehicle levels after 10 days [43]. Our data show that Ftmt ablation did not significantly affect the activity of total SOD, although this may be related to the time point at which SOD activity was measured.

Cellular iron homeostasis is maintained by a strict regulation of various proteins that are involved in iron uptake, export, storage, and utilization [44]. Studies from our group and others have demonstrated that aberrant iron homeostasis can generate ROS, which can contribute to AD pathogenesis [2, 11]. In the present study, we observed upregulated L-ferritin and FPN1 and a simultaneous decrease in TfR1. These changes are likely to be the result of inhibited iron-regulatory protein binding [45] brought about by an increase in the regulatory iron pool in the neuronal cells injected with Aβ. Consistent with this, the absence of FtMt may decrease the cells’ ability to sequester excess iron under stressed conditions and enhance the degree of changes in the measured proteins of iron metabolism. Our previous data also indicate that “uncommitted” iron levels, commonly referred to as the “labile iron pool” (LIP), significantly increased in SH-SY5Y cells treated with Aβ25–35; FtMt overexpression was able to reverse this change [11]. We propose that a larger LIP, resulting from a redistribution of iron from mitochondria to the cytosol, especially in the absence of FtMt, is responsible for the oxidative stress that mediates the damage to cell components in our AD model [46].

In summary, our research indicates that Aβ25–35 elevates the LIP and causes oxidative stress, and that both effects are exacerbated by the lack of FtMt. The excess iron donates electrons to generate ROS and lipid peroxidation. These changes initiate programmed cell death through the p38/MAPK pathway, ultimately causing neuronal apoptosis and, in turn, more severe memory impairment. The alteration in iron levels may also provide feedback regulation of the levels of TfR1 and FPN1 (Figure 6).

Figure 6: A schematic representation of the mechanism leading to neuronal cell apoptosis induced by Aβ25–35 in mice with a disrupted Ftmt gene. Aβ25–35 changes the levels (LIP) and distribution of intracellular iron, thus increasing oxidative stress. Without FtMt to sequester excess mitochondrial iron, lipid peroxidation and the level of the LIP are significantly increased. These changes may signal the cell to begin the process of programmed death through the p38 MAPK pathway, resulting in neuronal cell death, which is enhanced in Ftmt knockout mice, leading to worsened memory impairment.

## 5. Conclusion

The current study supports the hypothesis that FtMt is not essential under normal conditions. However, in cases of neuronal stress, such as Aβ25–35 accumulation, FtMt offers profound neuroprotection by regulating cellular iron content and distribution in a way that keeps oxidative stress in check, preventing the activation of apoptosis.

--- *Source: 1020357-2017-01-16.xml*
1020357-2017-01-16_1020357-2017-01-16.md
45,594
Mitochondrial Ferritin Deletion Exacerbatesβ-Amyloid-Induced Neurotoxicity in Mice
Peina Wang; Qiong Wu; Wenyue Wu; Haiyan Li; Yuetong Guo; Peng Yu; Guofen Gao; Zhenhua Shi; Baolu Zhao; Yan-Zhong Chang
Oxidative Medicine and Cellular Longevity (2017)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2017/1020357
1020357-2017-01-16.xml
--- ## Abstract Mitochondrial ferritin (FtMt) is a mitochondrial iron storage protein which protects mitochondria from iron-induced oxidative damage. Our previous studies indicate that FtMt attenuatesβ-amyloid- and 6-hydroxydopamine-induced neurotoxicity in SH-SY5Y cells. To explore the protective effects of FtMt on β-amyloid-induced memory impairment and neuronal apoptosis and the mechanisms involved, 10-month-old wild-type andFtmt knockout mice were infused intracerebroventricularly (ICV) with Aβ25–35 to establish an Alzheimer’s disease model. Knockout ofFtmt significantly exacerbated Aβ25–35-induced learning and memory impairment. The Bcl-2/Bax ratio in mouse hippocampi was decreased and the levels of cleaved caspase-3 and PARP were increased. The number of neuronal cells undergoing apoptosis in the hippocampus was also increased inFtmt knockout mice. In addition, the levels of L-ferritin and FPN1 in the hippocampus were raised, and the expression of TfR1 was decreased. Increased MDA levels were also detected inFtmt knockout mice treated with Aβ25–35. In conclusion, this study demonstrated that the neurological impairment induced by Aβ25–35 was exacerbated inFtmt knockout mice and that this may relate to increased levels of oxidative stress. --- ## Body ## 1. Introduction Alzheimer’s disease (AD) is a multifaceted neurodegenerative disease of the elderly which is characterized by neuronal loss, neuroinflammation, and progressive memory and cognitive impairment [1]. Many pathogenic factors are involved in the neuropathology, including accumulation of β-amyloid (Aβ), oxidative stress, inflammation, and metal deposition [2]. Aβ is considered to be a major factor in the pathophysiological mechanisms underlying AD and has been shown to directly induce neuronal cell death [3]. Thus, Aβ is a useful tool for establishing AD models and investigating the mechanisms involved in AD pathogenesis [4]. Aβ25–35 is an 11-amino acid fragment located in the hydrophobic functional domain of the C-terminal region of Aβ1–42 [5]. Single administration of Aβ25–35 into the lateral ventricles of mice or rats impairs memory and induces neurodegeneration in the hippocampus [6–10]. We have previously shown that Aβ25–35, like Aβ1–42, exerted neurotoxic effects on SH-SY5Y cells [11]. These data are among numerous studies confirming that Aβ25–35 is a useful tool for investigating AD-related mechanisms in animal models [10].Oxidative stress has been strongly implicated in the pathophysiology of AD [12]. Increased free radicals can damage proteins, lipids, and nucleic acids. The combination of mitochondrial dysfunction and Aβ accumulation generates reactive oxygen species (ROS) which, in the presence of metal ions such as Fe2+ and Cu2+, may contribute to oxidative damage in AD brains [13]. In addition, dysregulated brain iron homeostasis also accelerates AD progression. Excessive iron in the brain can directly lead to the generation of free radicals that eventually cause neurodegenerative disease.Mitochondrial ferritin (FtMt) is a recently identified ferritin that accumulates specifically in the mitochondria and possesses a high homology to H-ferritin [14]. The functions of FtMt include iron storage, regulating iron distribution between the cytosol and mitochondrial compartments, and preventing the production of ROS generated through the Fenton reaction [15, 16]. 
FtMt has been reported to be present in relatively low abundance in the liver and splenocytes, the major iron storage sites, while FtMt is found at higher levels in the testis, kidney, heart and brain, and tissues with high metabolic activity. Together with the knowledge that H-ferritin expression confers an antioxidant effect, the tissue distribution of FtMt is in line with a protective function of FtMt in mitochondria against iron-dependent oxidative damage [17]. Previous results indicated that FtMt was involved in the pathogenesis of neurodegenerative diseases, including AD, Parkinson’s disease, and Friedreich’s ataxia [18–20]. Increased expression of FtMt has been observed in the brains of AD patients and is associated with the antioxidant role of this protein [2]. Furthermore, our previous studies have shown that FtMt exerted a neuroprotective effect against 6-hydroxydopamine- (6-OHDA-) induced dopaminergic cell damage [20] and FtMt overexpression attenuated Aβ-induced neurotoxicity [11].Although abnormal iron metabolism and oxidative stress have been reported in AD, little information is available about the role of FtMt in the pathogenesis of AD. In the present study, we investigated memory impairment and neuronal cell death in Aβ25–35-injectedFtmt knockout mice. In addition, we explored the molecular mechanisms responsible for neuronal damage in this model. Our data indicate that FtMt deficiency exacerbated Aβ25–35-induced neuronal cell damage by altering intracellular iron levels in a way that intensifies the oxidative stress caused by Aβ25–35. ## 2. Materials and Methods ### 2.1. Animals C57BL/6Ftmt-null mice were obtained from The Jackson Laboratory [21]. Mice were housed under conditions controlled for temperature (22°C) and humidity (40%), using a 12 hr/12 hr light/dark cycle [22]. Mice were fed a standard rodent diet and water ad libitum. Age-matched C57BL/6J wild-type male mice andFtmt knockout male mice (10 months) were used in this study. All procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the Hebei Science and Technical Bureau in China. ### 2.2. Antibodies and Reagents The following antibodies and reagents were used:β-actin (Alpha Diagnostic International, USA), TfR1 (Sigma-Aldrich, USA), FPN1, DMT1 (+IRE) and DMT1 (−IRE) (Alpha Diagnostic International, USA), L-ferritin (Abcam Inc., SF, USA), cleaved PARP, caspase-3, phospho-p38 (p-p38) and p38 (Cell Signaling Technology, USA), Bcl-2 and Bax (Santa Cruz Biotechnology, USA), Aβ25–35 peptide (Sigma-Aldrich, USA), and TUNEL in situ Cell Death Detection Kit (Roche Diagnostics GmbH, Mannheim, Germany). ### 2.3. Drug Preparation and Injection Aβ25–35 was dissolved in sterile saline and aggregated by incubation at 37°C for 4 days before use [23]. The aggregated form of Aβ25–35 (7.5 nmol in 5 μL saline per injection) was injected into the right lateral ventricle as previously described [24]. Mice were randomly divided into four groups: wild-type with saline injection (WT + saline), wild-type with Aβ25–35 injection (WT + Aβ25–35),Ftmt knockout with saline injection (KO + saline), andFtmt knockout with Aβ25–35 injection (KO + Aβ25–35). After injection, the mice were housed for 15 days under normal conditions and then trained and tested in a Morris water maze (MWM) as described below. ### 2.4. 
Morris Water Maze Test (MWM Test) Spatial learning and memory deficits were assessed using the Morris water maze as described previously [25, 26], with minor modification. The experimental apparatus consisted of a circular tank (diameter = 120 cm, height = 50 cm) that was divided into four quadrants, filled with water, and maintained at 22±2°C. At first, a visible platform test was performed, which confirmed that there were no significant differences in sensory, motor, or motivational activities among these four groups. Then, hidden platform tests were performed in succession. For the hidden platform test, a round platform (diameter = 9 cm) was placed at the midpoint of the fourth quadrant, 1 cm below the water surface. The test was conducted four times a day for four days, with four randomized starting points. The position of the escape platform was kept constant. Each trial lasted for 90 s or ended as soon as the mice reached the submerged platform. ### 2.5. Probe Test To assess memory consolidation, a probe test was performed 24 h after the Morris water maze test [25]. For the probe test, the platform was removed and the mice were allowed to swim freely. The swimming pattern of every mouse was recorded for 90 s with a camera. Consolidated spatial memory was estimated by the time spent in the target quadrant area. ### 2.6. Assessment of Apoptosis After the behavioral testing, the animals were perfused with 0.9% saline under anesthesia with 0.4% Nembutal. The brains were immediately collected and then postfixed with 4% paraformaldehyde in 0.1 M phosphate buffer. Serial coronal sections were cut at 15μm on a freezing microtome (Leica CM1950, Leica Microsystems, Shanghai, China) and mounted onto slides covered with APES (Beijing Zhongshan Biotechnology, Beijing, China). The presence of apoptosis in the dentate gyrus of mouse hippocampi was assessed by the terminal deoxynucleotidyl transferase-mediated FITC-dUTP nick-end labeling method (TUNEL) following the manufacturer’s protocol. Nuclei were counterstained with DAPI. The number of TUNEL-DAPI-positive cells was counted as described previously [27]. The counting area was located in the same position in all groups. For each group, quantification was performed in sections from three different mice. ### 2.7. Western Blot Analysis Protein expression was assessed by western blotting as previously described [28], with minor modifications. Briefly, hippocampi were homogenized and sonicated in RIPA buffer containing 1% NP40 and protease inhibitor cocktail tablets (Roche Diagnostics GmbH, Roche Applied Science, 68298 Mannheim, Germany). After centrifugation at 12,000 ×g for 20 min at 4°C, the supernatant was collected, and the whole cell lysate protein concentration was measured using the BCA Protein Quantification Kit (Yeasen Biotechnology, Shanghai, China). Protein from each sample (40 mg) was resolved by SDS-PAGE on 12% or 10% gels and then transferred to PVDF membranes. The blots were blocked in 5% nonfat milk containing 20 mM Tris-HCl (pH 7.6, 137 mM NaCl, and 0.1% Tween-20; TBS-T) for 1.5 h at room temperature, followed by incubation with primary antibody overnight at 4°C. After washing three times with TBS-T, the blots were incubated with horseradish peroxide (HRP) conjugated secondary antibody for 1.5 h at room temperature. Immunoreactive proteins were detected using the enhanced chemiluminescence (ECL) method and quantified by transmittance densitometry using volume integration with Multi Gauge ver. 
3.1 software (FUJIFILM Corporation, Tokyo, Japan). ### 2.8. Measurement of MDA and SOD Malondialdehyde (MDA), a marker of lipid peroxidation, was assessed using the thiobarbituric acid (TBA) method [29] using a kit from the Nanjing Jiancheng Bioengineering Institute (Nanjing, China) according to the manufacturer’s instructions. This method is based on the spectrophotometric measurement of the product of the reaction of TBA with MDA. MDA concentrations were then calculated by the absorbance of TBA reactive substances (TBARS) at 532 nm.Superoxide dismutases (SODs), which catalyze the dismutation of superoxide into oxygen and hydrogen peroxide, were determined according to xanthine oxidase method using a commercial kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer’s instructions. The xanthine-xanthine oxidase system produces superoxide ions, which can react with 2-(4-iodophenyl)-3-(4-nitrophenol-5-phenlyltetrazolium chloride) to form a red formazan dye, which can be detected by its absorbance at 550 nm [29].The levels of MDA and the total SOD (T-SOD) activity were determined in each group. The hippocampi of mice were homogenized in ice-cold saline. The homogenate was centrifuged at 3000 ×g at 4°C for 15 min, and the supernatant was used to determine T-SOD activity and MDA levels with a spectrophotometer (Synergy H4, BioTek, USA) at wavelengths of 550 nm and 532 nm, respectively. Each group contained five mice for the MDA and SOD tests, with each test repeated three times. ### 2.9. Statistical Analysis All data are expressed as the mean ± standard deviation. One-way analysis of variance was used to estimate overall significance and was followed by Tukey’s post hoc test corrected for multiple comparisons. A probability level of 95% (p<0.05) was considered significant. All the tests were performed with SPSS 21.0 (IBM SPSS21.0, Armonk, New York, United States). ## 2.1. Animals C57BL/6Ftmt-null mice were obtained from The Jackson Laboratory [21]. Mice were housed under conditions controlled for temperature (22°C) and humidity (40%), using a 12 hr/12 hr light/dark cycle [22]. Mice were fed a standard rodent diet and water ad libitum. Age-matched C57BL/6J wild-type male mice andFtmt knockout male mice (10 months) were used in this study. All procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the Hebei Science and Technical Bureau in China. ## 2.2. Antibodies and Reagents The following antibodies and reagents were used:β-actin (Alpha Diagnostic International, USA), TfR1 (Sigma-Aldrich, USA), FPN1, DMT1 (+IRE) and DMT1 (−IRE) (Alpha Diagnostic International, USA), L-ferritin (Abcam Inc., SF, USA), cleaved PARP, caspase-3, phospho-p38 (p-p38) and p38 (Cell Signaling Technology, USA), Bcl-2 and Bax (Santa Cruz Biotechnology, USA), Aβ25–35 peptide (Sigma-Aldrich, USA), and TUNEL in situ Cell Death Detection Kit (Roche Diagnostics GmbH, Mannheim, Germany). ## 2.3. Drug Preparation and Injection Aβ25–35 was dissolved in sterile saline and aggregated by incubation at 37°C for 4 days before use [23]. The aggregated form of Aβ25–35 (7.5 nmol in 5 μL saline per injection) was injected into the right lateral ventricle as previously described [24]. 
## 3. Results ### 3.1. Ftmt Ablation Exacerbates Aβ25–35-Induced Spatial Memory Deficits The MWM test was conducted to assess learning and memory in 10-month-old wild-type mice (WT) and Ftmt knockout mice (KO). All mice were trained with four trials per day for 4 days. “Escape latency” is the time to reach the platform in the water maze and is used as a proxy for spatial memory. Compared to wild-type mice, Ftmt knockout mice took approximately the same time to reach the platform after training (Figure 1(a)). After the water maze test, we performed a probe test using the metric “time spent in quadrant” to investigate the maintenance of memory. The time spent in the target quadrant was also similar in wild-type and Ftmt knockout mice (Figure 1(b)). After treatment with Aβ25–35, both the WT + Aβ25–35 group and the KO + Aβ25–35 group took a significantly longer time to reach the platform than the groups without Aβ25–35 injection (Figure 1(c)).
Furthermore, Aβ25–35-infused Ftmt knockout mice had significantly greater memory impairment (longer escape latency) than Aβ25–35-infused wild-type mice. In addition, the time spent in the target quadrant in the probe trial was lower in both the WT + Aβ25–35 and the KO + Aβ25–35 groups than in the control groups. Importantly, the KO + Aβ25–35 group was in the target quadrant for even less time than the WT + Aβ25–35 group (Figure 1(d)). Overall, our results show that knockout of Ftmt in mice significantly exacerbates memory deficits in the Aβ25–35-induced AD model. Figure 1 The effect of Ftmt ablation on Aβ25–35-induced spatial memory deficits. (a) Age-matched (10 months old) Ftmt knockout mice (n = 20) and wild-type mice (n = 23) were administered a 90 s trial four times a day to find the hidden platform. The analysis of the recorded data shows the changes in latency to find the hidden platform over the four consecutive days of training. (b) Ftmt knockout mice and wild-type mice were assessed in the probe test one day after the hidden platform test. The time spent in the target quadrant within the 90 s was recorded. (c) The effect of Aβ25–35 on escape latency. (Wild-type mice and Ftmt knockout mice were randomly divided into four groups and injected with Aβ25–35 or saline. Fifteen days later, the MWM test was conducted.) (d) The time spent in the target quadrant during the probe test after injecting Aβ25–35. Data are presented as the mean ± SD. ∗p < 0.05 versus the WT + saline group (n = 11); #p < 0.05 and ##p < 0.01 versus the KO + saline group (n = 10); $p < 0.05 versus the WT + Aβ25–35 group (n = 10); KO + Aβ25–35, n = 10. ### 3.2. Ftmt Ablation Enhances Aβ25–35-Induced Neuronal Cell Apoptosis To evaluate the effect of Ftmt gene ablation on neuronal apoptosis in the AD model, we used the TUNEL method to detect apoptosis after Aβ25–35 stimulation. Our results indicated that neuronal apoptosis in the hippocampi was increased after injecting Aβ25–35, especially in the dentate gyrus. The number of apoptotic cells in the WT + Aβ25–35 group was approximately four times greater than that observed in the WT + saline group, and there was also a noticeable increase in the KO group. The number of apoptotic cells in the KO + Aβ25–35 group was more than threefold that of the WT + Aβ25–35 group. These results confirmed that Ftmt knockout significantly enhanced neuronal apoptosis compared to the WT + Aβ25–35 group (Figures 2(a) and 2(b)), suggesting that FtMt is protective against Aβ25–35-induced apoptosis. Figure 2 The effect of Ftmt ablation on Aβ25–35-induced neuronal cell apoptosis. Apoptotic cell death was assessed by DAPI and TUNEL staining, as described in the Materials and Methods section. (a) Representative photographs (original magnification 100x) of the dentate gyrus of the hippocampus of mouse brains. (b) Statistical analysis of relative apoptotic cell levels. Data are presented as the mean ± SD, n = 3. ∗∗∗p < 0.001 versus the WT + saline group; ###p < 0.001 versus the KO + saline group; $$$p < 0.001 versus the WT + Aβ25–35 group. ### 3.3. FtMt Deficiency in the AD Mouse Model Elevates Proapoptotic Signals We found that the knockout of Ftmt markedly decreased the ratio of Bcl-2/Bax (Figure 3(a)) and increased the amount of cleaved (activated) caspase-3 (Figure 3(b)) after Aβ25–35 treatment in mice. In the apoptotic cascade, caspase-3 cleaves poly-ADP-ribose polymerase (PARP), leading to the accumulation of an 89 kDa PARP fragment [30].
Caspase-3-mediated PARP cleavage was enhanced in the KO + Aβ25–35 group compared to the WT + Aβ25–35 group (Figure 3(c)). These results indicate that the lack of FtMt can affect the Bcl-2/Bax ratio, leading to caspase-3 activation and a concomitant increase in PARP cleavage and, ultimately, apoptosis after Aβ25–35 injection. Figure 3 The effect of Ftmt deficiency on the Bcl-2/Bax ratio, cleaved caspase-3, and p38 MAPK activation in mice. Western blot and subsequent densitometric analysis of (a) the ratio of Bcl-2/Bax, (b) the amount of cleaved caspase-3, (c) the amount of cleaved PARP, and (d) the ratio of p-p38/p38. Data are presented as the mean ± SD, n = 3. ∗p < 0.05 and ∗∗p < 0.01 versus the WT + saline group; #p < 0.05 and ##p < 0.01 versus the KO + saline group; $p < 0.05 versus the WT + Aβ25–35 group. MAPK: mitogen-activated protein kinase. The activation of p38 MAPK by phosphorylation is implicated in oxidative stress-induced cell death [31]. A high p-p38/p38 ratio can simultaneously promote Bax expression and decrease Bcl-2 levels. Aβ25–35 significantly induced the activation of p38 in the hippocampus. In the KO + Aβ25–35 group, p-p38 levels were elevated (Figure 3(d)). Overall, our data demonstrate that the knockout of Ftmt in mice injected with Aβ25–35 increases p-p38 levels, which alters the amounts of cell death-related proteins and ultimately leads to increased neuronal cell death in the hippocampus. ### 3.4. Ftmt Knockout Increases MDA Levels in AD Mice without Altering SOD To determine whether increased levels of oxidative stress are responsible for the increased apoptosis in the hippocampus in the AD mouse model, we examined the levels of MDA and the activity of SOD in each group. Free radicals attack polyunsaturated fatty acids, leading to structural damage to membranes and the generation of MDA, which is considered a marker of lipid peroxidation and thus a surrogate for oxidative damage [32]. The level of MDA was increased in AD mice compared with controls, but this increase was significantly greater in Ftmt knockout mice (Figure 4(a)). SOD is a free radical-scavenging enzyme that converts superoxide into H2O2. The content of total SOD was unchanged in the four groups (Figure 4(b)). Figure 4 The effects of Ftmt ablation on the levels of MDA and total SOD. (a) MDA and (b) total SOD were assayed as described in the Materials and Methods section. Values are presented as the mean ± SD. ∗p < 0.05 versus the WT + saline group; #p < 0.05 versus the KO + saline group; $p < 0.05 versus the WT + Aβ25–35 group. ### 3.5. The Effects of Ftmt Knockout on the Levels of L-Ferritin, TfR1, DMT1, and FPN1 Iron is an essential cofactor in many proteins, but excess free iron contributes to enhanced generation of ROS and oxidative stress [33]. When treated with Aβ25–35, the levels of L-ferritin were upregulated while those of TfR1 decreased significantly, compared to the control groups. The highest L-ferritin expression was observed in the KO + Aβ25–35 group (Figures 5(a) and 5(b)). In addition, the content of L-ferritin was also increased in the KO + saline group when compared to the WT + saline group (Figure 5(a)). These observations indicated that Aβ25–35 stimulation may lead to alterations in iron homeostasis and that FtMt deficiency may accelerate this process. In addition, alterations in cellular iron distribution (as detected by Perls' staining) (see Supplementary Figure 1 of the Supplementary Material available online at https://doi.org/10.1155/2017/1020357) support this hypothesis.
However, there was no significant difference in the expression of DMT1 (+IRE) or DMT1 (−IRE) in any group (Figures 5(c) and 5(d)), while the expression of FPN1, the iron release protein, was increased in both groups treated with Aβ25–35 (Figure 5(d)). These results suggest that injection of Aβ25–35 into the brain disturbed iron homeostasis, possibly leading to oxidative damage, both of which were exacerbated by the lack of FtMt. Figure 5 The effects of Ftmt deficiency on the levels of L-ferritin, TfR1, DMT1, and FPN1. Western blotting was used to assay iron metabolism-related proteins in the hippocampus of mice. (a) L-ferritin. (b) TfR1. (c) DMT1 (+IRE). (d) FPN1 and DMT1 (−IRE). The expression levels of these proteins were normalized to β-actin and expressed as the mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 versus the WT + saline group; #p < 0.05 versus the KO + saline group; $p < 0.05 versus the WT + Aβ25–35 group.
## 4. Discussion Iron is an essential trace element for human health, and it participates in many biological processes. Iron homeostasis is stringently regulated in vivo, as excess iron can catalyze the generation of oxidative damage [20]. Importantly, iron is considered a contributing neurotoxic factor in several neurodegenerative disorders, including AD [34]. Cortical iron elevation has been increasingly reported as a feature of AD [35] and may contribute to the oxidative damage observed in AD brains.
In addition, abnormalities in iron-regulatory proteins occur in the brains of AD patients [19]. FtMt, a recently identified H-ferritin-like protein expressed only in mitochondria, is thought to protect mitochondria from iron-dependent oxidative damage in cells with high metabolic activity and oxygen consumption. Previous studies have shown increased FtMt expression in the hippocampus of AD patients [36]. In addition, the downregulation of FtMt causes severe neurodegeneration in the Purkinje cells of the cerebellum [20]. In this study, Ftmt knockout mice were used for the first time to study the effects of FtMt on the behavioral changes and mechanisms of Aβ25–35-induced neurotoxicity. Previous results indicate that FtMt-deficient mice are healthy and do not show any evident phenotype under baseline feeding conditions [21]. Here, we also found that 10-month-old wild-type and Ftmt knockout mice show no behavioral or memory differences, as determined by MWM assays. Thus, FtMt deficiency has no obvious effects in the mouse brain under normal physiological conditions. To further elucidate the role of FtMt in AD pathogenesis, we first showed that intracerebroventricular infusion of Aβ25–35 exacerbates memory impairment in Ftmt knockout mice compared to the Aβ25–35-infused controls. The number of apoptotic cells in the hippocampus was also significantly increased in Aβ25–35-infused Ftmt knockout mice, which may account for their poorer performance in the MWM. Our data suggest that FtMt is not essential in mice under normal conditions; however, when challenged, for example with amyloid-β treatment, FtMt appears to be needed in a neuroprotective role. Bcl-2 and Bax play important roles in oxidative stress-mediated neuronal apoptosis [37]. It has been reported that Bcl-2 protects neurons against oxidative stress and apoptosis in PD [38]. Bcl-2 also maintains mitochondrial integrity by blocking the release of apoptotic factors from mitochondria into the cytoplasm [20]. Bax can promote cell death by activating elements of the caspase pathway [39], especially caspase-3 [40]. As previously described, the activation of caspases, a family of cysteine proteases, is a central mechanism in the apoptotic process. Our results show that knockout of Ftmt decreases the ratio of Bcl-2/Bax and increases the activation of caspase-3 and PARP cleavage, which ultimately leads to cell death. Accumulating evidence demonstrates that Aβ-induced neuronal injury triggers transcriptional and posttranscriptional processes that regulate neuronal fate, including activation of the MAPK pathway [41]. In this signaling cascade, p38 MAPK is activated by phosphorylation, and a high p-p38/p38 ratio can simultaneously promote Bax expression and decrease Bcl-2 levels. Our results show a significant elevation of p-p38 and its downstream factor Bax, strongly suggesting that this apoptotic signal transduction pathway is enhanced in Ftmt knockout mice treated with Aβ25–35. An increasing number of studies have suggested that oxidative stress is associated with AD neurodegeneration and caspase-mediated apoptosis [42]. We detected a marked increase in the level of MDA, an indicator of oxidative damage, in the hippocampi of Ftmt knockout mice, indicating that knockout of Ftmt aggravates oxidative stress.
Previous studies indicate that, in certain antioxidant systems, there may be a time lag between protein synthesis and mRNA expression following neurotoxicity, and that SOD activity changes over the course of Aβ25–35-induced injury; for example, Cu,Zn-SOD and Mn-SOD activities in the hippocampi of Aβ1–42-treated mice returned to near-vehicle levels after 10 days [43]. Our data show that Ftmt ablation did not significantly affect the activity of total SOD, although this may be related to the time point at which SOD activity was measured. Cellular iron homeostasis is maintained by strict regulation of the various proteins that are involved in iron uptake, export, storage, and utilization [44]. Studies from our group and others have demonstrated that aberrant iron homeostasis can generate ROS, which can contribute to AD pathogenesis [2, 11]. In the present study, we observed upregulated L-ferritin and FPN1 and a simultaneous decrease in TfR1. These changes are likely the result of inhibited iron-regulatory protein binding [45] brought about by an increase in the regulatory iron pool in the neuronal cells injected with Aβ. Consistent with this, the absence of FtMt may decrease the cells' ability to sequester excess iron under stressed conditions and enhance the changes in the measured iron metabolism proteins. Our previous data also indicate that the level of “uncommitted” iron, commonly referred to as the “labile iron pool” (LIP), is significantly increased in SH-SY5Y cells treated with Aβ25–35; FtMt overexpression was able to reverse this change [11]. We propose that a larger LIP, resulting from a redistribution of iron from mitochondria to the cytosol, especially in the absence of FtMt, is responsible for the oxidative stress that mediates the damage of cell components in our AD model [46]. In summary, our research indicates that Aβ25–35 elevates the LIP and causes oxidative stress, which is exacerbated by the lack of FtMt. The excess iron donates electrons to generate ROS and lipid peroxidation. These changes initiate programmed cell death through the p38 MAPK pathway, ultimately causing neuronal apoptosis and, in turn, more severe memory impairment. The alteration in iron may also provide feedback regulation of TfR1 and FPN1 levels (Figure 6). Figure 6 A schematic representation of the mechanism leading to neuronal cell apoptosis induced by Aβ25–35 in mice with a disrupted Ftmt gene. Aβ25–35 changes the level (LIP) and distribution of intracellular iron, thus increasing oxidative stress. Without FtMt to sequester excess mitochondrial iron, lipid peroxidation and the level of the LIP are significantly increased. These changes may signal the cell to begin the process of programmed death through the p38 MAPK pathway, resulting in neuronal cell death, which is enhanced in Ftmt knockout mice, leading to worsened memory impairments. ## 5. Conclusion The current study supports the hypothesis that FtMt is not essential under normal conditions but that, in cases of neuronal stress such as Aβ25–35 accumulation, FtMt offers profound neuroprotection by regulating cellular iron content and distribution in a way that keeps oxidative stress in check, preventing the activation of apoptosis. ---
# Novel Image Analysis Approach Quantifies Morphological Characteristics of 3D Breast Culture Acini with Varying Metastatic Potentials **Authors:** Lindsey McKeen Polizzotti; Basak Oztan; Chris S. Bjornsson; Katherine R. Shubert; Bülent Yener; George E. Plopper **Journal:** Journal of Biomedicine and Biotechnology (2012) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2012/102036 --- ## Abstract Prognosis of breast cancer is primarily predicted by the histological grading of the tumor, where pathologists manually evaluate microscopic characteristics of the tissue. This labor-intensive process suffers from intra- and interobserver variations; thus, computer-aided systems that accomplish this assessment automatically are in high demand. We address this by developing an image analysis framework for the automated grading of breast cancer in in vitro three-dimensional breast epithelial acini through the characterization of acinar structure morphology. A set of statistically significant features for the characterization of acini morphology is exploited for the automated grading of six (MCF10 series) cell line cultures mimicking three grades of breast cancer along the metastatic cascade. In addition to capturing both expected and visually differentiable changes, we quantify subtle differences that pose a challenge to assess through microscopic inspection. Our method achieves 89.0% accuracy in grading the acinar structures as nonmalignant, noninvasive carcinoma, and invasive carcinoma grades. We further demonstrate that the proposed methodology can be successfully applied for the grading of in vivo tissue samples, albeit with additional constraints. These results indicate that the proposed features can be used to describe the relationship between the acini morphology and cellular function along the metastatic cascade. --- ## Body ## 1. Introduction Breast cancer is the second most common cancer in women and is also the second leading cause of cancer-related death in women [1]. In its most common form, the tumor arises from the epithelial cells in the breast tissue. Histological grading systems are commonly used to predict the prognosis of tumors. The most frequently used tumor grading system for breast cancer is the modified Scarff-Bloom-Richardson method [2], where pathologists analyze the rate of cell division, the percentage of tumor-forming ducts, and the uniformity of cell nuclei to determine the cancer grade in H&E-stained biopsies. While precancerous (or lower-grade) tumors tend to grow slowly and are less likely to spread, invasive (or higher-grade) tumors typically gain the ability to proliferate and spread rapidly. Subjectivity and variability of the results affect the accuracy of prognosis and subsequent patient treatment. A recent study indicates that the rate of misdiagnosis of breast cancer varies widely between clinicians and is nearly 40% in some cases [3]. Thus, there is an unmet need for robust methods that reduce the variability and subjectivity in the grading of breast tumors and lesions.
The development of quantitative tools for image analysis and classification is a rapidly expanding field with great potential for improving diagnostic accuracy [4, 5]. In this paper, a method for automated grading of breast cancer in three-dimensional (3D) epithelial cell cultures is presented. In vitro, epithelial breast cells cultured in laminin-rich extracellular matrix form acinar-like structures that both morphologically and structurally resemble the in vivo acini of breast glands and lobules [6]. Therefore, these culture systems constitute suitable and controllable environments for breast cancer research [7–9]. Figure 1 shows a 3D view of a typical nonmalignant breast culture that comprises several acini surrounded by extracellular matrix (ECM) proteins. Figure 2 shows an enlarged view of a cross section from a typical nonmalignant acinus. Acinar structures in healthy/nonmalignant cultures include polarized luminal epithelial cells (a polarized cell has specialized proteins localized to specific cell membrane domains, such as the basal, lateral, or apical side) and a hollow lumen. The lateral membranes of neighboring cells are in close proximity due to cellular junctions and adhesion proteins. The basal side of the cells contacts the surrounding ECM proteins, while the apical sides of the cells face the hollow lumen. Malignant cancers result in a loss of cell polarity that induces changes in the morphology of acinar structures. In this paper, we investigate morphological characteristics of mammary acinar structures in nonmalignant, noninvasive carcinoma, and invasive carcinoma cancer grades in the six MCF10 series cell lines grown in 3D cultures. We propose novel features to characterize these changes along the metastatic cascade and exploit them in a supervised machine learning setting for the automated grading of breast cancer. The proposed features capture not only factors similar to those of the Scarff-Bloom-Richardson grading system but also additional subvisual changes observed in breast cancer progression, in a quantitative manner that reduces variability. As shown by the grading accuracies, the proposed features efficiently capture the differences caused by the metastatic progression of the cancer. Figure 1 Three-dimensional view of a typical nonmalignant cell line culture. Acinar structures are surrounded by the ECM proteins. Figure 2 Example of a typical nonmalignant acinar structure cross section. The acinar structure includes polarized epithelial cells and a hollow lumen. Basal sides of the cells are surrounded by the ECM proteins, and apical sides face the hollow lumen. Previous work on this problem includes examining the change in the morphological characteristics of nontumorigenic MCF10A epithelial acini over time and exploiting them to model culture growth. Chang et al. examined the elongation of MCF10A acini at 6, 12, and 96 hours after a particular treatment [10]. In a more predictive setting, Rejniak et al. used the number of cells per acinus, proliferation, and apoptosis rates to computationally model MCF10A epithelial acinus growth using fluid-dynamics-based elasticity models [11]. In addition to these features, Tang et al. utilized features like acinus volume, density, sphericity, and epithelial thickness to investigate the relationship between acinus morphology and apoptosis, proliferation, and polarization [12]. Specifically, they built a computational model that can predict the growth of acini over a 12-day period.
In addition, graph-theoretical tools [13–15] were exploited to highlight the structural organization of the cells within malignant tissues. Our method differs from these characterization efforts in that the grading of cancer is achieved over a richer and more discriminative set of small-scale (local) morphological features that are statistically significant. In addition, the features proposed here closely mimic the features that current pathological grading systems utilize. The presented work builds upon and extends our prior work in this area that introduced the underlying framework [16]. In this work, we provide extended details of our methodology and also present analysis that tests the performance of different supervised machine learning methods and investigates the discriminative influence of the proposed features. Furthermore, the overall grading accuracy is significantly improved by eliminating the acini that are in the preliminary stages of their formation from our analysis. Finally, we perform a preliminary study on the grading of in vivo tissue sections using our framework and demonstrate that the proposed features can also be used on in vivo tissue slides, albeit with additional constraints on the preparation of the tissue for our analysis. ## 2. Materials and Methods ### 2.1. Cell Culture Cells were grown on tissue culture-treated plastic T75 flasks and incubated at 37°C in a humidified atmosphere containing 95% air and 5% carbon dioxide in the manufacturer-suggested media. Once 80% confluent, cells were split using 0.25% trypsin with 0.2 g/L EDTA and seeded into a new flask or under experimental conditions. Six MCF10 series cell lines that represent three grades of breast cancer along the metastatic cascade were grown. The cell lines used in these experiments were MCF10A (10A), MCF10AT1 (AT), MCF10AT1K.cl2 (KCL), MCF10DCIS.com (DCIS), MCF10CA1h.cl2 (CA1H), and MCF10CA1a.cl1 (CA1A). The DCIS, CA1H, and CA1A cell lines were grown in DMEM/F12 with 5% horse serum and 1% fungizone/penicillin/streptomycin. 10A, AT, and KCL cells were grown in the same base media with additional factors including 20 ng/mL epidermal growth factor, 0.5 mg/mL hydrocortisone, 100 ng/mL cholera toxin, and 10 μg/mL insulin. The 10A cell line was obtained from the American Type Culture Collection (ATCC); the AT, KCL, CA1A, and CA1H cell lines were obtained from the Barbara Ann Karmanos Cancer Institute at Wayne State University; and the DCIS cell line was purchased from Asterand Inc. The advantages of the MCF10 series of cell lines include their derivation from a single biopsy and subsequent mutations forming cell lines of varying metastatic ability. These cell lines were acquired from a tissue sample diagnosed as noncancerous fibrocystic disease [17, 18]. The biopsy cells, MCF-10M, were cultured and spontaneously gave rise to two immortal sublines. One of these cell lines was named 10A for its adherent ability. 10A cells were nontumorigenic in mouse xenografts; however, the first derivative cell line was transfected with the oncogene T-24 H-Ras to promote expression of a constitutively active form of Ras, resulting in precancerous lesion formation. These cells were then serially passaged for six months, and the derived cell line was named MCF10AneoT [17]. AT cells were derived from a 100-day-old MCF10AneoT lesion that formed a squamous carcinoma but failed to produce carcinomas when injected back into mice [19]. The KCL cell line was obtained from a 367-day tumor xenograft of the AT cell line.
KCL cells were injected into a mouse, where a tumor formed and was isolated after 292 days. Isolated cells derived from this tumor were cultured and yielded the DCIS cell line, which formed ductal carcinoma in situ tumors when injected into mice [18]. Additional cells from the tumors derived from the KCL cell line were implanted within a mouse and yielded the MCF10CA cell lines; two of these, CA1H and CA1A, were included in this study [18]. CA1H cells were found to develop invasive carcinoma in mice, while the CA1A cells were of higher-grade malignancy and able to metastasize to the lungs. After evaluating the metastatic potentials of these six cell lines, we considered the 10A and AT cell lines as nonmalignant, the KCL and DCIS cell lines as noninvasive carcinoma, and the CA1H and CA1A cell lines as invasive carcinoma, constituting the three grades of breast cancer we considered in this study. #### 2.1.1. 3D Culture System Cells were suspended at a concentration of one million cells per milliliter in Matrigel (laminin-rich extracellular matrix) on ice [20]. The gel-cell solution was seeded at 30 μL per glass-bottom 96-well and cultured for 14 days in the respective media. The Matrigel-cell solution was allowed to solidify for 30 minutes at 37°C before the media were added; media were changed every 2-3 days thereafter. ### 2.2. Immunocytochemistry Staining for Imaging Following 14 days in culture, samples were washed once with phosphate buffer solution (PBS) and then fixed with 3% paraformaldehyde at room temperature for 30 minutes. Next, samples were rinsed with PBS and treated with cell blocking solution (PBS with 1% bovine serum albumin (BSA) and 0.25% Tween20) for one hour at 4°C. Samples were then permeabilized with 0.25% TritonX100-PBS solution for ten minutes. After permeabilization, samples were washed with PBS and then treated with the primary antibodies at their determined working dilutions in PBS at 4°C overnight. Next, the samples were washed with PBS three times for 30 minutes at room temperature. The secondary antibodies were added to the samples in PBS at 4°C overnight. Finally, samples were treated with DAPI for thirty minutes followed by three 30-minute PBS washes and stored at 4°C in PBS prior to imaging. #### 2.2.1. Antibodies and Dyes Integrin α3 antibody (mouse monoclonal) (Abcam: ab11767) was used at a dilution of 1 : 40 for 3D confocal fluorescent imaging. A secondary antibody, Alexa Fluor 568 goat anti-mouse IgG highly cross-absorbed (Invitrogen A-11031) at a dilution of 1 : 200, was used to visualize the localization of integrin α3. Integrin α6 FITC-conjugated antibody (rat monoclonal) (Abcam: ab21259) was used at a dilution of 1 : 25 for 3D confocal fluorescent imaging during the secondary antibody step previously described. Nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI; Molecular Probes) for 30 minutes at room temperature. 3D volumes were obtained as z-stacks using a Zeiss LSM510 META confocal microscope with a 40x water immersion objective. Proper color filters were used to capture the red, green, and blue fluorescent signals. Images were captured using a multitrack setting where fluorescent channels were acquired sequentially to avoid fluorescence crosstalk. Thickness of the z-stacks ranged from 10 to 40 μm (1 μm slices) with an initial depth of at least 10 μm. Slices had a 512 × 512 pixel (320 μm × 320 μm) cross-section area.
In order to include larger volumes of the cultures in our analysis, we captured four tiles (in a 2 × 2 formation) with approximately 20% overlap and stitched these tiles using the 3D image registration technique proposed by Preibisch et al. [21] implemented in ImageJ [22]. A total of five stitched images were captured for each of the six cell lines. ### 2.3. Segmentation of Acinar Structures Segmentation of the acinar structures was accomplished by the well-known watershed segmentation algorithm [23]. Watershed segmentation exploits the morphological characteristics of the regions of interest and is particularly useful for the segmentation of adjoined objects. The process starts with the identification of the acinar structures. In this study, cell nuclei were marked with the blue fluorescent marker DAPI, cell-to-cell borders were identified by the red fluorescent marker that shows the localization of integrin α3, and basal sides of the cell membranes were identified by the green fluorescent marker that shows the localization of integrin α6. As these components are observed in the individual color channels of the captured images, we used the combination of the three color channels to identify the acinar structures. First, the color channels of the image were individually binarized. Image values were separated into foreground and background classes, where the foreground class represented the stained components and the background class included the combination of the gel medium and extracellular proteins. We employed a local adaptation of Otsu's well-known global thresholding algorithm [24] to binarize the color channels. In each slice along the depth direction, we divided the image into rectangular blocks along the horizontal and vertical directions and binarized them separately. This approach handles the spatial variations in the foreground-background contrast better than global thresholding. The noise produced by binarizing local regions that contain hardly any information was eliminated using edge-based noise elimination, which removed regions that did not contain any edges from the resulting binary image [25]. Next, the resulting binary color channels were superposed into a single monochrome binary image by a logical OR operation. Enclosing acinar structures were identified by applying a morphological close operation followed by a morphological fill-hole operation [26] to the resulting binary image. Finally, 3D watershed segmentation was applied to label the individual acinar structures. For this purpose, we first obtained a topographic interpretation of the resulting binary image by taking its Euclidean distance transform, where the shortest distance to the nearest background pixel was measured for each pixel [27]. The resulting transformed image was then inverted, while forcing the values of the background class pixels to −∞, to construct the catchment basins; the watershed method was finally applied to construct the watershed lines that divide these basins and identify the individual acinar structures. Acinar structures with fewer than four nuclei were considered to exhibit a reduced ability to form polarized acinar structures and were excluded from our analysis. Note that this elimination was not carried out in our preliminary work on this problem [16], which resulted in a poorer overall grading accuracy (79.0%) than what we achieve with the elimination (89.0%), as we will present in Section 3.
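The per-slice portion of this pipeline can be summarized in a few lines of scikit-image. The sketch below is a simplified 2D illustration under stated assumptions, not the authors' 3D implementation: the block size and structuring-element radius are assumed values, a per-tile standard-deviation check stands in for the paper's edge-based noise elimination, and masking the watershed to the binary foreground plays the role of forcing background pixels to −∞.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk
from skimage.segmentation import watershed

def segment_acini_slice(rgb_slice, block=64):
    """Label acinar structures in one z-slice: per-block Otsu thresholding
    of each color channel, logical OR across channels, morphological
    closing and hole filling, then watershed on the inverted Euclidean
    distance transform (simplified 2D sketch)."""
    h, w, _ = rgb_slice.shape
    binary = np.zeros((h, w), dtype=bool)
    for c in range(3):                              # red, green, blue channels
        chan = rgb_slice[..., c]
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = chan[y:y + block, x:x + block]
                if tile.std() > 0:                  # crude noise guard for empty tiles
                    binary[y:y + block, x:x + block] |= tile > threshold_otsu(tile)
    binary = binary_closing(binary, disk(5))        # close gaps in acinus outlines
    binary = ndi.binary_fill_holes(binary)          # fill enclosed lumens
    distance = ndi.distance_transform_edt(binary)   # topographic interpretation
    return watershed(-distance, mask=binary)        # basins separated by watershed lines
```

In the paper's full pipeline, this operates in 3D and is followed by the rule, described above, that discards structures with fewer than four nuclei.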
From the 30 images we analyzed, 99 10A, 49 AT, 81 KCL, 80 DCIS, 29 CA1H, and 62 CA1A acinar structures were identified using this segmentation method, yielding a total of 400 acinar structures. ### 2.4. Characterization of Acinar Structure Morphology Visual investigation of the acinar structures shown in Figure 1 reveals differences in acini morphology and localization of integrin subunits that are highlighted in Figure 2. Acinar structures in nonmalignant cell lines are comprised of polarized cells layered around the hollow lumen that closely approximate the acini formations in mammary glands and lobules. On the other hand, the acinar structures in the tumorigenic cell lines consist of nonpolarized cells that form clusters of cells rather than explicit acini. As shown in Figures 3(a) and 3(b), the nontumorigenic cell line 10A and the precancerous cell line AT, respectively, exhibit polarized acinar structures that are characterized by integrin α6 localization at the basal membrane of the cells, integrin α3 localization along the lateral cell membranes, and clear hollow lumen formations. Acinar structures in the KCL and DCIS cell lines, shown in Figures 3(c) and 3(d), respectively, exhibit significant changes in the integrin subunit densities and their colocalizations. While the basal and lateral membrane protein densities decrease, the relative colocalizations of these proteins increase within the acinus. The acinar structures from the noninvasive carcinoma cell lines and the acinar-like structures from the invasive carcinoma cell lines CA1H and CA1A, shown in Figures 3(e) and 3(f), respectively, are more elongated and exhibit smaller hollow lumens than the acinar structures from the nonmalignant cell lines. Figure 3 MCF10 series cell lines exhibit variation in acinar structure morphology along the metastatic cascade after 14 days in 3D laminin-rich culture systems. Cells were cultured in 3D Matrigel suspensions and stained with integrin α3 (red), integrin α6 (green), and DAPI (blue). Example image slices from the six cell lines are typical of those captured during the course of the experiments and are displayed from (a) to (f) as follows: 10A, AT, KCL, DCIS, CA1H, and CA1A. The observed variations in the morphology of acinar structures motivated the development of features that characterize the level of cell polarity within the acinar structures. The features of interest primarily capture (i) the morphology of the acinar structure and hollow lumen, (ii) the basal, and (iii) the lateral protein densities within the acinar structure. These features were computed for each acinar structure in the slice in which the acinar structure had the largest cross-section area along the depth direction (z-stack). #### 2.4.1. Features Capturing the Morphology of the Acinus and Hollow Lumen The first subset of features we propose captured the shape of the acinar structures, the number of nuclei that constitute the acinus, and the relative size of the hollow lumen. The segmentation of the cell nuclei was also accomplished by the watershed segmentation technique described previously, using the blue (nuclei) channel image only. Figure 4(a) shows the number of nuclei per acinar structure across the six cell lines.
As expected, the invasive carcinoma cell lines exhibit the largest number of nuclei per cell cluster compared to the acini of the other cell lines due to the unregulated division of cells. Figure 4 Features quantifying acinus and hollow lumen morphology are statistically significant between cell lines and states of health. (a) shows the number of nuclei per acinar structure, (b) shows the acinar structure elongation (roundness), measured as illustrated in (d), where the ratio between the minor and major axes of the ellipse fitted to the acinar structure is taken, and (c) shows the ratio between the hollow lumen and acinar structure areas, computed as illustrated in (e). Error bars correspond to the standard error of the means. Two-sided t-tests were performed, and cell lines that exhibit statistical significance (P<0.05) are marked with asterisks (*). Acinar structures in the nonmalignant grades of cancer typically have symmetrical round shapes. Malignant cancer grades result in deformations of the acinar structures that cause the shapes to be more elongated. We measured the roundness of the acinar structures by taking the ratio between the minor and major axes of the fitted ellipse, as illustrated in Figure 4(d). This fitting was achieved by finding the ellipse that has the same normalized second central moments as the region. Roundness values close to 1 come from rounder/more symmetrical acinar structures, whereas values close to 0 indicate an elongated shape. As expected, the noninvasive and invasive cell lines exhibit a statistically significant decrease in acinar structure roundness compared to the nonmalignant cell lines, as shown in Figure 4(b). The final feature in this subset captured the differences in the relative size of the hollow lumen by computing the ratio between the areas of the hollow lumen and the acinar structure. Values closer to 1 indicate larger hollow lumens, and values closer to 0 indicate smaller hollow lumens. In order to compute the hollow lumen area, we assumed that the hollow lumen was circular and computed its radius as the Euclidean distance between the centroid of the acinar structure and the nucleus nearest to the centroid, as illustrated in Figure 4(e). The hollow lumen for the precancerous cell line AT is the largest among the six cell lines. Noninvasive and invasive cell line acinar structures exhibit statistically significant shrinkage of the hollow lumen compared with the nonmalignant cell line acinar structures, as shown in Figure 4(c).
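For illustration, the three morphology features just described can be computed from a labeled mask with skimage.measure.regionprops, which fits exactly the ellipse with matching normalized second central moments. The helper below is a sketch consistent with the descriptions above, not the authors' code; the nuclei centroids are assumed to come from the separate nuclei-channel segmentation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from skimage.measure import regionprops

def acinus_morphology_features(acinus_mask, nuclei_centroids):
    """Nucleus count, roundness (minor/major axis of the fitted ellipse),
    and lumen-to-acinus area ratio, with the lumen approximated as a
    circle reaching the nucleus nearest to the acinus centroid."""
    props = regionprops(acinus_mask.astype(int))[0]
    roundness = props.minor_axis_length / props.major_axis_length
    centroid = np.asarray(props.centroid).reshape(1, -1)
    lumen_radius = cdist(centroid, np.asarray(nuclei_centroids)).min()
    lumen_ratio = min(1.0, np.pi * lumen_radius ** 2 / props.area)
    return {"n_nuclei": len(nuclei_centroids),
            "roundness": roundness,       # 1 = round, 0 = elongated
            "lumen_ratio": lumen_ratio}   # 1 = large lumen, 0 = small
```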
#### 2.4.2. Features Capturing the Basal Protein Integrin α6 Density The next subset of features analyzed and quantified the localization and structural relationships of the basal membrane-localized protein integrin α6. The first feature measured the ratio of the integrin α6 expression within the acinar structure (excluding the expression along the basal membrane) to the acinar structure area, as illustrated in Figure 5(e). As nonmalignant cells exhibit high cellular polarity, integrin α6 is localized to the basal membrane of these cells; hence, a low density of integrin α6 is expected within the acinar structure. Reduced cellular polarity in the malignant grades of cancer enables greater internal expression of integrin α6 within the acinar structure. The DCIS, CA1H, and CA1A cell lines exhibited a statistically significant increase in the internal expression of integrin α6 compared with the less malignant cell lines, as shown in Figure 5(a). Figure 5 Features capturing the level of cell polarity based on the basal membrane protein integrin α6 density. (a) shows the internal integrin α6 density, computed as illustrated in (e), where the amount of integrin α6 localized inside the acinus is divided by the total amount of integrin α6, (b) shows the cumulative spatial distributions of integrin α6, computed as illustrated in (g), where the amount of integrin α6 localized within the concentric circles is divided by the total amount of integrin α6 within the acinus, (c) shows the continuity of integrin α6 along the basal membrane, measured as illustrated in (f), where the number of times that rays initiating from the centroid, oriented at 1 to 360 degrees, intersect the green fluorophores along the basal membrane is counted and normalized between 0 and 1, and (d) shows the ratio between the amount of integrin α6 colocalized with integrin α3 and the total integrin α6 expression. Error bars correspond to the standard error of the means. Two-sided t-tests were performed, and cell lines that exhibit statistical significance (P<0.05) are marked with asterisks (*). The next feature characterized the spatial distribution of integrin α6 within the acinar structure. It was computed as the ratio between the amount of the green fluorophores located in concentric circles centered at the centroid of the acinar structure and the total amount of green fluorophores in the acinar structure. The radii of the concentric circles were increased at each step by one-tenth of the radius of the circle that circumscribes the acinar structure, as illustrated in Figure 5(g). Cell lines that have expression of integrin α6 near the centroid of the acinar structure show a relatively earlier rise in the cumulative densities than the others. It is observed that integrin α6 is localized near the basal membrane in the three lower-grade cell lines, and near the centroids of the acini in the three advanced-grade cell lines, as shown in Figure 5(b). Since the cumulative densities near the acinus center and the basal membrane are similar for all the cell lines, we decided to include in our analysis the cumulative densities corresponding to 30 to 70% of the radius of the circle that circumscribes the acinus. Another feature captured the amount of integrin α6 expressed along the basal cell membrane, as illustrated in Figure 5(f). First, we took the difference between the acinar structure and its morphologically eroded version to obtain a binary contour mask. This mask was then overlaid with the corresponding slice of the binary green channel image to obtain the integrin α6 localized along the basal membrane of the acinar structure. Next, we superposed the resulting image with rays that initiate at the centroid of the acinus, oriented at angles that varied from 1° to 360° in 1° steps, scanning a complete circle. Finally, the total number of times that these rays intersected the green markers was counted and normalized between 0 and 1 to obtain the continuity of integrin α6 along the acinar basal side. Values close to 1 correspond to intact basal sides and, thus, indicate high cellular polarity.
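The ray-casting continuity measure translates directly into code. The function below is an illustrative reimplementation under stated assumptions, not the authors' code: border_green is taken to be the binary green channel restricted to the eroded-contour mask described above, and the centroid is given in (row, column) coordinates.

```python
import numpy as np

def basal_continuity(border_green, centroid, n_rays=360):
    """Fraction of rays from the acinus centroid (1-degree steps) that
    hit an integrin-alpha6-positive pixel on the basal border mask;
    values near 1 indicate an intact, continuous basal side."""
    h, w = border_green.shape
    max_r = int(np.hypot(h, w))             # longest possible ray in the image
    hits = 0
    for deg in range(n_rays):
        theta = np.deg2rad(deg + 1)         # angles 1..360 degrees
        for r in range(1, max_r):           # march outward along the ray
            y = int(round(centroid[0] + r * np.sin(theta)))
            x = int(round(centroid[1] + r * np.cos(theta)))
            if not (0 <= y < h and 0 <= x < w):
                break                       # ray left the image without a hit
            if border_green[y, x]:
                hits += 1
                break                       # count at most one hit per ray
    return hits / n_rays
```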
As shown in Figure 5(c), the 10A and AT cell lines exhibit significantly more continuous expression of integrin α6 along the basal side of the cells than the four malignant cell lines. We note that the KCL cell line exhibits statistically significantly higher localization of integrin α6 at the basal side than the three more tumorigenic cell lines.

The final feature in this category measured the ratio of the amount of integrin α6 colocalized with integrin α3 to the total expression of integrin α6, to determine the amount of basal membrane protein that overlapped with the lateral membrane protein. This feature takes higher values when there is higher internal expression of integrin α6 or higher basal membrane localization of integrin α3. As plotted in Figure 5(d), 10A and DCIS exhibit statistically significantly less colocalization of integrin α6 with α3 than the other four cell lines.

#### 2.4.3. Features Capturing the Lateral Protein Integrin α3 Density

The final subset of features captured the expression and localization of the lateral membrane protein and adhesion molecule integrin α3 within the acinar structure. The first feature determined the amount of integrin α3 expressed along the basal cell membrane using the same method described previously. We anticipated observing less integrin α3 expressed along the basal cell membrane in the malignant grades of cancer than in the nonmalignant grade.

Next, we captured the overall expression of integrin α3 within the acinar structure, as illustrated in Figure 6(e), where the total amount of integrin α3 expressed in the acinar structure was divided by the area of the acinar structure. It is seen in Figure 6(b) that this feature monotonically decreases from the nontumorigenic 10A to the noninvasive DCIS acinar structures. The invasive carcinomas CA1H and CA1A show significantly higher expression compared to the DCIS cells.

Figure 6 Features capturing the level of cell polarity based on the lateral membrane protein integrin α3 density. (a) Continuity of integrin α3 along the basal membrane; (b) total integrin α3 density, computed as illustrated in (e), where the total amount of integrin α3 within the acinar structure is divided by the area of the acinar structure; (c) ratio between the amount of integrin α3 colocalized with integrin α6 and the total integrin α3 expression; (d) density of integrin α3 in the exterior of the hollow lumen. Error bars correspond to the standard error of the means. Two-sided t-tests were performed and cell lines that exhibit statistical significance (P<0.05) are marked with asterisks (*).

We then measured the ratio between the amount of integrin α3 colocalized with integrin α6 and the total amount of integrin α3 within the acinar structure. As expected, nonmalignant cell lines exhibit significantly less colocalization of integrin α3 with integrin α6 than the malignant cell lines, as shown in Figure 6(c).

The final feature in this subset measured the ratio of the amount of integrin α3 localized between the hollow center and the basal membrane side of the acinar structure to the total integrin α3 expression. This feature thus quantified the density of integrin α3 along the lateral membranes of the cells. Since integrin α3 localization is not confined to cell-to-cell lateral membranes in malignant tumors, expression of this protein is expected within the hollow lumen as well as at the cell-to-extracellular-matrix border.
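Both colocalization features above reduce to overlaps of binary masks. The following is a minimal sketch under the assumption that the red (integrin α3) and green (integrin α6) channels have already been binarized; the function name is our own.

```python
import numpy as np

def colocalization_ratio(primary_mask: np.ndarray,
                         other_mask: np.ndarray,
                         acinus_mask: np.ndarray) -> float:
    """Share of the primary marker's signal inside the acinar structure that
    overlaps the other marker. With the binarized red (integrin alpha-3)
    channel as primary and green (integrin alpha-6) as other, this gives the
    alpha-3-with-alpha-6 ratio; swapping the two marker masks gives the
    alpha-6-with-alpha-3 ratio of Section 2.4.2."""
    primary_in_acinus = primary_mask & acinus_mask
    total = primary_in_acinus.sum()
    overlap = (primary_in_acinus & other_mask).sum()
    return overlap / total if total else 0.0
```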
### 2.5. Automated Grading of Cancer Using Supervised Machine Learning

We used supervised machine learning [28] to grade the acinar structures into the nonmalignant, noninvasive carcinoma, and invasive carcinoma forms of breast cancer using the features defined previously. In supervised learning, the data are first divided into training and test sets. The classifier is trained with the labeled training data, and the classes of the test data are then predicted using the resulting classifier. We tested the grading performance of five supervised learning algorithms. Linear discriminant analysis fits a multivariate normal density to each training class, assuming equal covariance matrices across the classes [29]. It then separates the classes with a hyperplane established by seeking the projection that maximizes the interclass (between-class) distances while minimizing the intraclass (within-class) distances. Quadratic discriminant analysis is similar to linear discriminant analysis, with the distinction that the covariance matrix for each class is estimated separately [29]. The naïve Bayes classifier assumes that the features are independent within each class; hence, a diagonal covariance matrix is assumed. The K-nearest neighbor classifier finds the K training samples closest to a test sample in Euclidean distance and classifies the sample by the majority vote of these neighbors. The support vector machines (SVM) classifier maps the training data into a higher-dimensional space with a kernel function, where the data may become linearly separable [30]. A separating maximum-margin hyperplane is established to separate the data of the different classes with minimal misclassification via quadratic optimization [30]. The test data are classified by determining the side of the hyperplane on which they lie in the kernel-mapped space. In order to extend SVM to three-class classification, we employed the one-against-one approach [31], where three two-class SVM classifiers were established, one for each pair of classes in the training data set. Each sample in the test data was assigned to a class by these classifiers, and the class with the majority vote was chosen as the final result. If the three classes received equal votes, we chose the class with the largest margin from the separating hyperplane.

In order to obtain unbiased performance estimates, 10-fold cross-validation was performed. The feature set was first randomly divided into 10 disjoint partitions of equal size. For each partition, a classifier was trained with the remaining nine partitions and then tested on the retained partition. The results for each partition were then combined to find the overall grading accuracy. In order to reduce the scale differences across the features, the data were normalized so that the features had zero mean and unit variance across the samples. We performed a parameter search to determine the number of neighbors K in the nearest neighbor classifier. We tested K between 8 and 15 and determined that using the 12 nearest neighbors of a test point achieved the highest grading accuracy. For the SVM classifiers, we used the radial basis function, also referred to as the Gaussian kernel, of the form K(x_i, x_j) = exp(−∥x_i − x_j∥² / (2σ²)), which maps the data into an infinite-dimensional Hilbert space [30]. We performed a parameter search to identify the σ that achieves the highest grading accuracy.
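A compact way to reproduce this pipeline is sketched below with scikit-learn, under the assumption that X holds the per-acinus feature vectors and y the three grade labels; note that scikit-learn parameterizes the RBF kernel as exp(−γ∥x_i − x_j∥²), so γ = 1/(2σ²) matches the kernel above, and folding the normalization into each cross-validation split is our choice rather than a detail stated in the text.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cv_grading_accuracy(X: np.ndarray, y: np.ndarray, sigma: float) -> float:
    """10-fold cross-validated grading accuracy of an RBF-kernel SVM."""
    model = make_pipeline(
        StandardScaler(),  # zero mean, unit variance per feature
        SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2),
            decision_function_shape="ovo"),  # one-against-one multiclass
    )
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(model, X, y, cv=folds).mean()

# Grid search over the candidate kernel widths described in the text:
# best_sigma = max(np.arange(1.0, 2.6, 0.1),
#                  key=lambda s: cv_grading_accuracy(X, y, s))
```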
We sought σ in a set of candidate values that varied from 1.0 to 2.5 in steps of 0.1 and determined that σ = 2.0 achieved the best performance in the grading of the acinar structures. Table 1 shows the performance of the five supervised learning algorithms. The SVM-based classifier clearly achieves the highest overall accuracy in the grading of the acinar structures. This is not unexpected, as SVM classifiers are known to be highly successful in biological applications [32].

Table 1 Overall grading accuracies for the supervised machine learning techniques used in our analysis. SVM classifiers clearly achieve significantly higher grading accuracy than the other methods. The data set includes 400 acinar structures.

| Learning method | Overall grading accuracy (%) |
| --- | --- |
| Linear discriminant analysis | 80.75 |
| Quadratic discriminant analysis | 80.00 |
| Naïve Bayes | 69.75 |
| K-nearest neighbors | 79.50 |
| Support vector machines | 89.00 |

#### 2.5.1. Discriminative Influence of the Proposed Features

After identifying the SVM-based classifier as the most accurate grading method for our data set, we performed feature selection to determine the discriminative capability of each feature in characterizing the data. A good feature selection algorithm identifies the features that are consistent within a class and exhibit differences between the classes. The Fisher score (F-score) is documented to be a powerful feature selection tool [33]. For a feature x_i with two classes, denote the instances in the first (positive) class by x_i^(+) and the instances in the other (negative) class by x_i^(−). The Fisher score of the ith feature is then given by

F(i) = [ (x̄_i^(+) − x̄_i)² + (x̄_i^(−) − x̄_i)² ] / [ (σ_i^(+))² + (σ_i^(−))² ],

where x̄_i is the mean value of the ith feature, x̄_i^(+) and x̄_i^(−) are its mean values over the positive and negative instances, respectively, and (σ_i^(+))² and (σ_i^(−))² are its variances over the positive and negative instances, respectively. In order to extend this method to a feature set with three classes, we computed the F-score for each feature over each pair of classes and then took the average of the three possible combinations. Larger F-score values indicate stronger discriminative influence; therefore, after obtaining the F-scores for all the features, we ranked them in descending order.
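The following sketch implements this averaged pairwise F-score, assuming x holds one feature across all acinar structures and y the grade labels; the use of the sample (rather than population) variance is our assumption, as the text does not specify it.

```python
import numpy as np
from itertools import combinations

def fisher_score(x: np.ndarray, pos: np.ndarray) -> float:
    """Two-class Fisher score of one feature: squared deviations of the class
    means from the overall mean, divided by the sum of the class variances.
    pos is a boolean mask marking the positive-class instances."""
    x_pos, x_neg = x[pos], x[~pos]
    num = (x_pos.mean() - x.mean()) ** 2 + (x_neg.mean() - x.mean()) ** 2
    return num / (x_pos.var(ddof=1) + x_neg.var(ddof=1))

def multiclass_fscore(x: np.ndarray, y: np.ndarray) -> float:
    """Average of the pairwise two-class F-scores over all pairs of classes."""
    scores = []
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        scores.append(fisher_score(x[mask], y[mask] == a))
    return float(np.mean(scores))
```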
Table 2 shows the average F-scores of the features and their corresponding discriminative ranks. It is seen that the continuity of integrin α6 along the basal membrane is the most discriminative feature describing the data. The discriminative influence of the ratio between the hollow lumen and acinar structure areas is also high. These two features are important as they are strong indicators of the level of cell polarity within the acinar structure and can be considered measures of duct-forming tumors, as used in the commonly used Scarff-Bloom-Richardson system. Some of the other most discriminative features quantify differences that are difficult to assess by visual inspection, such as the colocalization between integrin α3 and integrin α6 and the internal densities of the integrin subunits. We note that these features constitute a subset of the novel features introduced in this paper; therefore, this analysis particularly highlights the importance of the proposed features and the methodology.

Table 2 Discriminative influence of the proposed features based on the F-score values.

| Feature label | F-score | Rank |
| --- | --- | --- |
| Number of nuclei within the acinar structure | 0.0547 | 14 |
| Elongation of the acinar structure | 0.1122 | 6 |
| Ratio between the hollow lumen and acinar structure areas | 0.1602 | 4 |
| Internal integrin α6 density | 0.1205 | 5 |
| Density of integrin α6 inside the 30% circle | 0.0620 | 11 |
| Density of integrin α6 inside the 40% circle | 0.0611 | 12 |
| Density of integrin α6 inside the 50% circle | 0.0885 | 9 |
| Density of integrin α6 inside the 60% circle | 0.0953 | 8 |
| Density of integrin α6 inside the 70% circle | 0.0598 | 13 |
| Continuity of integrin α6 along the basal membrane | 0.5725 | 1 |
| Amount of integrin α6 colocalized with integrin α3 | 0.0337 | 15 |
| Continuity of integrin α3 along the basal membrane | 0.1046 | 7 |
| Total integrin α3 density | 0.2651 | 3 |
| Amount of integrin α3 colocalized with integrin α6 | 0.3526 | 2 |
| Density of integrin α3 in the exterior of the hollow lumen | 0.0716 | 10 |

We performed cancer grading using subsets of the most discriminative features, where at each stage we increased the number of most discriminative features in the training feature set. Figure 7 shows the overall grading accuracy with respect to the number of most discriminative features selected for the grading. It is seen that the highest grading accuracy, 89.0%, is achieved when all of the features are used to train the classifiers. Considering that most grading systems are typically based on the assessment of a limited number of features, we also investigated the grading performance of our methodology in a similar setting. When only the five most discriminative features are used for grading, we achieve 82.0% overall grading accuracy. Although lower than the highest grading accuracy, this constitutes a highly promising setting considering the limited number of features.

Figure 7 Overall grading accuracy with respect to the number of most discriminative features selected in the grading. The highest overall grading accuracy is achieved when the whole feature set is considered.

### 2.6. Tissue Sections

Breast tissue from patients with invasive carcinoma and ductal carcinoma in situ and from healthy individuals was obtained from ProteoGenix (Culver City, CA) and stored at −80°C. Portions of each tissue block were embedded in O.C.T. Compound (Sakura Finetek USA, Torrance, CA), and sections 20 μm thick were cut using a Microm HM505E cryostat. Tissue sections were adhered to Superfrost Plus Gold microscope slides (Fisher Scientific, Morris Plains, NJ) and vapor fixed, using 2-3 Kimwipes (Kimberly-Clark Worldwide, Roswell, GA) soaked in 4% paraformaldehyde in a small chamber at −20°C for 30 minutes, prior to immunohistochemistry. Tissue sections were encircled with an ImmEdge Pen (Vector Labs, Burlingame, CA), and immunohistochemistry and confocal microscopy were performed as described for the in vitro samples in Section 2.2. An Alexa488-conjugated goat antifluorescein secondary antibody (Invitrogen, Carlsbad, CA) was included at a 1 : 300 dilution in the overnight secondary incubation.
As expected the invasive carcinoma cell lines exhibit the largest number of nuclei per cell cluster compared to the other cell lines acini due to the unregulated division of cells.Features quantifying acinus and hollow lumens morphology are statistically significant between cell lines and states of health. (a) shows the number of nuclei per acinar structure, (b) shows the acinar structure elongation (roundness) that is measured as illustrated in (d) where the ratio between the minor and major axis of the ellipse fitted to the acinar structure is taken, and (c) shows the ratio between the hollow lumen and acinar structure areas that is computed as illustrated in (e). Error bars correspond to the standard error of the means. Two-sidedt-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)Acinar structures in the nonmalignant grades of cancer typically have symmetrical round shapes. Malignant cancer grades result in deformations in the acinar structures that cause the shapes to be more elongated. We measured the roundness of the acinar structures by taking the ratio between the minor and major axes of the ellipse fitted as illustrated in Figure4(d). This fitting was achieved by finding the ellipse that has the same normalized second central moments as the region. Roundness values close to 1 comes from rounder/more symmetrical acinar structures whereas values close to 0 indicate an elongated shape. As expected, noninvasive and invasive cell lines exhibit statistically significant decrease in acinar structure roundness compared to the nonmalignant cell lines as shown in Figure 4(b).The final feature in this subset captured the differences in the relative size of hollow lumen by computing the ratio between the areas of the hollow lumen and the acinar structure. Values closer to 1 indicate larger hollow lumens, and those to 0 indicate smaller hollow lumens. In order to compute the hollow lumen area, we assumed that the hollow lumen was circular and computed its radius as the Euclidean distance between the centroid of the acinar structure and the nearest nucleus to the centroid as illustrated in Figure4(e). The hollow lumen for the precancerous cell line AT is the largest among the six cell lines. Noninvasive and invasive cell line acinar structures exhibit statistically significant shrinkage in hollow lumen compared against the nonmalignant cell line acinar structures as shown in Figure 4(c). ### 2.4.2. Features Capturing the Basal Protein Integrinα6 Density The next subset of features analyzed and quantified the localization and structural relationships of the basal membrane-localized protein integrinα6. The first feature measured the ratio of integrin α6 expression within the acinar structure (excluding the expression along the basal membrane) to the acinar structure area as illustrated in Figure 5(e). As nonmalignant cancerous cells exhibit high cellular polarity, integrin α6 is localized to the basal membrane of these cells. Hence, a low density of integrin α6 is expected within the acinar structure. Reduced cellular polarity in the malignant grades of cancer enables greater internal expressions of integrin α6 within the acinar structure. 
DCIS, CA1H, and CA1A cell lines exhibited statistically significant increase in the internal expression of integrin α6 compared with the less malignant cell lines as shown in Figure 5(a).Features capturing the level of cell polarity based on the basal membrane protein integrinα6 density. (a) shows the internal integrin α6 density that is computed as illustrated in (e) where the amount of integrin α6 localized inside the acini is divided to the total amount of integrin α6, (b) shows the cumulative spatial distributions of integrin α6 that is computed as illustrated in (g) where the amount of integrin α6 localized within the concentric circles are divided by the total amount of integrin α6 within the acini, (c) shows the continuity of the integrin α6 along the basal membrane that is measured as illustrated in (f) where the number of times that rays initiate from the centroid oriented at from 1 to 360 degrees intersected with the green fluorophores along the basal membrane is counted and normalized between 0 and 1, and (d) shows the ratio between the amount of integrin α6 colocalized with integrin α3 and the total integrin α6 expression. Error bars correspond to the standard error of the means. Two-sided  t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)(f)(g)The next feature characterized the spatial distribution of integrinα6 within the acinar structure. It was computed as the ratio between the amount of the green fluorophores located in concentric circles that were centered at the centroid of the acinar structure and the total amount of green fluorophores in the acinar structure. The radii of the concentric circles were increased by one-tenth of the radius of the circle that circumscribes the acinar structure at each step as illustrated in Figure 5(g). Cell lines that have expressions of integrin α6 near the centroid of the acinar structure show relatively early rise in the cumulative densities than the others. It is observed that integrin α6 is localized near the basal membrane in the three lower-grade cell lines, and near the centroids of the acini in the three advanced grade cell lines as shown in Figure 5(b). Since the cumulative densities near the acinus center and the basal membrane are similar for all the cell lines, we decide to include the cumulative densities corresponding to 30 to 70% of the radius of the circle that circumscribes the acinus in our analysis.Another feature captured the amount of integrinα6 expressed along the basal cell membrane as illustrated in Figure 5(f). First, we took the difference between the acinar structure and its morphologically eroded version to obtain a binary contour mask. This mask was then overlaid with the corresponding slice of the binary green channel image to obtain the integrin α6 localized along the basal membrane of the acinar structure. Next, we superposed the resulting image with rays that initiate at the centroid of the acinus oriented at angles that varied from 1° to 360° with 1° steps scanning a complete circle. Finally, the total number of times that these rays intersected with the green markers was counted and normalized between 0 and 1 to obtain the continuity of integrin α6 along the acinar basal side. Values close to 1 correspond to intact basal sides and, thus, indicate high cellular polarity. 
As shown in Figure 5(c), 10A and AT cell lines have the significantly more continuous expression of integrin α6 along the basal side of the cells than the four malignant cell lines. We note that KCL cell line exhibits a statistically significant higher localization of integrin α6 at the basal side than the three more tumorigenic cell lines.The final feature in this category measured the ratio of the amount of integrinα6 colocalized with integrin α3 to the total expression of integrin α6 to determine the amount basal membrane protein overlapped with the lateral membrane protein. This feature gets higher values when there is higher internal expression of integrin α6 or higher basal membrane localization of integrin α3. As plotted in Figure 5(d), 10A and DCIS exhibit statistically significant less colocalization of integrin α6 with α3 than the other four cell lines. ### 2.4.3. Features Capturing the Lateral Protein Integrinα3 Density The final subset of features captured the density of lateral membrane protein integrinα3 within the acinar structure. The following features measured the expression and localization of lateral membrane protein and adhesion molecule integrin α3. The first feature determined the amount integrin α3 expressed along the basal cell membrane using the same method described previously. We anticipate observing less amount of integrin α3 expressed along the basal cell membrane in the malignant grades of cancer than the nonmalignant grade.Next, we captured the overall expression of integrinα3 within the acinar structure as illustrated in Figure 6(e) where the total amount of integrin α3 expressed in the acinar structure was divided by the area of the acinar structure. It is seen in Figure 6(b) that this feature monotonically decreases from the nontumorigenic 10A to noninvasive DCIS acinar structures. The invasive carcinomas CA1H and CA1A show significantly higher expressions compared to the DCIS cells.Features capturing the level of cell polarity based on the lateral membrane protein integrinα3 density. (a) Shows the continuity of the integrin α6 along the basal membrane, (b) shows the total integrin α3 density that is computed as illustrated in (e) where the total amount of integrin α3 within the acinar structure is divided by area of the acinar structure, (c) shows the ratio between the amount of integrin α3 colocalized with integrin α6 and the total integrin α3 expression, and d shows the density of integrin α3 in the exterior of the hollow lumen. Error bars correspond to the standard error of the means. Two-sided t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)We then measured the ratio between the amount of integrinα3 colocalized with the integrin α6 and the total amount of integrin α3 within the acinar structure. As expected nonmalignant cell lines exhibit significantly less colocalization of integrin α3 with integrin α6 than the malignant cell lines as shown in Figure 6(c).Final feature in this subset measured the ratio between the amount of integrinα3 localized between the hollow center and basal membrane side of the acinar structure to the total integrin α3 expression. This feature, thus, quantified the density of α3 integrin along the lateral membrane of the cells. Since integrin α3 localization is not confined to cell-to-cell lateral membranes in malignant tumors, expressions of this protein are expected within the hollow lumen as well as the cell-to-extracellular matrix border. 
## 2.4.1. Features Capturing the Morphology of Acinus and Hollow Lumen The first subset of features we propose captured the shape of the acinar structures, number of nuclei that constitute the acinus, and the relative size of the hollow lumen. The segmentation of the cell nuclei was also accomplished by the watershed segmentation technique described previously using the blue (nuclei) channel image only. Figure4(a) shows the number of nuclei per acinar structure across the six cell lines. As expected the invasive carcinoma cell lines exhibit the largest number of nuclei per cell cluster compared to the other cell lines acini due to the unregulated division of cells.Features quantifying acinus and hollow lumens morphology are statistically significant between cell lines and states of health. (a) shows the number of nuclei per acinar structure, (b) shows the acinar structure elongation (roundness) that is measured as illustrated in (d) where the ratio between the minor and major axis of the ellipse fitted to the acinar structure is taken, and (c) shows the ratio between the hollow lumen and acinar structure areas that is computed as illustrated in (e). Error bars correspond to the standard error of the means. Two-sidedt-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)Acinar structures in the nonmalignant grades of cancer typically have symmetrical round shapes. Malignant cancer grades result in deformations in the acinar structures that cause the shapes to be more elongated. We measured the roundness of the acinar structures by taking the ratio between the minor and major axes of the ellipse fitted as illustrated in Figure4(d). This fitting was achieved by finding the ellipse that has the same normalized second central moments as the region. Roundness values close to 1 comes from rounder/more symmetrical acinar structures whereas values close to 0 indicate an elongated shape. As expected, noninvasive and invasive cell lines exhibit statistically significant decrease in acinar structure roundness compared to the nonmalignant cell lines as shown in Figure 4(b).The final feature in this subset captured the differences in the relative size of hollow lumen by computing the ratio between the areas of the hollow lumen and the acinar structure. Values closer to 1 indicate larger hollow lumens, and those to 0 indicate smaller hollow lumens. In order to compute the hollow lumen area, we assumed that the hollow lumen was circular and computed its radius as the Euclidean distance between the centroid of the acinar structure and the nearest nucleus to the centroid as illustrated in Figure4(e). The hollow lumen for the precancerous cell line AT is the largest among the six cell lines. Noninvasive and invasive cell line acinar structures exhibit statistically significant shrinkage in hollow lumen compared against the nonmalignant cell line acinar structures as shown in Figure 4(c). ## 2.4.2. Features Capturing the Basal Protein Integrinα6 Density The next subset of features analyzed and quantified the localization and structural relationships of the basal membrane-localized protein integrinα6. The first feature measured the ratio of integrin α6 expression within the acinar structure (excluding the expression along the basal membrane) to the acinar structure area as illustrated in Figure 5(e). As nonmalignant cancerous cells exhibit high cellular polarity, integrin α6 is localized to the basal membrane of these cells. 
Hence, a low density of integrin α6 is expected within the acinar structure. Reduced cellular polarity in the malignant grades of cancer enables greater internal expressions of integrin α6 within the acinar structure. DCIS, CA1H, and CA1A cell lines exhibited statistically significant increase in the internal expression of integrin α6 compared with the less malignant cell lines as shown in Figure 5(a).Features capturing the level of cell polarity based on the basal membrane protein integrinα6 density. (a) shows the internal integrin α6 density that is computed as illustrated in (e) where the amount of integrin α6 localized inside the acini is divided to the total amount of integrin α6, (b) shows the cumulative spatial distributions of integrin α6 that is computed as illustrated in (g) where the amount of integrin α6 localized within the concentric circles are divided by the total amount of integrin α6 within the acini, (c) shows the continuity of the integrin α6 along the basal membrane that is measured as illustrated in (f) where the number of times that rays initiate from the centroid oriented at from 1 to 360 degrees intersected with the green fluorophores along the basal membrane is counted and normalized between 0 and 1, and (d) shows the ratio between the amount of integrin α6 colocalized with integrin α3 and the total integrin α6 expression. Error bars correspond to the standard error of the means. Two-sided  t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)(f)(g)The next feature characterized the spatial distribution of integrinα6 within the acinar structure. It was computed as the ratio between the amount of the green fluorophores located in concentric circles that were centered at the centroid of the acinar structure and the total amount of green fluorophores in the acinar structure. The radii of the concentric circles were increased by one-tenth of the radius of the circle that circumscribes the acinar structure at each step as illustrated in Figure 5(g). Cell lines that have expressions of integrin α6 near the centroid of the acinar structure show relatively early rise in the cumulative densities than the others. It is observed that integrin α6 is localized near the basal membrane in the three lower-grade cell lines, and near the centroids of the acini in the three advanced grade cell lines as shown in Figure 5(b). Since the cumulative densities near the acinus center and the basal membrane are similar for all the cell lines, we decide to include the cumulative densities corresponding to 30 to 70% of the radius of the circle that circumscribes the acinus in our analysis.Another feature captured the amount of integrinα6 expressed along the basal cell membrane as illustrated in Figure 5(f). First, we took the difference between the acinar structure and its morphologically eroded version to obtain a binary contour mask. This mask was then overlaid with the corresponding slice of the binary green channel image to obtain the integrin α6 localized along the basal membrane of the acinar structure. Next, we superposed the resulting image with rays that initiate at the centroid of the acinus oriented at angles that varied from 1° to 360° with 1° steps scanning a complete circle. Finally, the total number of times that these rays intersected with the green markers was counted and normalized between 0 and 1 to obtain the continuity of integrin α6 along the acinar basal side. 
Values close to 1 correspond to intact basal sides and, thus, indicate high cellular polarity. As shown in Figure 5(c), 10A and AT cell lines have the significantly more continuous expression of integrin α6 along the basal side of the cells than the four malignant cell lines. We note that KCL cell line exhibits a statistically significant higher localization of integrin α6 at the basal side than the three more tumorigenic cell lines.The final feature in this category measured the ratio of the amount of integrinα6 colocalized with integrin α3 to the total expression of integrin α6 to determine the amount basal membrane protein overlapped with the lateral membrane protein. This feature gets higher values when there is higher internal expression of integrin α6 or higher basal membrane localization of integrin α3. As plotted in Figure 5(d), 10A and DCIS exhibit statistically significant less colocalization of integrin α6 with α3 than the other four cell lines. ## 2.4.3. Features Capturing the Lateral Protein Integrinα3 Density The final subset of features captured the density of lateral membrane protein integrinα3 within the acinar structure. The following features measured the expression and localization of lateral membrane protein and adhesion molecule integrin α3. The first feature determined the amount integrin α3 expressed along the basal cell membrane using the same method described previously. We anticipate observing less amount of integrin α3 expressed along the basal cell membrane in the malignant grades of cancer than the nonmalignant grade.Next, we captured the overall expression of integrinα3 within the acinar structure as illustrated in Figure 6(e) where the total amount of integrin α3 expressed in the acinar structure was divided by the area of the acinar structure. It is seen in Figure 6(b) that this feature monotonically decreases from the nontumorigenic 10A to noninvasive DCIS acinar structures. The invasive carcinomas CA1H and CA1A show significantly higher expressions compared to the DCIS cells.Features capturing the level of cell polarity based on the lateral membrane protein integrinα3 density. (a) Shows the continuity of the integrin α6 along the basal membrane, (b) shows the total integrin α3 density that is computed as illustrated in (e) where the total amount of integrin α3 within the acinar structure is divided by area of the acinar structure, (c) shows the ratio between the amount of integrin α3 colocalized with integrin α6 and the total integrin α3 expression, and d shows the density of integrin α3 in the exterior of the hollow lumen. Error bars correspond to the standard error of the means. Two-sided t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)We then measured the ratio between the amount of integrinα3 colocalized with the integrin α6 and the total amount of integrin α3 within the acinar structure. As expected nonmalignant cell lines exhibit significantly less colocalization of integrin α3 with integrin α6 than the malignant cell lines as shown in Figure 6(c).Final feature in this subset measured the ratio between the amount of integrinα3 localized between the hollow center and basal membrane side of the acinar structure to the total integrin α3 expression. This feature, thus, quantified the density of α3 integrin along the lateral membrane of the cells. 
Since integrin α3 localization is not confined to cell-to-cell lateral membranes in malignant tumors, expressions of this protein are expected within the hollow lumen as well as the cell-to-extracellular matrix border. ## 2.5. Automated Grading of Cancer Using Supervised Machine Learning We used supervised machine learning [28] to grade the acinar structures into nonmalignant, noninvasive carcinoma, and invasive carcinoma forms of breast cancer using the features defined previously. In supervised learning, the data are first divided into training and test sets. The classifier is trained with the labeled training data and the classes of the test data are then predicted using the resulting classifier. We tested the grading performance of five supervised learning algorithms. Linear discriminant analysis fits a multivariate normal density to each of the training class assuming equal covariance matrices for each class [29]. It then separates the classes with a hyperplane that is established by seeking the projection that maximizes the sum of intraclass distances and minimize the sum of interclass distances. Quadratic discriminant analysis is similar to the linear discriminant analysis with the distinction that covariance matrix for each class is estimated separately [29]. Naïve Bayes classifier assumes that the classes are arising from independent distributions; hence, a diagonal covariance matrix is assumed. K-nearest neighbor classifier finds the K closest training data to the test data based on the Euclidean distance and classifies the sample using the majority voting of the nearest points. Support vector machines classifier maps the training data into a higher dimensional space with a kernel function where the data become linearly separable [30]. A separating maximum-margin hyperplane is established to separate the different class data with minimal misclassification via quadratic optimization [30]. The test data are classified by determining the side of the hyperplane they lie on in the kernel-mapped space. In order to extend SVM for three-class classification, we employed the one-against-one approach [31] where three two-class SVM classifiers were established for each pair of classes in the training data set. Each sample in the test data was assigned to a class by these classifiers and the class with the majority vote was chosen as the final result. If there is equal voting for the three classes, we chose the class that has the largest margin from the separating hyperplane.In order to obtain unbiased performance estimates, 10-fold cross-validation was performed. The feature set was first randomly divided into 10 disjoint partitions of equal size. For each partition, a classifier was trained with the remaining nine partitions and then tested on the retained partition. The results for each partition were then combined to find the overall grading accuracy. In order to reduce the scale differences within the features, the data were normalized so that the features had zero mean and unit variance across the samples. We performed a parametric search to determine the number of neighbors in the nearest neighbor identification. We tested between 8 and 15 nearest neighbors and determined that identifying 12 nearest neighbors to a test point achieved the highest grading accuracy. For the SVM classifiers, we used radial basis function, also referred to as Gaussian kernel, in the form ofK(xi,xj)=exp(-∥xi-xj∥2/2σ2) that mapped the data into an infinite dimensional Hilbert space [30]. 
We performed a parameter to search to identify σ that achieves the highest grading accuracy. We sought σ in the set of candidate values that varied from 1.0 to 2.5 with 0.1 steps and determined that σ being equals to 2.0 achieved the best performance in the grading of the acinar structures. Table 1 shows the performance of the five supervised learning algorithms. It is clear that SVM-based classifier achieves the highest overall accuracy in the grading of the acinar structures. This is not unexpected as SVM classifiers are known to be highly successful in biological applications [32].Table 1 Overall grading accuracies for the supervised machine learning techniques used in our analysis. SVM classifiers clearly achieve significantly higher grading accuracy than the other methods. The data set includes 400 acinar structures. Learning methodOverall grading accuracy (%)Linear discriminant analysis80.75Quadratic discriminant analysis80.00Naïve Bayes69.75K-nearest neighbors79.50Support vector machines89.00 ### 2.5.1. Discriminative Influence of the Proposed Features After identifying SVM-based classifier as the most accurate grading method for our data set, we performed feature selection to determine the discriminative capabilities of the features to characterize the data. A good feature selection algorithm identifies the features that are consistent within the class and exhibit differences between the classes. Fisher score (F-score) is documented to be a powerful feature selection tool [33]. For a dataset xi with two classes, denote the instances in the first class with xi(+) and instances in the other class with xi(-). The Fisher score of the ith feature is then given by F(i)=((xι̅(+)-xι̅)2+(xι̅(-)-xι̅)2)/(σx(+)2+σx(-)2), where xι̅ is the mean value of the ith features, xι̅(+) and xι̅(-) are the mean values over the positive and negative instances, respectively, and σx(+)2 and σx(-)2are the variances over the positive and negative instances, respectively, for the ith feature. In order to extend this method for a feature set with three classes, we computed the F-score for each feature in each pair of classes and then took the average of the three possible combinations. Larger values of F-score indicate stronger discriminative influence; therefore, after obtaining the F-scores for all the features, we ranked them in descending order.Table2 shows the average F-scores of the features and their corresponding discriminative rank. It is seen that continuity of integrin α6 along the basal membrane is the most discriminative feature to describe the data. Discriminative influence of the ratio between the hollow lumen and acinar structure area is also high. These two features are important as they are strong indicators of the level of cell polarity within the acinar structure and can be considered as measures of duct forming tumors as used in the commonly used Scarff-Bloom-Richardson system. Some of the other most discriminative features quantify the differences that are difficult to assess by visual inspection such as the colocalization between the integrin α3 and integrin α3 and internal densities of the integrin subunits. We note that these features constitute a subset of the novel features introduced in this paper and, therefore, this analysis particularly highlights the importance of the proposed features and the methodology.Table 2 Discriminative influence of the proposed features based on the F-score values. 
Feature labelF-ScoreRankNumber of nuclei within the acinar structure0.054714Elongation of the acinar structure0.11226Ratio between the hollow lumen and acinar structure areas0.16024Internal integrinα6 Density0.12055Density of integrinα6 inside the 30% circle0.062011Density of integrinα6 inside the 40% circle0.061112Density of integrinα6 inside the 50% circle0.08859Density of integrinα6 inside the 60% circle0.09538Density of integrinα6 inside the 70% circle0.059813Continuity of integrinα6 along the basal membrane0.57251Amount of integrinα6 colocalized with integrin α30.033715Continuity of integrinα3 along the basal membrane0.10467Total integrinα3 density0.26513Amount of integrinα3 colocalized with integrin α60.35262Density of integrinα3 in the exterior of the hollow lumen0.071610We performed cancer grading using subsets of the resulting most discriminative features, where at each stage we increased the number of most discriminative features in the training feature set. Figure7 shows the overall grading accuracy with respect to the number of most discriminative features selected in the grading. It is seen that the highest grading accuracy 89.0% is achieved when all of the features are used to train the classifiers. Considering that the most grading systems are typically based on the assessment of a limited number of features, we also investigate the grading performance of our methodology in a similar setting. When only the first five most discriminative features are used for grading, we achieve 82.0% overall grading accuracy. Though relatively worse than the highest grading accuracy, this constitutes a highly promising setting considering the limited number of features.Figure 7 Overall grading accuracy with respect to the number of most discriminative features selected in the grading. Highest overall grading accuracy is achieved when the whole feature set is considered. ## 2.5.1. Discriminative Influence of the Proposed Features After identifying SVM-based classifier as the most accurate grading method for our data set, we performed feature selection to determine the discriminative capabilities of the features to characterize the data. A good feature selection algorithm identifies the features that are consistent within the class and exhibit differences between the classes. Fisher score (F-score) is documented to be a powerful feature selection tool [33]. For a dataset xi with two classes, denote the instances in the first class with xi(+) and instances in the other class with xi(-). The Fisher score of the ith feature is then given by F(i)=((xι̅(+)-xι̅)2+(xι̅(-)-xι̅)2)/(σx(+)2+σx(-)2), where xι̅ is the mean value of the ith features, xι̅(+) and xι̅(-) are the mean values over the positive and negative instances, respectively, and σx(+)2 and σx(-)2are the variances over the positive and negative instances, respectively, for the ith feature. In order to extend this method for a feature set with three classes, we computed the F-score for each feature in each pair of classes and then took the average of the three possible combinations. Larger values of F-score indicate stronger discriminative influence; therefore, after obtaining the F-scores for all the features, we ranked them in descending order.Table2 shows the average F-scores of the features and their corresponding discriminative rank. It is seen that continuity of integrin α6 along the basal membrane is the most discriminative feature to describe the data. 
### 2.6. Tissue Sections

Breast tissue from patients with invasive carcinoma, ductal carcinoma in situ, and from healthy individuals was obtained from ProteoGenix (Culver City, CA) and stored at −80°C. Portions of each tissue block were embedded in O.C.T. Compound (Sakura Finetek USA, Torrance, CA) and sections 20 μm thick were cut using a Microm HM505E cryostat. Tissue sections were adhered to Superfrost Plus Gold microscope slides (Fisher Scientific, Morris Plains, NJ) and vapor fixed using 2-3 Kimwipes (Kimberly-Clark Worldwide, Roswell, GA) soaked in 4% paraformaldehyde in a small chamber at −20°C for 30 minutes prior to immunohistochemistry.
Tissue sections were encircled with an ImmEdge Pen (Vector Labs, Burlingame, CA) and immunohistochemistry and confocal microscopy were performed as described for the in vitro samples in Section 2.2. An Alexa488-conjugated goat antifluorescein secondary antibody (Invitrogen, Carlsbad, CA) was included at a 1 : 300 dilution in the overnight secondary incubation.

## 3. Results and Discussion

A workflow diagram of the automated grading of breast cancer is shown in Figure 8. First, 3D fluorescent confocal images of acinar structures were collected after 14 days within an in vitro culture system as described in Sections 2.1 and 2.2. A total of 30 images were captured (5 images for each cell line), and a total of 400 acinar structures were identified in these images using the image segmentation method described in Section 2.3. For each acinar structure, we extracted the morphological features described in Section 2.4. These features exhibit a high level of statistical significance across the nonmalignant, noninvasive carcinoma, and invasive carcinoma grades of breast cancer. Using the resulting unique feature profiles, we then trained the SVM-based classifiers as described in Section 2.5 for the automated grading of acini and tested them on the data.

Figure 8 A flow chart of the proposed quantitative analysis approach for measuring morphology-based features of acini. First, fluorescently labeled images of the cultures are captured by 3D confocal microscopy. Next, individual acinar structures in these images are segmented and labeled. We then extract the proposed features based on the acini morphology that lead to statistically significant unique feature profiles between functionally different stages of cancer. Finally, automated grading of cancer is achieved by supervised machine learning.

An overall grading accuracy of 89.0% is achieved when the whole feature set is used in the grading. As shown in the first half of Table 3, the nonmalignant 10A and AT cell lines are graded with 97.0% and 73.5% accuracy, respectively. Due to the strong resemblance between the AT and KCL cell lines, a portion of the AT acinar structures is graded as noninvasive carcinoma. Acinar structures in the KCL cell line are correctly graded as noninvasive carcinoma 72.1% of the time and as nonmalignant 11.1% of the time, which should be considered a success, as the KCL line falls between the advanced early stages of cancer and noninvasive carcinoma. We observe notably high grading accuracies in the remaining three cell lines. Noninvasive DCIS acinar structures are graded with 95.0% accuracy, and the acinar structures in the invasive cell lines CA1H and CA1A are graded with 93.1% and 93.5% accuracy, respectively. We note that neither the DCIS nor the invasive cell line acinar structures are graded as nonmalignant. When only the five most discriminative features are used, we achieve 82.0% overall accuracy in the grading. As shown in the second half of Table 3, in this case the AT acinar structures are graded with higher success. However, the grading accuracy of the noninvasive KCL and invasive CA1H acinar structures decreases significantly. It is not unexpected that some of the features excluded by the feature selection capture significant characteristics of the acini morphology. The following discussion of the proposed features helps us understand how they relate to the underlying biological implications.
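Table 3, shown after this sketch, reports for each cell line the percentage of its acinar structures assigned to each grade. The following hedged sketch shows how such a row-normalized table could be derived from out-of-fold predictions; the `lines` array and the placeholder data are assumptions for illustration only.

```python
# Hypothetical sketch of deriving per-cell-line grading percentages (cf. Table 3)
# from out-of-fold predictions; data and cell-line labels are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 15))                    # placeholder feature matrix
y = rng.integers(0, 3, size=400)                  # placeholder grade labels
lines = rng.choice(["10A", "AT", "KCL", "DCIS", "CA1H", "CA1A"], size=400)

clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * 2.0**2))
pred = cross_val_predict(clf, X, y, cv=10)        # 10-fold out-of-fold predictions

grades = ["Nonmalignant", "Noninvasive carcinoma", "Invasive carcinoma"]
for line in ["10A", "AT", "KCL", "DCIS", "CA1H", "CA1A"]:
    sel = lines == line
    row = 100.0 * np.bincount(pred[sel], minlength=3) / sel.sum()
    print(line, " | ".join(f"{g}: {p:5.1f}%" for g, p in zip(grades, row)))
```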
Table 3 Performance of the SVM-based classifier with the proposed quantitative features. Acinar structures are graded as nonmalignant as trained from the 10A and AT samples, noninvasive carcinoma as trained from the KCL and DCIS samples, and invasive carcinoma as trained from the CA1H and CA1A samples, using all of the proposed features as well as the five most discriminative features: continuity of integrin α6 along the basal membrane, colocalization of integrin α3 with integrin α6, integrin α3 density, ratio between hollow lumen and acinar structure areas, and integrin α6 density. The data set includes 400 acinar structures: 99 10A, 49 10AT, 81 KCL, 80 DCIS, 29 CA1H, and 62 CA1A.

Grading results utilizing the overall feature set (%):

| Grading category | Cell line | Nonmalignant | Noninvasive carcinoma | Invasive carcinoma |
| --- | --- | --- | --- | --- |
| Nonmalignant | 10A | 97.0 | 2.0 | 1.0 |
| Nonmalignant | AT | 73.5 | 24.5 | 2.0 |
| Noninvasive carcinoma | KCL | 11.1 | 72.1 | 14.8 |
| Noninvasive carcinoma | DCIS | 0.0 | 95.0 | 5.0 |
| Invasive carcinoma | CA1H | 0.0 | 6.9 | 93.1 |
| Invasive carcinoma | CA1A | 0.0 | 6.5 | 93.5 |

Grading results utilizing the five most discriminative features (%):

| Grading category | Cell line | Nonmalignant | Noninvasive carcinoma | Invasive carcinoma |
| --- | --- | --- | --- | --- |
| Nonmalignant | 10A | 93.0 | 3.0 | 4.0 |
| Nonmalignant | AT | 85.7 | 10.2 | 4.1 |
| Noninvasive carcinoma | KCL | 18.5 | 53.1 | 28.4 |
| Noninvasive carcinoma | DCIS | 0.0 | 93.8 | 6.2 |
| Invasive carcinoma | CA1H | 0.0 | 27.6 | 72.4 |
| Invasive carcinoma | CA1A | 0.0 | 8.1 | 91.9 |

Changes in the hollow lumen size and acini shape are visually observed and quantified across multiple cancer stages as described in Section 2.4.1. While the average number of cells per acinus is a concrete measurement and could be determined manually, it is useful and more practical to utilize automated techniques when analyzing large volumes of image data. With this feature we clearly determine that the two invasive cell lines CA1H and CA1A have more nuclei in the acinar structures than the other, less malignant cell lines. This could arise from several biological factors. A simple reason is that large acinar structures are comprised of more cells. Alternatively, this could arise from a higher density of cells within the acinus due to the loss of the hollow lumen. Both of these explanations likely contribute to the resulting feature, making it challenging to determine the exact cause. Our next feature, the ratio between the areas of the hollow lumen and the acinar structure, can also help explain the general trend in the number of nuclei per acinus along the metastatic cascade. Larger hollow lumens in the lower cancer grades possibly limit the space in which the cells can proliferate; thus a smaller number of nuclei per acinus is observed in the lower-grade cell lines.

The hollow lumen to acinar structure area ratio captures a key change in acinar structure that is challenging to assess by visual inspection. While the presence or absence of a hollow lumen is observable by eye, evaluating the relative size of the lumen compared to the overall acinar structure is difficult and introduces subjectivity. This feature enabled us to quantitatively characterize this relationship and helps identify the significant changes between cancer grades. The loss of hollow lumens is associated with increasing cell division and cell survival within the acinar structure. In native breast tissue, a decrease in the size of the hollow lumen arises from increasing metastatic capability. This is consistent in our in vitro system and the resulting quantitative features. However, despite the general decrease in the hollow lumen to acinar structure area ratio with the progression of cancer, the precancerous AT cell line exhibits an increase compared to the 10A samples.
While it is unclear why this is the case, it could potentially be due to the cells becoming flatter rather than columnar in shape, creating a larger hollow lumen. In addition to the changes in the hollow lumen morphology, the acinar structure elongation quantifies the subtle changes in the roundness of acini that are also difficult to assess by eye. As shown by our results, we capture statistically significant differences in acini elongation caused by increasing metastatic capability, reflecting the progressive loss of structural integrity in acini as breast cancer develops. Replacing qualitative observations of the acini and hollow lumen morphology with these robust quantitative measurements enabled us to characterize the data more objectively.

The density of the basal membrane protein integrin α6 is quantified through multiple features as described in Section 2.4.2 and found to reflect the metastatic potential of acinar structures. The quantitative and statistically significant changes identified in the analysis of the integrin α6 distribution strongly correlate with our expectations based on visual inspection of the acinar structures. The 10A and AT acini exhibited the highest amount of integrin α6 localization along the basal membrane, reflecting the presence of an intact extracellular matrix basement membrane on the basal surface of these cells. In addition, the acinar structures of the more malignant cell lines showed a dramatic loss in the localization of integrin α6 along the basal membrane of the cells and an increase in the internal localization of this protein, reflecting loss of the basement membrane. Features extracted from CA1A acinar structures are significantly different from some of the features extracted from the CA1H and DCIS acinar structures, which can be explained by the degree of malignancy of this cell line. CA1A cells exhibit the highest malignancy of all the cell lines, with an advanced ability to metastasize. When cells are able to metastasize, they may form secondary tumors called macrometastases. At this stage, these cells are able to adapt to their new environment in order to improve their chances of survival. It is possible that the CA1A cells are behaving as a secondary tumor when placed in a basement membrane protein-rich environment, causing them to acquire some epithelial-like phenotypes, such as increased basal membrane localization of integrin α6. Thus, these features thoroughly capture the distribution of integrin α6 throughout the acinar structure, as visualized and as functionally expected across the different grades of cancer.

The features based on the density of integrin α3, as described in Section 2.4.3, yield both expected and unexpected results. As reported in a previous study [34], we expected to observe different levels of total integrin α3 expression across the cell lines of varying metastatic potential. Our results confirm this expectation: total integrin α3 expression decreases between the nonmalignant and noninvasive carcinomas, and higher expression is observed in the invasive carcinoma cell lines (compared to the noninvasive KCL and DCIS cell line features). These features support the idea that integrin α3 switches its function from a cell-cell adhesion role to a cell-ECM adhesion role with increasing metastatic ability.
As shown in Figure 6(a), the basal continuity of integrin α3 is the lowest for the DCIS cell line, as expected due to the low density of integrin α3 in this cell line. The continuity at the basal membrane displayed increased localization in the invasive carcinoma cell lines, suggesting a cell-ECM adhesion role. The measurement of integrin α3 density in the exterior of the hollow lumen, as shown in Figure 6(d), determined that the progression of cancer yields increased localization of this protein within the hollow lumen. The ratio was at similar levels for the 10A, AT, and KCL cell lines; however, it increased from the DCIS through the CA1A cells. This feature was developed to quantify the functional switch of integrin α3 from cell-cell adhesion to cell-ECM adhesion with progressing metastatic state. The increase in this feature suggests a switch from cell-cell adhesion to cell-ECM adhesion; however, this change may be influenced by the reduction in the size of the hollow lumen and increased integrin α3 expression between cells throughout the acinar structure, thus hindering a robust characterization of the integrin α3 lateral localization.

The amount of one integrin subunit that colocalizes with the other varies across the cell lines. Basal membrane integrin α6 colocalizes with integrin α3 more in the tumorigenic cell lines (except DCIS) than in the nontumorigenic cell line 10A. This could be due to the increasing internal expression of integrin α6, indicating a loss in cell polarity. Another reason could be integrin α3 changing localization from the lateral membrane to the basal membrane as it switches functions from cell-cell adhesion to cell-ECM adhesion. It could also be a combination of the two, since loss in cell polarity results in irregular localization of the proteins; thus both integrin subunits are expressed throughout the cell membrane. Interestingly, DCIS acini exhibit the lowest colocalization of integrin α6 with integrin α3. This could be due to the low expression of integrin α3 in DCIS cells, as shown in Figure 6(b), which yields higher amounts of isolated integrin α6 subunits in the acinar structures.

The alternative comparison is the amount of integrin α3 that colocalizes with integrin α6. We observe that the noninvasive and the invasive cell lines exhibit higher colocalization than the nonmalignant cell lines. Interestingly, the DCIS cell line has the highest amount of colocalization. This confirms our suggestion that low expression of integrin α3 is the cause of the lowest colocalization of integrin α6 with integrin α3 in this cell line. In this case, almost all of the integrin α3 is colocalized with integrin α6 due to its low expression levels and the loss of cell polarity in the DCIS acinar structures. The nonmalignant cell lines exhibit higher levels of colocalization of integrin α6 with integrin α3 than of integrin α3 with integrin α6. By comparing the two colocalization features, we infer that this is likely due to integrin α3 localizing at both the basal and lateral membranes, while integrin α6 localizes primarily at the basal membrane alone. This is reflected in 80% colocalization of integrin α6 with integrin α3 and 30% colocalization of integrin α3 with integrin α6, as shown in Figure 6(c). On the other hand, the invasive carcinoma cell lines exhibit approximately 70% colocalization of the basal membrane integrin with the lateral membrane integrin and approximately 50% colocalization of the lateral membrane integrin with the basal membrane integrin.
These approximately equal values suggest that both integrin subunits have lost their specific localization and are expressed throughout the cell membrane. This is further supported by the higher internal densities of both integrin α6 and integrin α3.

The proposed features display statistical significance across the cell lines with varying metastatic abilities. This indicates that the proposed features capture and quantify biologically relevant morphological changes. These features have the potential for studying structure-function relationships in a controlled and quantitative system. This application is useful in identifying underlying mechanisms of cancer and the role of specific protein functions in acinar structures. In addition, this approach could be used in preliminary studies to test the effects of potential chemotherapy targets and combinations of drugs on the structures of nontumorigenic, precancerous, noninvasive carcinoma, and invasive carcinoma cells. Also, as medical imaging technology advances, quantitative and computerized diagnostic tools and systems hold great potential to aid doctors in diagnosis, treatment selection, and prognosis. Finally, these features could be applied to current histology samples, when tagged with fluorescent antibodies, to quantify complex structural features.

In order to demonstrate the relevance of our approach to tissue samples obtained from human patients, we performed immunohistochemistry on frozen sections as described in Section 2.6, following the same procedure used for the cell culture experiments as outlined in Sections 2.2 to 2.5. Images of nontumorigenic, precancerous, noninvasive, and invasive mammary gland tissue were collected using confocal microscopy. The random orientation of glands in sectioned material presented a challenge, as the acinar structures in in vitro cultures are typically spherical in shape. In order to eliminate longitudinal or oblique planes of section from the present analysis, single optical sections of glands oriented roughly in cross section that resemble the acinar formations in the in vitro cultures were cropped manually. Figure 9 shows examples of glands analyzed in our study. Although nonspecific Alexa568 secondary antibody labeling of luminal contents was observed in some cases, visual inspection of these images indicates that both integrin α6 and integrin α3 exhibit staining patterns similar to those of the in vitro acinar structures. In our preliminary study, we identified 12 glands in the 9 images analyzed. For each gland, we extracted the proposed features and performed grading using the SVM-based classifier trained with the in vitro feature set. The in vivo test set was graded accurately except for one nontumorigenic gland, which was graded as noninvasive carcinoma. Nevertheless, we note that developing a computer-aided grading system for in vivo tissue samples is beyond the scope of this paper and is left as future work.

Figure 9 Tissue samples from human patients exhibit variations in acini morphology along the metastatic cascade similar to the 3D cultures. In the nontumorigenic glands from a healthy patient shown in (a), integrin α3 can be observed along the lateral surface of the epithelia, while integrin α6 shows strong staining across the basal surface. The precancerous gland shown in (b) exhibits a clear reduction in the amount of integrin α3 expression along the lateral membranes of the epithelia, while basal integrin α6 expression remains strong.
The noninvasive carcinoma gland shown in (c) can be easily identified by the loss of a proper hollow lumen and stratification of the epithelial layer. In these large regions of tissue, integrin α3 is absent from cells in the interior of the gland; however, along with integrin α6, it can be observed along the basal surfaces of cells in contact with the basal lamina. The invasive carcinoma gland shown in (d) includes cords or clusters of cells exhibiting faint labeling of both proteins distributed along the cell membrane compared to background and control tissue. Scale bars represent 20 μm in (a) and (c), and 10 μm in (b) and (d).

## 4. Conclusion

In this paper, we present a method that enables quantitative characterization of 3D breast culture acini with varying metastatic potentials. Specifically, we propose statistically significant features based on acinar structure morphology that capture differences between grades of cancer that are difficult to assess under microscopic inspection. The experimental results demonstrate the efficacy of the proposed features in differentiating between the nonmalignant, noninvasive carcinoma, and invasive carcinoma grades of breast cancer with 89.0% accuracy. In addition, our preliminary studies indicate that our methodology can also be used for the grading of cancer in in vivo tissues, provided that the captured tissue samples include cross-sectional portions of the glands. Hence, our method demonstrates great promise for modeling morphology-function relationships within controlled 3D systems and holds potential as an automatic breast cancer prognostic tool for current histology samples.

---

*Source: 102036-2012-05-15.xml*
--- ## Abstract Prognosis of breast cancer is primarily predicted by the histological grading of the tumor, where pathologists manually evaluate microscopic characteristics of the tissue. This labor intensive process suffers from intra- and inter-observer variations; thus, computer-aided systems that accomplish this assessment automatically are in high demand. We address this by developing an image analysis framework for the automated grading of breast cancer inin vitro three-dimensional breast epithelial acini through the characterization of acinar structure morphology. A set of statistically significant features for the characterization of acini morphology are exploited for the automated grading of six (MCF10 series) cell line cultures mimicking three grades of breast cancer along the metastatic cascade. In addition to capturing both expected and visually differentiable changes, we quantify subtle differences that pose a challenge to assess through microscopic inspection. Our method achieves 89.0% accuracy in grading the acinar structures as nonmalignant, noninvasive carcinoma, and invasive carcinoma grades. We further demonstrate that the proposed methodology can be successfully applied for the grading of in vivo tissue samples albeit with additional constraints. These results indicate that the proposed features can be used to describe the relationship between the acini morphology and cellular function along the metastatic cascade. --- ## Body ## 1. Introduction Breast cancer is the second most common cancer in women and is also the second leading cause of cancer-related death in women [1]. In its most common form, the tumor arises from the epithelial cells in the breast tissue. Histological grading systems are commonly used to predict the prognosis of tumors. The most frequently used tumor grading system for breast cancer is the modified Scarrf-Bloom-Richardson method [2] where pathologists analyze the rate of cell division, percentage of tumor forming ducts, and the uniformity of cell nuclei to determine the cancer grade in H&E stained biopsies. While precancerous (or lower-grade) tumors tend to grow slowly and are less likely to spread, invasive (or higher-grade) tumors typically gain the ability to proliferate and spread rapidly. Subjectivity and variability of the results affect the accuracy of prognosis and subsequent patient treatment. A recent study indicates that the rate of misdiagnosis of breast cancer varies widely between clinicians and is nearly 40% in some cases [3]. Thus, there is an unmet need for robust methods that reduce the variability and subjectivity in the grading of breast tumors and lesions. Development of quantitative tools for image analysis and classification is rapidly expanding fields that constitute a great potential for improving diagnostic accuracy [4, 5].In this paper, a method for automated grading of breast cancer in three-dimensional (3D) epithelial cell cultures is presented.In vitro epithelial breast cells cultured in laminin rich extracellular matrix form acinar-like structures that both morphologically and structurally resemble in vivo acini of breast glands and lobules [6]. Therefore, these culture systems constitute suitable and controllable environments for breast cancer research [7–9].Figure1 shows a 3D view of a typical nonmalignant breast culture that comprises several acini that are surrounded by extracellular matrix (ECM) proteins. Figure 2 shows an enlarged view of a cross section from a typical nonmalignant acinus. 
Acinar structures in healthy/nonmalignant cultures include polarized luminal epithelial cells. (A polarized cell has specialized proteins localized to specific cell membranes such as the basal, lateral, or apical side) and a hollow lumen. The lateral membranes of neighboring cells are in close proximity due to cellular junctions and adhesion proteins. The basal side of cells contacts the surrounding ECM proteins, while the apical sides of cells face the hollow lumen. Malignant cancers result in loss of cell polarity that induces changes in the morphology of acinar structures. In this paper, we investigate morphological characteristics of mammary acinar structures in nonmalignant, noninvasive carcinoma, and invasive carcinoma cancer grades in six MCF10 series of cell lines grown in 3D cultures. We propose novel features to characterize these changes along the metastatic cascade and exploit them in a supervised machine learning setting for the automated grading of breast cancer. Proposed features capture not only similar factors as the Scarff-Bloom-Richardson grading system but also additional subvisual changes observed in breast cancer progression in a quantitative manner to reduce variability. As shown by the grading accuracies, the proposed features efficiently capture the differences caused by the metastatic progression of the cancer.Figure 1 Three-dimensional view of a typical nonmalignant cell line culture. Acinar structures are surrounded by the ECM proteins.Figure 2 Example of a typical nonmalignant acinar structure cross section. The acinar structure includes polarized epithelial cells and hollow lumen. Basal sides of the cells are surrounded by the ECM proteins and apical sides face the hollow lumen.Previous work on this problem includes examining the change in the morphological characteristics of nontumorigenic MCF10A epithelial acini over time and exploiting them to model the growth of culture over time. Chang et al. examined the elongation of the MCF10A acini at 6, 12, and 96 hours after a particular treatment [10]. In a more predictive setting, Rejniak et al. used the number of cells per acini, proliferation, and apoptosis rates to computationally model the MCF10A epithelial acini growth using fluid-dynamics-based elasticity models [11]. In addition to these features, Tang et al. utilized features like acinus volume, density, sphericity, and epithelial thickness to investigate the relationship between acinus morphology and apoptosis, proliferation, and polarization [12]. Specifically, they built a computational model that can predict the growth of acini over a 12-day period. In addition, graph theoretical tools [13–15] were exploited to highlight the structural organization of the cells within the malignant tissues. Our method is different than these characterization efforts in that the grading of cancer is achieved over a richer and more discriminative set of small-scale (local) morphological features that are statistically significant. In addition, the features proposed here closely mimic the features that current pathological grading systems utilize. The presented work builds upon and extends our prior work in this area that introduced the underlying framework [16]. In this work, we provide extended details of our methodology and also present analysis that tests the performance of different supervised machine learning methods and investigates the discriminative influence of the proposed features. 
Furthermore, the overall grading accuracy is significantly improved by eliminating the acini that are in the preliminary stages of their formations from our analysis. Finally, we perform a preliminary study on the grading of in vivo tissue section using our framework and demonstrate that the proposed features can also be used on in vivo tissue slides albeit with additional constraints on the preparation of the tissue for our analysis. ## 2. Materials and Methods ### 2.1. Cell Culture Cells were grown on tissue culture-treated plastic T75 flasks and incubated at 37°C in a humidified atmosphere containing 95% air and 5% carbon dioxide in the manufacturer suggested media. Once 80% confluent, cells were split using 0.25% trypsin with 0.2 g/L EDTA and seeded into a new flask or under experimental conditions. Six MCF10 series cell lines that represent three grades of breast cancer along the metastatic cascade were grown. The cell lines used in this experimentation were MCF10A (10A), MCF10AT1 (AT), MCF10AT1K.cl2 (KCL), MCF10DCIS.com (DCIS), MCF10CA1h.cl2 (CA1H), and MCF10CA1a.cl1 (CA1A). The DCIS, CA1H, and CA1A cell lines were grown in DMEM/F12 with 5% horse serum and 1% fungizon/penicillin/streptomycin. 10A, AT, and KCL cells were grown in the same base media with additional factors including 20 ng/mL epidermal growth factor, 0.5 mg/mL hydrocortisone, 100 ng/mL cholera toxin, and 10μg/mL of insulin. The 10A cell line was obtained from the American Type Culture Collection (ATCC), AT, KCL, CA1A, and CA1H cell lines were obtained from the Barbara Ann Karmanos Cancer Institute at Wayne State University, and the DCIS cell line was purchased from Asterand Inc.The advantages of the MCF10 series of cell lines include their derivation from a single biopsy and subsequent mutations forming cell lines of ranging metastatic ability. These cell lines were acquired from a tissue sample diagnosed as noncancerous fibrocystic disease [17, 18]. The biopsy cells, MCF-10M, were cultured and spontaneously gave origin to two immortal sublines. One of these cell lines was named 10A for its adherent ability. 10A cells were nontumorigenic in mice xenografts; however, the first derivative cell line was transfected with oncogene T-24 H-Ras to promote expression of a constitutively active form of Ras resulting in precancerous lesion formations. These cells were then serially passaged for six months and the derived cell line was named MCF10AneoT [17]. AT cells were derived from a 100-day-old MCF10AneoT lesion that formed a squamous carcinoma but failed to produce carcinomas when injected back into mice [19]. KCL cell line was obtained from a 367 day tumor xenograft of the AT cell line. KCL cells were injected into a mouse, where a tumor was formed and isolated after 292 days. Isolated cells derived from this tumor were cultured and yielded the DCIS cell line, which formed ductal carcinoma in situ tumors when injected into mice [18]. Additional cells from the tumors which were derived from the KCL cell line were implanted within a mouse and yielded the MCF10CA cell lines; two of these were included in this study: the CA1H and CA1A [18]. CA1H cells were found to develop invasive carcinoma in mice while the CA1A cells were of higher-grade malignancy and able to metastasize to the lungs. 
After evaluating the metastatic potentials of these six cell lines, we considered the 10A and AT cell lines as nonmalignant, KCL and DCIS cell lines as noninvasive carcinoma, and CA1H and CA1A cell lines as invasive carcinoma that constituted the three grades of breast cancer we considered in this study. #### 2.1.1. 3D Culture System Cells were suspended at a concentration of one million cells per milliliter in Matrigel (laminin rich extra cellular matrix) on ice [20]. The gel-cell solution was seeded at 30 uL per glass bottom 96 well for 14 days in their respective media. The Matrigel-cell solution was allowed to solidify for 30 minutes at 37°C before the media was added and was changed every 2-3 days thereafter. ### 2.2. Immunocytochemistry Staining for Imaging Following 14 days in culture, samples were washed once with phosphate buffer solution (PBS) and then fixed with 3% paraformaldehyde at room temperature for 30 minutes. Next, samples were rinsed with PBS and treated with cell blocking solution (PBS with 1% bovine serum albumin (BSA) and 0.25% Tween20) for one hour at 4°C. Samples were then permeabilized with 0.25% TritonX100-PBS solution for ten minutes.After the permabilization, samples were washed with PBS and then treated with the primary antibodies at their determined working dilution in PBS at 4°C overnight. Next, the samples were washed with PBS three times for 30 minutes at room temperature. The secondary antibodies were added to the samples in PBS at 4°C overnight. Finally, samples were treated with DAPI for thirty minutes followed by three 30-minute PBS washes and stored at 4°C in PBS prior to imaging. #### 2.2.1. Antibodies and Dyes Integrinα3 antibody (mouse monoclonal) (Abcam: ab11767) was used at a dilution of 1 : 40 for 3D confocal fluorescent imaging. Secondary antibody, Alexa Fluor 568 goat anti-mouse IgG highly cross absorbed (Invitrogen A-11031) at a dilution of 1 : 200, was used to visualize the localization of integrin α3. Integrin α6 FITC-conjugated antibody (Rat monoclonal) (Abcam: ab21259) was used at a dilution of 1 : 25 for 3D confocal fluorescent imaging during the secondary antibody step previously described. Nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI; Molecular Probes) for 30 minutes at room temperature. 3D volumes were obtained using a Zeiss LSM510 META confocal microscope with a 40x water immersion objective as z-stacks. Proper color filters were used to capture the red, green, and blue fluorescent signals. Images were captured using a multitrack setting where fluorescent channels were acquired sequentially to avoid fluorescence crosstalk. Thickness of the z-stacks ranged from 10 to 40 μm (1 μm slices) with an initial depth of at least 10 μM. Slices had 512 × 512 pixels (320 μm×320 μm) cross-section area. In order to include larger volumes of the cultures in our analysis, we captured four tiles (in 2 × 2 formation) with approximately 20% overlap and stitched these tiles using the 3D image registration technique proposed by Preibisch et al. [21] implemented in ImageJ [22]. A total of five stitched images were captured for each of the six cell lines. ### 2.3. Segmentation of Acinar Structures Segmentation of the acinar structures was accomplished by the well-known watershed segmentation [23]. Watershed segmentation exploits the morphological characteristics of the regions of interest and is particularly useful for the segmentation of adjoined objects. The process starts with the identification of the acinar structures. 
In this study, cell nuclei were marked with blue fluorescent marker DAPI, cell-to-cell borders were identified by the red fluorescent marker that shows the localization of integrin α3, and basal sides of the cell membranes were identified by the green fluorescent marker that shows the localization of integrin α6. As these components are observed in the individual color channels of the captured images and, we used the combination of the three color channels to identify the acinar structures.First, the color channels of the image were individually binarized. Image values were separated into foreground and background classes, where the foreground class represented the stained components and the background class included the combination of the gel medium and extracellular proteins. We employed a local adaptation to Otsu’s well-known global thresholding algorithm [24] to binarize the color channels. In each slice along the depth direction, we divided the image into rectangular blocks along the horizontal and vertical directions and binarized them separately. This approach handles the spatial variations in the foreground-background contrast better than global thresholding. The noise produced due to the binarization of local regions that contain hardly any information was eliminated by using the edge-based noise elimination that cleaned the regions that did not contain any edge from the resulting binary image [25]. Next, the resulting binary color channels were superposed to obtain a single monochrome binary image by logical OR operation. Enclosing acinar structures were identified by applying morphological close operation followed by a morphological fill-hole operation [26] to the resulting binary image. Finally, 3D watershed segmentation was applied to label the individual acinar structures. For this purpose, we first obtained a topographic interpretation of the resulting binary image by taking its Euclidean distance transform where the shortest distance to the nearest background pixel for each pixel was measured [27]. The resulting transformed image was then inverted, while forcing the values for the background class pixels to -∞, to construct the catchment basins and the watershed segmentation method was finally applied to construct to watershed lines to divide these basins and identify the unique acinar structures. Acinar structures with less than four nuclei were considered to exhibit reduced ability to form polarized acinar structures and excluded from our analysis. Note that this elimination was not carried out in our preliminary work on this problem [16] that resulted in a poorer overall grading accuracy (79.0%) than what we achieve with the elimination (89.0%) as we will present in Section 3. From the 30 images we analyzed, 99 10A, 49 10AT, 81 KCL, 80 DCIS, 29 CA1H, and 62 CA1A acinar structures were identified using this segmentation method yielding a total 400 acinar structures. ### 2.4. Characterization of Acinar Structure Morphology Visual investigation of the acinar structures shown in Figure1 reveals differences in acini morphology and localization of integrin subunits that are highlighted in Figure 2. Acinar structures in nonmalignant cell lines are comprised of polarized cells that are layered around the hollow lumen that closely approximate the acini formations in mammary glands and lobules. On the other hand, the acinar structures in the tumorigenic cell lines consist of nonpolarized cells that form clusters of cells rather than explicit acini. 
As shown in Figures 3(a) and 3(b), nontumorigenic cell line 10A and precancerous cell line AT, respectively, exhibit polarized acinar structures that are characterized by the integrin α6 localization at the basal membrane of the cells, integrin α3 localization along the lateral cell membranes, and clear hollow lumen formations. Acinar structures in KCL and DCIS cell lines shown in Figures 3(c) and 3(d), respectively, exhibit significant changes in the integrin subunit densities and their colocalizations. While the basal and lateral membrane protein densities decrease, relative colocalizations of these proteins increase within the acinus. The acinar structures from the noninvasive carcinoma cell lines and acinar-like structures from the invasive carcinoma cell lines CA1H and CA1A shown in Figures 3(e) and 3(f), respectively, are more elongated and exhibit smaller hollow lumens than the acinar structures from the nonmalignant cell lines.MCF10 series cell lines exhibit variation in acinar structure morphology along the metastatic cascade after 14 days in 3D laminin rich environment culture systems. Cells were cultured in 3D Matrigel suspensions and stained with integrinα3 (red), integrin α6 (green), and DAPI (blue). Example image slices from the six cell lines are typical of those captured during the course of the experiments and are displayed from (a) to (f) as follows: 10A, AT, KCL, DCIS, CA1H, and CA1A. (a)(b)(c)(d)(e)(f)The observed variations in the morphology of acinar structures motivated the development of the features that characterize the level of cell polarity within the acinar structures. The features of interest primarily capture (i) the morphology of the acinar structure and hollow lumen, (ii) the basal, and (iii) the lateral protein densities within the acinar structure. These features were computed in each acinar structure for the slice that the acinar structure had the largest cross-section area along the depth direction (z-stack). #### 2.4.1. Features Capturing the Morphology of Acinus and Hollow Lumen The first subset of features we propose captured the shape of the acinar structures, number of nuclei that constitute the acinus, and the relative size of the hollow lumen. The segmentation of the cell nuclei was also accomplished by the watershed segmentation technique described previously using the blue (nuclei) channel image only. Figure4(a) shows the number of nuclei per acinar structure across the six cell lines. As expected the invasive carcinoma cell lines exhibit the largest number of nuclei per cell cluster compared to the other cell lines acini due to the unregulated division of cells.Features quantifying acinus and hollow lumens morphology are statistically significant between cell lines and states of health. (a) shows the number of nuclei per acinar structure, (b) shows the acinar structure elongation (roundness) that is measured as illustrated in (d) where the ratio between the minor and major axis of the ellipse fitted to the acinar structure is taken, and (c) shows the ratio between the hollow lumen and acinar structure areas that is computed as illustrated in (e). Error bars correspond to the standard error of the means. Two-sidedt-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)Acinar structures in the nonmalignant grades of cancer typically have symmetrical round shapes. 
Malignant cancer grades result in deformations in the acinar structures that cause the shapes to be more elongated. We measured the roundness of the acinar structures by taking the ratio between the minor and major axes of the ellipse fitted as illustrated in Figure4(d). This fitting was achieved by finding the ellipse that has the same normalized second central moments as the region. Roundness values close to 1 comes from rounder/more symmetrical acinar structures whereas values close to 0 indicate an elongated shape. As expected, noninvasive and invasive cell lines exhibit statistically significant decrease in acinar structure roundness compared to the nonmalignant cell lines as shown in Figure 4(b).The final feature in this subset captured the differences in the relative size of hollow lumen by computing the ratio between the areas of the hollow lumen and the acinar structure. Values closer to 1 indicate larger hollow lumens, and those to 0 indicate smaller hollow lumens. In order to compute the hollow lumen area, we assumed that the hollow lumen was circular and computed its radius as the Euclidean distance between the centroid of the acinar structure and the nearest nucleus to the centroid as illustrated in Figure4(e). The hollow lumen for the precancerous cell line AT is the largest among the six cell lines. Noninvasive and invasive cell line acinar structures exhibit statistically significant shrinkage in hollow lumen compared against the nonmalignant cell line acinar structures as shown in Figure 4(c). #### 2.4.2. Features Capturing the Basal Protein Integrinα6 Density The next subset of features analyzed and quantified the localization and structural relationships of the basal membrane-localized protein integrinα6. The first feature measured the ratio of integrin α6 expression within the acinar structure (excluding the expression along the basal membrane) to the acinar structure area as illustrated in Figure 5(e). As nonmalignant cancerous cells exhibit high cellular polarity, integrin α6 is localized to the basal membrane of these cells. Hence, a low density of integrin α6 is expected within the acinar structure. Reduced cellular polarity in the malignant grades of cancer enables greater internal expressions of integrin α6 within the acinar structure. DCIS, CA1H, and CA1A cell lines exhibited statistically significant increase in the internal expression of integrin α6 compared with the less malignant cell lines as shown in Figure 5(a).Features capturing the level of cell polarity based on the basal membrane protein integrinα6 density. (a) shows the internal integrin α6 density that is computed as illustrated in (e) where the amount of integrin α6 localized inside the acini is divided to the total amount of integrin α6, (b) shows the cumulative spatial distributions of integrin α6 that is computed as illustrated in (g) where the amount of integrin α6 localized within the concentric circles are divided by the total amount of integrin α6 within the acini, (c) shows the continuity of the integrin α6 along the basal membrane that is measured as illustrated in (f) where the number of times that rays initiate from the centroid oriented at from 1 to 360 degrees intersected with the green fluorophores along the basal membrane is counted and normalized between 0 and 1, and (d) shows the ratio between the amount of integrin α6 colocalized with integrin α3 and the total integrin α6 expression. Error bars correspond to the standard error of the means. 
Two-sided  t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)(f)(g)The next feature characterized the spatial distribution of integrinα6 within the acinar structure. It was computed as the ratio between the amount of the green fluorophores located in concentric circles that were centered at the centroid of the acinar structure and the total amount of green fluorophores in the acinar structure. The radii of the concentric circles were increased by one-tenth of the radius of the circle that circumscribes the acinar structure at each step as illustrated in Figure 5(g). Cell lines that have expressions of integrin α6 near the centroid of the acinar structure show relatively early rise in the cumulative densities than the others. It is observed that integrin α6 is localized near the basal membrane in the three lower-grade cell lines, and near the centroids of the acini in the three advanced grade cell lines as shown in Figure 5(b). Since the cumulative densities near the acinus center and the basal membrane are similar for all the cell lines, we decide to include the cumulative densities corresponding to 30 to 70% of the radius of the circle that circumscribes the acinus in our analysis.Another feature captured the amount of integrinα6 expressed along the basal cell membrane as illustrated in Figure 5(f). First, we took the difference between the acinar structure and its morphologically eroded version to obtain a binary contour mask. This mask was then overlaid with the corresponding slice of the binary green channel image to obtain the integrin α6 localized along the basal membrane of the acinar structure. Next, we superposed the resulting image with rays that initiate at the centroid of the acinus oriented at angles that varied from 1° to 360° with 1° steps scanning a complete circle. Finally, the total number of times that these rays intersected with the green markers was counted and normalized between 0 and 1 to obtain the continuity of integrin α6 along the acinar basal side. Values close to 1 correspond to intact basal sides and, thus, indicate high cellular polarity. As shown in Figure 5(c), 10A and AT cell lines have the significantly more continuous expression of integrin α6 along the basal side of the cells than the four malignant cell lines. We note that KCL cell line exhibits a statistically significant higher localization of integrin α6 at the basal side than the three more tumorigenic cell lines.The final feature in this category measured the ratio of the amount of integrinα6 colocalized with integrin α3 to the total expression of integrin α6 to determine the amount basal membrane protein overlapped with the lateral membrane protein. This feature gets higher values when there is higher internal expression of integrin α6 or higher basal membrane localization of integrin α3. As plotted in Figure 5(d), 10A and DCIS exhibit statistically significant less colocalization of integrin α6 with α3 than the other four cell lines. #### 2.4.3. Features Capturing the Lateral Protein Integrinα3 Density The final subset of features captured the density of lateral membrane protein integrinα3 within the acinar structure. The following features measured the expression and localization of lateral membrane protein and adhesion molecule integrin α3. The first feature determined the amount integrin α3 expressed along the basal cell membrane using the same method described previously. 
We anticipate observing less amount of integrin α3 expressed along the basal cell membrane in the malignant grades of cancer than the nonmalignant grade.Next, we captured the overall expression of integrinα3 within the acinar structure as illustrated in Figure 6(e) where the total amount of integrin α3 expressed in the acinar structure was divided by the area of the acinar structure. It is seen in Figure 6(b) that this feature monotonically decreases from the nontumorigenic 10A to noninvasive DCIS acinar structures. The invasive carcinomas CA1H and CA1A show significantly higher expressions compared to the DCIS cells.Features capturing the level of cell polarity based on the lateral membrane protein integrinα3 density. (a) Shows the continuity of the integrin α6 along the basal membrane, (b) shows the total integrin α3 density that is computed as illustrated in (e) where the total amount of integrin α3 within the acinar structure is divided by area of the acinar structure, (c) shows the ratio between the amount of integrin α3 colocalized with integrin α6 and the total integrin α3 expression, and d shows the density of integrin α3 in the exterior of the hollow lumen. Error bars correspond to the standard error of the means. Two-sided t-tests were performed and cell lines that exhibit statistically significance (P<0.05) are marked with asterisks (*). (a)(b)(c)(d)(e)We then measured the ratio between the amount of integrinα3 colocalized with the integrin α6 and the total amount of integrin α3 within the acinar structure. As expected nonmalignant cell lines exhibit significantly less colocalization of integrin α3 with integrin α6 than the malignant cell lines as shown in Figure 6(c).Final feature in this subset measured the ratio between the amount of integrinα3 localized between the hollow center and basal membrane side of the acinar structure to the total integrin α3 expression. This feature, thus, quantified the density of α3 integrin along the lateral membrane of the cells. Since integrin α3 localization is not confined to cell-to-cell lateral membranes in malignant tumors, expressions of this protein are expected within the hollow lumen as well as the cell-to-extracellular matrix border. ### 2.5. Automated Grading of Cancer Using Supervised Machine Learning We used supervised machine learning [28] to grade the acinar structures into nonmalignant, noninvasive carcinoma, and invasive carcinoma forms of breast cancer using the features defined previously. In supervised learning, the data are first divided into training and test sets. The classifier is trained with the labeled training data and the classes of the test data are then predicted using the resulting classifier. We tested the grading performance of five supervised learning algorithms. Linear discriminant analysis fits a multivariate normal density to each of the training class assuming equal covariance matrices for each class [29]. It then separates the classes with a hyperplane that is established by seeking the projection that maximizes the sum of intraclass distances and minimize the sum of interclass distances. Quadratic discriminant analysis is similar to the linear discriminant analysis with the distinction that covariance matrix for each class is estimated separately [29]. Naïve Bayes classifier assumes that the classes are arising from independent distributions; hence, a diagonal covariance matrix is assumed. 
K-nearest neighbor classifier finds the K closest training data to the test data based on the Euclidean distance and classifies the sample using the majority voting of the nearest points. Support vector machines classifier maps the training data into a higher dimensional space with a kernel function where the data become linearly separable [30]. A separating maximum-margin hyperplane is established to separate the different class data with minimal misclassification via quadratic optimization [30]. The test data are classified by determining the side of the hyperplane they lie on in the kernel-mapped space. In order to extend SVM for three-class classification, we employed the one-against-one approach [31] where three two-class SVM classifiers were established for each pair of classes in the training data set. Each sample in the test data was assigned to a class by these classifiers and the class with the majority vote was chosen as the final result. If there is equal voting for the three classes, we chose the class that has the largest margin from the separating hyperplane.In order to obtain unbiased performance estimates, 10-fold cross-validation was performed. The feature set was first randomly divided into 10 disjoint partitions of equal size. For each partition, a classifier was trained with the remaining nine partitions and then tested on the retained partition. The results for each partition were then combined to find the overall grading accuracy. In order to reduce the scale differences within the features, the data were normalized so that the features had zero mean and unit variance across the samples. We performed a parametric search to determine the number of neighbors in the nearest neighbor identification. We tested between 8 and 15 nearest neighbors and determined that identifying 12 nearest neighbors to a test point achieved the highest grading accuracy. For the SVM classifiers, we used radial basis function, also referred to as Gaussian kernel, in the form ofK(xi,xj)=exp(-∥xi-xj∥2/2σ2) that mapped the data into an infinite dimensional Hilbert space [30]. We performed a parameter to search to identify σ that achieves the highest grading accuracy. We sought σ in the set of candidate values that varied from 1.0 to 2.5 with 0.1 steps and determined that σ being equals to 2.0 achieved the best performance in the grading of the acinar structures. Table 1 shows the performance of the five supervised learning algorithms. It is clear that SVM-based classifier achieves the highest overall accuracy in the grading of the acinar structures. This is not unexpected as SVM classifiers are known to be highly successful in biological applications [32].Table 1 Overall grading accuracies for the supervised machine learning techniques used in our analysis. SVM classifiers clearly achieve significantly higher grading accuracy than the other methods. The data set includes 400 acinar structures. Learning methodOverall grading accuracy (%)Linear discriminant analysis80.75Quadratic discriminant analysis80.00Naïve Bayes69.75K-nearest neighbors79.50Support vector machines89.00 #### 2.5.1. Discriminative Influence of the Proposed Features After identifying SVM-based classifier as the most accurate grading method for our data set, we performed feature selection to determine the discriminative capabilities of the features to characterize the data. A good feature selection algorithm identifies the features that are consistent within the class and exhibit differences between the classes. 
#### 2.5.1. Discriminative Influence of the Proposed Features

After identifying the SVM-based classifier as the most accurate grading method for our data set, we performed feature selection to determine the discriminative capabilities of the features in characterizing the data. A good feature selection algorithm identifies the features that are consistent within a class and exhibit differences between the classes. The Fisher score (F-score) is documented to be a powerful feature selection tool [33]. For a data set with two classes, denote the values of the $i$th feature on instances of the first (positive) class by $x_i^{(+)}$ and on the other (negative) class by $x_i^{(-)}$. The F-score of the $i$th feature is then given by

$$F(i) = \frac{\left(\bar{x}_i^{(+)} - \bar{x}_i\right)^2 + \left(\bar{x}_i^{(-)} - \bar{x}_i\right)^2}{\left(\sigma_i^{(+)}\right)^2 + \left(\sigma_i^{(-)}\right)^2},$$

where $\bar{x}_i$ is the mean value of the $i$th feature over the whole data set, $\bar{x}_i^{(+)}$ and $\bar{x}_i^{(-)}$ are its mean values over the positive and negative instances, respectively, and $(\sigma_i^{(+)})^2$ and $(\sigma_i^{(-)})^2$ are its variances over the positive and negative instances, respectively. In order to extend this method to a feature set with three classes, we computed the F-score of each feature for each pair of classes and then took the average over the three possible combinations. Larger values of the F-score indicate stronger discriminative influence; therefore, after obtaining the F-scores for all the features, we ranked them in descending order.
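The score and its pairwise-averaging extension transcribe directly into code; the sketch below is illustrative (array names are placeholders, and sample variance is assumed for the σ² terms). The resulting ordering is what drives the incremental top-k grading experiments reported below.

```python
import itertools
import numpy as np

def fisher_score(x, y):
    """Two-class F-score of one feature x (1-D array) given two-valued labels y."""
    pos, neg = x[y == np.max(y)], x[y == np.min(y)]
    num = (pos.mean() - x.mean()) ** 2 + (neg.mean() - x.mean()) ** 2
    return num / (pos.var(ddof=1) + neg.var(ddof=1))

def rank_features(X, y):
    """Average pairwise F-scores over all class pairs, then rank the columns
    of X (features) in descending discriminative order."""
    def avg_score(x):
        pairs = itertools.combinations(np.unique(y), 2)
        return np.mean([fisher_score(x[np.isin(y, p)], y[np.isin(y, p)])
                        for p in pairs])
    scores = np.array([avg_score(X[:, j]) for j in range(X.shape[1])])
    return np.argsort(-scores), scores  # most discriminative feature first
```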
Table 2 shows the average F-scores of the features and their corresponding discriminative ranks. It is seen that the continuity of integrin α6 along the basal membrane is the most discriminative feature describing the data. The discriminative influence of the ratio between the hollow lumen and acinar structure areas is also high. These two features are important as they are strong indicators of the level of cell polarity within the acinar structure and can be considered measures of duct formation in tumors, as used in the commonly used Scarff-Bloom-Richardson system. Some of the other most discriminative features quantify differences that are difficult to assess by visual inspection, such as the colocalization between integrin α3 and integrin α6 and the internal densities of the integrin subunits. We note that these features constitute a subset of the novel features introduced in this paper; therefore, this analysis particularly highlights the importance of the proposed features and the methodology.

Table 2. Discriminative influence of the proposed features based on the F-score values.

| Feature label | F-score | Rank |
| --- | --- | --- |
| Number of nuclei within the acinar structure | 0.0547 | 14 |
| Elongation of the acinar structure | 0.1122 | 6 |
| Ratio between the hollow lumen and acinar structure areas | 0.1602 | 4 |
| Internal integrin α6 density | 0.1205 | 5 |
| Density of integrin α6 inside the 30% circle | 0.0620 | 11 |
| Density of integrin α6 inside the 40% circle | 0.0611 | 12 |
| Density of integrin α6 inside the 50% circle | 0.0885 | 9 |
| Density of integrin α6 inside the 60% circle | 0.0953 | 8 |
| Density of integrin α6 inside the 70% circle | 0.0598 | 13 |
| Continuity of integrin α6 along the basal membrane | 0.5725 | 1 |
| Amount of integrin α6 colocalized with integrin α3 | 0.0337 | 15 |
| Continuity of integrin α3 along the basal membrane | 0.1046 | 7 |
| Total integrin α3 density | 0.2651 | 3 |
| Amount of integrin α3 colocalized with integrin α6 | 0.3526 | 2 |
| Density of integrin α3 in the exterior of the hollow lumen | 0.0716 | 10 |

We performed cancer grading using subsets of the most discriminative features, where at each stage we increased the number of most discriminative features in the training feature set. Figure 7 shows the overall grading accuracy with respect to the number of most discriminative features selected for the grading. It is seen that the highest grading accuracy, 89.0%, is achieved when all of the features are used to train the classifiers. Considering that most grading systems are typically based on the assessment of a limited number of features, we also investigated the grading performance of our methodology in a similar setting. When only the five most discriminative features are used for grading, we achieve 82.0% overall grading accuracy. Though lower than the highest grading accuracy, this constitutes a highly promising setting considering the limited number of features.

Figure 7. Overall grading accuracy with respect to the number of most discriminative features selected for the grading. The highest overall grading accuracy is achieved when the whole feature set is considered.

### 2.6. Tissue Sections

Breast tissue from patients with invasive carcinoma, ductal carcinoma in situ, and from healthy individuals was obtained from ProteoGenix (Culver City, CA) and stored at −80°C. Portions of each tissue block were embedded in O.C.T. Compound (Sakura Finetek USA, Torrance, CA), and sections 20 μm thick were cut using a Microm HM505E cryostat. Tissue sections were adhered to Superfrost Plus Gold microscope slides (Fisher Scientific, Morris Plains, NJ) and vapor fixed using 2-3 Kimwipes (Kimberly-Clark Worldwide, Roswell, GA) soaked in 4% paraformaldehyde in a small chamber at −20°C for 30 minutes prior to immunohistochemistry. Tissue sections were encircled with an ImmEdge Pen (Vector Labs, Burlingame, CA), and immunohistochemistry and confocal microscopy were performed as described for the in vitro samples in Section 2.2. An Alexa488-conjugated goat antifluorescein secondary antibody (Invitrogen, Carlsbad, CA) was included at a 1:300 dilution in the overnight secondary incubation.
## 3. Results and Discussion

A workflow diagram of the automated breast cancer grading approach is shown in Figure 8. First, 3D fluorescent confocal images of acinar structures were collected after 14 days within an in vitro culture system as described in Sections 2.1 and 2.2. A total of 30 images were captured (5 images for each cell line), and a total of 400 acinar structures were identified in these images using the image segmentation method described in Section 2.3. For each acinar structure, we extracted the morphological features described in Section 2.4.
These features exhibit a high level of statistical significance across the nonmalignant, noninvasive carcinoma, and invasive carcinoma grades of breast cancer. Using the resulting unique feature profiles, we then trained the SVM-based classifiers as described in Section 2.5 for the automated grading of acini and tested them on the data.

Figure 8. A flow chart of the proposed quantitative analysis approach for measuring morphology-based features of acini. First, fluorescently labeled images of the cultures are captured by 3D confocal microscopy. Next, individual acinar structures in these images are segmented and labeled. We then extract the proposed features based on the acini morphology, which lead to statistically significant, unique feature profiles between functionally different stages of cancer. Finally, automated grading of cancer is achieved by supervised machine learning.

An overall grading accuracy of 89.0% is achieved when the whole feature set is used in the grading. As shown in the first half of Table 3, the nonmalignant 10A and AT cell lines are graded with 97.0% and 73.5% accuracy, respectively. Due to the strong resemblance between the AT and KCL cell lines, a portion of the AT acinar structures is graded as noninvasive carcinoma. Acinar structures in the KCL cell line are correctly graded as noninvasive carcinoma 72.1% of the time and as nonmalignant 11.1% of the time, which should be considered a success given that KCL falls between the early stages of cancer and noninvasive carcinoma. We observe notably high grading accuracies for the remaining three cell lines. Noninvasive DCIS acinar structures are graded with 95.0% accuracy, and the acinar structures in the invasive cell lines CA1H and CA1A are graded with 93.1% and 93.5% accuracy, respectively. We note that neither the DCIS nor the invasive cell line acinar structures are graded as nonmalignant. When only the five most discriminative features are used, we achieve 82.0% overall accuracy in the grading. As shown in the second half of Table 3, in this case the AT acinar structures are graded with higher success. However, the grading accuracy of the noninvasive KCL and invasive CA1H acinar structures decreases significantly. It is not unexpected that some of the features excluded by the feature selection capture significant characteristics of the acini morphology. The following discussion of the proposed features helps us understand how they relate to the underlying biological implications.
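Per-class percentages such as those in Table 3 below can be tabulated from the cross-validated predictions as a row-normalized confusion matrix; a minimal sketch, where `y_true` and `y_pred` are placeholder label arrays (pass the cell-line identity as `y_true` to reproduce the per-cell-line rows):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def grading_percentages(y_true, y_pred, labels):
    """Entry (i, j): percentage of class-i samples assigned to class j."""
    cm = confusion_matrix(y_true, y_pred, labels=list(labels)).astype(float)
    return 100.0 * cm / cm.sum(axis=1, keepdims=True)  # rows sum to 100
```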
Grading results utilizing the overall feature set (grading categories, %):

| True grade (cell line) | Nonmalignant | Noninvasive carcinoma | Invasive carcinoma |
| --- | --- | --- | --- |
| Nonmalignant (10A) | 97.0 | 2.0 | 1.0 |
| Nonmalignant (AT) | 73.5 | 24.5 | 2.0 |
| Noninvasive carcinoma (KCL) | 11.1 | 72.1 | 14.8 |
| Noninvasive carcinoma (DCIS) | 0.0 | 95.0 | 5.0 |
| Invasive carcinoma (CA1H) | 0.0 | 6.9 | 93.1 |
| Invasive carcinoma (CA1A) | 0.0 | 6.5 | 93.5 |

Grading results utilizing the five most discriminative features (grading categories, %):

| True grade (cell line) | Nonmalignant | Noninvasive carcinoma | Invasive carcinoma |
| --- | --- | --- | --- |
| Nonmalignant (10A) | 93.0 | 3.0 | 4.0 |
| Nonmalignant (AT) | 85.7 | 10.2 | 4.1 |
| Noninvasive carcinoma (KCL) | 18.5 | 53.1 | 28.4 |
| Noninvasive carcinoma (DCIS) | 0.0 | 93.8 | 6.2 |
| Invasive carcinoma (CA1H) | 0.0 | 27.6 | 72.4 |
| Invasive carcinoma (CA1A) | 0.0 | 8.1 | 91.9 |

Changes in the hollow lumen size and acini shape are visually observed and quantified across multiple cancer stages as described in Section 2.4.1. While the average number of cells per acinus is a concrete measurement and could be determined manually, it is useful and more practical to utilize automated techniques when analyzing large volumes of image data. With this feature we clearly determine that the two invasive cell lines, CA1H and CA1A, have more nuclei in the acinar structures than the other, less malignant cell lines. This could arise from several biological factors. A simple reason is that large acinar structures are composed of more cells. Alternatively, this could arise from a higher density of cells within the acinus due to the loss of the hollow lumen. Both of these explanations likely contribute to the resulting feature, making it challenging to determine the exact cause. Our next feature, the ratio between the areas of the hollow lumen and the acinar structure, can also help explain the general trend in the number of nuclei per acinus along the metastatic cascade. Larger hollow lumens in the lower cancer grades possibly limit the space in which the cells can proliferate; thus, a smaller number of nuclei per acinus is observed in lower-grade cell lines.

The hollow lumen to acinar structure area ratio captures a key change in acinar structure that is challenging to assess by visual inspection. While the presence or absence of a hollow lumen is observable by eye, evaluating the relative size of the lumen compared to the overall acinar structure is difficult and introduces subjectivity. This feature enabled us to quantitatively characterize this relationship and helps identify the significant changes between cancer grades. The loss of hollow lumens is associated with increasing cell division and cell survival within the acinar structure. In native breast tissue, a decrease in the size of the hollow lumen arises from increasing metastatic capability. This is consistent with our in vitro system and the resulting quantitative features. However, despite a general decrease in the hollow lumen to acinar structure area ratio with the progression of cancer, the precancerous AT cell line exhibits an increase compared to the 10A samples. While it is unclear why this is the case, it could potentially be due to the cells becoming flatter rather than columnar in shape, creating a larger hollow lumen.

In addition to the changes in hollow lumen morphology, the acinar structure elongation quantifies the subtle changes in the roundness of acini that are also difficult to assess by eye. As shown by our results, we capture statistically significant differences in acini elongation caused by increasing metastatic capability, reflecting the progressive loss of structural integrity in acini as breast cancer develops.
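Since the paper's exact feature definitions (Section 2.4.1) are not reproduced here, the following is a minimal sketch of how the two shape features just discussed, elongation and the lumen-to-acinus area ratio, could be computed from hypothetical binary masks using scikit-image; the mask names and the ellipse-based elongation measure are assumptions.

```python
# Sketch: two morphology measurements for one segmented acinus.
import numpy as np
from skimage.measure import label, regionprops

def elongation_and_lumen_ratio(acinus_mask: np.ndarray, lumen_mask: np.ndarray):
    """acinus_mask / lumen_mask: 2D boolean arrays for one acinar structure."""
    props = regionprops(label(acinus_mask))[0]
    # Elongation as the ratio of the fitted ellipse axes (1.0 = perfectly round).
    elongation = props.major_axis_length / max(props.minor_axis_length, 1e-9)
    # Relative lumen size; tends to shrink with increasing malignancy.
    lumen_ratio = lumen_mask.sum() / max(acinus_mask.sum(), 1)
    return elongation, lumen_ratio

# Toy example: a circular acinus with a small central lumen.
yy, xx = np.mgrid[:100, :100]
acinus = (yy - 50) ** 2 + (xx - 50) ** 2 < 40 ** 2
lumen = (yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2
print(elongation_and_lumen_ratio(acinus, lumen))
```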
Replacing these qualitative observations on the acini and hollow lumen morphology with robust quantitative measurements enabled us to characterize the data more objectively.

The density of the basal membrane protein integrin α6 is quantified through multiple features as described in Section 2.4.2 and found to reflect the metastatic potential of acinar structures. The quantitative and statistically significant changes identified in the analysis of the integrin α6 distribution strongly correlate with our expectations based on visual inspection of the acinar structures. The 10A and AT acini exhibited the highest amount of integrin α6 localization along the basal membrane, reflecting the presence of an intact extracellular matrix basement membrane on the basal surface of these cells. In addition, the acinar structures of more malignant cell lines showed a dramatic loss in the localization of integrin α6 along the basal membrane of the cells and an increase in the internal localization of this protein, reflecting loss of the basement membrane. Features extracted from CA1A acinar structures are significantly different from some of the features extracted from the CA1H and DCIS acinar structures, which can be explained by the degree of malignancy of this cell line. CA1A cells exhibit the highest malignancy of all the cell lines, with an advanced ability to metastasize. When cells are able to metastasize, they may form secondary tumors called macrometastases. At this stage, these cells are able to adapt to their new environment in order to improve their chances of survival. It is possible that the CA1A cells are behaving as a secondary tumor when placed in a basement membrane protein rich environment, causing them to acquire some epithelial-like phenotypes, such as increased basal membrane localization of integrin α6. Thus, these features capture the distribution of integrin α6 throughout the acinar structure as visualized and as functionally expected across the different grades of cancer.

The features based on the density of integrin α3, as described in Section 2.4.3, yield both expected and unexpected results. As reported in a previous study [34], we expect to observe different levels of total integrin α3 expression across the cell lines of varying metastatic potential. Our results confirm these expectations: total integrin α3 expression decreases between the nonmalignant and noninvasive carcinomas, and higher expression is observed in the invasive carcinoma cell lines (compared to the noninvasive KCL and DCIS cell line features). These features support the idea that integrin α3 switches its function from a cell-cell adhesion role to a cell-ECM adhesion role with increasing metastatic ability. As shown in Figure 6(a), the basal continuity of integrin α3 is lowest for the DCIS cell line, as expected given the low density of integrin α3 in this cell line. The continuity at the basal membrane displayed increased localization in the invasive carcinoma cell lines, suggesting a cell-ECM adhesion role. The measurement of integrin α3 density in the exterior of the hollow lumen, as shown in Figure 6(d), determined that the progression of cancer yields increased localization of this protein within the hollow lumen. It was at similar levels for the 10A, AT, and KCL cell lines; however, the expression ratio increased from the DCIS through the CA1A cells.
This feature was developed to quantify the functional switch of integrin α3 from cell-cell adhesion to cell-ECM adhesion with progressing metastatic state. The increase in this feature suggests a switch from cell-cell adhesions to cell-ECM adhesions; however, this change may be influenced by the reduction in the size of the hollow lumen and the increased integrin α3 expression between cells throughout the acinar structure, thus hindering robust characterization of the lateral localization of integrin α3.

The amount of one integrin subunit that colocalizes with the other varies across the cell lines. Basal membrane integrin α6 colocalizes with integrin α3 more in the tumorigenic cell lines (except DCIS) than in the nontumorigenic cell line 10A. This could be due to the increasing internal expression of integrin α6, indicating a loss of cell polarity. Another reason could be integrin α3 changing localization from the lateral membrane to the basal membrane as it switches function from cell-cell adhesion to cell-ECM adhesion. It could also be a combination of the two, since loss of cell polarity results in irregular localization of the proteins; thus, both integrin subunits are expressed throughout the cell membrane. Interestingly, DCIS acini exhibit the lowest colocalization of integrin α6 with integrin α3. This could be due to the low expression of integrin α3 in DCIS cells, as shown in Figure 6(b), which yields higher amounts of isolated integrin α6 subunits in the acinar structures.

The alternative comparison is the amount of integrin α3 that colocalizes with integrin α6. We observe that the noninvasive and invasive cell lines exhibit higher colocalization than the nonmalignant cell lines. Interestingly, the DCIS cell line has the highest amount of colocalization. This confirms our suggestion that low expression of integrin α3 is the cause of the lowest colocalization of integrin α6 with integrin α3 in this cell line. In this case, almost all of the integrin α3 is colocalized with integrin α6 due to its low expression levels and the loss of cell polarity in the DCIS acinar structures. The nonmalignant cell lines exhibit higher levels of colocalization than the noninvasive cell lines. By comparing the two colocalization features, we infer that this is likely due to integrin α3 localizing at both the basal and lateral membranes while integrin α6 localizes primarily at the basal membrane alone. This is reflected in 80% colocalization of integrin α6 with integrin α3 and 30% colocalization of integrin α3 with integrin α6, as shown in Figure 6(c). On the other hand, the invasive carcinoma cell lines exhibit approximately 70% colocalization of the basal membrane integrin with the lateral membrane integrin and approximately 50% colocalization of the lateral membrane integrin with the basal membrane integrin. These approximately equal values suggest that both integrin subunits have lost their specific localization and are expressed throughout. The higher internal densities of both integrin α6 and integrin α3 further confirm this.

The proposed features display statistical significance across the cell lines with varying metastatic abilities. This indicates that the proposed features capture and quantify biologically relevant morphological changes. These features have the potential for studying structure-function relationships in a controlled and quantitative system.
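The colocalization fractions quoted above (e.g., 80% of integrin α6 with integrin α3) can be computed in several ways; a minimal sketch of one common choice, a Manders-style overlap fraction, follows. The arrays, threshold, and function name are hypothetical, and the paper's exact colocalization measure may differ.

```python
# Sketch: fraction of one channel's intensity located where the other channel
# is above a threshold (a Manders-style colocalization coefficient).
import numpy as np

def coloc_fraction(channel_a: np.ndarray, channel_b: np.ndarray, thresh_b: float) -> float:
    """Fraction of channel_a signal falling inside channel_b's positive mask."""
    mask_b = channel_b > thresh_b
    total = channel_a.sum()
    return float(channel_a[mask_b].sum() / total) if total > 0 else 0.0

rng = np.random.default_rng(1)
alpha6 = rng.random((64, 64))   # stand-ins for integrin alpha6 / alpha3 images
alpha3 = rng.random((64, 64))
print("alpha6 with alpha3:", coloc_fraction(alpha6, alpha3, 0.5))
print("alpha3 with alpha6:", coloc_fraction(alpha3, alpha6, 0.5))
```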
Such an application is useful for identifying underlying mechanisms of cancer and the role of specific protein functions in acinar structures. In addition, this approach could be used to test the effects of potential chemotherapy targets and combinations of drugs on the structures of nontumorigenic, precancerous, noninvasive carcinoma, and invasive carcinoma cells in preliminary studies. Also, as medical imaging technology advances, the future holds potential for many quantitative and computerized diagnostic tools and systems to aid doctors in diagnosis, treatment selection, and prognosis. Finally, these features could be applied to current histology samples, when tagged with fluorescent antibodies, to quantify complex structural features.

In order to demonstrate the relevance of our approach to tissue samples obtained from human patients, we performed immunohistochemistry on frozen sections as described in Section 2.6, following the same procedure used for the cell culture experiments as outlined in Sections 2.2 through 2.5. Images of nontumorigenic, precancerous, noninvasive, and invasive mammary gland tissue were collected using confocal microscopy. The random orientation of glands in sectioned material presented a challenge, as the acinar structures in in vitro cultures are typically spherical in shape. In order to exclude longitudinal or oblique planes of section from the present analysis, single optical sections of glands oriented roughly in cross-section that resemble the acinar formations in the in vitro cultures were cropped manually. Figure 9 shows examples of glands analyzed in our study. Although nonspecific Alexa568 secondary antibody labeling of luminal contents was observed in some cases, visual inspection of these images indicates that both integrin α6 and integrin α3 exhibit staining patterns similar to those in the in vitro acinar structures. In our preliminary study, we identified 12 glands in the 9 images analyzed. For each gland, we extracted the proposed features and performed grading using the SVM-based classifier trained with the in vitro feature set. The in vivo test set was graded accurately except for one nontumorigenic gland that was graded as noninvasive carcinoma. Nevertheless, we note that developing a computer-aided grading system for in vivo tissue samples is beyond the scope of this paper and is left as future work.

Tissue samples from human patients exhibit variations in acini morphology along the metastatic cascade similar to the 3D cultures. In the nontumorigenic glands from a healthy patient shown in (a), integrin α3 can be observed along the lateral surface of the epithelia, while integrin α6 shows strong staining across the basal surface. The precancerous gland shown in (b) exhibits a clear reduction in the amount of integrin α3 expression along the lateral membranes of the epithelia, while basal integrin α6 expression remains strong. The noninvasive carcinoma gland shown in (c) can be easily identified by the loss of a proper hollow lumen and stratification of the epithelial layer. In these large regions of tissue, integrin α3 is absent from cells in the interior of the gland; however, along with integrin α6, it can be observed along the basal surfaces of cells in contact with the basal lamina. The invasive carcinoma gland shown in (d) includes cords or clusters of cells exhibiting faint labeling of both proteins distributed along the cell membrane compared to background and control tissue.
Scale bars represent 20 μm in (a) and (c) and 10 μm in (b) and (d).

## 4. Conclusion

In this paper, we present a method that enables quantitative characterization of 3D breast culture acini with varying metastatic potentials. Specifically, we propose statistically significant features based on acinar structure morphology that capture differences between grades of cancer that are difficult to assess under microscopic inspection. The experimental results demonstrate the efficacy of the proposed features in differentiating between the nonmalignant, noninvasive carcinoma, and invasive carcinoma grades of breast cancer with 89.0% accuracy. In addition, our preliminary studies indicate that our methodology can also be used for the grading of cancer in in vivo tissues, provided that the captured tissue samples include cross-sectional portions of the glands. Hence, our method demonstrates great promise for modeling morphology-function relationships within controlled 3D systems and holds potential as an automated breast cancer prognostic tool for current histology samples.

---
*Source: 102036-2012-05-15.xml*
# The Generalization of the Poisson Sum Formula Associated with the Linear Canonical Transform

**Authors:** Jun-Fang Zhang; Shou-Ping Hou
**Journal:** Journal of Applied Mathematics (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102039

---

## Abstract

The classical Poisson sum formula is generalized to the linear canonical transform sense by replacing the ordinary Fourier transform with the canonical transform. Firstly, a new sum formula with the Chirp-periodic property is introduced, and the relationship between this new sum and the original signal is derived. Secondly, the generalization of the classical Poisson sum formula to the linear canonical transform sense is obtained.

---

## Body

## 1. Introduction

As a generalization of the classical Fourier transform and the fractional Fourier transform (FrFT), the linear canonical transform (LCT) has received much interest in recent years [1–3]. Many important transforms, for example, the Fourier transform, the Fresnel transform, and the scaling operation, are all special cases of the LCT. It has been shown to be one of the most useful tools in several areas [4–6], including optics, quantum physics, and signal processing. Its relationship with the Fourier transform and the fractional Fourier transform can be found in [7, 8]. Well-known operations in the traditional Fourier domain, for example, the Hilbert transform, the Parseval relationship, the convolution and product operations, and spectral analysis, have been extended to the linear canonical transform domain by different authors [9–13]. For further properties and applications of the LCT in the optics and signal processing communities, one can refer to [1, 2]. Classical sampling theorems associated with the LCT have also been investigated in the literature. Extensions of the classical Shannon sampling theorem for band-limited or time-limited signals in the LCT domain have been deduced in [13, 14].

However, to the best of our knowledge, no study of the traditional Poisson sum formula [15–21] associated with the LCT has been reported yet. The Poisson summation formula is a very useful tool not only in many branches of mathematics but also in various applied fields, for example, mechanics and the signal processing community. It is therefore worthwhile as well as interesting to investigate the Poisson sum formula associated with the LCT.

The objective of this paper is to study the Poisson formula associated with the LCT. In other words, we want to generalize the classical Poisson sum formula by replacing the ordinary Fourier transform with the canonical transform. In order to obtain the desired results for a signal $x(t)$, we first deduce a new sum formula for the signal $x(t)$ and then derive the new results in the LCT domain. The paper is organized as follows: the preliminaries are presented in Section 2, the main results of the paper are investigated in Section 3, and the conclusion is given in Section 4.

## 2. Preliminaries

### 2.1. The Linear Canonical Transform

The linear canonical transform (LCT) of a signal $x(t)$ with parameter matrix $A$ is defined as [1–3]

(2.1) $X_A(u)=\int_{-\infty}^{\infty}K_A(u,t)\,x(t)\,dt,$

where

(2.2) $K_A(u,t)=\sqrt{\frac{1}{j2\pi b}}\;e^{\,j\left(\frac{a}{2b}t^2-\frac{1}{b}ut+\frac{d}{2b}u^2\right)},\quad b\neq 0,$

and $X_A(u)=\sqrt{d}\,e^{\,j(cd/2)u^2}\,x(du)$ for $b=0$.
The parameter matrix is $A=\begin{pmatrix}a & b\\ c & d\end{pmatrix}$, which satisfies $\det(A)=1$ with $a,b,c,d\in\mathbb{R}$. The linear canonical transform is a unitary transform [1]; therefore, the inverse of an LCT can be derived as another LCT.

The inverse transform of the LCT is an LCT with parameter matrix $A^{-1}$:

(2.3) $x(t)=\int_{-\infty}^{\infty}K_{A^{-1}}(t,u)\,X_A(u)\,du.$

The LCT can be viewed as a generalization of well-known operations in the science and engineering community [4–6]. Its relationship with the Fourier transform and the fractional Fourier transform has been derived in [1–3].

A signal $x(t)$ is said to be band-limited with respect to $\Omega_A$ in the linear canonical transform domain when

(2.4) $X_A(u)=0 \quad\text{for } |u|>\Omega_A,$

where $\Omega_A$ is called the bandwidth of the signal $x(t)$ in the linear canonical transform domain. At the same time, a signal $x(t)$ is called Chirp-periodic with period $T$ and parameter $A$ if it satisfies

(2.5) $e^{\,j(a/2b)t^2}\,x(t)=e^{\,j(a/2b)(t+T)^2}\,x(t+T).$

The following identities will be used in the subsequent sections.

Lemma 2.1. The inverse linear canonical transform of the signal $Z_A(u)=\sum_{n=-\infty}^{+\infty}\delta(u-nB)\,e^{\,j(a/2b)u^2}$ for parameter $A=\begin{pmatrix}a & b\\ c & d\end{pmatrix}$ is

(2.6) $z(t)=\frac{1}{B}\sqrt{\frac{-1}{j2\pi}}\;e^{-j(a/2b)t^2}\sum_{n=-\infty}^{\infty}\delta\!\left(\frac{t}{b}-\frac{n}{B}\right),$

and the inverse linear canonical transform of the signal $\delta(u-u_0)$ is

(2.7) $s(t)=\sqrt{\frac{-1}{j2\pi b}}\;e^{-j(a/2b)(t^2+u_0^2)+j\,u_0 t/b}.$

Proof. These results can be derived easily from the definition of the LCT and the inverse transform of the LCT.

If a signal $x(t)$ is band-limited to $\Omega_A$ in the linear canonical transform domain, then, from the results derived in [13], $x(t)$ is not band-limited in the traditional Fourier domain. Therefore, the classical band-limited signal processing methods of the Fourier domain can be adapted to the LCT domain to obtain novel results associated with the LCT.

### 2.2. The Poisson Sum Formula

The Poisson sum formula demonstrates that the sum of infinitely many samples of a signal $x(t)$ in the time domain is equivalent to the sum of infinitely many samples of $X(u)$ in the Fourier domain. Mathematically, the Poisson sum formula can be represented as

(2.8) $\sum_{k=-\infty}^{\infty}x(t+k\tau)=\frac{1}{\tau}\sum_{n=-\infty}^{\infty}X\!\left(\frac{n}{\tau}\right)e^{\,j(n/\tau)t},$

or

(2.9) $\sum_{k=-\infty}^{\infty}x(k\tau)=\frac{1}{\tau}\sum_{n=-\infty}^{\infty}X\!\left(\frac{n}{\tau}\right),$

where $X(u)$ is the traditional Fourier transform of the signal $x(t)$. It is well known that these identities are valid only if $x(t)$ and its Fourier transform $X(u)$ are regular enough and only if both series in (2.9) converge [20].

In order to obtain the new results associated with the linear canonical transform, a new summation associated with the signal $x(t)$ is introduced:

(2.10) $y(t)=\sum_{k=-\infty}^{\infty}x(t+k\tau)\,e^{\,j(a/2b)\left(2k\tau t+k^2\tau^2\right)},$

where $\tau$ is a constant. From (2.10), the function $y(t)$ can be seen as a periodic phase-shifted replica of the original function $x(t)$. The signal $y(t)$ will be used in the following sections to investigate the Poisson sum formula associated with the linear canonical transform.
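As a quick consistency check (an observation, not a result stated in the paper), setting $a=0$, which includes the Fourier case of the LCT, makes the chirp factor in (2.10) equal to one, so the new summation reduces to the classical periodized signal of (2.8):

$$
y(t)\big|_{a=0}=\sum_{k=-\infty}^{\infty}x(t+k\tau)\,e^{\,j\frac{a}{2b}\left(2k\tau t+k^2\tau^2\right)}\Big|_{a=0}=\sum_{k=-\infty}^{\infty}x(t+k\tau),
$$

and for this parameter choice the LCT itself reduces (up to a constant factor) to the Fourier transform, so the classical Poisson sum formula is recovered as a special case.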
## 3. The Main Results

Suppose a signal $x(t)$ is band-limited to $\Omega_A$ in the linear canonical transform domain of parameter $A$; then, from (2.10), a new function $y(t)$ can be deduced from the signal $x(t)$. Firstly, the properties of the signal $y(t)$ associated with the linear canonical transform are given in Theorem 3.1.

Theorem 3.1. Suppose a signal $x(t)$ is band-limited to $\Omega_A$ in the linear canonical transform domain of parameter $A$, and let $y(t)=\sum_{k=-\infty}^{\infty}x(t+k\tau)\,e^{\,j(a/2b)(2k\tau t+k^2\tau^2)}$. Then the following results about $y(t)$ are true.

(a) $y(t)$ is a Chirp-periodic signal with period $\tau$.

(b) $x(t)$ is a band-limited signal in the linear canonical transform domain with parameter $A$ if and only if $y(t)$ has a finite number of nonzero linear canonical series coefficients for any $\tau$.

Proof. (a) By the definition of Chirp-periodicity, we obtain

(3.1) $y(t+\tau)\,e^{\,j(a/2b)(t+\tau)^2}=\sum_{k=-\infty}^{\infty}x(t+\tau+k\tau)\,e^{\,j(a/2b)\left[2k\tau(t+\tau)+k^2\tau^2\right]}e^{\,j(a/2b)(t+\tau)^2}=e^{\,j(a/2b)t^2}\sum_{k=-\infty}^{\infty}x\!\left[t+(k+1)\tau\right]e^{\,j(a/2b)\left[2(k+1)\tau t+(k+1)^2\tau^2\right]}=y(t)\,e^{\,j(a/2b)t^2}.$

This proves the Chirp-periodicity of the signal $y(t)$.
(b) To prove the necessary condition, the $n$th coefficient $c_{n,A}$ of the signal $y(t)$ can be deduced from the linear canonical series definition proposed in [4] as

(3.2) $c_{n,A}=\sqrt{\frac{b-ja}{\tau}}\int_{0}^{\tau}y(t)\,e^{\,j(a/2b)\left[(2\pi nb/\tau)^2+t^2\right]-j(2\pi n/\tau)t}\,dt.$

Equation (3.2) can be rewritten as

(3.3) $c_{n,A}=\sqrt{\frac{b-ja}{\tau}}\,e^{\,j(a/2b)(2\pi nb/\tau)^2}\int_{0}^{\tau}\sum_{k=-\infty}^{\infty}x(t+k\tau)\,e^{\,j(a/2b)(2k\tau t+k^2\tau^2)}\,e^{\,j(a/2b)t^2-j(2\pi n/\tau)t}\,dt=\sqrt{\frac{b-ja}{\tau}}\,e^{\,j(a/2b)(2\pi nb/\tau)^2}\sum_{k=-\infty}^{+\infty}\int_{k\tau}^{(k+1)\tau}x(\lambda)\,e^{\,j(a/2b)\lambda^2-j(2\pi n/\tau)(\lambda-k\tau)}\,d\lambda=\sqrt{\frac{b-ja}{\tau}}\,C_A\,X_A\!\left(\frac{2\pi nb}{\tau}\right),$

where $C_A$ collects the constant kernel factors. Since $x(t)$ is an $\Omega_A$ band-limited signal in the linear canonical transform domain of parameter $A$, that is,

(3.4) $X_A(u)=\int_{-\infty}^{+\infty}x(t)\,K_A(u,t)\,dt=0,\quad |u|>\Omega_A,$

comparing (3.3) and (3.4), we obtain

(3.5) $c_{n,A}=0 \quad\text{when } n>\frac{\tau\Omega_A}{2\pi b}.$

Therefore, the necessary condition is proved.

To prove the sufficient condition, let us assume that $c_{n,A}=0$ for $n>N$, where $N$ is any finite integer. From (3.5),

(3.6) $X_A(u)=\int_{-\infty}^{+\infty}x(t)\,K_A(u,t)\,dt=0,\quad |u|>\frac{2\pi bN}{\tau}.$

Hence, $x(t)$ is a band-limited signal with bandwidth

(3.7) $\Omega_A\le\frac{2\pi bN}{\tau}.$

This proves the sufficient condition of the theorem.

Based on the derived results of Theorem 3.1, the following Theorem 3.2 can be deduced.

Theorem 3.2. Suppose a signal $x(t)$ is band-limited to $\Omega_A$ in the linear canonical transform domain of parameter $A$, and $y(t)=\sum_{k=-\infty}^{\infty}x(t+k\tau)\,e^{\,j(a/2b)(2k\tau t+k^2\tau^2)}$ is derived by shifting the signal $x(t)$ to the left and right. Then the following conclusions hold.

(a) When $1/\tau>\Omega_A$, $y(t)$ is given by

(3.8) $y(t)=\frac{b}{\tau}\,X_A(0)\,e^{-j(a/2b)t^2}.$

(b) When $\Omega_A/2<1/\tau<\Omega_A$, $y(t)$ is given by

(3.9) $y(t)=\frac{b}{\tau}\,e^{-j(a/2b)t^2}\left\{X_A(0)+e^{-j(a/2b)(b/\tau)^2}\left[X_A\!\left(\frac{b}{\tau}\right)e^{\,jt/\tau}+X_A\!\left(-\frac{b}{\tau}\right)e^{-jt/\tau}\right]\right\}.$

(c) When $\Omega_A/n<1/\tau<\Omega_A/(n-1)$, $y(t)$ is given by

(3.10) $y(t)=\frac{b}{\tau}\,e^{-j(a/2b)t^2}\left\{X_A(0)+\sum_{k=1}^{n}e^{-j(a/2b)(kb/\tau)^2}\left[X_A\!\left(\frac{kb}{\tau}\right)e^{\,jkt/\tau}+X_A\!\left(-\frac{kb}{\tau}\right)e^{-jkt/\tau}\right]\right\}.$

Proof. (1) Proof of (a). Since $x(t)$ is an $\Omega_A$ band-limited signal in the linear canonical transform domain, if $X_A(u)$ is sampled in the linear canonical transform domain of order $A$ at a rate $B>\Omega_A$, then the samples can be represented as

(3.11) $Y_{s,A}(u)=X_A(u)\sum_{n=-\infty}^{+\infty}\delta(u-nB)=X_A(0)\,\delta(u),\quad B>\Omega_A.$

The first part of (3.11) can be reorganized as

(3.12) $Y_{s,A}(u)=\left[X_A(u)\right]\left[\sum_{n=-\infty}^{+\infty}\delta(u-nB)\,e^{\,j(a/2b)u^2}\right]e^{-j(a/2b)u^2}.$

If we let $Z_A(u)=\sum_{n=-\infty}^{+\infty}\delta(u-nB)\,e^{\,j(a/2b)u^2}$, then (3.12) can be rewritten as

(3.13) $Y_{s,A}(u)=\left[X_A(u)\right]\left[Z_A(u)\right]e^{-j(a/2b)u^2}.$

Applying the convolution and product theorem proposed in [7] and Lemma 2.1 to (3.13), the inverse linear canonical transform of (3.13) can be represented as

(3.14) $y_s(t)=\frac{1}{\sqrt{j2\pi b}}\,e^{-j(a/2b)t^2}\left(x(t)\,e^{\,j(a/2b)t^2}*z(t)\,e^{\,j(a/2b)t^2}\right)=\frac{1}{\sqrt{j2\pi b}}\,e^{-j(a/2b)t^2}\left[x(t)\,e^{\,j(a/2b)t^2}*\frac{1}{B}\sum_{n=-\infty}^{+\infty}\delta\!\left(\frac{t}{b}-\frac{n}{B}\right)\right]=\frac{1}{\sqrt{j2\pi b}}\left[\frac{1}{B}\sum_{n=-\infty}^{+\infty}x\!\left(t-\frac{nb}{B}\right)e^{\,j(a/b)\left[2(n/B)bt-\left((n/B)b\right)^2\right]}\right].$

From the second part of (3.11), the inverse linear canonical transform of $Y_{s,A}(u)$ can be derived as

(3.15) $y_s(t)=\frac{1}{\sqrt{j2\pi b}}\,X_A(0)\,e^{-j(a/2b)t^2}.$

If we select $\tau=b/B$, then from (3.14) and (3.15),

(3.16) $y(t)=\frac{b}{\tau}\sqrt{j2\pi b}\;y_s(t)=\frac{b}{\tau}\,X_A(0)\,e^{-j(a/2b)t^2}.$

This proves (a).

(2) Proof of (b). Similar to the method of proving (a), if $X_A(u)$ is sampled in the linear canonical transform domain of parameter $A$ at a rate $\Omega_A/2<B<\Omega_A$, there are essentially three nonzero samples of $X_A(u)$:

(3.17) $Y_{s,A}(u)=X_A(u)\sum_{n=-\infty}^{+\infty}\delta(u-nB)=X_A(0)\,\delta(u)+X_A(B)\,\delta(u-B)+X_A(-B)\,\delta(u+B).$

In this condition, (3.14) still holds, and from (3.13), the relationship between $y(t)$ and $y_s(t)$ can be derived as

(3.18) $y(t)=B\sqrt{j2\pi b}\;y_s(t),$

while $y_s(t)$ can be deduced from (3.17) and Lemma 2.1 as

(3.19) $y_s(t)=\frac{1}{\sqrt{j2\pi b}}\left[X_A(0)\,e^{-j(a/2b)t^2}+X_A(B)\,e^{-j(a/2b)(t^2+B^2)+jBt/b}+X_A(-B)\,e^{-j(a/2b)(t^2+B^2)-jBt/b}\right].$

If $\tau=b/B$ is chosen, then from (3.18) and (3.19),
(3.20) $y(t)=B\,e^{-j(a/2b)t^2}\left[X_A(0)+X_A(B)\,e^{-j(a/2b)B^2+jBt/b}+X_A(-B)\,e^{-j(a/2b)B^2-jBt/b}\right]=B\,e^{-j(a/2b)t^2}\left\{X_A(0)+e^{-j(a/2b)B^2}\left[X_A(B)\,e^{\,jBt/b}+X_A(-B)\,e^{-jBt/b}\right]\right\}=\frac{b}{\tau}\,e^{-j(a/2b)t^2}\left\{X_A(0)+e^{-j(a/2b)(b/\tau)^2}\left[X_A\!\left(\frac{b}{\tau}\right)e^{\,jt/\tau}+X_A\!\left(-\frac{b}{\tau}\right)e^{-jt/\tau}\right]\right\}.$

Thus, (b) is also proved.

(3) Proof of (c). If $X_A(u)$ is sampled in the linear canonical transform domain of order $A$ at a rate $\Omega_A/n<B<\Omega_A/(n-1)$, there are essentially $2n-1$ nonzero samples remaining:

(3.21) $Y_{s,A}(u)=X_A(u)\sum_{n=-\infty}^{+\infty}\delta(u-nB)=X_A(0)\,\delta(u)+X_A(\pm B)\,\delta(u\mp B)+\cdots+X_A\!\left[\pm(n-1)B\right]\delta\!\left[u\mp(n-1)B\right].$

Again, (3.14) also holds in this case, and using the same method as in the proofs of (a) and (b), $y(t)$ can be deduced as

(3.22) $y(t)=B\sqrt{j2\pi b}\;y_s(t)=B\,e^{-j(a/2b)t^2}\left[X_A(0)+X_A(B)\,e^{-j(a/2b)B^2+jBt/b}+X_A(-B)\,e^{-j(a/2b)B^2-jBt/b}+\cdots+X_A(nB)\,e^{-j(a/2b)(nB)^2+jnBt/b}+X_A(-nB)\,e^{-j(a/2b)(nB)^2-jnBt/b}\right]=B\,e^{-j(a/2b)t^2}\left\{X_A(0)+\sum_{k=1}^{n}e^{-j(a/2b)(kB)^2}\left[X_A(kB)\,e^{\,jkBt/b}+X_A(-kB)\,e^{-jkBt/b}\right]\right\}.$

If we select $\tau=b/B$, then

(3.23) $y(t)=\frac{b}{\tau}\,e^{-j(a/2b)t^2}\left\{X_A(0)+\sum_{k=1}^{n}e^{-j(a/2b)(kb/\tau)^2}\left[X_A\!\left(\frac{kb}{\tau}\right)e^{\,jkt/\tau}+X_A\!\left(-\frac{kb}{\tau}\right)e^{-jkt/\tau}\right]\right\}.$

Part (c) of Theorem 3.2 is proved.

## 4. Conclusion

In this paper, the generalization of the classical Poisson sum formula to the linear canonical transform domain is investigated. By replacing the ordinary Fourier transform with the canonical transform, we first derived a new Chirp-periodic sum, and the classical Poisson summations were then generalized to the linear canonical transform domain based on the derived relationship. The classical results can be viewed as special cases of the derived results. Applications of the derived results in sampling theory and signal analysis in the linear canonical transform domain will be investigated in the future.

---
*Source: 102039-2012-12-23.xml*
# Framework for Bidirectional Knowledge-Based Maintenance of Wind Turbines

**Authors:** Javier Vives; Juan Palaci; Janverly Heart
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1020400

---

## Abstract

Artificial intelligence (AI) techniques, such as machine learning (ML), are being developed and applied for the monitoring, tracking, and fault diagnosis of wind turbines. Current prediction systems for wind turbines are largely limited by their inherent disadvantages. For example, frequency or vibration analysis simulations at the part scale require a great deal of computational power and take considerable time, an aspect that can be critical and expensive in the case of a breakdown, especially offshore. An integrated digital framework for wind turbine maintenance is proposed in this study. With this framework, predictions can be made both forward and backward, breaking down barriers between process variables and key attributes. Prediction accuracy in both directions is enhanced by process knowledge. An analysis of the complicated relationships between process parameters and process attributes is demonstrated in a case study based on a wind turbine prototype. Due to the harsh environments in which wind turbines operate, the proposed method should be very useful for supervision and fault diagnosis.

---

## Body

## 1. Introduction

In the last few years, the increase in energy consumption and the powerful potential of artificial intelligence algorithms have changed the supervision and fault diagnosis of industrial processes. In the last 5 years, wind power infrastructure has increased wind production by about 80% around the world [1]. Researchers have developed new techniques for wind power maintenance, combining traditional monitoring and supervision methods with machine learning (ML) techniques. In the literature [2, 3], there are several studies combining ML methodologies for the maintenance of wind turbines. Several industries with unique processing characteristics, such as aerospace, oil and gas, nuclear, automotive, and shipbuilding, are adopting predictive maintenance [3]. According to a market survey, it is estimated that the market for ML in the industrial sector will exceed USD 2000 million in the near future [4].

A wind turbine is built from thousands of different components that must be perfectly matched and synchronized to work and achieve the best possible performance. Among the most critical components that can fail in a wind turbine are the bearings, blades, and gears. Wind farms must detect and diagnose faults early so that turbines can be stopped in case of problems, especially because repair and maintenance costs are high offshore [5]. As well as minimizing downtime and defect costs, maintenance activities must be managed efficiently. In contrast to existing systems, we developed a prototype that detects, supervises, and anticipates failures by applying algorithms designed to anticipate and prevent problems. Currently, monitoring and prediction systems for possible failures in wind turbines focus on vibrations, shaft speed, noise, and even overheating of some components. Using digital and artificial intelligence technologies, such as machine learning, to create a fast and accurate prediction system for wind turbines is an essential research topic [6].
Digital solutions and machine learning for rotary machines have been the subject of many research endeavours, such as prediction and control of the fast and slow axes [7], bearing defect detection [8, 9], or system vibrations [10, 11].

ML is being applied to wind turbines in several recent ways. Sun et al. [12] adopted a neural network as the main method for yaw angle prediction. To forecast the process parameters precisely according to the desired blade yaw angles, the forward neural network model needs to work with a backward prediction strategy. Two output process parameters were obtained using only two input features, wind speed and air density, in the reverse model. Jiménez et al. [13] designed and developed an ML algorithm for real-time maintenance and prediction of possible blade failure, comparing different supervised ML methods such as decision trees, discriminant analysis, support vector machines (SVM), and k-nearest neighbours (KNN). Compared to the traditional optimisation approach, bidirectional ML predictions were considered much more efficient in their research. It is possible to dramatically reduce maintenance costs by adopting this cutting-edge technology.

This paper presents a knowledge-based ML modelling framework that enables prediction in two directions. In the forward direction, the basic process parameters pertinent to a specific process, such as temperature and axis speed, are applied as the initial model inputs, while knowledge-based process factors are added as the advanced model inputs. Using training data obtained from previous experiments, knowledge-based ML is then applied to predict the process attributes. In the backward direction, the process factors are first predicted from the requirements on the process attributes, and the process parameters are then derived from the knowledge-based features. Forward and backward predictions are important for understanding, analysing, designing, and optimizing wind turbine maintenance processes. An evaluation of the suggested approach is also presented in this paper through a case study. This paper is structured as follows: in Section 2, the knowledge-based ML modelling framework is introduced. Section 3 describes the prototype and the variables that are monitored via forward and backward modelling. The results, performance, and predictions of the algorithm are shown in Section 4. Section 5 concludes the research, highlighting the main conclusions.

## 2. Methodology

In this section, ML models and process factors are combined into a bidirectional modelling approach that predicts the process attributes, such as the imbalance and good-stage condition of the bearings, as well as the basic process parameters, such as the temperature and speed of the wind turbine axis. Figure 1 illustrates the framework. In this framework, basic process parameters and key process attributes serve as both inputs and outputs of the ML models. To enhance the performance and efficiency of the ML models, knowledge-based factors are introduced.

Figure 1 Bidirectional modelling framework based on knowledge.

### 2.1. Process Parameters

As the most important input for monitoring a wind turbine process, the basic process parameters always need to be defined carefully to achieve the required part performance. Depending on the wind turbine scale and location, various parameters are defined and used. For instance, temperature and rotor speed are two of the main process parameters for a wind turbine.
There are also some other basic process parameters, like wind speed, wind direction, and blade radius. When their effects are revealed, these parameters can also be called basic process parameters. In this document, the basic process parameters are represented as $A_b$ ($n$ is the number of process parameters):

(1) $A_b=\{A_1,A_2,A_3,\ldots,A_n\}.$

### 2.2. Process Factors Based on Knowledge

Research has shown that the process attributes are generally determined by different basic process parameters that are fed directly into the modelling process, as described in the introduction. Different combinations of basic process parameters can, however, produce the same process attributes. For bidirectional modelling, the basic process parameters need to have a unique relationship. To link the process attributes directly, knowledge-based process factors were utilized in this study. The process factors can be determined from the basic parameters of the process as well as their physical mechanisms. As a result, the data-driven model will be less redundant when the ML model is applied.

By applying the process factors, the fundamental relationship can be embedded in the model, allowing for more efficient data analysis and a more generic solution for different types of wind turbines. As an example, several factors, such as air temperature or density, can be used to determine the imbalance or good-stage variables. Based on the prior knowledge and the basic parameters, new features can be generated, which are denoted as $A_k$:

(2) $A_k=T_k(A_b),$

where $T_k(\ast)$ represents the transfer function based on the physical mechanism of the axis rotation.

### 2.3. Process Attributes

Many of the attributes of a process are determined by digital modelling or simulation, but some can be measured directly from real-time monitoring of the process; these are represented as $X_{at}$. This type of digital model is typically computationally demanding and requires experimental data as input. As a result, we can express the process attributes as

(3) $X_{at}=T_d(D_{ex}),$

where $T_d(\ast)$ represents the function of the digital model, and $D_{ex}$ is the data collected from experiments.

### 2.4. Bidirectional Prediction with Machine Learning

A neural network is the main machine learning (ML) technology used in this research, while other algorithms are compared in the case study. The data properties and the problem can dictate the type of ANN [14] to be used. Typically, recurrent neural networks (RNNs) are used for processing real-time monitoring data, whereas convolutional neural networks (CNNs) are used for processing image data. In this research, process parameters, factors, and attributes serve as both model inputs and outputs. The forward model can be written as

(4) $X_{at}=T_{ML}(T_k(A_b))=T_d(D_{ex}),$

where $T_{ML}(\ast)$ is the ML algorithm. Accordingly, the backward model is

(5) $A_b=T_k^{-1}(T_{ML}(X_{at})).$
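A minimal sketch of (4) and (5) follows, with synthetic data and assumed names throughout: `T_k` is a stand-in for the physics-based transfer function, and scikit-learn's `MLPRegressor` stands in for the ANNs used in the paper. The forward model learns the map from knowledge-based features to attributes, and the backward model learns the composite map from attributes back to basic parameters.

```python
# Sketch of the bidirectional scheme: forward X_at = T_ML(T_k(A_b)), eq. (4);
# backward A_b = T_k^-1(T_ML(X_at)), eq. (5), learned here as one direct map.
import numpy as np
from sklearn.neural_network import MLPRegressor

def T_k(A_b: np.ndarray) -> np.ndarray:
    """Hypothetical knowledge-based transfer function (e.g., a physics-motivated
    combination of temperature and shaft speed); the real T_k is process-specific."""
    temp, speed = A_b[:, 0], A_b[:, 1]
    return np.column_stack([temp * speed, speed ** 2])

rng = np.random.default_rng(0)
A_b = np.column_stack([rng.uniform(10, 25, 500),   # temperature (deg C)
                       rng.uniform(5, 20, 500)])   # rotor speed (assumed units)
A_k = T_k(A_b)
X_at = (0.3 * A_k[:, 0] + 0.1 * A_k[:, 1]).reshape(-1, 1)  # synthetic attribute

forward = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
forward.fit(A_k, X_at.ravel())    # forward: knowledge-based features -> attribute

backward = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
backward.fit(X_at, A_b)           # backward: attribute -> basic parameters

print("forward :", forward.predict(T_k(A_b[:3])))
print("backward:", backward.predict(X_at[:3]))
```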
## 3. Case Study

An evaluation of the proposed modelling framework is demonstrated in this section. Several previous experiments and simulations on a wind turbine prototype have generated a fault-variable dataset from which the process attributes (imbalance or good stage) can be read. During the experiments, a design-of-experiments method was used to minimise the number of samples required; the method was implemented according to [15]. A wide range of process parameters was adopted, such as air temperature (10–25°C), wind speed (5–20 m/s), and air density (1.225 kg/m³).

### 3.1. Temperature Model

The temperature sensors are used to detect overheating in the components and determine which faults they correspond to, based on the studies carried out on the rest of the components. In this study, the temperature above the shaft bearing is monitored. The selected temperature sensors are positive temperature coefficient sensors, that is, PTC-type sensors, specifically the PT-100.
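Before the data-acquisition details below, a small illustration of how a PT-100 reading maps to temperature may be helpful. This uses the standard linearized RTD relation with the IEC 60751 coefficient, not the exact conversion performed by the NI hardware, so treat it as a first-order approximation only.

```python
# Sketch: PT-100 resistance-to-temperature conversion via the linearized
# relation R(T) ~ R0 * (1 + alpha * T); real systems typically apply the
# full Callendar-Van Dusen equation.
R0 = 100.0        # PT-100 nominal resistance at 0 degC (ohms)
ALPHA = 3.85e-3   # IEC 60751 temperature coefficient (1/degC)

def pt100_temperature(resistance_ohm: float) -> float:
    """Approximate temperature in degC from a PT-100 resistance reading."""
    return (resistance_ohm / R0 - 1.0) / ALPHA

print(pt100_temperature(107.79))  # ~20.2 degC
```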
### 3.1. Temperature Model

The temperature sensors are used to detect overheating in the components and to determine which faults it corresponds to, based on the studies carried out on the rest of the components. In this study, the temperature above the shaft bearing is monitored. The selected temperature sensors are positive temperature coefficient (PTC-type) sensors, specifically the PT-100. For the temperature sensors, two NI 9217 modules [16] have been used. The NI 9217 RTD analog input module has 4 channels and 24-bit resolution for 100 Ω RTD measurements, and it can be configured for two different sampling modes. Both the NI 9201 and NI 9217 boards mount on the NI cDAQ-9172 [17], an 8-slot CompactDAQ chassis that can support up to eight I/O modules; it operates from 11 to 30 VDC and includes an AC/DC power adapter.

### 3.2. Speed Model

Measuring the speed is important for knowing the relative movement of the slow and fast axes and thus checking the correct operation of the system, as well as detecting possible failures by comparing this signal with the other acquired signals. Likewise, the data are used to check the status of the machine in case it is stopped. For the speed measurements, two inductive proximity sensors were selected, specifically the IG5594 [18]; one was installed on each axis, one for the slow axis and one for the fast axis, as shown in Figure 2. By their nature, inductive speed sensors work with pulsed signals, delivering a high level when the sensor detects metal. Both sensors connect to the NI 9201 module on top of the NI cDAQ-9172.

Figure 2: Positioning of the speed sensors on the prototype.
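Because the inductive sensors deliver pulse trains rather than a speed reading, the shaft speed has to be derived from the pulse timing. The sketch below shows one common way to do this from rising-edge timestamps; the pulses-per-revolution count is an assumption (it depends on how many metal targets pass the sensor per turn) and is not specified in the paper.

```python
import numpy as np

def shaft_rpm(edge_times_s, pulses_per_rev=1):
    """Estimate shaft speed (rpm) from rising-edge timestamps of an
    inductive proximity sensor. pulses_per_rev is an assumption: it
    equals the number of metal targets passing the sensor per turn."""
    periods = np.diff(edge_times_s)               # seconds between pulses
    rev_per_s = 1.0 / (periods * pulses_per_rev)  # revolutions per second
    return 60.0 * rev_per_s.mean()

# Example: 10 evenly spaced pulses over 1 s with one target per turn
t = np.linspace(0.0, 1.0, 11)
print(shaft_rpm(t))  # ~600 rpm
```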
### 3.3. Modelling Bi-Directionally with Machine Learning

We propose a bidirectional model, which consists of forward and backward prediction models. The ML models for forward and backward prediction embed process knowledge through two process factors. ANNs were mainly used for the ML modelling; using two different neural networks, three predictions could be made with the forward and backward modelling approaches combined.

#### 3.3.1. Forward Modelling

Figure 3 shows how the forward model is structured. The basic process parameters, such as wind speed and temperature, are the inputs, and a good-stage or imbalance variable is the target. An optimized network structure was designed separately for each neural network.

Figure 3: Modelling variables using forward process knowledge-based machine learning.

#### 3.3.2. Backward Modelling

As shown in Figure 4, the target parameters of the backward model are the basic process parameters, while the imbalance or good-stage variables are the input variables. In the first step, neural networks are used to predict the process factors. Then, using the relationship between the process factors and the process parameters, the basic parameter values are calculated.

Figure 4: Modelling variables using backward process knowledge-based machine learning.

## 4. Results

We evaluated the proposed modelling approach using a model correlation coefficient (MCC). Additionally, two other ML algorithms, namely, decision trees and random forests, were compared with the proposed ANN-based approach. For every algorithm, the results obtained with knowledge-based process factors are compared against the results obtained without them. Good stage and imbalance were predicted by forward modelling, as described in Section 3.3.1.
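The evaluation loop can be sketched with scikit-learn as follows. The MCC is taken here to be the Pearson correlation between predicted and true values, which is an assumption since the paper does not give its exact definition; the data are synthetic placeholders and MLPRegressor stands in for the ANN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def mcc(y_true, y_pred):
    # Assumed definition: Pearson correlation between truth and prediction.
    return np.corrcoef(y_true, y_pred)[0, 1]

rng = np.random.default_rng(1)
A_b = rng.uniform(size=(300, 3))               # placeholder process parameters
A_k = A_b[:, :1] / (A_b[:, 1:2] + 0.1)         # placeholder knowledge-based factor
y = A_k[:, 0] + 0.05 * rng.normal(size=300)    # placeholder attribute target

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(random_state=0),
}
for use_factors in (False, True):
    # "Enhanced inputs" append the knowledge-based factor to the raw parameters.
    X = np.hstack([A_b, A_k]) if use_factors else A_b
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, m in models.items():
        score = mcc(y_te, m.fit(X_tr, y_tr).predict(X_te))
        print(f"{name:13s} factors={use_factors}: MCC={score:.3f}")
```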
Figure 5 shows the predictions for the good stage. The proposed approach, direct prediction with process factors, produced the highest MCC, 0.98. Without incorporating process factors, random forest modelling made the least accurate predictions. The MCC of all ML algorithms improved by about 5% when process factors were used as enhanced inputs.

Figure 5: Modelling performance for good stage prediction. Forward model.

According to Figure 6, the prediction results for imbalance using ANN, random forest, and decision tree show trends similar to those for the good stage. Direct prediction, both with and without process factors, obtained the highest MCC, in this case 0.92, achieved by the decision tree algorithm. The random forest algorithm with direct prediction provided the lowest MCC (0.802). Most ML models do not show a significant improvement in prediction efficiency when knowledge-based process factors are considered.

Figure 6: Modelling performance for imbalance prediction. Forward model.

Temperature and speed are the primary targets of the backward model. The prediction results for temperature are shown in Figure 7. With an MCC of 0.93, the ANN is more accurate than the other ML technologies. The factor-based and direct predictions are very similar for the ANN methodology. The lowest MCC (0.85) was obtained using the process knowledge-based prediction algorithm.

Figure 7: Modelling performance for good stage prediction. Backward model.

Figure 8 shows the prediction results for speed. The ANN is again the most accurate in direct prediction, with an MCC of 0.92, similar to the previous variable. Among the other ML technologies in direct prediction, the decision tree algorithm obtained an MCC of 0.89, while the random forest scored the lowest MCC (0.86). The factor-based predictions are very similar; in this case, the lowest MCC (0.837) is obtained by the decision tree.

Figure 8: Modelling performance for imbalance prediction. Backward model.

When factor-based prediction and direct prediction were compared in both models (forward and backward), the ANN algorithm produced the best results: the ANN-based modelling improved on the other ML technologies by around 7%. In conclusion, ANNs supported by process factors are more effective than the other ML algorithms; the ANN is less sensitive to different targets and adapts better to new input features.

## 5. Conclusions

Supervision and fault diagnosis of wind turbines can be accomplished through a data-driven approach. In this approach, physical and empirical relationships and process knowledge were integrated for bidirectional modelling. Building on a review of the state of the art, the proposed approach combines physics-based and data-driven models, and both proposed directions achieved very good results. Because the fundamental relationships embedded in the physical mechanisms are understood, the inputs and outputs of the proposed models are well defined. In this study, ANNs showed better results than the other ML algorithms. When basic process parameters are used as direct outputs for inverse analysis, a nonuniqueness issue generally arises; this is overcome by the knowledge-based, data-driven method.
Due to its effectiveness, the methodology can be applied to other mechanical components of wind turbine prototypes, thus helping to prevent their breakdown. The prototype can be used to study, develop, and validate fault diagnosis and supervision techniques, with the possibility of replacing defective or worn parts with alternative components. Prototype wind turbines of this kind are used to test diagnostic algorithms intended for high-performance wind turbines; they also enable the algorithms to be verified, adjusted, and corrected, thus saving time and money.

--- *Source: 1020400-2022-12-02.xml*
# Distributed Policy Evaluation with Fractional Order Dynamics in Multiagent Reinforcement Learning

**Authors:** Wei Dai; Wei Wang; Zhongtian Mao; Ruwen Jiang; Fudong Nian; Teng Li

**Journal:** Security and Communication Networks (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1020466

---

## Abstract

The main objective of multiagent reinforcement learning is to achieve a global optimal policy. It is difficult to evaluate the value function with a high-dimensional state space. Therefore, we transform the problem of multiagent reinforcement learning into a distributed optimization problem with constraint terms. In this problem, all agents share the space of states and actions, but each agent only obtains its own local reward. Then, we propose a distributed optimization algorithm with fractional order dynamics to solve this problem. Moreover, we prove the convergence of the proposed algorithm and illustrate its effectiveness with a numerical example.

---

## Body

## 1. Introduction

In recent years, reinforcement learning [1] has received much attention from society and has succeeded remarkably in many areas such as machine learning and artificial intelligence [2]. As is well known, in reinforcement learning an agent determines the optimal strategy under the feedback of rewards by constantly interacting with the environment. The policy function maps possible states to possible actions. Although reinforcement learning has made great achievements for single agents, it remains challenging in multiagent applications [3]. The goal of a multiagent system is to enable several agents, each with simple intelligence that is easy to manage and control, to realize complex intelligence through mutual cooperation. While reducing the complexity of system modeling, the robustness, reliability, and flexibility of the system should be improved [4, 5].

The objective of this paper is to investigate multiagent reinforcement learning (MARL), where each agent exchanges information with its neighbors in a network system [6]. All agents share the state space and actions, but each has only its own local rewards. The purpose of MARL is to determine the global optimal policy, and one feasible way is to construct a central controller, with which each agent must exchange information [7] and which makes decisions for all of them. However, as the state dimension increases, the computation at the central controller becomes extremely heavy, and the whole system would collapse if the central controller were attacked.

Therefore, we try to replace the centralized algorithm mentioned above with distributed control [8, 9]. Consensus-protocol-based designs enable all agents to reach the same state [10–13]. In [14], Zhang et al. proposed a continuous-time distributed version of the gradient algorithm. As far as we know, most gradient methods use integer order iterations. In fact, fractional order calculus has been developed for over 300 years and has been used to solve many kinds of problems, such as in control applications and systems theory [15–17].
In comparison with the traditional integer order algorithm, the fractional order algorithm has more design freedom and more potential to obtain better convergence performance [18, 19].

The contributions of the paper are as follows:

(1) We transform the multiagent strategy evaluation problem into a distributed optimization problem with a consensus constraint.

(2) We construct the fractional order dynamics and prove the convergence of the algorithm.

(3) We present a numerical example to verify the superiority of the proposed fractional order algorithm.

The rest of this paper is organized as follows. Section 2 introduces the problem formulation of MARL and fractional order calculus. Section 3 transforms the multiagent strategy evaluation problem into an optimization problem with a consensus constraint, proposes an algorithm with fractional order dynamics, and proves that the algorithm asymptotically converges to an exact solution. Section 4 presents a simulation example, and we summarize the work in Section 5.

## 2. Problem Formulation

### 2.1. Notations

Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n\times m}$ represent the set of real numbers, the set of $n$-dimensional real column vectors, and the set of $n\times m$ real matrices, respectively. $A^T$ represents the transpose of $A$; $\|A\| = \left(\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^2\right)^{1/2}$; $\|X\|_G^2 = X^T G X$; and $\langle A, B\rangle = A^T B$.

$(S, \{A_i\}_{i=1}^{n}, P, \{R_i\}_{i=1}^{n}, \gamma)$ represents a multiagent Markov decision process (MDP), where $S$ is the state space and $A$ is the joint action space. $P^a$ is the probability of transition from $s_t$ to $s_{t+1}$ when the agents take the joint action $a$, and $P^\pi(s, s') = \mathbb{E}_{a\sim\pi(\cdot|s)}[P^a(s, s')]$. $R_i(s, a)$ is the local reward when agent $i$ takes joint action $a$ at state $s$, and $\gamma\in(0,1)$ is a discount parameter. $\pi(a|s)$ represents the conditional probability that the agents take joint action $a$ at state $s$. The reward function of agent $i$ under a joint policy $\pi$ at state $s$ is defined as

$$R_i^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}\left[R_i(s, a)\right], \tag{1}$$

where the right-hand side averages over all possible choices of the action $a$, that is, the expected reward of agent $i$. Moreover,

$$R_c^\pi(s) = \frac{1}{n}\sum_{i=1}^{n} R_i^\pi(s), \tag{2}$$

where $R_c^\pi(s)$ represents the average of the local rewards.

### 2.2. Graph Theory

A graph is expressed as $G(V, E)$, where $V$ is the set of vertices and $E$ is the set of edges of $G$. If every edge in the graph is undirected, the graph is called an undirected graph [20]. For a graph, $A = [a_{ij}]\in\mathbb{R}^{n\times n}$ is the adjacency matrix, with $a_{ij}\neq 0$ if $(i, j)\in E$ and $a_{ij} = 0$ otherwise. $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ is the degree matrix, with $d_i = \sum_{j=1}^{n} a_{ij}$, and the Laplacian matrix is $L = D - A$. Moreover, if the graph is connected, $L$ has the following two properties:

(1) The Laplacian matrix is a positive semidefinite matrix.

(2) Its minimum eigenvalue is 0, because every row of the Laplacian matrix sums to 0.

The minimum nonzero eigenvalue is called the algebraic connectivity of the graph.

Assumption 1. The undirected graph mentioned in the following text is connected.

Lemma 1 (see [21]). For a fractional order system $D^\alpha x(t) = u(t)$ with $\alpha\in(0,1)$, the frequency distributed model is

$$\frac{\partial z(\omega, t)}{\partial t} = -\omega z(\omega, t) + u(t), \qquad y(t) = \int_0^\infty \mu_\alpha(\omega)\, z(\omega, t)\, d\omega, \tag{3}$$

where $\mu_\alpha(\omega) = \sin(\alpha\pi)/(\omega^\alpha \pi)$.

Definition 1 (see [22]). The $\alpha$th order Caputo derivative is

$$D^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)}\int_0^t (t-\tau)^{n-1-\alpha} f^{(n)}(\tau)\, d\tau, \tag{4}$$

where $\alpha\in(n-1, n)$, $n\in\mathbb{N}$, $\Gamma(t) = \int_0^\infty \tau^{t-1} e^{-\tau}\, d\tau$ is the Gamma function, and $f^{(n)}(t)$ is the $n$th order derivative of $f(t)$.
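As a quick sanity check on Definition 1, the snippet below numerically evaluates the Caputo derivative of $f(t) = t$ for $\alpha = 0.5$ (so $n = 1$) and compares it with the closed form $D^{0.5}t = 2\sqrt{t}/\sqrt{\pi}$; the test function and the order are arbitrary choices made here for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo(f_prime, t, alpha):
    """Caputo derivative of order alpha in (0,1) via eq. (4) with n = 1:
    D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t (t-tau)^(-alpha) f'(tau) dtau."""
    integrand = lambda tau: (t - tau) ** (-alpha) * f_prime(tau)
    val, _ = quad(integrand, 0.0, t)
    return val / gamma(1.0 - alpha)

t, alpha = 2.0, 0.5
print(caputo(lambda tau: 1.0, t, alpha))   # f(t) = t, so f'(tau) = 1
print(2.0 * np.sqrt(t) / np.sqrt(np.pi))   # closed-form value, ~1.5958
```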
### 2.3. Policy Evaluation

To measure the benefit to the agents of being in the current state, we establish the following value function, which represents the cumulative return obtained by the agents starting from the state $s_t$ and adopting a strategy $\pi$:

$$V^\pi(s) = \mathbb{E}_\pi\left[\sum_{m=1}^{\infty}\gamma^m R_c^\pi(s_{t+m+1}) \,\Big|\, s_t = s\right]. \tag{5}$$

We construct the Bellman equation based on $V^\pi\in\mathbb{R}^{|S|}$ and $R_c^\pi\in\mathbb{R}^{|S|}$:

$$V^\pi = R_c^\pi + \gamma P^\pi V^\pi. \tag{6}$$

It is difficult to evaluate $V^\pi$ directly if the dimension of the state space is very large. Therefore, we use $V_\theta(s) = \phi^T(s)\theta$ to approximate $V^\pi$, where $\theta\in\mathbb{R}^d$ is a parameter vector and $\phi(s): S\to\mathbb{R}^d$ is a feature function of the state $s$. Indeed, solving equation (6) is equivalent to obtaining the vector $\theta$ via $V_\theta\approx V^\pi$. In other words, it means minimizing the mean square error $\frac{1}{2}\|V_\theta - V^\pi\|_D^2$, where $D = \mathrm{diag}(\mu^\pi(s), s\in S)\in\mathbb{R}^{|S|\times|S|}$ is a diagonal matrix determined by the stationary distribution. We construct the objective as follows:

$$f(\theta) = \frac{1}{2}\left\|\Pi_\Phi\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_D^2 + \frac{\rho}{2}\|\theta\|^2, \tag{7}$$

where $\rho$ is a regularization parameter and $\Pi_\Phi$ is the projection operator onto the column subspace of $\Phi$. It is not difficult to rewrite $\Pi_\Phi$ as $\Pi_\Phi = \Phi(\Phi^T D\Phi)^{-1}\Phi^T D$. Substituting $\Pi_\Phi$ into (7),

$$\begin{aligned} f(\theta) &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Pi_\Phi\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_D^2 \\ &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)^T \Pi_\Phi^T D \Pi_\Phi \left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right) \\ &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)^T D\Phi\left(\Phi^T D\Phi\right)^{-1}\Phi^T D \left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right) \\ &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Phi^T D\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_{(\Phi^T D\Phi)^{-1}}^2 \\ &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Phi^T D\left(\Phi - \gamma P^\pi\Phi\right)\theta - \Phi^T D R_c^\pi\right\|_{(\Phi^T D\Phi)^{-1}}^2 \\ &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|A\theta - b\right\|_{C^{-1}}^2, \end{aligned} \tag{8}$$

where $A = \Phi^T D(\Phi - \gamma P^\pi\Phi) = \mathbb{E}_{s\sim\mu^\pi}[\phi(s)(\phi(s) - \gamma\phi(s'))^T]$, $C = \Phi^T D\Phi = \mathbb{E}_{s\sim\mu^\pi}[\phi(s)\phi^T(s)]$, and $b = \Phi^T D R_c^\pi = \mathbb{E}_{s\sim\mu^\pi}[R_c^\pi(s)\phi(s)]$.

The minimizer $\theta$ of equation (8) is unique if $A$ is a full rank matrix and $C$ is a positive definite matrix. In practice, it is difficult to obtain the expectations in compact form when the distribution is unknown. We replace the expectations with sample averages as follows:

$$\hat{A} = \frac{1}{p}\sum_{t=1}^{p} A_t, \qquad \hat{b} = \frac{1}{p}\sum_{t=1}^{p} b_t, \qquad \hat{C} = \frac{1}{p}\sum_{t=1}^{p} C_t, \tag{9}$$

where $A_t = \phi(s_t)(\phi(s_t) - \gamma\phi(s_{t+1}))^T$, $C_t = \phi(s_t)\phi^T(s_t)$, and $b_t = R_c^\pi(s_t)\phi(s_t)$.

We assume that the sample size $p$ approaches infinity to ensure the confidence level, and in these sample sequences each state is visited at least once. Then, we reconstruct equation (8) as follows:

$$f(\theta) = \frac{1}{2}\left\|\hat{A}\theta - \hat{b}\right\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta\|^2. \tag{10}$$

Noteworthy, in a shared space, each agent observes the states and actions of its neighbors but only observes its own local rewards. In other words, we can obtain $\hat{A}$ and $\hat{C}$ but not $\hat{b}$. So, we define $\hat{b}_i = \frac{1}{p}\sum_{t=1}^{p} b_{t,i}$ with $b_{t,i} = R_i^\pi(s_t, a_t)\phi(s_t)$. Then, we rewrite equation (10) as follows:

$$\min_{\theta\in\mathbb{R}^d}\; \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{2}\left\|\hat{A}\theta - \hat{b}_i\right\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta\|^2\right). \tag{11}$$
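For concreteness, the following is a minimal numerical sketch of the sample averages in (9) and of the centralized minimizer of (11); the features and rewards are synthetic placeholders, and the closed-form solve at the end uses the regularized normal equations, which is available only when the local $\hat{b}_i$ can be averaged in one place (precisely what the distributed setting avoids).

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n, gamma, rho = 5, 1000, 4, 0.5, 0.1

phi = rng.normal(size=(p + 1, d))   # placeholder features phi(s_t) along a trajectory
R = rng.uniform(size=(p, n))        # placeholder local rewards R_i(s_t, a_t)

# Sample averages from (9): A_hat, C_hat are shared; b_hat_i is local to agent i
A_hat = sum(np.outer(phi[t], phi[t] - gamma * phi[t + 1]) for t in range(p)) / p
C_hat = sum(np.outer(phi[t], phi[t]) for t in range(p)) / p
b_hat = [(R[:, i, None] * phi[:p]).mean(axis=0) for i in range(n)]

# Centralized reference: minimizing (11) over a single theta gives the
# regularized normal equations with the average of the local b_hat_i.
b_bar = np.mean(b_hat, axis=0)
M = A_hat.T @ np.linalg.inv(C_hat) @ A_hat + rho * np.eye(d)
theta_star = np.linalg.solve(M, A_hat.T @ np.linalg.inv(C_hat) @ b_bar)
print(theta_star)
```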
## 3. Fractional Order Dynamics for Policy Evaluation

As shown above, the aim of policy evaluation becomes minimizing the objective function. Now, we rewrite (11) as follows:

$$\min_{\theta_i}\; \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{2}\left\|\hat{A}\theta_i - \hat{b}_i\right\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta_i\|^2\right), \qquad \text{s.t.}\ \theta_i = \theta_j. \tag{12}$$

We define $\bar\theta\in\mathbb{R}^{nd}$ as the vector concatenating all the $\theta_i$, $\bar\theta = [\theta_1^T, \theta_2^T, \ldots, \theta_n^T]^T\in\mathbb{R}^{nd}$, and the aggregate function $f$ as $f(\bar\theta) = \sum_{i=1}^{n} f(\theta_i)$. The consensus-constrained problem (12) can then be expressed as

$$\min_{\bar\theta}\; \frac{1}{2}\left\|\bar{\hat{A}}\bar\theta - \bar{\hat{b}}\right\|_{\bar{\hat{C}}^{-1}}^2 + \frac{\rho}{2}\|\bar\theta\|^2, \qquad \text{s.t.}\ \bar{L}\bar\theta = 0, \tag{13}$$

where $\bar{\hat{b}} = [\hat{b}_1^T, \hat{b}_2^T, \ldots, \hat{b}_n^T]^T\in\mathbb{R}^{nd}$, $L\in\mathbb{R}^{n\times n}$, $\bar{L} = L\otimes I_d\in\mathbb{R}^{nd\times nd}$, $\bar{\hat{A}} = \hat{A}\otimes I_n\in\mathbb{R}^{nd\times nd}$, and $\bar{\hat{C}} = \hat{C}\otimes I_n\in\mathbb{R}^{nd\times nd}$. Based on (13), we formulate the following augmented Lagrangian:

$$\mathcal{L}(\bar\theta, \lambda) = f(\bar\theta) + \langle\lambda, \bar{L}\bar\theta\rangle + \frac{1}{2}\bar\theta^T\bar{L}\bar\theta, \tag{14}$$

where $\lambda\in\mathbb{R}^{nd}$ is the Lagrange multiplier.

It is feasible to design a fractional order continuous-time optimization algorithm from the primal-dual viewpoint via (14): gradient descent for the primal variable $\bar\theta$ and gradient ascent for the dual variable $\lambda$. Both are updated according to the fractional order law

$$D^{\alpha_1}\bar\theta(t) = -\nabla_{\bar\theta}\mathcal{L}(\bar\theta(t), \lambda(t)), \qquad D^{\alpha_2}\lambda(t) = \nabla_\lambda\mathcal{L}(\bar\theta(t), \lambda(t)), \tag{15}$$

where $0 < \alpha_1 < 2$, $0 < \alpha_2 < 1$, and $\nabla_{\bar\theta}\mathcal{L}$ and $\nabla_\lambda\mathcal{L}$ are the gradients of $\mathcal{L}$ with respect to $\bar\theta(t)$ and $\lambda(t)$, respectively. We express the details of (15) in Algorithm 1.

Algorithm 1.
Initialization: $\theta_i = 0\in\mathbb{R}^d$, $\lambda_i = 0\in\mathbb{R}^d$.
Update: for $t\le 50$,
$$D^{\alpha_1}\theta_i(t) = -\left[\left(\hat{A}^T\hat{C}^{-1}\hat{A} + \rho I\right)\theta_i(t) - \hat{A}^T\hat{C}^{-1}\hat{b}_i + \bar{L}_i\lambda(t) + \bar{L}_i\bar\theta(t)\right],$$
$$D^{\alpha_2}\lambda_i(t) = \bar{L}_i\bar\theta(t).$$
Return $\theta$.

The aim of the distributed algorithm is to obtain the solution of the value function. The proposed algorithm has more design freedom and more potential to achieve good convergence performance than the conventional integer order one. We now provide the following convergence conclusions.

Theorem 1. Under Assumption 1, let $\bar\theta(t)$ and $\lambda(t)$ be generated according to Algorithm 1. If $0 < \alpha_1, \alpha_2 < 1$, then $\bar\theta(t)$ asymptotically converges to the optimal solution.

Proof. We write the dynamics of $\bar\theta(t)$ and $\lambda(t)$ in detail:

$$D^{\alpha_1}\bar\theta(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\bar\theta(t) + \bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{b}} - \bar{L}\lambda(t), \qquad D^{\alpha_2}\lambda(t) = \bar{L}\bar\theta(t), \tag{16}$$

where $I$ is an identity matrix. We consider the equilibrium of (16):

$$0 = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\bar\theta^* + \bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{b}} - \bar{L}\lambda^*, \qquad 0 = \bar{L}\bar\theta^*. \tag{17}$$

Then, we combine (16) and (17); writing $\tilde{\bar\theta}(t) = \bar\theta(t) - \bar\theta^*$ and $\tilde\lambda(t) = \lambda(t) - \lambda^*$, and using the facts $D^{\alpha_1}\bar\theta^* = 0$ and $D^{\alpha_2}\lambda^* = 0$,

$$D^{\alpha_1}\tilde{\bar\theta}(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t), \qquad D^{\alpha_2}\tilde\lambda(t) = \bar{L}\tilde{\bar\theta}(t). \tag{18}$$

Through Lemma 1, we reconstruct (18) as

$$\frac{\partial z_1(\omega, t)}{\partial t} = -\omega z_1(\omega, t) - \left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t), \qquad \tilde{\bar\theta}(t) = \int_0^\infty \mu_{\alpha_1}(\omega)\, z_1(\omega, t)\, d\omega, \tag{19}$$

and

$$\frac{\partial z_2(\omega, t)}{\partial t} = -\omega z_2(\omega, t) + \bar{L}\tilde{\bar\theta}(t), \qquad \tilde\lambda(t) = \int_0^\infty \mu_{\alpha_2}(\omega)\, z_2(\omega, t)\, d\omega. \tag{20}$$

We construct the Lyapunov function

$$V_1 = \frac{1}{2}\int_0^\infty \sum_{i=1}^{2}\mu_{\alpha_i}(\omega)\left\|z_i(\omega, t)\right\|^2 d\omega. \tag{21}$$

Then,

$$\begin{aligned} \dot{V}_1 &= \int_0^\infty \sum_{i=1}^{2}\mu_{\alpha_i}(\omega)\left\langle z_i(\omega, t), \frac{\partial z_i(\omega, t)}{\partial t}\right\rangle d\omega \\ &= -\int_0^\infty \sum_{i=1}^{2}\omega\,\mu_{\alpha_i}(\omega)\left\|z_i(\omega, t)\right\|^2 d\omega + \left\langle\tilde{\bar\theta}(t), -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t)\right\rangle + \left\langle\tilde\lambda(t), \bar{L}\tilde{\bar\theta}(t)\right\rangle \\ &= -\int_0^\infty \sum_{i=1}^{2}\omega\,\mu_{\alpha_i}(\omega)\left\|z_i(\omega, t)\right\|^2 d\omega - \tilde{\bar\theta}(t)^T\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) \le 0. \end{aligned} \tag{22}$$

We obtain the result by the LaSalle invariance principle.

We now improve the convergence conclusion of Theorem 1 by extending $\alpha_1$ from $(0, 1)$ to $(1, 2)$.

Theorem 2. Under Assumption 1, let $\bar\theta(t)$ and $\lambda(t)$ be generated according to Algorithm 1. If $1 < \alpha_1 < 2$ and $\alpha_1 + \alpha_2 = 2$, then $\bar\theta(t)$ asymptotically converges to the optimal solution.

Proof. Writing $\alpha_1 = 1 + \bar\alpha_1$, we rewrite the error dynamics of Theorem 1 as

$$\dot{\tilde{\bar\theta}}_a(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t), \qquad D^{\bar\alpha_1}\tilde{\bar\theta}(t) = \tilde{\bar\theta}_a(t), \qquad D^{\alpha_2}\tilde\lambda(t) = \bar{L}\tilde{\bar\theta}(t). \tag{23}$$

Due to $\alpha_1 = 1 + \bar\alpha_1$ and $\alpha_1 + \alpha_2 = 2$,

$$\dot{\tilde\lambda}(t) = D^{\bar\alpha_1} D^{\alpha_2}\tilde\lambda(t) = D^{\bar\alpha_1}\bar{L}\tilde{\bar\theta}(t) = \bar{L}\tilde{\bar\theta}_a(t). \tag{24}$$

Under the conditions of (23) and (24), we obtain the frequency distributed model by Lemma 1 as follows:

$$\dot{\tilde{\bar\theta}}_a(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t), \quad \frac{\partial z_1(\omega, t)}{\partial t} = -\omega z_1(\omega, t) + \tilde{\bar\theta}_a(t), \quad \tilde{\bar\theta}(t) = \int_0^\infty \mu_{\bar\alpha_1}(\omega)\, z_1(\omega, t)\, d\omega, \quad \dot{\tilde\lambda}(t) = \bar{L}\tilde{\bar\theta}_a(t). \tag{25}$$

We construct the Lyapunov function

$$V_2 = \frac{1}{2}\left\|\tilde{\bar\theta}_a(t)\right\|^2 + \frac{1}{2}\left\|\tilde\lambda(t)\right\|^2. \tag{26}$$

Then,

$$\begin{aligned} \dot{V}_2 &= \left\langle\tilde{\bar\theta}_a(t), \dot{\tilde{\bar\theta}}_a(t)\right\rangle + \left\langle\tilde\lambda(t), \dot{\tilde\lambda}(t)\right\rangle \\ &= \left\langle\tilde{\bar\theta}_a(t), -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) - \bar{L}\tilde\lambda(t)\right\rangle + \left\langle\tilde\lambda(t), \bar{L}\tilde{\bar\theta}_a(t)\right\rangle \\ &= -\tilde{\bar\theta}_a(t)^T\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar\theta}(t) \le 0. \end{aligned} \tag{27}$$

Through the LaSalle invariance principle, we obtain the result.
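One way to simulate Algorithm 1 in discrete time is a Grünwald-Letnikov approximation of the fractional derivatives, as sketched below; the step size, horizon, ring-graph Laplacian, and random problem data are all assumptions made here for illustration, and this explicit scheme is only one possible discretization of the continuous-time dynamics above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, rho = 4, 3, 0.1
alpha1, alpha2, h, steps = 0.9, 0.9, 0.01, 2000

# Assumed problem data: shared A_hat, C_hat and local b_hat_i (cf. Section 2.3)
A_hat = np.eye(d) + 0.3 * rng.normal(size=(d, d))
C_hat = 0.25 * np.eye(d)
b_hat = rng.normal(size=(n, d))

# Laplacian of a 4-agent ring graph and its lifted version L_bar = L (x) I_d
L = np.array([[2, -1, 0, -1], [-1, 2, -1, 0],
              [0, -1, 2, -1], [-1, 0, -1, 2]], dtype=float)
Lbar = np.kron(L, np.eye(d))

Ci = np.linalg.inv(C_hat)
Mbar = np.kron(np.eye(n), A_hat.T @ Ci @ A_hat + rho * np.eye(d))
gvec = np.concatenate([A_hat.T @ Ci @ b_hat[i] for i in range(n)])

def gl_coeffs(alpha, m):
    # c_j = (-1)^j * binom(alpha, j), via the standard recursion
    c = np.ones(m)
    for j in range(1, m):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

c1, c2 = gl_coeffs(alpha1, steps + 1), gl_coeffs(alpha2, steps + 1)
TH = np.zeros((steps + 1, n * d))   # theta_bar history (zero initial condition)
LM = np.zeros((steps + 1, n * d))   # lambda history

for k in range(steps):
    # Right-hand sides of Algorithm 1 at step k
    u_th = -(Mbar @ TH[k]) + gvec - Lbar @ LM[k] - Lbar @ TH[k]
    u_lm = Lbar @ TH[k]
    # Grunwald-Letnikov step: x_{k+1} = h^alpha * u_k - sum_{j>=1} c_j x_{k+1-j}
    TH[k + 1] = h ** alpha1 * u_th - (c1[1:k + 2, None] * TH[k::-1]).sum(axis=0)
    LM[k + 1] = h ** alpha2 * u_lm - (c2[1:k + 2, None] * LM[k::-1]).sum(axis=0)

print(TH[-1].reshape(n, d))  # agents' estimates; rows should be near consensus
```

Setting `alpha1 = alpha2 = 1.0` reduces the update to the forward Euler discretization of the conventional integer order primal-dual flow, which makes the comparison in Section 4 easy to reproduce in this sketch.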
## 4. Experimental Simulation

In this section, we provide an example to illustrate the effectiveness of the proposed algorithm. There are 20 states in the multiagent reinforcement learning problem. We set $d = 5$, regularization parameter $\rho = 0.1$, and discount parameter $\gamma = 0.5$. There are 4 agents in the connected network in Figure 1. The state $s$ is a randomly generated 5-dimensional column vector, the feature map $\phi(s)$ is a cosine function, and $P$ is a randomly generated 5-dimensional matrix.

Figure 1: Undirected communication network graph.

Then, we randomly generate the matrices $\hat{A}$, $\hat{C}$, and $\hat{b}_i$ as follows:

$$\hat{A} = \begin{pmatrix} 0.9797 & 0.5949 & 0.1174 & 0.0855 & 0.7303 \\ 0.4389 & 0.2622 & 0.2967 & 0.2625 & 0.4886 \\ 0.1111 & 0.6028 & 0.3188 & 0.8010 & 0.5785 \\ 0.2581 & 0.7112 & 0.4242 & 0.0292 & 0.2373 \\ 0.4087 & 0.2217 & 0.5079 & 0.9289 & 0.4588 \end{pmatrix}, \qquad \hat{C} = 0.25\, I_5,$$

$$\hat{b}_1 = (0.9631, 0.5468, 0.5211, 0.2316, 0.4889)^T, \qquad \hat{b}_2 = (0.6241, 0.6791, 0.3955, 0.3674, 0.9880)^T,$$

$$\hat{b}_3 = (0.0377, 0.8852, 0.9133, 0.7962, 0.0987)^T, \qquad \hat{b}_4 = (0.2619, 0.3354, 0.6797, 0.1366, 0.7212)^T. \tag{28}$$

Before the simulation, it is necessary to obtain the reference solution of the multiagent reinforcement learning problem:

$$\theta^* = (-0.0756, 0.0211, 0.5362, 0.0508, 0.6956)^T. \tag{29}$$

We compare the fractional order algorithm with the conventional integer order one. In Figures 2 and 3, the curves show almost the same convergence performance as the conventional integer order algorithm when $\alpha$ is 0.995. In Figures 4 and 5, the fractional order algorithm achieves a faster convergence rate than the integer order algorithm. The simulation results illustrate the convergence of both the integer order and the fractional order dynamics. Furthermore, the proposed distributed algorithm with fractional order dynamics has more design freedom to achieve a better performance than the conventional first-order algorithm.

Figure 2: The trajectory of the estimate of $\theta_1$.
Figure 3: The trajectory of the estimate of $\theta_2$.
Figure 4: The trajectory of the estimate of $\theta_1$.
Figure 5: The trajectory of the estimate of $\theta_2$.

## 5. Conclusion

In this paper, the value function evaluation problem of multiagent reinforcement learning was transformed into a distributed optimization problem with a consensus constraint. Then, we proposed a distributed algorithm with fractional order dynamics to solve this problem. Besides, we proved the asymptotic convergence of the algorithm via Lyapunov functions and illustrated the effectiveness of the proposed algorithm with an example. In the future, we will consider applying reinforcement learning to recommendation systems, so as to obtain better results [23].

--- *Source: 1020466-2021-09-06.xml*
1020466-2021-09-06_1020466-2021-09-06.md
20,737
Distributed Policy Evaluation with Fractional Order Dynamics in Multiagent Reinforcement Learning
Wei Dai; Wei Wang; Zhongtian Mao; Ruwen Jiang; Fudong Nian; Teng Li
Security and Communication Networks (2021)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1020466
1020466-2021-09-06.xml
--- ## Abstract The main objective of multiagent reinforcement learning is to achieve a global optimal policy. It is difficult to evaluate the value function with high-dimensional state space. Therefore, we transfer the problem of multiagent reinforcement learning into a distributed optimization problem with constraint terms. In this problem, all agents share the space of states and actions, but each agent only obtains its own local reward. Then, we propose a distributed optimization with fractional order dynamics to solve this problem. Moreover, we prove the convergence of the proposed algorithm and illustrate its effectiveness with a numerical example. --- ## Body ## 1. Introduction In recent years, reinforcement learning [1] has received much attention from the society and succeeded remarkably in many areas such as machine learning and artificial intelligence [2]. As we all know, in reinforcement learning, an agent determines the optimal strategy under the feedback of rewards via constantly interacting with the environment. The function of the policy maps possible states to possible actions. Although reinforcement learning has made great achievements in single agent, it remains challenging in the application of multiagent [3]. The goal of the multiagent system is to enable several agents with simple intelligence, but it is easy to manage and control to realize complex intelligence through mutual cooperation. While reducing the complexity of system modeling, the robustness, reliability, and flexibility of the system should be improved [4, 5].In this paper, the objective of this paper is to investigate multiagent reinforcement learning (MARL), where each agent exchanges information with their neighbors in network systems [6]. All agents share the state space and action except local rewards. The purpose of the MARL is to determine the global optimal policy, and a feasible way is to construct a central controller, where each agent must exchange information with the central controller [7], which makes decisions for all of them. However, with the increase of state dimensions, the computation of the central controller becomes extensively heavy. The whole system would collapse if the central controller was attacked.Then, we try to replace the centralized algorithm mentioned above with distributed control [8, 9]. Consistency protocol based on design enables all agents to achieve the same state [10–13]. In [14], Zhang et al. proposed a continuous-time distributed version of the gradient algorithm. As far as we know, most of the gradient methods use integer order iteration. In fact, fractional order has been developed for 300 years and used to solve many kinds of problems such as control applications and systems’ theory [15–17]. In comparison with the traditional integer order algorithm, the fractional order algorithm has more design freedom and potential to obtain better convergence performance [18, 19].Hereinafter, the contributions of the paper are listed:(1) We transform the multiagent strategy evaluation problem into a distributed optimization problem with a consensus constraint(2) We construct the fractional order dynamics and prove the convergence of the algorithm(3) We take a numerical example to verify the superiority of the proposed fractional order algorithmThe rest organization of this paper is listed as follows. Section2 introduces some problems of formulation on MARL and fractional order calculus. 
Section 3 transforms the multiagent policy evaluation problem into an optimization problem with a consensus constraint, proposes an algorithm with fractional order dynamics, and proves that the algorithm asymptotically converges to an exact solution. Section 4 presents a simulation example, and Section 5 concludes the paper.

## 2. Problem Formulation

### 2.1. Notations

Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n\times m}$ denote the set of real numbers, the set of $n$-dimensional real column vectors, and the set of $n\times m$ real matrices, respectively. $A^T$ denotes the transpose of $A$, $\|A\| = \big(\sum_{i=1}^n \sum_{j=1}^n a_{ij}^2\big)^{1/2}$, $\|X\|_G^2 = X^T G X$, and $\langle A, B\rangle = A^T B$.

$(S, \{A_i\}_{i=1}^n, P, \{R_i\}_{i=1}^n, \gamma)$ denotes a multiagent Markov decision process (MDP), where $S$ is the state space and $A$ is the joint action space. $P(a)$ is the probability of transition from $s_t$ to $s_{t+1}$ when the agents take the joint action $a$, with $P^\pi(s,s') = \mathbb{E}_{a\sim\pi(\cdot|s)}\, P(a)(s,s')$; $R_i(s,a)$ is the local reward when agent $i$ takes joint action $a$ at state $s$; and $\gamma\in(0,1)$ is a discount parameter. $\pi(a|s)$ denotes the conditional probability that the agents take joint action $a$ at state $s$. The reward function of agent $i$ under a joint policy $\pi$ at state $s$ is defined as

$$R_i^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}\left[R_i(s,a)\right], \tag{1}$$

where the right-hand side takes the expectation of the reward of agent $i$ over all possible choices of action $a$. The average of the local rewards is

$$R_c^\pi(s) = \frac{1}{n}\sum_{i=1}^n R_i^\pi(s). \tag{2}$$

### 2.2. Graph Theory

A graph is expressed as $G(V,E)$, where $G$ denotes the graph, $V$ is the set of vertices, and $E$ is the set of edges in $G$. If every edge in the graph is undirected, the graph is called an undirected graph [20]. $A = [a_{ij}] \in \mathbb{R}^{n\times n}$ is the adjacency matrix, with $a_{ij} \neq 0$ if $(i,j)\in E$ and $a_{ij}=0$ otherwise. $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ is the degree matrix with $d_i = \sum_{j=1}^n a_{ij}$, and the Laplacian matrix is $L = D - A$. Moreover, if the graph is connected, $L$ has the following two properties:

(1) The Laplacian matrix is positive semidefinite.
(2) Its minimum eigenvalue is 0, because every row of the Laplacian matrix sums to 0.

The minimum nonzero eigenvalue is called the algebraic connectivity of the graph.

Assumption 1. The undirected graph considered in the following is connected.

Lemma 1 (see [21]). For a fractional order system $D^\alpha x(t) = u(t)$ with $\alpha\in(0,1)$, the frequency distributed model is

$$\frac{\partial z(\omega,t)}{\partial t} = -\omega z(\omega,t) + u(t), \qquad y(t) = \int_0^\infty \mu_\alpha(\omega)\, z(\omega,t)\, d\omega, \tag{3}$$

where $\mu_\alpha(\omega) = \sin(\alpha\pi)/(\omega^\alpha \pi)$.

Definition 1 (see [22]). The $\alpha$th order Caputo derivative is

$$D^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-\tau)^{n-1-\alpha} f^{(n)}(\tau)\, d\tau, \tag{4}$$

where $\alpha\in(n-1,n)$, $n\in\mathbb{N}$, $\Gamma(t) = \int_0^\infty \tau^{t-1} e^{-\tau}\, d\tau$ is the Gamma function, and $f^{(n)}(t)$ is the $n$th order derivative of $f(t)$.
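Definition 1 can be checked numerically. The following Python sketch (our illustration, not part of the paper) approximates the Caputo derivative of order $\alpha\in(0,1)$ with the standard L1 finite-difference scheme and compares it with the known closed form $D^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, h, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the last grid point, given samples f_vals on a uniform grid of step h."""
    N = len(f_vals) - 1
    k = np.arange(N)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 quadrature weights
    df = f_vals[N - k] - f_vals[N - k - 1]           # backward differences
    return (h ** (-alpha) / gamma(2 - alpha)) * np.sum(b * df)

alpha, h, T = 0.5, 1e-3, 1.0
t = np.arange(0.0, T + h, h)
approx = caputo_l1(t, h, alpha)                # f(t) = t
exact = T ** (1 - alpha) / gamma(2 - alpha)    # D^alpha t = t^(1-alpha)/Gamma(2-alpha)
print(approx, exact)
```

For the linear test function the two values agree exactly, since the telescoping sum of the L1 weights recovers $T^{1-\alpha}/\Gamma(2-\alpha)$.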
### 2.3. Policy Evaluation

To measure the benefit to the agents of being in a given state, we define the following value function, which represents the cumulative return obtained by the agents starting from state $s_t$ and following a policy $\pi$:

$$V^\pi(s) = \mathbb{E}_\pi\!\left[\sum_{m=1}^{\infty} \gamma^m R_c^\pi(s_{t+m+1}) \,\middle|\, s_t = s\right]. \tag{5}$$

With $V^\pi \in \mathbb{R}^{|S|}$ and $R_c^\pi \in \mathbb{R}^{|S|}$, the Bellman equation reads

$$V^\pi = R_c^\pi + \gamma P^\pi V^\pi. \tag{6}$$

It is difficult to evaluate $V^\pi$ directly when the dimension of the state space is very large. Therefore, we use $V_\theta(s) = \phi^T(s)\theta$ to approximate $V^\pi$, where $\theta \in \mathbb{R}^d$ is a parameter vector and $\phi(s): S \to \mathbb{R}^d$ is a feature map for state $s$. Solving equation (6) then amounts to finding the vector $\theta$ such that $V_\theta \approx V^\pi$; in other words, minimizing the mean square error $\frac{1}{2}\|V_\theta - V^\pi\|_D^2$, where $D = \mathrm{diag}(\mu^\pi(s), s\in S) \in \mathbb{R}^{|S|\times|S|}$ is the diagonal matrix determined by the stationary distribution. We consider the objective

$$f(\theta) = \frac{1}{2}\left\|\Pi_\Phi\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_D^2 + \frac{\rho}{2}\|\theta\|^2, \tag{7}$$

where $\rho$ is a regularization parameter and $\Pi_\Phi$ is the projection operator onto the column space of $\Phi$. Writing $\Pi_\Phi = \Phi(\Phi^T D\Phi)^{-1}\Phi^T D$ and substituting into (7) gives

$$
\begin{aligned}
f(\theta) &= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Pi_\Phi\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_D^2 \\
&= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)^T \Pi_\Phi^T D\, \Pi_\Phi \left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right) \\
&= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)^T D\Phi(\Phi^T D\Phi)^{-1}\Phi^T D \left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right) \\
&= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Phi^T D\left(V_\theta - \gamma P^\pi V_\theta - R_c^\pi\right)\right\|_{(\Phi^T D\Phi)^{-1}}^2 \\
&= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\left\|\Phi^T D(\Phi - \gamma P^\pi \Phi)\theta - \Phi^T D R_c^\pi\right\|_{(\Phi^T D\Phi)^{-1}}^2 \\
&= \frac{\rho}{2}\|\theta\|^2 + \frac{1}{2}\|A\theta - b\|_{C^{-1}}^2,
\end{aligned} \tag{8}
$$

where $A = \Phi^T D(\Phi - \gamma P^\pi \Phi) = \mathbb{E}_{s\sim\mu^\pi}\!\left[\phi(s)\left(\phi(s) - \gamma\phi(s')\right)^T\right]$, $C = \Phi^T D\Phi = \mathbb{E}_{s\sim\mu^\pi}\!\left[\phi(s)\phi^T(s)\right]$, and $b = \Phi^T D R_c^\pi = \mathbb{E}_{s\sim\mu^\pi}\!\left[R_c^\pi(s)\phi(s)\right]$.

The minimizer $\theta$ of (8) is unique if $A$ has full rank and $C$ is positive definite. In practice, it is difficult to compute the expectations in closed form when the distribution is unknown, so we replace each expectation with a sample average:

$$\hat{A} = \frac{1}{p}\sum_{t=1}^{p} A_t, \qquad \hat{b} = \frac{1}{p}\sum_{t=1}^{p} b_t, \qquad \hat{C} = \frac{1}{p}\sum_{t=1}^{p} C_t, \tag{9}$$

where $A_t = \phi(s_t)\left(\phi(s_t) - \gamma\phi(s_{t+1})\right)^T$, $C_t = \phi(s_t)\phi^T(s_t)$, and $b_t = R_c^\pi(s_t)\phi(s_t)$.

We assume that the sample size $p$ approaches infinity to guarantee the confidence level, and that each state appears in the sample sequence at least once. Then we rewrite objective (8) as

$$f(\theta) = \frac{1}{2}\|\hat{A}\theta - \hat{b}\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta\|^2. \tag{10}$$

Note that, in a shared space, each agent observes the states and actions of its neighbors but only its own local reward; in other words, $\hat{A}$ and $\hat{C}$ are available to every agent but $\hat{b}$ is not. We therefore define $\hat{b}_i = \frac{1}{p}\sum_{t=1}^{p} b_{t,i}$ with $b_{t,i} = R_i^\pi(s_t, a_t)\phi(s_t)$, and rewrite (10) as

$$\min_{\theta\in\mathbb{R}^d}\; \frac{1}{n}\sum_{i=1}^{n} \left(\frac{1}{2}\|\hat{A}\theta - \hat{b}_i\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta\|^2\right). \tag{11}$$
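The sample averages in (9) are straightforward to assemble from a trajectory. Below is a minimal Python sketch of how each agent could form $\hat{A}$, $\hat{C}$, and its private $\hat{b}_i$; the feature map, trajectory, and rewards in the usage example are made-up stand-ins, not the paper's data.

```python
import numpy as np

def empirical_matrices(phi, states, next_states, local_rewards, gamma):
    """Sample averages from (9): phi maps a state to a d-vector;
    local_rewards[i][t] is agent i's reward R_i(s_t, a_t)."""
    p = len(states)
    d = phi(states[0]).shape[0]
    A_hat = np.zeros((d, d))
    C_hat = np.zeros((d, d))
    n_agents = len(local_rewards)
    b_hat = [np.zeros(d) for _ in range(n_agents)]
    for t in range(p):
        f, f_next = phi(states[t]), phi(next_states[t])
        A_hat += np.outer(f, f - gamma * f_next) / p   # averaged A_t
        C_hat += np.outer(f, f) / p                    # averaged C_t
        for i in range(n_agents):
            b_hat[i] += local_rewards[i][t] * f / p    # averaged b_{t,i}
    return A_hat, C_hat, b_hat

# toy usage with random placeholder data
rng = np.random.default_rng(0)
phi = lambda s: np.cos(s)                  # cosine features, as in Section 4
traj = rng.standard_normal((101, 5))
rewards = [rng.random(100) for _ in range(4)]
A_hat, C_hat, b_hat = empirical_matrices(phi, traj[:-1], traj[1:], rewards, gamma=0.5)
```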
## 3. Fractional Order Dynamics for Policy Evaluation

As shown above, the aim of policy evaluation is to minimize the objective function. We rewrite (11) in consensus form:

$$\min_{\theta_i}\; \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{2}\|\hat{A}\theta_i - \hat{b}_i\|_{\hat{C}^{-1}}^2 + \frac{\rho}{2}\|\theta_i\|^2\right), \qquad \text{s.t. } \theta_i = \theta_j. \tag{12}$$

We define $\bar{\theta}\in\mathbb{R}^{nd}$ as the vector concatenating all $\theta_i$, that is, $\bar{\theta} = (\theta_1^T, \theta_2^T, \ldots, \theta_n^T)^T \in \mathbb{R}^{nd}$, and the aggregate function $f(\bar{\theta}) = \sum_{i=1}^n f(\theta_i)$. The consensus-constrained problem (12) can then be expressed as

$$\min_{\bar\theta}\; \frac{1}{2}\left\|\bar{\hat{A}}\bar{\theta} - \bar{\hat{b}}\right\|_{\bar{\hat{C}}^{-1}}^2 + \frac{\rho}{2}\|\bar{\theta}\|^2, \qquad \text{s.t. } \bar{L}\bar{\theta} = 0, \tag{13}$$

where $\bar{\hat{b}} = (\hat{b}_1^T, \hat{b}_2^T, \ldots, \hat{b}_n^T)^T \in \mathbb{R}^{nd}$, $L\in\mathbb{R}^{n\times n}$, $\bar{L} = L\otimes I_d \in \mathbb{R}^{nd\times nd}$, $\bar{\hat{A}} = \hat{A}\otimes I_n \in \mathbb{R}^{nd\times nd}$, and $\bar{\hat{C}} = \hat{C}\otimes I_n \in \mathbb{R}^{nd\times nd}$. Based on (13), we formulate the augmented Lagrangian

$$\mathcal{L}(\bar{\theta},\lambda) = f(\bar{\theta}) + \langle\lambda, \bar{L}\bar{\theta}\rangle + \frac{1}{2}\bar{\theta}^T\bar{L}\bar{\theta}, \tag{14}$$

where $\lambda\in\mathbb{R}^{nd}$ is the Lagrange multiplier.

From the primal-dual viewpoint, it is natural to design a fractional order continuous-time optimization algorithm that performs gradient descent on the primal variable $\bar{\theta}$ and gradient ascent on the dual variable $\lambda$ of (14). Both variables are updated according to the fractional order law

$$D^{\alpha_1}\bar{\theta}(t) = -\nabla_{\bar{\theta}}\mathcal{L}\big(\bar{\theta}(t),\lambda(t)\big), \qquad D^{\alpha_2}\lambda(t) = \nabla_{\lambda}\mathcal{L}\big(\bar{\theta}(t),\lambda(t)\big), \tag{15}$$

where $0<\alpha_1<2$, $0<\alpha_2<1$, and $\nabla_{\bar{\theta}}\mathcal{L}$ and $\nabla_{\lambda}\mathcal{L}$ are the gradients of $\mathcal{L}$ with respect to $\bar{\theta}(t)$ and $\lambda(t)$, respectively.
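For intuition, one simple (short-memory) way to discretize dynamics (15) is a fractional forward-Euler step, $u_{k+1} = u_k + \big(h^{\alpha}/\Gamma(\alpha+1)\big)\cdot(\text{right-hand side})$. The Python sketch below integrates (15) for problem (13) under this assumption; it is an illustration of the dynamics, not the paper's exact numerical scheme, and all names are our own.

```python
import numpy as np
from math import gamma

def frac_primal_dual(A, C, b_list, Lap, rho, alpha1, alpha2, h=1e-2, steps=5000):
    """Fractional-Euler simulation of the primal-dual dynamics (15).
    A, C: d x d matrices; b_list: per-agent d-vectors; Lap: n x n graph Laplacian."""
    n, d = len(b_list), A.shape[0]
    Abar = np.kron(np.eye(n), A)                 # block-diagonal lift of A-hat
    Cinv = np.kron(np.eye(n), np.linalg.inv(C))  # block-diagonal lift of C-hat inverse
    bbar = np.concatenate(b_list)
    Lbar = np.kron(Lap, np.eye(d))
    theta = np.zeros(n * d)
    lam = np.zeros(n * d)
    c1 = h ** alpha1 / gamma(alpha1 + 1)         # fractional Euler step sizes
    c2 = h ** alpha2 / gamma(alpha2 + 1)
    for _ in range(steps):
        grad_theta = (Abar.T @ Cinv @ (Abar @ theta - bbar) + rho * theta
                      + Lbar @ lam + Lbar @ theta)
        grad_lam = Lbar @ theta
        theta = theta - c1 * grad_theta          # gradient descent on the primal
        lam = lam + c2 * grad_lam                # gradient ascent on the dual
    return theta.reshape(n, d)
```

For $\alpha_1 = \alpha_2 = 1$ the step sizes reduce to $h$ and the iteration becomes the usual integer order discretization of the primal-dual gradient flow.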
We express the details of (15) in Algorithm 1.

Algorithm 1.
Initialization: $\theta_i = 0 \in \mathbb{R}^d$, $\lambda_i = 0 \in \mathbb{R}^d$.
Update: for $t \le 50$,

$$D^{\alpha_1}\theta_i(t) = -\left[\left(\hat{A}^T\hat{C}^{-1}\hat{A} + \rho I\right)\theta_i(t) - \hat{A}^T\hat{C}^{-1}\hat{b}_i + \bar{L}_i\lambda(t) + \bar{L}_i\bar{\theta}(t)\right], \qquad D^{\alpha_2}\lambda_i(t) = \bar{L}_i\bar{\theta}(t),$$

where $\bar{L}_i$ denotes the $i$th block row of $\bar{L}$.
End. Return $\theta$.

The aim of the distributed algorithm is to obtain the solution of the value function problem. The proposed algorithm has more design freedom and more potential to achieve good convergence performance than the conventional integer order one. We now provide the following convergence conclusions.

Theorem 1. Under Assumption 1, let $\bar{\theta}(t)$ and $\lambda(t)$ be generated according to Algorithm 1. If $0<\alpha_1,\alpha_2<1$, then $\bar{\theta}(t)$ asymptotically converges to the optimal solution.

Proof. The detailed dynamics of $\bar{\theta}(t)$ and $\lambda(t)$ are

$$D^{\alpha_1}\bar{\theta}(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\bar{\theta}(t) + \bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{b}} - \bar{L}\lambda(t), \qquad D^{\alpha_2}\lambda(t) = \bar{L}\bar{\theta}(t), \tag{16}$$

where $I$ is an identity matrix. Consider the equilibrium $(\bar{\theta}^*, \lambda^*)$ of (16):

$$0 = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\bar{\theta}^* + \bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{b}} - \bar{L}\lambda^*, \qquad 0 = \bar{L}\bar{\theta}^*. \tag{17}$$

Combining (16) and (17), and using the facts $D^{\alpha_1}\bar{\theta}^* = 0$ and $D^{\alpha_2}\lambda^* = 0$, the error variables $\tilde{\bar{\theta}}(t) = \bar{\theta}(t) - \bar{\theta}^*$ and $\tilde{\lambda}(t) = \lambda(t) - \lambda^*$ satisfy

$$D^{\alpha_1}\tilde{\bar{\theta}}(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t), \qquad D^{\alpha_2}\tilde{\lambda}(t) = \bar{L}\tilde{\bar{\theta}}(t). \tag{18}$$

Through Lemma 1, we reconstruct (18) as

$$\frac{\partial z_1(\omega,t)}{\partial t} = -\omega z_1(\omega,t) - \left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t), \qquad \tilde{\bar{\theta}}(t) = \int_0^\infty \mu_{\alpha_1}(\omega)\, z_1(\omega,t)\, d\omega, \tag{19}$$

and

$$\frac{\partial z_2(\omega,t)}{\partial t} = -\omega z_2(\omega,t) + \bar{L}\tilde{\bar{\theta}}(t), \qquad \tilde{\lambda}(t) = \int_0^\infty \mu_{\alpha_2}(\omega)\, z_2(\omega,t)\, d\omega. \tag{20}$$

We construct the Lyapunov function

$$V_1 = \frac{1}{2}\int_0^\infty \sum_{i=1}^2 \mu_{\alpha_i}(\omega) \left\|z_i(\omega,t)\right\|^2 d\omega. \tag{21}$$

Then

$$
\begin{aligned}
\dot{V}_1 &= \int_0^\infty \sum_{i=1}^2 \mu_{\alpha_i}(\omega) \left\langle z_i(\omega,t), \frac{\partial z_i(\omega,t)}{\partial t}\right\rangle d\omega \\
&= -\int_0^\infty \sum_{i=1}^2 \omega\,\mu_{\alpha_i}(\omega) \left\|z_i(\omega,t)\right\|^2 d\omega + \left\langle \tilde{\bar{\theta}}(t),\, -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t)\right\rangle + \left\langle \tilde{\lambda}(t),\, \bar{L}\tilde{\bar{\theta}}(t)\right\rangle \\
&= -\int_0^\infty \sum_{i=1}^2 \omega\,\mu_{\alpha_i}(\omega) \left\|z_i(\omega,t)\right\|^2 d\omega - \tilde{\bar{\theta}}(t)^T\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) \le 0.
\end{aligned} \tag{22}
$$

The result follows from the LaSalle invariance principle.

We now improve the convergence conclusion of Theorem 1 by extending $\alpha_1$ from $(0,1)$ to $(1,2)$.

Theorem 2. Under Assumption 1, let $\bar{\theta}(t)$ and $\lambda(t)$ be generated according to Algorithm 1. If $1<\alpha_1<2$ and $\alpha_1+\alpha_2=2$, then $\bar{\theta}(t)$ asymptotically converges to the optimal solution.

Proof. Writing $\alpha_1 = 1 + \bar{\alpha}_1$, we rewrite the error dynamics of Theorem 1 as

$$\dot{\tilde{\bar{\theta}}}_a(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t), \qquad D^{\bar{\alpha}_1}\tilde{\bar{\theta}}(t) = \tilde{\bar{\theta}}_a(t), \qquad D^{\alpha_2}\tilde{\lambda}(t) = \bar{L}\tilde{\bar{\theta}}(t). \tag{23}$$

Due to $\alpha_1 = 1+\bar{\alpha}_1$ and $\alpha_1+\alpha_2 = 2$,

$$\dot{\tilde{\lambda}}(t) = D^{\bar{\alpha}_1} D^{\alpha_2} \tilde{\lambda}(t) = D^{\bar{\alpha}_1}\!\left(\bar{L}\tilde{\bar{\theta}}(t)\right) = \bar{L}\tilde{\bar{\theta}}_a(t). \tag{24}$$

Under (23) and (24), the frequency distributed model of Lemma 1 gives

$$\dot{\tilde{\bar{\theta}}}_a(t) = -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t), \quad \frac{\partial z_1(\omega,t)}{\partial t} = -\omega z_1(\omega,t) + \tilde{\bar{\theta}}_a(t), \quad \tilde{\bar{\theta}}(t) = \int_0^\infty \mu_{\bar{\alpha}_1}(\omega)\, z_1(\omega,t)\, d\omega, \quad \dot{\tilde{\lambda}}(t) = \bar{L}\tilde{\bar{\theta}}_a(t). \tag{25}$$

We construct the Lyapunov function

$$V_2 = \frac{1}{2}\left\|\tilde{\bar{\theta}}_a(t)\right\|^2 + \frac{1}{2}\left\|\tilde{\lambda}(t)\right\|^2. \tag{26}$$

Then

$$
\begin{aligned}
\dot{V}_2 &= \left\langle \tilde{\bar{\theta}}_a(t), \dot{\tilde{\bar{\theta}}}_a(t)\right\rangle + \left\langle \tilde{\lambda}(t), \dot{\tilde{\lambda}}(t)\right\rangle \\
&= \left\langle \tilde{\bar{\theta}}_a(t),\, -\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) - \bar{L}\tilde{\lambda}(t)\right\rangle + \left\langle \tilde{\lambda}(t),\, \bar{L}\tilde{\bar{\theta}}_a(t)\right\rangle \\
&= -\tilde{\bar{\theta}}_a(t)^T\left(\bar{\hat{A}}^T\bar{\hat{C}}^{-1}\bar{\hat{A}} + \rho I + \bar{L}\right)\tilde{\bar{\theta}}(t) \le 0.
\end{aligned} \tag{27}
$$

Through the LaSalle invariance principle, we obtain the result.
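Equilibrium condition (17) also gives a direct way to compute the point that Theorems 1 and 2 converge to. Under one natural reading, if $\bar{\theta}^*$ has identical blocks $\theta^*$ (so that $\bar{L}\bar{\theta}^* = 0$) and the block rows of (17) are summed, the $\bar{L}\lambda^*$ terms cancel and $\theta^*$ solves $(\hat{A}^T\hat{C}^{-1}\hat{A} + \rho I)\theta^* = \hat{A}^T\hat{C}^{-1}\,\mathrm{mean}_i(\hat{b}_i)$. A small Python check under this reading (a sketch, not code from the paper):

```python
import numpy as np

def consensus_equilibrium(A, C, b_list, rho):
    """Solve (A^T C^-1 A + rho I) theta = A^T C^-1 mean(b_i), the consensus
    equilibrium implied by (17) when theta* is identical across agents."""
    d = A.shape[0]
    b_mean = np.mean(b_list, axis=0)
    M = A.T @ np.linalg.solve(C, A) + rho * np.eye(d)
    rhs = A.T @ np.linalg.solve(C, b_mean)
    return np.linalg.solve(M, rhs)
```

Running this with the matrices given in Section 4 below can be compared against the $\theta^*$ reported in (29).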
## 4. Experimental Simulation

In this section, we provide an example to illustrate the effectiveness of the proposed algorithm. There are 20 states in the multiagent reinforcement learning problem. We set $d = 5$, regularization parameter $\rho = 0.1$, and discount parameter $\gamma = 0.5$. There are 4 agents in the connected network of Figure 1. State $s$ is a randomly generated 5-dimensional column vector, the feature map $\phi(s)$ is a cosine function, and $P$ is a randomly generated 5-dimensional matrix.

Figure 1: Undirected communication network graph.

We then randomly generate the matrices $\hat{A}$, $\hat{C}$, and $\hat{b}_i$ as follows:

$$\hat{A} = \begin{pmatrix}
0.9797 & 0.5949 & 0.1174 & 0.0855 & 0.7303 \\
0.4389 & 0.2622 & 0.2967 & 0.2625 & 0.4886 \\
0.1111 & 0.6028 & 0.3188 & 0.8010 & 0.5785 \\
0.2581 & 0.7112 & 0.4242 & 0.0292 & 0.2373 \\
0.4087 & 0.2217 & 0.5079 & 0.9289 & 0.4588
\end{pmatrix}, \qquad \hat{C} = 0.25\, I_5, \tag{28}$$

$$
\begin{aligned}
\hat{b}_1 &= (0.9631, 0.5468, 0.5211, 0.2316, 0.4889)^T, \\
\hat{b}_2 &= (0.6241, 0.6791, 0.3955, 0.3674, 0.9880)^T, \\
\hat{b}_3 &= (0.0377, 0.8852, 0.9133, 0.7962, 0.0987)^T, \\
\hat{b}_4 &= (0.2619, 0.3354, 0.6797, 0.1366, 0.7212)^T.
\end{aligned}
$$

Before the simulation, we obtain the solution of the multiagent reinforcement learning problem:

$$\theta^* = (-0.0756, 0.0211, 0.5362, 0.0508, 0.6956)^T. \tag{29}$$

We compare the fractional order algorithm with the conventional integer order one. In Figures 2 and 3, the curves show almost the same convergence performance as the conventional integer order algorithm when $\alpha$ is 0.995. In Figures 4 and 5, the fractional order algorithm achieves a faster convergence rate than the integer order algorithm. The simulation results illustrate the convergence of both the integer order and the fractional order algorithms. Furthermore, the proposed distributed algorithm with fractional order dynamics has more design freedom to achieve better performance than the conventional first-order algorithm.

Figure 2: The trajectory of the estimate of $\theta_1$.
Figure 3: The trajectory of the estimate of $\theta_2$.
Figure 4: The trajectory of the estimate of $\theta_1$.
Figure 5: The trajectory of the estimate of $\theta_2$.

## 5. Conclusion

In this paper, the value function problem of multiagent reinforcement learning was transformed into a distributed optimization problem with a consensus constraint. We then proposed a distributed algorithm with fractional order dynamics to solve this problem. In addition, we proved the asymptotic convergence of the algorithm using Lyapunov functions and illustrated its effectiveness with an example. In the future, we will consider applying reinforcement learning to recommendation systems to obtain better results [23].

---
*Source: 1020466-2021-09-06.xml*
# Clinical Assessment of Nifedipine-Induced Gingival Overgrowth in a Group of Brazilian Patients

**Authors:** Cliciane Portela Sousa; Claudia Maria Navarro; Maria Regina Sposto
**Journal:** ISRN Dentistry (2011)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2011/102047

---
## Abstract

Although it has been established that nifedipine is associated with gingival overgrowth (GO), there is little information on the prevalence and severity of this condition in the Brazilian population. The aim of this study was to assess the occurrence of nifedipine-induced GO in Brazilian patients and the associated risk factors using a Clinical Index for Drug-Induced Gingival Overgrowth (Clinical Index DIGO). The study was carried out on 35 patients under treatment with nifedipine (test group) and 35 patients without treatment (control group). Demographic (age, gender), pharmacological (dose, time of use), and periodontal (plaque index, gingival index, probing depth, clinical insertion level, and bleeding on probing) variables, together with GO, were assessed. Statistical analysis showed no association between GO and the demographic or pharmacological variables. However, there was an association between GO and the periodontal variables, except for plaque index. According to our study, the Clinical Index DIGO can be used as a parameter to evaluate GO. We therefore conclude that the presence of gingival inflammation was the main risk factor for the occurrence of nifedipine-induced GO.

---
## Body

## 1. Introduction

Drug-induced gingival overgrowth (DIGO) is a histomorphological alteration due to the side effects of a medication on the extracellular matrix [1]. Several drugs induce gingival overgrowth, but phenytoin, cyclosporine, and nifedipine produce significant alterations in terms of prevalence and severity of gingival overgrowth [2].

Nifedipine is a specific calcium antagonist that inhibits calcium influx directly in cardiac muscle cells and has a vasodilatory action that reduces arterial pressure [3]. Among calcium antagonists, it is the drug most commonly related to DIGO [4], whose prevalence ranges from 20% to 83% [5, 6].

According to Seymour et al. [7], the etiology of DIGO is hypothesized to be multifactorial. Demographic variables (patient's age and gender), pharmacological variables (dose, time of use, and serum and salivary concentrations of the drug), and periodontal variables (plaque index and gingival inflammation), in addition to genetic factors and drug associations, have been identified as risk factors for this condition [7, 8].

The correlation of demographic and pharmacological variables with the extent and severity of GO has been studied, aiming at the identification of risk situations for patients under nifedipine treatment [8–10]. Concerning the periodontal variables, studies suggest a consensus that bacterial plaque and gingival inflammation are risk factors strongly associated with nifedipine-induced GO [5, 7, 11].

Most of the indexes reported in the literature for the assessment of DIGO are complicated to use; many of them require the preparation of plaster casts and numerous measurements and procedures that impair their routine use [5, 7, 11].

Considering the lack of studies on the prevalence and severity of DIGO related to the use of nifedipine in the Brazilian population, this study was proposed.
Thus, the objective was to assess nifedipine-induced DIGO in a group of Brazilian patients and to evaluate possible associations with demographic, pharmacological, and periodontal variables using the Clinical Index for Drug-Induced Gingival Overgrowth (DIGO) proposed by Inglés et al. [11].

## 2. Material and Methods

### 2.1. Patient Selection

The protocol for patient care was approved by the Research Ethics Committee of the Dental School of Araraquara, UNESP, São Paulo, Brazil. Two groups of patients, test and control, were used. The test group was selected at the Health Stations of the Municipality of Araraquara, São Paulo, Brazil, among patients with cardiovascular disease who were under periodic medical control and who had been taking nifedipine for at least 6 months under medical monitoring. The control group was selected among patients who sought the Dental School of Araraquara for dental treatment.

The following inclusion criteria were used for the test and control groups: no periodontal treatment for the preceding 6 months, no use of orthodontic braces or dentures, absence of defective restorations, and the presence of at least 6 to 12 teeth in the anterior region. Exclusion criteria were diabetes mellitus, blood dyscrasia, hormonal changes, pregnancy, oral breathing, smoking, use of systemic antibiotics or anti-inflammatory drugs (steroidal and nonsteroidal), and use of contraceptive medication or any other drug inducing gingival overgrowth.

### 2.2. Clinical Examination

During the clinical examination, a single examiner (MRS) performed the anamnesis, and a chart was filled out with the group identification (test or control), patient age, gender, and pharmacological data (dose and time of use of nifedipine). The gingival areas involved in the study were photo-documented.

### 2.3. Periodontal Examination

Gingival overgrowth (GO) was assessed in the upper teeth by the method of Inglés et al. [11]. This method consists of clinical evaluation of the vestibular and lingual papillae, which were scored from 0 to 4 according to the Clinical Index DIGO [11]. A single examiner (CPS), blinded to the identification of the patients and trained and calibrated by the Kappa agreement test, performed the periodontal examination, which consisted of the measurement of the following variables: plaque index (PI) [12], gingival index (GI) [13], probing depth (PD), bleeding on probing (PB), and clinical insertion level (CIL).

### 2.4. Criteria for the Analysis of Periodontal Variables and Gingival Overgrowth

The data collected during the periodontal examination were divided into groups for statistical analysis. The variables PI and GI were grouped according to the absence or presence of visible plaque and according to marginal probing bleeding, respectively, with scores of 0 (presence) and 1 (absence). The PB variable was grouped according to the absence or presence of bleeding on probing, with scores of 0 (presence) and 1 (absence). The PD and CIL variables were grouped according to intervals of sites with PD and CIL < 3 mm, from 3 to 4 mm, and ≥5 mm. Finally, GO was grouped according to absence (score < 2) or presence (score ≥ 2).

### 2.5. Statistical Analysis

The values obtained for the test and control groups concerning the periodontal variables (PI, GI, PD, PB, and CIL) and GO were submitted to the Z test at the 5% level of significance (P < .05) for the comparison of proportions of relative frequencies [14].
The Spearman correlation test, using the BioEstat 2.0 software [15], was used to calculate the correlations between GO and the demographic, pharmacological, and periodontal variables.
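For reference, the two-proportion Z statistic used above can be reproduced in a few lines. The following Python sketch is ours, not from the paper, with counts inferred from the percentages reported in Section 3 (68% and 23% of 35 patients); the value need not match Table 2 exactly, since the paper's exact counts and variance convention are not given.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for comparing two proportions with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# GO prevalence reported in Section 3: 68% of 35 (test) vs 23% of 35 (control)
z = two_proportion_z(round(0.68 * 35), 35, round(0.23 * 35), 35)
print(f"Z = {z:.2f}; significant at the 5% level if |Z| > 1.96")
```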
## 3. Results

The patients' data are presented in Table 1, which shows a higher prevalence of males in the test group, with a mean age of 69.5 years and a mean daily dose of 40 mg for a mean period of 13 years (156 months) of nifedipine use.

Table 1: Distribution of the demographic and pharmacological variables in the test and control groups.

| | Test group | Control group |
|---|---|---|
| Number of patients | 35 | 35 |
| Age (range) | 69.5 ± 2.12 (40–80) | 40.5 ± 14.85 (30–73) |
| Gender distribution (m : f) | 22 : 13 | 6 : 29 |
| Nifedipine dose (mg/day) | 40 ± 11.21 | — |
| Time of nifedipine use (months) | 156 ± 118.79 | — |

With the use of the Clinical Index DIGO [11], GO was observed in 68% of the patients in the test group and in 23% of the patients in the control group. Application of the Z test demonstrated a significant difference in GO between groups at the 5% level of significance (P < .05). The Z values for the comparison of the proportions of GO and of the periodontal variables between groups are listed in Table 2. The Z test showed no significant difference for PI, GI, PD, and PB between the test and control groups. However, for CIL the Z test showed a statistically significant difference for the intervals with CIL < 3 mm and CIL from 3 to 4 mm.

Table 2: Z test values for the comparison of the proportions for periodontal variables between groups (significant differences: *P < .05, Z = ±1.96).

| Variables | Scores and intervals | Z values (Test × Control) |
|---|---|---|
| GO | <2 | −2.32* |
| GO | ≥2 | 2.32* |
| PI | 0-1 | −1.22 |
| PI | 2-3 | 1.24 |
| GI | 0-1 | 0.67 |
| GI | 2-3 | −0.79 |
| PD | <3 mm | −1.21 |
| PD | from 3 to 4 mm | 0.57 |
| PD | ≥5 mm | 1.19 |
| CIL | <3 mm | −2.68* |
| CIL | from 3 to 4 mm | 2.05* |
| CIL | ≥5 mm | 1.31 |
| PB | 0 | −1.61 |
| PB | 1 | 1.62 |

In Figures 1 and 2 we can observe a higher frequency of scores 2 and 3 for PI in the test group and for GI in the control group. Figures 3 and 4 illustrate the mean percentages of PD and CIL, demonstrating higher PD and CIL frequencies in the interval of 3 to 4 mm for both groups and in the PD ≥ 5 mm and CIL ≥ 5 mm intervals for the test group.

Figure 1: Percentages of sites with dichotomized Plaque Index scores (0-1) and (2-3) for the test (T) and control (C) groups.
Figure 2: Percentages of sites with dichotomized Gingival Index scores (0-1) and (2-3) for the test (T) and control (C) groups.
Figure 3: Percentages of sites with Probing Depth values < 3 mm, from 3 to 4 mm, and ≥5 mm for the test (T) and control (C) groups.
Figure 4: Percentages of sites with Clinical Insertion Level values < 3 mm, from 3 to 4 mm, and ≥5 mm for the test (T) and control (C) groups.

The demographic variables (age and gender) and the pharmacological variables (dose and time of use of nifedipine) showed no correlation with GO (Table 3). A correlation between the degree of severity of GO and CIL < 3 mm and CIL ≥ 5 mm was detected only for the test group (Table 4). The periodontal variable PI was correlated with the degree of severity of GO only in the control group. There was a positive correlation between GI, PD, and PB and the degree of GO severity for both groups.

Table 3: Spearman correlation among gingival overgrowth (GO ≥ 2) and the demographic and pharmacological variables for the test and control groups (significant difference: *P < .05, t (tabulated) = ±2.035).

| Variables | Test (t value) | Control (t value) |
|---|---|---|
| Age | −1.0383 | 0.6456 |
| Gender | 1.4074 | 1.4520 |
| Dose | 0.3986 | — |
| Time of use | 0.5574 | — |

Table 4: Spearman correlation between severity of gingival overgrowth (GO ≥ 2) and periodontal variables for both groups (significant difference: *P < .05, t (tabulated) = ±2.035).
| Periodontal variables | Interval | Test (t value) | Control (t value) |
|---|---|---|---|
| PI | (2-3) | 1.5366 | 2.6269* |
| GI | (2-3) | 2.9879* | 2.5203* |
| PD | (<3 mm) | −5.4483* | −2.9233* |
| PD | (from 3 to 4 mm) | 4.4462* | 2.6914* |
| PD | (≥5 mm) | 5.4593* | 2.1133* |
| CIL | (<3 mm) | −2.2788* | −1.9358 |
| CIL | (from 3 to 4 mm) | 0.6250 | 1.9852 |
| CIL | (≥5 mm) | 4.6975* | 1.9970 |
| PB | | 3.5932* | 2.5119* |

## 4. Discussion

Since the first report of drug-related GO by Kimbal [16] in 1939, several studies have been conducted in an attempt to understand the factors that act on this process. Today, more than 20 drugs are known to induce GO [2].

Few studies are currently available in the literature about the influence of nifedipine on gingival manifestations. In the present study, the mean age of patients in the test group was 69.5 years, and the age variable showed no correlation with GO, in agreement with other studies [6, 9, 10, 17]. However, according to Thomason et al. [18] and James et al. [19], younger patients show a higher prevalence of GO when treatment combines nifedipine and cyclosporine. This may be an effect of the drug association rather than of patient age alone.

In our group of patients, despite the larger number of male patients in the test group than in the control group (22 : 6), GO was more prevalent among females in both groups (63.63% for the test group and 76.92% for the control group). However, the Spearman test did not reveal a correlation between patient gender and the occurrence of GO, in agreement with data reported by King et al. [9], Margiotta et al. [20], and Güncü et al. [21]. According to Seymour et al. [8], there is evidence that male patients under treatment with nifedipine and cyclosporine are more prone to a greater prevalence and severity of GO than female patients [9, 20], since the medication may alter androgen metabolism [22], affecting the gingival fibroblasts and consequently increasing the propensity to GO [7, 8]. However, the role of patient gender as a hormonal cofactor in GO has not been completely clarified, either by this study or by the related literature [22].

In the present study, the dose and the time of nifedipine use were also not correlated with GO [6, 9, 10], in contrast to the data reported by other authors [17, 18, 23]. The severity of GO has probably not been adequately correlated with pharmacological variables because the events that determine GO depend much more on local factors than on the circulating plasma level of the drug. There is evidence that drug metabolites concentrated in the gingival tissue interact with chemical mediators of inflammation, stimulating fibroblast activity and leading to an imbalance in local homeostasis, which eventually results in clinically observable GO [24, 25]. Moreover, Seymour et al. [8] reported that the most appropriate method for assessing the effect of pharmacological variables is blood analysis, which provides a more precise understanding of the interaction between drug dose and GO.

Concerning the use of the Clinical Index DIGO [11] as a parameter for GO, we noted that the index was easily and rapidly applicable. Considering that most of the indexes for GO assessment are difficult to reproduce, since they require the preparation of casts, photographs, slide projection, and several measurements, this index [11] proved to be advantageous.
The present study seems to be the first one to use the Clinical Index DIGO as a method for GO assessment.

The presence and intensity of plaque (PI) is an important risk factor for the development of GO in patients taking drugs associated with gingival overgrowth [8, 24]. The PI was not correlated with GO in the test group, whereas in the control group a correlation was observed. In the control group, GO can be explained by the presence of plaque, which may influence the development of inflammatory gingival overgrowth. These results are in agreement with the literature [5, 19, 23]; the lack of correlation in the test group could be explained by the fact that the PI of patients with GO may have been artificially lower due to a possible improvement in oral hygiene before the periodontal examination.

Gingival inflammation, assessed by GI and PB, was not significantly different between the test and control groups. However, the correlation test showed an association of both GI and PB with the degree of GO severity, and this association was stronger for the test group. This result may explain the influence of these periodontal variables on GO. Gingival inflammation is considered an important risk factor in the expression of GO associated with nifedipine use [8]. The GI findings of the present study are similar to those reported by Barclay et al. [5], King et al. [9], Güncü et al. [21], and Miranda et al. [23]. The PB results are similar to those reported by Tavassoli et al. [17], whereas Margiotta et al. [20] did not detect a correlation between PB and GO.

The PD did not differ significantly between the test and control groups, but GO was associated with an increase in PD in both groups [5, 9, 18, 25]. King et al. [9] assessed CIL in patients under treatment with nifedipine and concluded that it was not correlated with GO. In this study, however, CIL was correlated with GO in the test group for the intervals of mild to severe loss of insertion, which indicates that GO was due to the action of the medication in association with periodontal inflammation. The CIL was not correlated with GO in the control group. The Z test showed significant differences between the two groups when the loss of insertion was moderate (CIL from 3 to 4 mm). For the purposes of the periodontal analysis, GO was considered present only when the score was ≥2. Given the lack of studies evaluating CIL in patients under treatment with drugs associated with GO, we note that the assessment of this variable is more suitable for clinical follow-up or long-term studies, in which the progress of periodontal disease can be evaluated.

## 5. Conclusion

GO differed significantly between the test and control groups. Its prevalence in the control group may have been due to an inflammatory reaction explained by the influence of the periodontal variables (PI, GI, PD, and PB). The prevalence of GO in the test group can be explained by the inductive effect of nifedipine on gingival tissue. The study group consisted of older people with systemic diseases, mainly cardiovascular diseases, of low socioeconomic status and probably unmotivated with respect to their oral health, which may cause ordinary periodontal inflammation that can exacerbate gingival overgrowth.
The prevalence of GO detected in this study is also related to the use of the Clinical Index DIGO [11] as a method for clinical evaluation, which may be interpreted as a differential factor compared to previous studies. --- *Source: 102047-2011-06-29.xml*
--- ## Abstract Although it has been established that nifedipine is associated with gingival overgrowth (GO), there is little information on the prevalence and severity of this condition in the Brazilian population. The aim of this study was to assess the occurrence of nifedipine-induced GO in Brazilian patients and the risk factors associated using a Clinical Index for Drug Induced Gingival Overgrowth (Clinical Index DIGO). The study was carried out on 35 patients under treatment with nifedipine (test group) and 35 patients without treatment (control group). Variables such as demographic (age, gender), pharmacological (dose, time of use), periodontal (plaque index, gingival index, probing depth, clinical insertion level, and bleeding on probing), and GO were assessed. Statistical analysis showed no association between GO and demographic or pharmacological variables. However, there was an association between GO and periodontal variables, except for plaque index. According to our study, the Clinical Index DIGO can be used as a parameter to evaluate GO. Therefore, we conclude that the presence of gingival inflammation was the main risk factor for the occurrence of nifedipine-induced GO. --- ## Body ## 1. Introduction Drug-induced gingival overgrowth (DIGO) is a histomorphological alteration due to the side effects of a medication on the extracellular matrix [1]. Several drugs induce gingival overgrowth, but phenytoin, cyclosporine, and nifedipine produce significant alterations in terms of prevalence and severity of gingival overgrowth [2].Nifedipine is a specific calcium antagonist which inhibits calcium influx directly from the cells of cardiac muscle and has a vasodilatory action that causes reduced arterial pressure [3]. Among calcium antagonists, it is the drug most commonly related to DIGO [4], whose prevalence ranges from 20% to 83% [5, 6].According to Seymour et al. [7], the hypothesis for the etiology of DIGO is multifactorial. Analysis of variables such as demographic (patient’s age and gender), pharmacological (dose, time of use, serum, and salivary concentration of the drug) and periodontal (plaque index and gingival inflammation), in addition to genetic factors and association of medications, have been identified as risk factors for this condition [7, 8].The correlation between demographic and pharmacological variables to the extent and severity of GO has been studied, aiming at the identification of risk situations for the patients under nifedipine treatment [8–10]. Concerning the periodontal variables, some studies have suggested a consensus about the fact that bacterial plaque and gingival inflammation are risk factors strongly associated with nifedipine-induced GO [5, 7, 11].Most of the indexes used and reported in the literature for the assessment of DIGO are complicated to use, many of them require the preparation of plaster casts and many measurements and procedures that impair their routine use [5, 7, 11].Considering the lack of studies on the prevalence and severity of DIGO related to the use of nifedipine in the Brazilian population, this study was proposed. Thus, the objective was to assess nifedipine DIGO in a Brazilian group of patients and evaluate the possible association with demographic, pharmacological, and periodontal variables using the Clinical Index for Drug-Induced Gingival Overgrowth (DIGO) proposed by Inglés et al. [11]. ## 2. Material and Methods ### 2.1. 
Patient Selection The protocol for patient care was approved by the Research Ethics Committee of the Dental School of Araraquara, UNESP, São Paulo, Brazil. Two groups of patients, test and control, were used. The test group was selected at the Health Stations of the Municipality of Araraquara, São Paulo, Brazil, among patients with cardiovascular disease who were under periodical medical control and who had been taking nifedipine for at least 6 months under medical monitorization. The control group was selected among patients who sought the Dental School of Araraquara for dental treatment.The following inclusion criteria were used for the test and control groups: no periodontal treatment for the preceding 6 months, no use of orthodontic braces or dentures, absence of defective restorations, and the presence of at least 6 to 12 teeth from the anterior region. Exclusion criteria werediabetes mellitus, blood dyscrasia, hormonal changes, pregnancy, oral breathing, smoking, patients taking systemic antibiotics, or anti-inflammatory drugs (steroidal and non-steroidal), and patients taking contraceptive medication or any other drug inducing gingival overgrowth. ### 2.2. Clinical Examination During clinical examination, a single examiner (MRS) performed the anamnesis and a chart was filled out with the identification for Group Test or Control, patient age, gender, and pharmacological data (dose and time of use of nifedipine). The gingival areas involved in the study were photo-documented. ### 2.3. Periodontal Examination Gingival Overgrowth (GO) was assessed in the upper teeth by the method of Inglés et al. [11]. This method consists of clinical evaluation of the vestibular and lingual papillae, which were scored from 0 to 4 according to the Clinical Index DIGO [11]. A single examiner (CPS) blind for identification of patients, trained and calibrated for the Kappa agreement test, performed the periodontal examination, which consisted of the measurement of the following variables: plaque index (PI) [12], gingival index (GI) [13], probing depth (PD), bleeding on probing (PB), and clinical insertion level (CIL). ### 2.4. Criteria for the Analysis of Periodontal Variables and Gingival Overgrowth The data collected during the periodontal examination were divided into groups for statistical analysis. The variables PI and GI were divided into groups according to the absence or presence of visible plaque and according to marginal probing bleeding, respectively, with scores of 0 (presence) and 1 (absence). PB variable was divided into groups according to the absence or presence of bleeding on probing, with scores varying from 0 (presence) to 1 (absence). The PD and CIL variables were divided into groups according to intervals of sites with PD and CIL < 3 mm, from 3 to 4 mm, and ≥5 mm. Finally, GO was divided into groups according to absence (score < 2) and presence (score ≥ 2). ### 2.5. Statistical Analysis The values obtained for the test and control groups concerning the periodontal variables (PI, GI, PD, PB, and CIL) and GO were submitted to theZ test at the 5% level of significance (P<.05) for the comparison of proportions of relative frequencies [14]. The Spearman correlation test using the BioEstat 2.0 software [15] was used to calculate the correlations between GO and the demographic, pharmacological, and periodontal variables. ## 2.1. Patient Selection The protocol for patient care was approved by the Research Ethics Committee of the Dental School of Araraquara, UNESP, São Paulo, Brazil. 
Two groups of patients, test and control, were used. The test group was selected at the Health Stations of the Municipality of Araraquara, São Paulo, Brazil, among patients with cardiovascular disease who were under periodical medical control and who had been taking nifedipine for at least 6 months under medical monitorization. The control group was selected among patients who sought the Dental School of Araraquara for dental treatment.The following inclusion criteria were used for the test and control groups: no periodontal treatment for the preceding 6 months, no use of orthodontic braces or dentures, absence of defective restorations, and the presence of at least 6 to 12 teeth from the anterior region. Exclusion criteria werediabetes mellitus, blood dyscrasia, hormonal changes, pregnancy, oral breathing, smoking, patients taking systemic antibiotics, or anti-inflammatory drugs (steroidal and non-steroidal), and patients taking contraceptive medication or any other drug inducing gingival overgrowth. ## 2.2. Clinical Examination During clinical examination, a single examiner (MRS) performed the anamnesis and a chart was filled out with the identification for Group Test or Control, patient age, gender, and pharmacological data (dose and time of use of nifedipine). The gingival areas involved in the study were photo-documented. ## 2.3. Periodontal Examination Gingival Overgrowth (GO) was assessed in the upper teeth by the method of Inglés et al. [11]. This method consists of clinical evaluation of the vestibular and lingual papillae, which were scored from 0 to 4 according to the Clinical Index DIGO [11]. A single examiner (CPS) blind for identification of patients, trained and calibrated for the Kappa agreement test, performed the periodontal examination, which consisted of the measurement of the following variables: plaque index (PI) [12], gingival index (GI) [13], probing depth (PD), bleeding on probing (PB), and clinical insertion level (CIL). ## 2.4. Criteria for the Analysis of Periodontal Variables and Gingival Overgrowth The data collected during the periodontal examination were divided into groups for statistical analysis. The variables PI and GI were divided into groups according to the absence or presence of visible plaque and according to marginal probing bleeding, respectively, with scores of 0 (presence) and 1 (absence). PB variable was divided into groups according to the absence or presence of bleeding on probing, with scores varying from 0 (presence) to 1 (absence). The PD and CIL variables were divided into groups according to intervals of sites with PD and CIL < 3 mm, from 3 to 4 mm, and ≥5 mm. Finally, GO was divided into groups according to absence (score < 2) and presence (score ≥ 2). ## 2.5. Statistical Analysis The values obtained for the test and control groups concerning the periodontal variables (PI, GI, PD, PB, and CIL) and GO were submitted to theZ test at the 5% level of significance (P<.05) for the comparison of proportions of relative frequencies [14]. The Spearman correlation test using the BioEstat 2.0 software [15] was used to calculate the correlations between GO and the demographic, pharmacological, and periodontal variables. ## 3. 
Results The patient’s data are presented in Table1, which shows a higher prevalence of males in the test group, with a mean age of 69.5 years and a mean daily dose of 40 mg for a mean period of 13 years (156 months) of nifedipine use.Table 1 Distribution of the demographic and pharmacological variables in the test and control groups. Test groupControl groupNumber of patients3535Age (range)69.5 ± 2.12 (40–80)40.5 ± 14.85 (30–73)Gender distribution (m : f)22 : 136 : 29Nifedipine dose (mg/day)40 ± 11.21—Time of nifedipine use (months)156 ± 118.79—With the use of the Clinical Index DIGO [11], GO was observed in 68% of the patients in the test group and in 23% of the patients in the control group. Application of the Z test demonstrated a significant difference in GO between groups at the 5% level of significance (P<.05). The values of the Z test for the comparison of the proportions of GO and of the periodontal variables between groups are listed in Table 2. The Z test showed a no significant difference for PI, GI, PD, and PB between the test and control groups. However, for CIL the Z test showed a statistically significant difference for the intervals with CIL < 3 mm and CIL from 3 to 4 mm.Table 2 Z test values for the comparison of the proportions for periodontal variables between groups. VariablesScores and intervalsZ valuesTest × ControlGO<2−2.32*≥22.32*PI0-1−1.222-31.24GI0-10.672-3−0.79PD<3 mm−1.21from 3 to 4 mm0.57≥5 mm1.19CIL<3 mm−2.68*from 3 to 4 mm2.05*≥5 mm1.31PB0−1.6111.62Significant differences:P*<.05, Z= ±1.96.In Figures1 and 2 we can observe a higher frequency for the scores 2 and 3 for PI in the test group and for GI in the control group. Figures 3 and 4 illustrate the mean percentages of PD and CIL, demonstrating higher PD and CIL frequencies in the interval of 3 to 4 mm for both groups and in the PD ≥ 5 mm and CIL ≥ 5 mm interval for the test group.Figure 1 Percentages of sites with dichotomized Plaque Index Scores (0-1) and (2-3) for the test (T) and control (C) groups.Figure 2 Percentages of sites with dichotomized Gingival Index Scores (0-1) and (2-3) for the test (T) and control (C) groups.Figure 3 Percentages of sites with Probing Depth Values < 3 mm, from 3 to 4 mm, and ≥5 mm for the test (T) and control (C) groups.Figure 4 Percentages of sites with Clinical Insertion Level Values < 3 mm, from 3 to 4 mm, and ≥5 mm for the test (T) and control (C) groups.The demographic variables (age and gender) and the pharmacological variables (dose and time of use of nifedipine) did not show correlation with GO (Table3). A correlation between degree of severity of GO and CIL < 3 mm and CIL ≥ 5 mm was detected only for the test group (Table 4). The periodontal variable PI was correlated with the degree of severity of GO only in the control group. There was a positive correlation between GI, PD, and PB and the degree of GO severity for both groups.Table 3 Spearman correlation among gingival growth (GO ≥ 2), demographic and pharmacological variables for test and control groups. VariablesTestControlt valuet valueAge−1.03830.6456Gender1.40741.4520Dose0.3986—Time of use0.5574—Spearman Correlation.Significant difference:P*<.05,t (tabulated)= ±2.035.Table 4 Spearman correlation between severity of gingival growth (GO ≥ 2) and periodontal variables for groups. 
Periodontal variablesTestt valueControlt valuePI(2-3)1.53662.6269*GI(2-3)2.9879*2.5203*PD(<3 mm)−5.4483*−2.9233*(from 3 to 4 mm)4.4462*2.6914*(≥5 mm)5.4593*2.1133*CIL(<3 mm)−2.2788*−1.9358(from 3 to 4 mm)0.62501.9852(≥5 mm)4.6975*1.9970PB3.5932*2.5119*Spearman Correlation.Significant difference:P*<.05,t (tabulated)= ±2.035. ## 4. Discussion Since the first report of drug-related GO by Kimbal [16] in 1939, several studies have been conducted in an attempt to understand the factors that act on this process. Today, more than 20 drugs are known to induce GO [2].Few studies are currently available in the literature about the influence of nifedipine in gingival manifestations. In the present study, the mean age of patients in the test group was 69.5 years and the age variable did not show a correlation with GO, in agreement with other studies [6, 9, 10, 17]. However, according to Thomason et al. [18] and James et al. [19], younger patients show a higher prevalence of GO when the association of nifedipine and cyclosporine treatment was identified. Maybe this is an effect of drug association and not only related to the age of the patients.In our group of patients, despite the larger number of male patients in the test group than in the control group (22 : 6), GO was more prevalent among females in both groups (63.63% for the test group and 76.92% for the control group). However, the Spearman test did not reveal a correlation between patients gender and the occurrence of GO, in agreement with data reported by King et al. [9], Margiotta et al. [20], and Güncü et al. [21]. According to Seymour et al. [8], there are evidences that male patients under treatment with nifedipine and cyclosporine are more prone to a greater prevalence and severity of GO than female patients [9, 20]. Since the medication may alter androgen metabolism [22], reaching the gingival fibroblasts, with a consequent increase in the propensity to GO [7, 8]. However, the relation between GO and patient gender acting as a hormonal cofactor has not been completely clarified by this study neither in the literature correlated [22].In the present study, the dose and the time of nifedipine use were also not correlated with GO [6, 9, 10], in contrast to the data reported by others authors [17, 18, 23]. Probably the severity of GO has not been adequately correlated to pharmacological variables because the events that determine GO depend much more on local factors than on the circulating plasma level of the drug. There are evidences that the drug metabolites concentrated in gingival tissue interact with inflammation chemical mediators, producing a stimuli on fibroblasts activity leading to an unbalance in the local homeostasis, which eventually results in clinically observable GO [24, 25]. Moreover, Seymour et al. [8] reported that the most appropriate method for the assessment of the effect of pharmacological variables is blood analysis, which provides a more precise understanding of the drug dose and GO interaction.Concerning the use of the Clinical Index DIGO [11] as a parameter for GO, we noted that the index was easily and rapidly applicable. Considering that most of the indexes for GO assessment are difficult to reproduce, since they require the preparation of casts, photographs, slide projection, and several measurements, this index [11] has proved to be advantageous. 
The present study seems to be the first one to use Clinical Index DIGO as a method for GO assessment.The presence and intensity of PI is an important risk factor for the development of GO in patients taking drugs associated with gingival growth [8, 24]. The PI was not correlated with GO in the test group, whereas in the control group, a correlation was observed.In the control group, GO can be explained by the presence of plaque, which may influence the development of inflammatory gingival overgrowth. These results are in agreement with the literature [5, 19, 23], which could be explained by the fact that the PI for patients with GO were artificially lower due to a possible improvement in oral hygiene before the periodontal examination.Gingival inflammation assessed by GI and PB was not significantly different between the test and control groups. However, the correlation test showed association of both GI and PB with the degree of GO severity, and this association was stronger for the test group. This result may explain the influence of these periodontal variables on GO. Gingival inflammation is considered an important risk factor in the expression of GO correlated to nifedipine use [8]. The GI observed in the present study showed results similar to those reported by Barclay et al. [5], King et al. [9], Güncü et al. [21], and Miranda et al. [23]. PB results were similar to those reported by Tavassoli et al. [17], whereas Margiotta et al. [20] did not detect a correlation between PB and GO.The PD did not differ significantly between the test and control groups, but GO was associated with an increase in PD in both groups [5, 9, 18, 25]. King et al. [9] assessed CIL in patients under treatment with nifedipine and concluded that this was not a variable correlated with GO. In this study, however, CIL was correlated with GO in the test group for the intervals of mild to severe loss of insertion, which indicates that GO was due to the action of the medication and an association of this effect with periodontal inflammation. The CIL was not correlated with GO for the control group. The Z test showed significant differences between the two groups when the loss of insertion was moderate (CIL from 3 to 4 mm). The GO was not defined as significant for the criteria of periodontal variables, which in our study considered the presence of GO to be significant when the score was ≥2. Due to the lack of studies evaluating CIL in patients under treatment with drugs associated with GO, we observed that the assessment of this variable is more indicated in clinical follow-up studies or long-term studies, in which it is possible to evaluate the progress of periodontal disease. ## 5. Conclusion GO differed significantly between the test and control groups. Its prevalence in the control group may have been due to an inflammatory reaction explained by the influence of periodontal variables (PI, GI, PD, and PB). The prevalence of GO in the test group can be explained by the effect of induction of nifedipine in gingival tissue. The profile of the study group consisted of older people with systemic diseases, mainly cardiovascular diseases, with low socioeconomic status and probably unmotivated with respect to their oral health, which may cause ordinary periodontal inflammation that can exacerbate the gingival overgrowth. 
The prevalence of GO detected in this study is also related to the use of the Clinical Index DIGO [11] as the method of clinical evaluation, which may be interpreted as a distinguishing factor compared to previous studies. --- *Source: 102047-2011-06-29.xml*
2011
# A Novel 2-Stage Fractional Runge–Kutta Method for a Time-Fractional Logistic Growth Model **Authors:** Muhammad Sarmad Arshad; Dumitru Baleanu; Muhammad Bilal Riaz; Muhammad Abbas **Journal:** Discrete Dynamics in Nature and Society (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1020472 --- ## Abstract In this paper, the fractional Euler method is studied, and the derivation of the novel 2-stage fractional Runge–Kutta (FRK) method is presented. The proposed fractional numerical method is implemented to find the solution of fractional differential equations, and it will be helpful for deriving the higher-order family of fractional Runge–Kutta methods. The nonlinear fractional logistic growth model is solved and analyzed. The numerical results and graphs of the examples demonstrate the effectiveness of the method. --- ## Body ## 1. Introduction In the 20th century, important research in fractional calculus was published in the engineering and science literature. Progress in fractional calculus is reported in various applications in the fields of integral equations, fluid mechanics, viscoelastic models, biological models, and electrochemistry [1–3]. Undoubtedly, fractional calculus is an efficient mathematical tool for solving various problems in mathematics, engineering, and the sciences. To draw more attention to this field and to validate its effectiveness, this paper contributes solutions to new and recent applications of fractional calculus in the biological and engineering sciences [4, 5]. Recently, the tools of fractional calculus have been used to analyze the nonlinear dynamics of different problems [6–8]. Mostly, analytical solutions cannot be obtained for fractional differential equations, so there is a need for semianalytical and numerical methods to understand the behavior of solutions to nonlinear problems [9]. In recent decades, different methods have been implemented to solve linear as well as nonlinear dynamical systems, such as the Adomian decomposition method (ADM) [10], the variational iteration method (VIM) [11], the homotopy perturbation method (HPM) [12], the homotopy perturbation method combined with the Laplace transform [6], the homotopy analysis method (HAM) [13], and the homotopy analysis transform method (HATM) [7]. In recent years, novel numerical techniques have also been applied to the two-dimensional telegraph equation on arbitrary domains and to modified diffusion equations with nonlinear source terms [14–16]. In the recent past, many numerical methods applied only to linear equations, or often to even smaller classes. A generalization of the classical Adams–Bashforth–Moulton method has been introduced for the numerical solution of nonlinear fractional differential equations [17]. Odibat and Momani also developed a new method connecting the fractional Euler method with a modified trapezoidal rule via the generalized Taylor series expansion [18]. Moreover, scientists have been actively working on logistic growth, which is the most common model of population growth.
A biological population with plenty of food, space to grow, and no threat from predators tends to grow at a rate proportional to the population: in each unit of time, a certain percentage of the individuals produce new individuals [19–21]. In this paper, we derive the 2-stage fractional Runge–Kutta method using the generalized Taylor series expansion in Section 2. Afterwards, we apply the proposed numerical method to different nonlinear fractional differential equations and present the numerical results in Section 3. More specifically, we use the fractional Runge–Kutta method to solve the fractional logistic growth model. The conclusion is drawn in Section 4. ## 2. Method Description In order to study the fractional differential equation, we will consider Caputo's fractional-order derivative. Caputo's fractional-order derivative is a modified form of the Riemann–Liouville definition and is beneficial for dealing with initial value problems more efficiently. The generalized Taylor formula is defined as follows. ### 2.1. Generalized Taylor's Formula Here we state the generalized Taylor formula as given in [18]: suppose that $D^{k\alpha}\phi(x)\in C(0,a]$ for $k=0,1,2,\dots,n+1$, where $0<\alpha\le 1$. Then

$$u(x)=\sum_{i=0}^{n}\frac{x^{i\alpha}}{\Gamma(i\alpha+1)}D^{i\alpha}\phi(0)+\frac{D^{(n+1)\alpha}\phi(\varsigma)}{\Gamma((n+1)\alpha+1)}\,x^{(n+1)\alpha},\tag{1}$$

with $0\le\varsigma\le x$, for all $x\in(0,a]$. ### 2.2. Fractional Euler Method In order to derive the fractional Euler method for the numerical solution of an initial value problem with a time-fractional derivative in Caputo's sense, we consider the initial value problem

$$\frac{d^{\alpha}}{dt^{\alpha}}u(t)=\phi(t,u(t)),\qquad u(0)=u_0,\qquad \alpha\in(0,1],\tag{2}$$

where $D^{\alpha}$ represents the Caputo fractional differential operator [22]. Let $[0,a]$ be the interval on which we seek the solution of problem (2). A collection of points $(t_j,u(t_j))$ is used to build the approximation: the interval $[0,a]$ is subdivided into $r$ subintervals $[t_j,t_{j+1}]$ of equal step size $h=a/r$ using the nodal points $t_j=jh$ for $j=0,1,2,\dots,r$. Suppose that $u(t)$, $D^{\alpha}u(t)$, and $D^{2\alpha}u(t)$ are continuous on $[0,a]$; applying the Taylor formula involving fractional derivatives, we have

$$u(t+h)=u(t)+\frac{h^{\alpha}}{\Gamma(\alpha+1)}D^{\alpha}u(t)+\frac{h^{2\alpha}}{\Gamma(2\alpha+1)}D^{2\alpha}u(t)+\cdots.\tag{3}$$

For a very small step size, we neglect the terms involving $h^{2\alpha}$ and higher, and substituting the value of $D^{\alpha}u(t)$ from equation (2), we obtain

$$u(t+h)=u(t)+\frac{h^{\alpha}}{\Gamma(\alpha+1)}\phi(t,u(t)).\tag{4}$$

From this equation we obtain the iterative formula

$$u_{n+1}=u_n+\frac{h^{\alpha}}{\Gamma(\alpha+1)}\phi(t_n,u_n).\tag{5}$$

It is worth mentioning that if $\alpha=1$, the fractional Euler method (5) reduces to the classical Euler method; it is thus a generalization of the classical Euler method. ### 2.3. Fractional Runge–Kutta Method This method is a generalization of the Runge–Kutta (RK) method of order 2. Consider the fractional-order initial value problem (2). The generalized Taylor expansion is

$$u(t+h)=u(t)+\frac{h^{\alpha}}{\Gamma(\alpha+1)}D^{\alpha}u(t)+\frac{h^{2\alpha}}{\Gamma(2\alpha+1)}D^{2\alpha}u(t)+\cdots,\tag{6}$$

and using the formula $D_t^{2\alpha}u=D_t^{\alpha}\phi(t,u)+\phi(t,u)\,D_u^{\alpha}\phi(t,u)$ in equation (6) gives

$$u(t+h)=u(t)+\frac{h^{\alpha}}{\Gamma(\alpha+1)}\phi(t,u(t))+\frac{h^{2\alpha}}{\Gamma(2\alpha+1)}\left[D_t^{\alpha}\phi(t,u)+\phi(t,u)\,D_u^{\alpha}\phi(t,u)\right]+\cdots.\tag{7}$$

Rearranging, we have

$$u(t+h)=u(t)+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\phi(t,u(t))+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\left[\phi(t,u(t))+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}D_t^{\alpha}\phi(t,u)+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}\phi(t,u)\,D_u^{\alpha}\phi(t,u)\right]+\cdots,\tag{8}$$

which can also be written as

$$u(t+h)=u(t)+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\phi(t,u(t))+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\,\phi\!\left(t+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)},\;u(t)+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}\phi(t,u)\right).\tag{9}$$

In view of this expression, the following formula is the 2-stage fractional Runge–Kutta method:

$$u_{n+1}=u_n+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\left(K_1+K_2\right),\tag{10}$$

where

$$K_1=\phi(t_n,u_n),\qquad K_2=\phi\!\left(t_n+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)},\;u_n+\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}K_1\right).\tag{11}$$

One can easily verify that if $\alpha=1$, the fractional-order Runge–Kutta method (10)-(11) reduces to the classical Runge–Kutta method of order 2.
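To make the stepping rules concrete, here is a minimal Python sketch of the fractional Euler update (5) and the 2-stage FRK update (10)-(11). This is illustrative code written for this text, not the authors' implementation; the names `frac_euler_step`, `frk2_step`, and `frk2_solve` are ours, and `math.gamma` supplies the Gamma function.

```python
import math

def frac_euler_step(phi, t, u, h, alpha):
    """One step of the fractional Euler method, equation (5)."""
    c = h**alpha / math.gamma(alpha + 1.0)
    return u + c * phi(t, u)

def frk2_step(phi, t, u, h, alpha):
    """One step of the 2-stage fractional Runge-Kutta method (10)-(11)."""
    c = h**alpha / (2.0 * math.gamma(alpha + 1.0))
    # Internal offset d = 2 h^alpha Gamma(alpha+1) / Gamma(2 alpha + 1)
    d = 2.0 * h**alpha * math.gamma(alpha + 1.0) / math.gamma(2.0 * alpha + 1.0)
    k1 = phi(t, u)
    k2 = phi(t + d, u + d * k1)
    return u + c * (k1 + k2)

def frk2_solve(phi, u0, a, r, alpha):
    """Integrate d^alpha u / dt^alpha = phi(t, u), u(0) = u0, on [0, a]
    using r equal subintervals of size h = a / r."""
    h = a / r
    ts, us = [0.0], [u0]
    for j in range(r):
        us.append(frk2_step(phi, ts[-1], us[-1], h, alpha))
        ts.append((j + 1) * h)
    return ts, us
```

At α = 1 the internal offset d reduces to h, and `frk2_step` collapses to the classical second-order Runge–Kutta (Heun) step, matching the remark above.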
## 3. Numerical Examples To illustrate how the fractional Runge–Kutta method is applied, we solve three examples and compare the results with the exact solutions. Example 1. In the first example, we consider the inhomogeneous linear fractional differential equation

$$D^{\alpha}u(t)=\frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}-\frac{1}{\Gamma(2-\alpha)}t^{1-\alpha}-u(t)+t^{2}-t,\tag{12}$$

subject to the conditions

$$u(0)=0,\qquad 0<\alpha\le 1,\qquad t>0,\tag{13}$$

with the exact solution

$$u(t)=t^{2}-t.\tag{14}$$

By using the fractional R–K method, we obtain the iterative relation for equation (12):

$$u_{n+1}=u_n+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\left(K_1+K_2\right),\tag{15}$$

where, writing $d=\frac{2h^{\alpha}\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}$ for the internal offset of (11),

$$K_1=\frac{2}{\Gamma(3-\alpha)}t_n^{2-\alpha}-\frac{1}{\Gamma(2-\alpha)}t_n^{1-\alpha}-u_n+t_n^{2}-t_n,$$
$$K_2=\frac{2}{\Gamma(3-\alpha)}\left(t_n+d\right)^{2-\alpha}-\frac{1}{\Gamma(2-\alpha)}\left(t_n+d\right)^{1-\alpha}-\left(u_n+dK_1\right)+\left(t_n+d\right)^{2}-\left(t_n+d\right).\tag{16}$$
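Before turning to the reported figures and tables, here is a hedged usage sketch of the scheme (15)-(16) on Example 1 at α = 1 and h = 0.01, where the exact solution u(t) = t² − t is available. The code is ours and purely illustrative; it reproduces the close agreement reported in Table 1 below.

```python
import math

def frk2_solve(phi, u0, a, r, alpha):
    # Same 2-stage scheme as the sketch in Section 2, equations (10)-(11).
    h = a / r
    c = h**alpha / (2.0 * math.gamma(alpha + 1.0))
    d = 2.0 * h**alpha * math.gamma(alpha + 1.0) / math.gamma(2.0 * alpha + 1.0)
    t, u, out = 0.0, u0, [(0.0, u0)]
    for _ in range(r):
        k1 = phi(t, u)
        k2 = phi(t + d, u + d * k1)
        t, u = t + h, u + c * (k1 + k2)
        out.append((t, u))
    return out

alpha = 1.0  # at alpha = 1 the exact solution u(t) = t^2 - t is available

def phi(t, u):
    # Right-hand side of equation (12)
    return (2.0 / math.gamma(3.0 - alpha)) * t**(2.0 - alpha) \
         - (1.0 / math.gamma(2.0 - alpha)) * t**(1.0 - alpha) \
         - u + t**2 - t

for t, u in frk2_solve(phi, 0.0, 1.0, 100, alpha)[::10]:
    print(f"t={t:.1f}  approx={u:+.4f}  exact={t*t - t:+.4f}")
```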
Figure 1 expresses the numerical solutions of equation (12) for different values of α using the fractional Runge–Kutta method. We can easily see from Table 1 that when we put α = 1, the approximate solution coincides with the exact solution u(t) = t² − t. In Table 2, we can further analyze the solutions of the problem for α = 0.96. Moreover, in Figure 2, hidden effects become visible when changing the value of α, which cannot be obtained using the integer-order derivative. The accuracy can be improved further by using a smaller mesh size. Figure 1: Numerical results of Example 1 for α = 1, with discretization h = 0.01. Table 1: Numerical results of Example 1 for α = 1, with discretization h = 0.01.

| t | y_exact | y_approx | AbsError (α = 1) |
| --- | --- | --- | --- |
| 0.0 | 0 | 0 | 0 |
| 0.1 | −0.0900 | −0.0900 | 4.7820e−06 |
| 0.2 | −0.1600 | −0.1600 | 9.1089e−06 |
| 0.3 | −0.2100 | −0.2100 | 1.3024e−05 |
| 0.4 | −0.2400 | −0.2400 | 1.6567e−05 |
| 0.5 | −0.2500 | −0.2500 | 1.9772e−05 |
| 0.6 | −0.2400 | −0.2400 | 2.2673e−05 |
| 0.7 | −0.2100 | −0.2100 | 2.5297e−05 |
| 0.8 | −0.1600 | −0.1600 | 2.7672e−05 |
| 0.9 | −0.0900 | −0.0900 | 2.9820e−05 |
| 1.0 | 0 | 3.1765e−05 | 3.1765e−05 |

Table 2: Numerical results of Example 1 for α = 0.96, with discretization h = 0.01.

| t | y_exact | y_approx | AbsError (α = 0.96) |
| --- | --- | --- | --- |
| 0.0 | 0 | 0 | 0 |
| 0.1 | −0.0900 | −0.0912 | 0.0012 |
| 0.2 | −0.1600 | −0.1700 | 0.0100 |
| 0.3 | −0.2100 | −0.2274 | 0.0174 |
| 0.4 | −0.2400 | −0.2626 | 0.0226 |
| 0.5 | −0.2500 | −0.2752 | 0.0252 |
| 0.6 | −0.2400 | −0.2650 | 0.0250 |
| 0.7 | −0.2100 | −0.2322 | 0.0222 |
| 0.8 | −0.1600 | −0.1766 | 0.0166 |
| 0.9 | −0.0900 | −0.0985 | 0.0085 |
| 1.0 | 0 | 0.0021 | 0.0021 |

Figure 2: Numerical results of Example 1 for α = 0.96, with discretization h = 0.01. Example 2. Consider the nonlinear fractional differential equation

$$D^{\alpha}u(t)=u(t)^{2}-\frac{2}{(t+1)^{2}},\tag{17}$$

along with the conditions

$$u(0)=-2,\qquad 0<\alpha\le 1,\qquad t>0.\tag{18}$$

The exact solution of equation (17) for α = 1 is given by

$$u(t)=-\frac{2}{t+1}.\tag{19}$$

By using the fractional R–K method, we get the iterative relation for equation (17):

$$u_{n+1}=u_n+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\left(K_1+K_2\right),\tag{20}$$

where, with $d$ as above,

$$K_1=u_n^{2}-\frac{2}{(t_n+1)^{2}},\qquad K_2=\left(u_n+dK_1\right)^{2}-\frac{2}{\left(t_n+d+1\right)^{2}}.\tag{21}$$

Figures 3 and 4 show the numerical solutions of equation (17) for different values of α using the fractional Runge–Kutta method. We can see in Table 3 that when α = 1, the approximate solution is in excellent agreement with the exact solution u(t) = −2/(t + 1). In Table 4, we can further analyze the solutions of the problem for α = 0.96; the hidden nonlinearity effects are also visible there when changing the value of α. The accuracy can be improved by using a smaller mesh size. Figure 3: Numerical results of Example 2 for α = 1, with mesh size h = 0.01. Figure 4: Numerical results of Example 2 for α = 0.96, with mesh size h = 0.01. Table 3: Numerical results of Example 2 for α = 1, with discretization h = 0.01.

| t | y_exact | y_approx | AbsError (α = 1) |
| --- | --- | --- | --- |
| 0.0 | −2 | −2 | 0 |
| 0.1 | −1.8182 | −1.8182 | 2.0884e−05 |
| 0.2 | −1.6667 | −1.6667 | 2.9468e−05 |
| 0.3 | −1.5385 | −1.5385 | 3.2069e−05 |
| 0.4 | −1.4286 | −1.4286 | 3.1769e−05 |
| 0.5 | −1.3333 | −1.3334 | 3.0117e−05 |
| 0.6 | −1.2500 | −1.2500 | 2.7902e−05 |
| 0.7 | −1.1765 | −1.1765 | 2.5530e−05 |
| 0.8 | −1.1111 | −1.1111 | 2.3203e−05 |
| 0.9 | −1.0526 | −1.0527 | 2.1018e−05 |
| 1.0 | −1 | −1 | 1.9014e−05 |

Table 4: Numerical results of Example 2 for α = 0.96, with discretization h = 0.01.

| t | y_exact | y_approx | AbsError (α = 0.96) |
| --- | --- | --- | --- |
| 0.0 | −2 | −2 | 0 |
| 0.1 | −1.8182 | −1.7914 | 0.0268 |
| 0.2 | −1.6667 | −1.6260 | 0.0406 |
| 0.3 | −1.5385 | −1.4909 | 0.0476 |
| 0.4 | −1.4286 | −1.3778 | 0.0507 |
| 0.5 | −1.3333 | −1.2816 | 0.0518 |
| 0.6 | −1.2500 | −1.1984 | 0.0516 |
| 0.7 | −1.1765 | −1.1258 | 0.0507 |
| 0.8 | −1.1111 | −1.0617 | 0.0494 |
| 0.9 | −1.0526 | −1.0046 | 0.0480 |
| 1.0 | −1 | −0.9535 | 0.0465 |

Example 3 (Time-Fractional Logistic Growth Model). We consider the time-fractional logistic growth model represented by the equation

$$\frac{d^{\alpha}P}{dt^{\alpha}}=rP\left(1-\frac{P}{M}\right),\qquad P(t_0)=P_0,\tag{22}$$

where $P_0$ is the initial density of the population, $r$ is the intrinsic growth rate of the population, and $M$ is the carrying capacity. The analytical solution of equation (22) for α = 1 is given by

$$P(t)=\frac{MP_0}{P_0+(M-P_0)e^{-rt}}.\tag{23}$$

In view of the fractional Runge–Kutta method, we have

$$P_{n+1}=P_n+\frac{h^{\alpha}}{2\Gamma(\alpha+1)}\left(K_1+K_2\right),\tag{24}$$

where, with $d$ as above,

$$K_1=rP_n\left(1-\frac{P_n}{M}\right),\qquad K_2=\frac{rP_n}{M}\left(1+dr-\frac{drP_n}{M}\right)\left[M-P_n\left(1+dr-\frac{drP_n}{M}\right)\right].\tag{25}$$
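The following Python sketch (our own illustrative code, not the authors') marches (24)-(25) with the parameters used in Figure 5 and Table 5 (r = 0.5, M = 10, P₀ = 20, h = 0.01) and compares the result against the α = 1 exact solution (23). Note that K₂ in (25) is simply the right-hand side of (22) evaluated at the internal stage value Pₙ + dK₁, which is how the sketch computes it.

```python
import math

def frk2_logistic(P0, r, M, h, n_steps, alpha):
    """March the time-fractional logistic model (22) with the 2-stage
    FRK scheme (24)-(25); returns a list of (t, P) samples."""
    c = h**alpha / (2.0 * math.gamma(alpha + 1.0))
    d = 2.0 * h**alpha * math.gamma(alpha + 1.0) / math.gamma(2.0 * alpha + 1.0)
    P, out = P0, [(0.0, P0)]
    for j in range(n_steps):
        k1 = r * P * (1.0 - P / M)
        Ps = P + d * k1                # internal stage value P_n + d*K1
        k2 = r * Ps * (1.0 - Ps / M)   # RHS of (22) at the stage value
        P = P + c * (k1 + k2)
        out.append(((j + 1) * h, P))
    return out

# Parameters as in Figure 5 / Table 5: r = 0.5, M = 10, P0 = 20, h = 0.01
r, M, P0 = 0.5, 10.0, 20.0
for t, P in frk2_logistic(P0, r, M, h=0.01, n_steps=100, alpha=1.0)[::10]:
    exact = M * P0 / (P0 + (M - P0) * math.exp(-r * t))   # equation (23)
    print(f"t={t:.2f}  approx={P:.4f}  exact={exact:.4f}")
```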
Figures 5 and 6 demonstrate the approximate solutions of the fractional logistic growth model (22) for different values of α using the fractional Runge–Kutta method. Table 5 shows that when we put α = 1, the approximate solution is in excellent agreement with the exact solution given in equation (23). In Table 6, we can further analyze the solutions of the problem for α = 0.96. Moreover, better accuracy can be obtained by using a smaller mesh size. Figure 5: Numerical results of the fractional logistic growth model for α = 1, r = 0.5, and M = 10, with mesh size h = 0.01. Figure 6: Numerical results of the fractional logistic growth model for α = 0.96, r = 0.5, and M = 10, with mesh size h = 0.01. Table 5: Numerical results of Example 3 for α = 1, with mesh size h = 0.01.

| t | y_exact | y_approx | AbsError (α = 1) |
| --- | --- | --- | --- |
| 0.0 | 20 | 20 | 0 |
| 0.1 | 19.0699 | 19.0700 | 2.3897e−05 |
| 0.2 | 18.2621 | 18.2622 | 3.9362e−05 |
| 0.3 | 17.5548 | 17.5548 | 4.9190e−05 |
| 0.4 | 16.9309 | 16.9310 | 5.5198e−05 |
| 0.5 | 16.3773 | 16.3774 | 5.8593e−05 |
| 0.6 | 15.8833 | 15.8834 | 6.0190e−05 |
| 0.7 | 15.4403 | 15.4404 | 6.0546e−05 |
| 0.8 | 15.0412 | 15.0413 | 6.0048e−05 |
| 0.9 | 14.6803 | 14.6803 | 5.8968e−05 |
| 1.0 | 14.3527 | 14.3527 | 5.7498e−05 |

Table 6: Numerical results of Example 3 for α = 0.96, with mesh size h = 0.01.

| t | y_exact | y_approx | AbsError (α = 0.96) |
| --- | --- | --- | --- |
| 0.0 | 20 | 20 | 0 |
| 0.1 | 19.0699 | 18.9146 | 0.1553 |
| 0.2 | 18.2621 | 17.9941 | 0.2680 |
| 0.3 | 17.5548 | 17.2048 | 0.3500 |
| 0.4 | 16.9309 | 16.5217 | 0.4093 |
| 0.5 | 16.3773 | 15.9256 | 0.4518 |
| 0.6 | 15.8833 | 15.4018 | 0.4816 |
| 0.7 | 15.4403 | 14.9386 | 0.5017 |
| 0.8 | 15.0412 | 14.5269 | 0.5144 |
| 0.9 | 14.6803 | 14.1590 | 0.5213 |
| 1.0 | 14.3527 | 13.8288 | 0.5238 |

## 4. Conclusions The fundamental objective of this research is to construct a numerical scheme for solving fractional differential equations. This objective has been achieved by implementing the fractional Runge–Kutta method, whose derivation is also presented. The method is a new contribution and reliably finds solutions of problems arising in the applied sciences. The numerical results have been compared with exact solutions. The proposed method is useful for deriving the higher-order family of fractional Runge–Kutta methods. Finally, recent developments in the field of fractional differential equations in applied mathematics call for numerical methods that can be applied to such equations. We hope that this work is an active contribution in this direction. --- *Source: 1020472-2020-06-10.xml*
2020
# A High Rigidity and Precision Scanning Tunneling Microscope with Decoupled XY and Z Scans **Authors:** Xu Chen; Tengfei Guo; Yubin Hou; Jing Zhang; Wenjie Meng; Qingyou Lu **Journal:** Scanning (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1020476 --- ## Abstract A new scan-head structure for the scanning tunneling microscope (STM) is proposed, featuring high scan precision and rigidity. The core structure consists of a piezoelectric tube scanner of quadrant type (for XY scans) coaxially housed in a piezoelectric tube with single inner and outer electrodes (for the Z scan). They are fixed at one end (called the common end). A hollow tantalum shaft is coaxially housed in the XY-scan tube, and they are mutually fixed at both ends. When the XY scanner scans, its free end will bring the shaft to scan, and the tip, which is coaxially inserted in the shaft at the common end, will scan a smaller area if the tip protrudes only a short distance from the common end. The decoupled XY and Z scans are desirable for less image distortion, and the mechanically reduced scan range has the advantage of reducing the impact of background electronic noise on the scanner and enhancing the tip positioning precision. High quality atomic resolution images are also shown. --- ## Body ## 1. Introduction The scanning tunneling microscope (STM) is commonly applied in studying materials' local surface properties of electron states in real space at atomic resolution [1] under various conditions (high magnetic fields [2], ultrahigh vacuum [3], ultralow temperature [4], and water solution [5]). It has been nearly 40 years since the first atomically resolved STM was formally invented by Binnig et al. [6], and the major effort in building a high quality STM has been to improve the design of its head structure so as to obtain better stability, immunity to external vibration, and tip positioning precision, which are all vital to atomic resolution and tunneling current spectrum quality and are still far from satisfactory even today, especially under harsh vibration and sound environments. To this end, over the past few years, starting in 2008, we have developed a number of new types of STM head structures that are suitable for working under harsh conditions, including the fully low voltage STM [7], an SPM with an ultrarigid close-stacked piezo motor [8], and a detachable scan unit [9] driven by new types of piezoelectric motors (GeckoDrive [10], PandaDrive [10], SpiderDrive [11], and TunaDrive [12, 13]), and finally obtained the world's first atomically resolved STM image in a water-cooled resistive magnet [14]. It was also the first atomic resolution image ever taken in a magnetic field exceeding the maximum magnetic field a superconducting magnet can generate. An STM head is typically made of a scan unit and a coarse approach motor, which drives the tip and sample in the scan unit toward each other until the tip-sample gap is small enough and the tunneling junction is formed. The scan unit and the coarse approach motor can be designed as one inseparable part [15–18] or as two mutually detachable parts [9]. As we pointed out in [9], the latter has the advantage of preventing the instability of the coarse approach motor from entering the scan unit, thus enhancing the stability of the tunneling junction.
In this situation, the stability of the scan unit itself plays the main role in determining the stability and quality of the measurement, since it becomes a purely stand-alone part once the coarse approach motor retracts from the scan unit after the coarse approach process is done. Thus, how to design a stable scan unit becomes very important. Therefore, in this paper we focus on improving the scan unit in the STM head structure. The scanner commonly used is a single quadrant piezoelectric tube (PT) with one inner and four outer electrodes, which can scan in the X, Y, and Z directions. In this case, these three directions are coupled, meaning that an XY scan in the sample plane can cause a change in the overall dimensions of the scanner, which will in turn shift the Z position. This apparently downgrades the positioning precision. To decouple the XY and Z motions, some researchers use a five-outer-electrode PT: over a certain length, the PT has four outer electrodes (like a quadrant PT) for the XY scan, and the outer surface of the remaining length is one whole electrode for Z adjustment. This can indeed reduce the coupling between the XY and Z motions, but they are not fully decoupled, since the scanner is still made of one single piece of piezoelectric material, in which the deformation of one portion can cause the deformation of another portion to some extent. Also, the overall length of the five-outer-electrode PT is large, because the portions responsible for the XY and Z scans, respectively, are connected in series. This is an obvious disadvantage, which can increase the instability, especially under harsh vibration and sound conditions. In our new design, a quadrant PT (for XY scans, abbreviated as XY-PT) is housed inside another PT (for the Z scan, abbreviated as Z-PT) whose inner and outer surfaces are intact electrodes, respectively. These two PTs are glued together at one end (called the common end). A rigid and hollow shaft is housed inside the XY-PT, glued to it at both ends, and the tip is inserted in the shaft at the common end. As a result, when the XY-PT scans at its free end, the tip at the other end (at the common end, in other words) will scan synchronously in the opposite direction via the lever action of the shaft, with the fulcrum at the common end. Thus, if the length of the tip measured from the fulcrum is smaller than the length of the shaft between the fulcrum and the free end of the XY-PT, the area scanned by the tip will be smaller than the area scanned by the free end of the XY-PT. The area scanned by the tip can be easily adjusted by pulling the tip out from, or inserting it further into, the shaft (using a pair of tweezers). A heavily reduced scan area at the tip end can greatly enhance the positioning precision of the tip. And this area reduction is achieved completely through mechanics, meaning that the tip positioning uncertainty caused by background electronic noise can also be reduced. This is a big advantage compared with the traditional area reduction via electronic attenuation (i.e., using an amplifier with gain less than one), in which the electronic noise of the attenuation circuit itself is added directly to the scan signal instead of being attenuated. This added noise becomes significant if the bandwidth is large or the scan signal is small, or both. This design of mechanical scan area reduction also has many other advantages, which will be discussed in this paper. ## 2. Scan Unit Design
Figure 1 shows a schematic drawing of the proposed new structure of the scan unit. The Z-PT (EBL#3 type from EBL Products Inc., with dimensions 5 mm length, 6.35 mm outer diameter, and 0.5 mm thickness) has intact electrodes on the inner and outer surfaces, respectively, and the XY-PT (EBL#3 type from EBL Products Inc., with dimensions 6 mm length, 4.64 mm outer diameter, and 0.5 mm thickness) is a quadrant PT with four electrodes on the outer surface and a single intact electrode on the inner surface. They are responsible for the Z and XY scans, respectively, and are coaxially glued (using H74F epoxy from Epoxy Technology) on a tantalum base (the common end) in such a way that their inner electrodes are electrically connected and grounded. A hollow tantalum shaft of 1 mm outer diameter and 0.5 mm inner diameter, which also serves as the tip holder, is coaxially housed in the XY-PT. They are fixed together (using epoxy) at both ends via a pair of sapphire rings for insulation. The tip is inserted in the shaft at the common end. When the XY scan signals are applied to the outer electrodes of the XY-PT in a push-pull manner, its free end scans the shaft inside, which causes the tip to scan at the other end by the lever principle, with the fulcrum at the common end. This new "tube-in-tube" type (TTT) scanner has quite a few advantages, as follows. Figure 1: (a) Photograph of the scanning head. The four outer electrodes of the outer PT (Z-PT) are electrically connected as a single intact electrode. (b) Scan unit in three-dimensional view. (c) Schematic of the scan unit cross-section. Firstly, in this structure, the area scanned by the tip, $S_{\text{tip}}$, will be smaller than that scanned by the XY-PT, $S_{XY\text{-PT}}$, if the effective tip length, $L_{\text{tip}}$, measured from the far end of the tip to the fulcrum, is shorter than the effective XY-PT length, $L_{XY\text{-PT}}$, measured from the fulcrum to the free end of the XY-PT. Quantitatively speaking,

$$\frac{S_{\text{tip}}}{S_{XY\text{-PT}}}=\frac{L_{\text{tip}}}{L_{XY\text{-PT}}}.\tag{1}$$

A smaller $S_{\text{tip}}$ means higher tip positioning precision, which is important when the tunneling current spectrum dI/dV needs to be measured at a precisely defined location. In addition, because the reduction in $S_{\text{tip}}$ is achieved by mechanical lever action, it also reduces the tip positioning uncertainty caused by any electronic noise. This is a remarkable benefit. Traditionally, when scan area reduction is needed, it is commonly done electronically by using an attenuating circuit [19] (gain less than one) to diminish the XY scan signals before they are sent to the XY-PT. In this case, the attenuator adds its own unreduced noise to the reduced XY scan signals, which can become a particularly severe issue when the final scan signals on the XY-PT need to be very small.
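As a concrete illustration of the lever reduction in equation (1), consider the following small sketch. The 6 mm XY-PT length comes from Section 2 and is used here only as a rough stand-in for the fulcrum-to-free-end distance; the 2 mm tip protrusion and the 1000 nm XY-PT scan range are purely hypothetical values for illustration, not figures from the paper.

```python
# Lever-action scan-range reduction of the TTT scanner, equation (1).
# L_xy approximates the fulcrum-to-free-end distance by the 6 mm XY-PT
# tube length from Section 2; L_tip = 2 mm is a purely illustrative
# tip protrusion, not a value given in the paper.
L_xy = 6.0    # mm, fulcrum to free end of the XY-PT
L_tip = 2.0   # mm, fulcrum to tip apex (hypothetical)

reduction = L_tip / L_xy          # S_tip / S_XY-PT from equation (1)
S_xy_nm = 1000.0                  # nm, assumed scan range of the XY-PT free end
print(f"tip scan range = {S_xy_nm * reduction:.0f} nm "
      f"(reduction factor {1 / reduction:.1f}x)")
# Any electronic noise riding on the XY drive voltages is scaled down by
# the same mechanical factor, unlike attenuation in the drive electronics.
```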
Secondly, it is apparent that, in our new design, the XY and Z scans are fully decoupled, since they are controlled by two mutually independent PTs, in which the motion of the XY-PT does not affect the motion of the Z-PT and vice versa. This can reduce image distortion. Compared with the aforementioned five-outer-electrode PT, in which the two sections are in series, the TTT scanner is short, as its two PTs are mounted in parallel. This can help reduce the thermal drifting issue in the sensitive tunneling current measurement and makes the scan unit more resistant to external vibrations. Besides the reduction in overall length, another improvement in enhancing the immunity to external vibrations is that the rigidity of the XY-PT in the TTT scanner is higher compared with the two aforementioned conventional single-tube scanners: the four-outer-electrode (quadrant) PT and the five-outer-electrode PT. This is because there is a rigid shaft inside the XY-PT, mounted to the XY-PT at both ends. Not only can this shrink the scan area further mechanically (beneficial to scan precision, as discussed above), it also heightens the rigidity of the XY-PT. We have performed finite element analyses in software (Ansys) to check how the axial and radial resonant frequencies of the PT change as a function of the shaft's radius. The material properties, the model structure, and the simulated stress patterns are shown in Table 1 and Figure 2, respectively. It can be seen from the stress patterns that the free end of the XY-PT bears the maximum force, so we chose a very stiff material, sapphire, as the insulating and mounting material fixing the shaft to the free end of the XY-PT. Table 1: Material properties for finite element analysis.

| Material | Young's modulus (10¹⁰ N/m²) | Density (g/cm³) | Poisson ratio |
| --- | --- | --- | --- |
| Piezoceramics | 6.3 | 7.45 | 0.31 |
| Tantalum | 36 | 16.69 | 0.34 |
| Sapphire | 68.5 | 4.00 | 0.28 |

Figure 2: (a) Scan unit model for finite element analysis. (b) Model with mesh for finite element analysis. (c) Radial vibration model (fixed at the bottom) and the corresponding stress pattern. (d) Axial vibration model (fixed at the bottom) and the corresponding stress pattern. Table 2 and Figure 3 show the simulated axial and radial resonant frequencies of the TTT scanner versus the radius of the shaft in the XY-PT. It can be seen from Figure 3 that the axial resonant frequency rises monotonically as the shaft radius increases, whereas the radial resonant frequency declines slightly at the beginning, with a maximum reduction of only 2.7%, and then increases dramatically. We believe that this slight reduction in the radial resonant frequency stems from the fact that the added tantalum shaft has a certain effective mass. If we use a lighter but sufficiently stiff material, such as titanium or sapphire, for the shaft, it will lower the reduction in the radial resonant frequency (making it negligible) as well as enhance the increase in the axial resonant frequency. Consequently, the overall trend for both axial and radial resonant frequencies is to go higher as a stiffer shaft is used. Higher resonant frequencies mean that it is not easy for external vibrations to excite the XY-PT's resonance, implying that the XY-PT is more resistant to harsh conditions. Table 2: Radial and axial resonant frequencies as a function of scan shaft radius. The first row is the scanning head without a tip holder.

| Radius (mm) | Radial frequency (Hz) | Axial frequency (Hz) |
| --- | --- | --- |
| 0 | 35995 | 108461 |
| 0.5 | 35361 | 126304 |
| 0.575 | 35122 | 128861 |
| 0.65 | 35022 | 131385 |
| 0.725 | 35072 | 133869 |
| 0.8 | 35284 | 136363 |
| 0.875 | 35661 | 138895 |
| 0.95 | 36205 | 141492 |
| 1.025 | 36893 | 144085 |
| 1.1 | 37738 | 146777 |
| 1.175 | 38709 | 149441 |
| 1.25 | 39791 | 152094 |

Figure 3: Radial and axial resonant frequencies plotted versus the radius of the scan shaft.
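As a quick arithmetic check, the quoted maximum 2.7% dip in the radial resonant frequency can be reproduced directly from the Table 2 data; the snippet below is ours, purely for verification.

```python
# Check the quoted "2.7%" maximum dip in radial resonant frequency
# against the Table 2 data (shaft radius in mm -> radial frequency in Hz).
radial = {0.0: 35995, 0.5: 35361, 0.575: 35122, 0.65: 35022, 0.725: 35072,
          0.8: 35284, 0.875: 35661, 0.95: 36205, 1.025: 36893,
          1.1: 37738, 1.175: 38709, 1.25: 39791}

f0 = radial[0.0]                    # scanning head without a tip holder
fmin = min(radial.values())         # worst case, at 0.65 mm radius
print(f"max reduction = {(f0 - fmin) / f0:.1%}")   # -> 2.7%
```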
## 3. Coarse Approach To implement the tip-sample coarse approach (see Figure 1(a)), the sample is first coaxially mounted in a tubular frame at one end. The TTT scanner described above is coaxially spring-clamped in the tubular frame via a "rod-sliding-in-tube" guiding mechanism and is pushed toward the sample by a coaxially installed SpiderDrive [12] piezoelectric motor. The structural details are as follows. The TTT scanner is coaxially mounted on a square tantalum rod (length 23 mm and width 5 mm), with the latter coaxially housed in, and spring-clamped against, the inner wall of a guiding tube. The whole assembly is then coaxially housed in the tubular frame, with the guiding tube mounted on the inner wall of the tubular frame via a pair of setscrews. The SpiderDrive is also coaxially fixed inside the tubular frame; it pushes the square tantalum rod to slide in the guiding tube and brings the TTT scanner toward the sample in front, thus implementing the coarse approach. The square tantalum rod (and hence the TTT scanner) can also be withdrawn (pulled back) by the SpiderDrive through a pair of hooks between the square tantalum rod and the push shaft of the SpiderDrive. These hooks are slightly loosely engaged (with about 0.5 mm of play along the axial direction), so that they become detached (not in touch) if the SpiderDrive pushes (or pulls) the square tantalum rod in one direction and then walks backwards slightly. The upside of this detachable hooking design is that the scanner becomes stand-alone during an image scan or any tunneling current measurement, which prevents the instability of the coarse approach motor from impacting the scan. After the coarse approach is done, we can perform fine Z adjustment so as to attain an appropriate tunneling current for the STM measurement that follows. This is done by applying the Z control signal to the Z-PT in the TTT scanner. The XY scans in the sample plane are realized by applying the X and Y scan signals to the XY-PT. The tubular frame should be designed with as high a symmetry as possible, and all the parts installed inside it should be as coaxial with it as possible, which helps reduce thermal drift and any field-induced (e.g., magnetic field) strain drift. ## 4. Performance Test We chose to use a segment of hand-cut 0.25 mm thick platinum wire of 90/10 Pt/Ir as the STM tip. A commercial controller from CASMF Tech. Ltd (refer to http://www.casmf.com) was used to implement the TunaDrive coarse approach control, image scan, data acquisition, and so on. The preamplifier circuit used was designed by us [20] and has the advantage of high current resolution; its working principle can be found in [20]. The STM scan head, including the new TTT scan unit, was tested by scanning a graphite sample (GYBS/1.7, type ZYB from NT-MDT) in air and at room temperature. A high quality atomically resolved image (raw data) is shown in Figure 4(a), where the scan mode was constant current mode, the bias voltage between the tip and sample was 300 mV (sample positive), and the scan rate was 0.2 second per line. Figure 4: (a) Atomically resolved graphite image taken in air and at room temperature under constant current mode with a sample-positive bias voltage of 300 mV, a setpoint of 0.8 nA, and a scan area of 10 Å × 10 Å. (b) and (c) Two repeated image scans of 8 Å × 8 Å with a time interval of 5 min, from which the drift rate of the green line can be measured. The line profile for the green line in each image is given at the bottom; the average corrugation is 0.41 nm (error range 0.03 nm).
(d) Z-direction drift measured as a function of time, which gives a drift rate of about 0.1 Å/min once the STM head becomes stable. To measure how severe the drift was, we performed two repeated image scans with a time interval of five minutes. They are presented in Figures 4(b) and 4(c), respectively. The scanned area was chosen to be smaller than that of Figure 4(a), which allowed us to measure the drift values more precisely. A line profile acquired along the green line in the image in Figure 4(b) is given underneath. This green line drifted to a slightly different location in the image in Figure 4(c), and the corresponding profile is also shown underneath. From these two green lines and their profiles, we find a slight drift in the Y direction (about 5 pm/min), but no apparent drift in the X direction. The average corrugation of these profiles over ten images is 0.41 nm (error range 0.03 nm), which is higher than that obtained by other groups [17] and is helpful in enhancing image contrast. We attribute this improvement to the rigidity and the reduced background electronic noise of the TTT scanner. The measured drift in the Z direction plotted as a function of time is shown in Figure 4(d). The drift in the Z direction was measured by running the feedback program to hold the tunneling current at 0.5 nA (scan area = 0) and recording the displacement of the scanner in the Z direction (derived from the voltage on the Z-PT) every 5 minutes. The measurement lasted about two and a half hours. It shows that the tip oscillated slightly at the beginning, when it was approaching the sample. After the tip settled down, the drift in the Z direction stabilized at about 0.1 Å/min, which is better than that reported by others [9]. ## 5. Conclusion We have demonstrated how to build a high precision and rigid scan unit, and the corresponding scan head, based on the proposed TTT scan structure, and we have discussed the advantages of decoupling the XY and Z scan motions and of the scan area reduction through mechanical lever action. Finite element analysis reveals that the new TTT scan structure can achieve higher axial and radial resonant frequencies, which is valuable for improving the immunity to harsh external vibrations. High quality atomic resolution images and low drift values measured in the X, Y, and Z directions, obtained with the new scan head built on the TTT scan structure, are presented to confirm the performance. --- *Source: 1020476-2017-11-14.xml*
1020476-2017-11-14_1020476-2017-11-14.md
19,731
A High Rigidity and Precision Scanning Tunneling Microscope with DecoupledXY and Z Scans
Xu Chen; Tengfei Guo; Yubin Hou; Jing Zhang; Wenjie Meng; Qingyou Lu
Scanning (2017)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2017/1020476
1020476-2017-11-14.xml
--- ## Abstract A new scan-head structure for the scanning tunneling microscope (STM) is proposed, featuring high scan precision and rigidity. The core structure consists of a piezoelectric tube scanner of quadrant type (forXY scans) coaxially housed in a piezoelectric tube with single inner and outer electrodes (for Z scan). They are fixed at one end (called common end). A hollow tantalum shaft is coaxially housed in the XY-scan tube and they are mutually fixed at both ends. When the XY scanner scans, its free end will bring the shaft to scan and the tip which is coaxially inserted in the shaft at the common end will scan a smaller area if the tip protrudes short enough from the common end. The decoupled XY and Z scans are desired for less image distortion and the mechanically reduced scan range has the superiority of reducing the impact of the background electronic noise on the scanner and enhancing the tip positioning precision. High quality atomic resolution images are also shown. --- ## Body ## 1. Introduction The scanning tunneling microscope (STM) is commonly applied in studying materials’ local surface property of electron states in real space at atomic resolution [1] under various conditions (high magnet [2], ultrahigh vacuum [3], ultralow temperature [4], and water solution [5]). It has been nearly 40 years since the first atomically resolved STM was formally invented by Binnig et al. [6], and the major effort in building a high quality STM has been to improve the design of its head structure so as to obtain better stability, immunity to external vibration, and tip positioning precision, which are all vital to atomic resolution or tunneling current spectrum quality and are still far from being satisfactory even today, especially under harsh vibration and sound environments. To this end, in the past a few years since 2008, we had developed a number of new types of STM head structures that are suitable for working under harsh conditions, including the fully low voltage STM [7], SPM with an ultrarigid close-stacked piezo motor [8], and detachable scan unit [9] driven by new types of piezoelectric motors (GeckoDrive [10], PandaDrive [10], SpiderDrive [11], and TunaDrive [12, 13]), and finally obtained the world’s first atomically resolved STM image in a water-cooled resistive magnet [14]. It was also the first atomic resolution image ever taken in a magnetic field exceeding the maximum magnetic field a superconducting magnet can generate.An STM head is typically made of a scan unit and a coarse approach motor which drives the tip and sample in the scan unit to approach towards each other until the tip-sample gap is small enough and the tunneling junction is formed. The scan unit and the coarse approach motor can be designed as one inseparable part [15–18] or two mutually detachable parts [9]. As we pointed out in [9], the latter has the superiority of preventing the instability of the coarse approach motor from entering the scan unit, thus enhancing the stability of the tunneling junction. In this situation, the stability of the scan unit itself will play the main role in determining the stability and quality of the measurement since it will become a pure standing-alone part after the coarse approach motor retracts from the scan unit when the coarse approach process is done. Thus, how to design a stable scan unit becomes very important.Therefore, in this paper we will focus on improving the scan unit in the STM head structure. 
The scanner commonly used is a single quadrant piezoelectric tube (PT) with one inner and four outer electrodes, which can scan inX, Y, and Z directions. In this case, these three directions are coupled, meaning that the XY scan in the sample plane can cause a change in the overall sizes of the scanner, which will in turn impact the Z position. This will apparently downgrade the positioning precision. To decouple the XY and Z motions, some researchers use five-outer-electrode PT: certain length of the PT has four outer electrodes (like a quadrant PT) for XY scan and the outer surface of the remaining length is one whole electrode for Z adjustment. This can indeed reduce the coupling between XY and Z motions, but they are not fully decoupled since the scanner is still made of one single piece of piezoelectric material, in which the deformation of one portion can cause the deformation of another portion to some extent. Also, the overall length of the five-outer-electrode PT is large because the portions responsible for XY and Z scans, respectively, are actually connected in series. This is an obvious disadvantage, which can enhance the instability, especially under harsh vibration and sound conditions.In our new design, a quadrant PT (forXY scans, abbreviated as XY-PT) is housed inside another PT (for Z scan, abbreviated as Z-PT) whose inner and outer surfaces are intact electrodes, respectively. These two PTs are glued together at one end (called common end). A rigid and hollow shaft is housed inside the XY-PT, where they are glued together at both ends and the tip is inserted in the shaft at the common end. As a result, when the XY-PT scans at its free end, the tip at the other end (or at the common end in other words), will scan synchronically in the opposite direction via the lever action of the shaft with the fulcrum being at the common end. Thus, if the length of the tip measuring from the fulcrum is smaller than the length of the shaft between the fulcrum and the free end of the XY-PT, the area scanned by the tip will be smaller than the area scanned by the free end of the XY-PT. The area scanned by the tip can be easily adjusted by pulling (using a pair of tweezers) the tip out from or inserting the tip further into the shaft. A heavily reduced scan area at the tip end can help enhance the positioning precision of the tip greatly. And, this area reduction is achieved completely through mechanics, meaning that the tip positioning uncertainty caused by the background electronic noise can also be reduced. This is a big advantage compared with the traditional area reduction via electronic attenuation method (i.e., using amplifier with gain less than one), in which the electronic noise of the attenuation circuit itself is directly added to the scan signal instead of being attenuated. This added noise will become significant if the bandwidth is large or the scan signal is small or both. This design of mechanical scan area reduction also has many other advantages, which will also be discussed in this paper. ## 2. Scan Unit Design Figure1 shows schematic drawing of the proposed new structure of the scan unit. The Z-PT (EBL#3 type from EBL Products Inc. with dimensions 5 mm length, 6.35 mm outer diameter, and 0.5 mm thickness) has intact electrodes on the inner and outer surface, respectively, and the XY-PT (EBL#3 type from EBL Products Inc. 
The Z-PT (EBL#3 type from EBL Products Inc., with dimensions 5 mm length, 6.35 mm outer diameter, and 0.5 mm wall thickness) has an intact electrode on each of the inner and outer surfaces, and the XY-PT (EBL#3 type from EBL Products Inc., with dimensions 6 mm length, 4.64 mm outer diameter, and 0.5 mm wall thickness) is a quadrant PT with four electrodes on the outer surface and a single intact electrode on the inner surface. They are responsible for the Z and XY scans, respectively, and are coaxially glued (using H74F epoxy from Epoxy Technology) onto a tantalum base (called the common end) in such a way that their inner electrodes are electrically connected and grounded. A hollow tantalum shaft of 1 mm outer diameter and 0.5 mm inner diameter, which also serves as the tip holder, is coaxially housed in the XY-PT. They are fixed together with epoxy at both ends via a pair of sapphire rings for insulation. The tip is inserted in the shaft at the common end. When the XY scan signals are applied to the outer electrodes of the XY-PT in a push-pull manner, its free end scans the shaft inside, which causes the tip to scan at the other end by the lever principle, with the fulcrum at the common end. This new "tube-in-tube" type (TTT) scanner has quite a few advantages, as follows.

Figure 1: (a) Photograph of the scanning head. The four outer electrodes of the outer PT (Z-PT) are electrically connected as a single intact electrode. (b) Scan unit in three-dimensional view. (c) Schematic of the scan unit cross-section.

Firstly, in this structure the area scanned by the tip, $S_{\mathrm{tip}}$, will be smaller than that scanned by the XY-PT, $S_{XY\text{-}PT}$, if the effective tip length $L_{\mathrm{tip}}$, measured from the far end of the tip to the fulcrum, is shorter than the effective XY-PT length $L_{XY\text{-}PT}$, measured from the fulcrum to the free end of the XY-PT. Quantitatively,
$$\frac{S_{\mathrm{tip}}}{S_{XY\text{-}PT}}=\frac{L_{\mathrm{tip}}}{L_{XY\text{-}PT}}.\tag{1}$$
A smaller $S_{\mathrm{tip}}$ means higher tip positioning precision, which is important when the tunneling current spectrum $dI/dV$ needs to be measured at a precisely defined location. In addition, because the reduction in $S_{\mathrm{tip}}$ is achieved by mechanical lever action, it also reduces the tip positioning uncertainty caused by any electronic noise. This is a remarkable benefit. Traditionally, when scan area reduction is needed, it is done electronically by using an attenuating circuit [19] (gain less than one) to diminish the XY scan signals before they are sent to the XY-PT. In this case, the attenuator adds its own unreduced noise to the reduced XY scan signals, which can become a particularly severe issue when the final scan signals on the XY-PT need to be very small.

Secondly, in our new design the XY and Z scans are clearly fully decoupled, since they are controlled by two mutually independent PTs: the motion of the XY-PT does not affect the motion of the Z-PT and vice versa. This reduces image distortion. Compared with the aforementioned five-outer-electrode PT, in which the two scan sections are in series, the TTT scanner is short, because its two PTs are mounted in parallel. This helps reduce thermal drift in the sensitive tunneling current measurement and makes the scan unit more resistant to external vibrations.

Besides the reduction in overall length, another improvement that enhances the immunity to external vibrations is that the rigidity of the XY-PT in the TTT scanner is higher than in the two conventional types of single tube scanners mentioned above (the four-outer-electrode, or quadrant, PT and the five-outer-electrode PT). This is because a rigid shaft runs inside the XY-PT and is mounted to it at both ends.
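As a quick numerical illustration of the lever reduction in (1) as reconstructed above, the short sketch below computes the tip scan relative to the free-end scan. The lengths are hypothetical examples, not values from the paper, and if $S$ denotes a true area rather than a linear scan range the ratio would be squared.

```python
def tip_scan_ratio(l_tip_mm: float, l_xypt_mm: float) -> float:
    """Lever reduction per the reconstructed form of (1):
    S_tip / S_XY-PT = L_tip / L_XY-PT (fulcrum at the common end)."""
    return l_tip_mm / l_xypt_mm

# Hypothetical example: a 2 mm effective tip length on a 6 mm effective
# XY-PT length shrinks the tip scan to one third of the free-end scan.
# Because the reduction is purely mechanical, electronic noise on the
# scan signals is reduced at the tip by the same factor.
print(tip_scan_ratio(2.0, 6.0))  # 0.333...
```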
Not only does this internal shaft shrink the scan area further mechanically (beneficial to scan precision, as discussed above); it also heightens the rigidity of the XY-PT. We have performed finite element analyses (using the Ansys software) to check how the axial and radial resonant frequencies of the PT change as a function of the shaft size. The material properties are listed in Table 1, and the model structure and simulated stress patterns are shown in Figure 2. It can be seen from the stress patterns that the free end of the XY-PT bears the maximum force, so we chose a very stiff material, sapphire, as the insulating and mounting material that fixes the shaft to the free end of the XY-PT.

Table 1: Material properties for finite element analysis.

| Material | Young's modulus (10^10 N/m^2) | Density (g/cm^3) | Poisson ratio |
| --- | --- | --- | --- |
| Piezoceramics | 6.3 | 7.45 | 0.31 |
| Tantalum | 36 | 16.69 | 0.34 |
| Sapphire | 68.5 | 4.00 | 0.28 |

Figure 2: (a) Scan unit model for finite element analysis. (b) Model with mesh for finite element analysis. (c) Radial vibration model (fixed at the bottom) and the corresponding stress pattern. (d) Axial vibration model (fixed at the bottom) and the corresponding stress pattern.

Table 2 and Figure 3 show the simulated axial and radial resonant frequencies of the TTT scanner versus the radius of the shaft in the XY-PT. It can be seen from Figure 3 that the axial resonant frequency rises monotonically as the shaft radius increases, whereas the radial resonant frequency declines slightly at first, with a maximum reduction of only 2.7%, and then increases markedly. We believe that this slight reduction in the radial resonant frequency stems from the effective mass added by the tantalum shaft. If a lighter but sufficiently stiff material such as titanium or sapphire were used for the shaft, the reduction in the radial resonant frequency would be smaller (indeed negligible) and the increase in the axial resonant frequency would be larger. Consequently, the overall trend for both the axial and radial resonant frequencies is to go higher as a stiffer shaft is used. Higher resonant frequencies mean that external vibrations cannot easily excite the XY-PT's resonance, implying that the XY-PT is more resistant to harsh conditions.

Table 2: Radial and axial resonant frequencies as a function of scan shaft radius. The first row corresponds to the scan head without a tip holder.

| Radius (mm) | Radial frequency (Hz) | Axial frequency (Hz) |
| --- | --- | --- |
| 0 | 35995 | 108461 |
| 0.5 | 35361 | 126304 |
| 0.575 | 35122 | 128861 |
| 0.65 | 35022 | 131385 |
| 0.725 | 35072 | 133869 |
| 0.8 | 35284 | 136363 |
| 0.875 | 35661 | 138895 |
| 0.95 | 36205 | 141492 |
| 1.025 | 36893 | 144085 |
| 1.1 | 37738 | 146777 |
| 1.175 | 38709 | 149441 |
| 1.25 | 39791 | 152094 |

Figure 3: Radial and axial resonant frequencies plotted versus the radius of the scan shaft.
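The trends just described can be checked directly against the Table 2 data; the sketch below recomputes the maximum radial-frequency dip relative to the shaft-free scanner and verifies the monotonic rise of the axial mode.

```python
# Resonant-frequency data transcribed from Table 2
# (shaft radius in mm, frequencies in Hz).
radius = [0, 0.5, 0.575, 0.65, 0.725, 0.8, 0.875, 0.95, 1.025, 1.1, 1.175, 1.25]
radial = [35995, 35361, 35122, 35022, 35072, 35284,
          35661, 36205, 36893, 37738, 38709, 39791]
axial = [108461, 126304, 128861, 131385, 133869, 136363,
         138895, 141492, 144085, 146777, 149441, 152094]

# Maximum relative dip of the radial mode versus the shaft-free case (r = 0).
dip = (min(radial) - radial[0]) / radial[0]
print(f"radial dip: {dip:.1%}")  # about -2.7%, occurring at r = 0.65 mm

# The axial mode rises strictly monotonically with the shaft radius.
print(all(b > a for a, b in zip(axial, axial[1:])))  # True
```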
## 3. Coarse Approach

To implement the tip-sample coarse approach (see Figure 1(a)), the sample is first coaxially mounted in a tubular frame at one end. The TTT scanner described above is coaxially spring-clamped in the tubular frame via a "rod-sliding-in-tube" guiding mechanism and is pushed towards the sample by a coaxially installed SpiderDrive [12] piezoelectric motor. The structural details are as follows.

The TTT scanner is coaxially mounted on a square tantalum rod (length 23 mm and width 5 mm), with the rod coaxially housed in and spring-clamped against the inner wall of a guiding tube. The whole assembly is then coaxially housed in the tubular frame, with the guiding tube mounted on the inner wall of the tubular frame via a pair of setscrews. The SpiderDrive is also coaxially fixed inside the tubular frame; it pushes the square tantalum rod to slide in the guiding tube and brings the TTT scanner towards the sample in front, thus implementing the coarse approach. The square tantalum rod (and hence the TTT scanner) can also be withdrawn (pulled back) by the SpiderDrive through a pair of hooks between the square tantalum rod and the push shaft of the SpiderDrive. These hooks are hooked slightly loosely (with about 0.5 mm of play along the axial direction) so that they become detached (not in touch) after the SpiderDrive pushes (or pulls) the square tantalum rod in one direction and then walks backwards slightly. The upside of this detachable hooking design is that the scanner becomes stand-alone during an image scan or any tunneling current measurement, which prevents the instability of the coarse approach motor from impacting the scan.

After the coarse approach is done, we can perform fine Z adjustment so as to attain an appropriate tunneling current for the STM measurement that follows. This is done by applying the Z control signal to the Z-PT in the TTT scanner. The XY scans in the sample plane are realized by applying the X and Y scan signals to the XY-PT. The tubular frame should be designed with as high a symmetry as possible, and all the parts installed inside it should be as coaxial with it as possible, which helps reduce thermal drift and any field-induced (e.g., magnetic field) strain drift.

## 4. Performance Test

We chose a segment of hand-cut 0.25 mm thick platinum-iridium (90/10 Pt/Ir) wire as the STM tip. A commercial controller from CASMF Tech. Ltd (see http://www.casmf.com) was used to implement the TunaDrive coarse approach control, image scanning, data acquisition, and so on. The preamplifier circuit used was designed by us [20] and has the advantage of high current resolution; its working principle can be found in [20].

The STM scan head, including the new TTT scan unit, was tested by scanning a graphite sample (GYBS/1.7, type ZYB from NT-MDT) in air at room temperature. A high quality atomically resolved image (raw data) is shown in Figure 4(a); the scan mode was constant current mode, the bias voltage between the tip and sample was 300 mV (sample positive), and the scan rate was 0.2 s per line.

Figure 4: (a) Atomically resolved graphite image taken in air at room temperature under constant current mode, with a sample-positive bias voltage of 300 mV, a setpoint of 0.8 nA, and a scan area of 10 Å × 10 Å. (b) and (c) Two repeated image scans of 8 Å × 8 Å with a time interval of 5 min, from which the drift rate of the green line can be measured. The line profile along the green line in each image is given at the bottom; the average corrugation is 0.41 nm (error range 0.03 nm). (d) Z direction drift distance measured as a function of time, which gives a drift rate of about 0.1 Å/min once the STM head stabilizes.

To measure how severe the drift was, we performed two repeated image scans with a time interval of five minutes, presented in Figures 4(b) and 4(c), respectively. The scanned area was chosen to be smaller than that of Figure 4(a), which allowed us to measure the drift values more precisely. A line profile acquired along the green line in the image in Figure 4(b) is given underneath. This green line drifted to a slightly different location in the image in Figure 4(c), and the corresponding profile is also shown underneath.
From these two green lines and their profiles, we find a slight drift in the Y direction (about 5 pm/min) but no apparent drift in the X direction. The average corrugation of these profiles over ten images is 0.41 nm (error range 0.03 nm), which is higher than that obtained by other groups [17] and is helpful in enhancing image contrast. We attribute this improvement to the rigidity of the TTT scanner and its reduction of background electronic noise.

The measured drift in the Z direction, plotted as a function of time, is shown in Figure 4(d). The Z drift was measured by running the feedback program to hold the tunneling current at 0.5 nA (scan area = 0) and recording the displacement of the scanner in the Z direction (derived from the voltage on the Z-PT) every 5 minutes. The measurement lasted about two and a half hours. It shows that the tip oscillated slightly at the beginning, while it was approaching the sample. After the tip settled down, the drift in the Z direction stabilized at about 0.1 Å/min, which is better than that reported by others [9].

## 5. Conclusion

We have demonstrated how to build a high precision and rigid scan unit, and the corresponding scan head, based on the proposed TTT scan structure, and we have discussed its advantages of decoupling the XY and Z scan motions and of reducing the scan area through mechanical lever motion. Finite element analysis shows that the new TTT scan structure achieves higher axial and radial resonant frequencies, which is valuable in improving the immunity to harsh external vibrations. High quality atomic resolution images and the low drift values measured in the X, Y, and Z directions, obtained with the new scan head built on the TTT scan structure, are presented to confirm the performance.

---

*Source: 1020476-2017-11-14.xml*
# Optimization Framework and Parameter Determination for Proximity-Based Device Discovery in D2D Communication Systems

**Authors:** Minjoong Rim; Seungyeob Chae; Chung G. Kang
**Journal:** International Journal of Antennas and Propagation (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102049

---

## Abstract

One of the most important processes in device-to-device communications of cellular devices is that of discovery, which determines the proximity of devices. When a discovery process is performed, there are several parameters to determine, including the discovery range, the discovery period, and the modulation and coding scheme of the discovery messages. In this paper, we address the relationships between these parameters and describe an optimization framework for determining them. In the proposed procedure, it is important to first optimize the discovery rate, defined as the number of discoverable devices per unit time. Once the discovery rate is maximized, the discovery period can be determined accordingly, based on the device density and the target discovery range. Since the discovery rate is not affected by many of the discovery parameters, such as the discovery range, the device density, and the discovery period, it can be used as a performance metric for comparing discovery schemes with different discovery ranges or different discovery periods.

---

## Body

## 1. Introduction

Cellular-assisted device-to-device (D2D) communications refer to direct communications among mobile devices without transferring data through a base station (BS) [1–6]. One of the most important processes in D2D communications is that of discovery, which finds devices located nearby [7–16]. In order to determine the proximity of devices, each device transmits a discovery message periodically, and other devices check whether the message can be received successfully.

Consider peer discovery applications such as friend finding in a densely populated urban area. If we had unlimited discovery resources or allowed an unlimited discovery time, a resource unit dedicated to discovery message transmissions, simply called a *resource block* (RB) in this paper, could be occupied by at most one device so as to avoid interference from other transmitting devices. In this case, the discovery range would be governed by the transmission power and the noise level. In practice, however, the discovery resources are limited and the discovery time should not be too long, while there can be a large number of devices participating in the discovery process. In this paper, we assume that discovery processes need to be performed with a reasonably short discovery time and limited discovery resources, while a large number of devices participate in a limited area; that is, the device density can be high. If the number of devices using the same RB is large, the discovery range might be limited by the interference from other transmitting devices.

In interference-limited environments, a transmitting device may perform carrier sensing for each RB available within a single discovery period, before transmitting the discovery message, assuming that the interference conditions remain unchanged for a certain amount of time [17–20]. It can then select the RB with the least amount of interference so that the discovery range is maximized.
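A minimal sketch of this least-interference RB selection is given below. The function name, the threshold, and the interference values are illustrative assumptions for this sketch, not part of any D2D specification.

```python
import random

def select_rb(interference_per_rb, threshold):
    """Pick the RB with the least measured interference; return its index,
    or None when no RB satisfies the carrier sensing threshold (in which
    case the discovery period would have to be enlarged)."""
    best = min(range(len(interference_per_rb)),
               key=interference_per_rb.__getitem__)
    return best if interference_per_rb[best] < threshold else None

# Hypothetical interference measurements (linear power units) over 8 RBs.
random.seed(0)
measured = [random.uniform(0.0, 2.0) for _ in range(8)]
print(select_rb(measured, threshold=0.5))  # index of the chosen RB, or None
```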
If no specific carrier sensing threshold is used for discovery, or if the carrier sensing threshold is too high, the discovery range may be determined by the density of transmitting devices per RB, assuming interference-limited environments. If a certain discovery range needs to be maintained, the corresponding carrier sensing threshold can be used so as to satisfy the target signal-to-interference ratio (SIR) at the neighbor devices within the discovery range. If no RB satisfying the carrier sensing threshold can be found due to a high density of transmitting devices, the discovery period needs to be increased so that more RBs can be included in a single period. The discovery period can be adjusted by centralized control of the network or by distributed control of the mobile devices. An increased discovery period means a larger discovery range, assuming still interference-limited environments, but it also results in a longer discovery time.

Similarly, if a lower modulation and coding scheme (MCS) is used for a discovery message, the discovery range can be extended, since a lower SIR is allowed at the receivers. However, a low MCS may increase the size of each RB, resulting in an increased discovery time as well. We can see that the discovery range is related to the device density, the discovery period, and the target SIR of the discovery messages. In order to determine the parameters of a discovery process, it is important to understand the relationships among them.

A number of recent works have focused on various issues in the discovery process. Scenarios and requirements for the discovery process have been discussed in [7–10], and efficient discovery schemes have been proposed in a centralized manner [10] or in a distributed manner [11]. Efficient discovery patterns have been investigated [12], and energy-efficient discovery processes have been intensively addressed [13–15]. However, most of these approaches do not adequately address the relationships among the discovery parameters. Above all, most of them simply assume a low MCS for the discovery message in order to increase the discovery coverage. A low MCS is surely helpful in noise-limited environments where the device density is very low. However, this may not hold with limited discovery resources and a high device density, since a low MCS means a small number of RBs within the given discovery resources, and thus there can be a large number of transmitting (and thus interfering) devices per RB. In this paper, we discuss how these discovery parameters are related and how they can be determined, assuming a high device density and interference-limited environments. The contributions of the paper include the following.

(a) We address the relationships between the discovery parameters.
(b) We describe an optimization framework to determine the discovery parameters.
(c) We define a performance metric called the discovery rate to compare discovery schemes with different discovery parameters.
(d) We show that the MCS should be carefully determined, since a low MCS can be harmful when the device density is high.

The rest of this paper is organized as follows. Section 2 describes the system model considered in this paper, and Section 3 discusses the design and performance metrics for the discovery process. Section 4 proposes an optimization framework to determine the discovery parameters. Simulation results are presented in Section 5, and conclusions are drawn in Section 6.
## 2. System Model

In order to determine the proximity of devices in D2D communication systems, devices transmit discovery messages periodically, and neighbor devices check whether the messages can be received successfully [7–16]. In particular, when the number of devices is large, the network may not be able to take full control of the D2D devices. The system model considered in this paper is based on network-assisted but distributed-control D2D communications. We assume that all devices are synchronized and are provided with discovery resources orthogonal to those for cellular communications. While the discovery parameters can be provided by the network, each device selects its own resource for discovery message transmissions.

Figure 1(a) illustrates a part of a D2D frame structure for discovery resources. In practice, there can be separate resources dedicated to cellular communications or D2D data transmissions, which are not shown here. We assume that the total amount of discovery resources is fixed and that the discovery resources are partitioned into RBs specialized for transmitting discovery messages. Each RB can include one discovery message with a fixed length of $L_{\mathrm{Message}}$ (bits), and its time length, denoted by $T_{RB}$ (seconds), depends on the MCS of the discovery message. While $T_{RB}$ is relatively static, the discovery period may be adjusted at run time if necessary. If the discovery period is chosen as $T_{\mathrm{Period},0}$ (seconds), then the number of RBs in a single discovery period is given as $N_{RB,0}=T_{\mathrm{Period},0}/T_{RB}$. Each device selects an appropriate RB among the $N_{RB,0}$ RBs within a single discovery period and transmits a discovery message periodically over the selected RB. During the remaining $(N_{RB,0}-1)$ RBs, each device tries to receive discovery messages so as to discover its proximate devices.

Figure 1: Resources for discovery message transmissions with different discovery periods. (a) Initial discovery period. (b) Increased discovery period.

When a device starts the discovery process, it may perform carrier sensing for each RB available in a single discovery period and select the RB with the least amount of interference, so as to maximize the discovery range [17–20]. If the interference power measured at the transmitter for the selected RB is lower than a predefined carrier sensing threshold $I_{\mathrm{Transmitter}}$, then the RB can be used for periodic discovery message transmissions, with the assumption that a desired discovery range $R$ can be satisfied. Otherwise, the discovery period needs to be increased so that a greater number of RBs can be included in the enlarged discovery period and the density of transmitters per RB can be reduced. Figure 1(b) illustrates an increased discovery period of $T_{\mathrm{Period},1}$ (seconds), in which $N_{RB,1}$ RBs are included.

There can be several different scenarios for adjusting the discovery period. For example, devices may voluntarily reduce the number of discovery message transmissions upon detecting congestion of the discovery resources, or a BS may adjust the discovery parameters upon the request of devices or using information gathered from devices. Since the locations of devices can change steadily, devices need to reselect RBs after multiple transmissions of discovery messages. To reselect an RB, a device may stop sending its discovery message, wait for a random time if necessary, and restart the discovery process by performing carrier sensing.
## 3. Design and Performance Metrics

### 3.1. Discovery Period

Consider discovery resources with an initial discovery period, as shown in Figure 1(a). The average time length (in seconds) of one RB, denoted by $T_{RB}$, can be written as
$$T_{RB}=\frac{L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})},\tag{1}$$
where $\mathrm{SIR}_{\mathrm{Target}}$ is the target SIR at the worst position inside the discovery range and $C(\mathrm{SIR}_{\mathrm{Target}})$ is the data rate achieved at $\mathrm{SIR}_{\mathrm{Target}}$. $C(\mathrm{SIR}_{\mathrm{Target}})$ can be determined by an MCS chosen to meet a prespecified frame error rate and is roughly predicted by the Shannon capacity
$$C(\mathrm{SIR}_{\mathrm{Target}})=B\log_2\left(1+\mathrm{SIR}_{\mathrm{Target}}\right),\tag{2}$$
where $B$ is the average bandwidth assigned for the discovery process. If there is no specific target discovery range or the device density is very low, the discovery process is performed with a minimum discovery period. The initial (and minimum) discovery period, denoted by $T_{\mathrm{Period},0}$, can be written as
$$T_{\mathrm{Period},0}=N_{RB,0}\,T_{RB}=\frac{N_{RB,0}\,L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})},\tag{3}$$
where $N_{RB,0}$ is the number of RBs included in the initial discovery period $T_{\mathrm{Period},0}$. With a fixed $T_{RB}$, the discovery period can be enlarged by increasing $N_{RB}$ if necessary.

### 3.2. Discovery Range

If carrier sensing is performed with a prespecified carrier sensing level, a minimum distance between two devices transmitting over the same RB is guaranteed [19, 20]. Suppose that the devices transmitting discovery messages through the same RB are placed in a hexagonal pattern, as shown in Figure 2. Let $R_0$ be the discovery range and let $d_0$ be the distance between adjacent devices using the same RB. Consider a transmitter, its six neighboring interferers, and a receiver located at the worst position inside the discovery range. The signal power at the receiver can be written as
$$S_{\mathrm{Receiver}}=K_1 R_0^{-\alpha}\tag{4}$$
and the interference power can be expressed as
$$I_{\mathrm{Receiver}}=K_1\left[\left(d_0-R_0\right)^{-\alpha}+\left(d_0+R_0\right)^{-\alpha}+2\left(\tfrac{3}{4}d_0^2+\left(\tfrac{1}{2}d_0-R_0\right)^2\right)^{-\alpha/2}+2\left(\tfrac{3}{4}d_0^2+\left(\tfrac{1}{2}d_0+R_0\right)^2\right)^{-\alpha/2}\right],\tag{5}$$
where $K_1$ is a constant determined by the system parameters and $\alpha$ is the path-loss exponent. Assuming interference-limited environments, the SIR at the receiver can be written as a function of $R_0/d_0$; that is,
$$\mathrm{SIR}_{\mathrm{Target}}=f\!\left(\frac{R_0}{d_0}\right),\tag{6}$$
where the function $f(x)$ $(x>0)$ is defined as
$$f(x)\equiv x^{-\alpha}\left[\left(1-x\right)^{-\alpha}+\left(1+x\right)^{-\alpha}+2\left(\tfrac{3}{4}+\left(\tfrac{1}{2}-x\right)^2\right)^{-\alpha/2}+2\left(\tfrac{3}{4}+\left(\tfrac{1}{2}+x\right)^2\right)^{-\alpha/2}\right]^{-1}.\tag{7}$$
Carrier sensing may be performed before the transmitter selects an RB, and the interference power measured at the transmitter can be written as
$$I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}}=6K_1 d_0^{-\alpha}=6K_1\left(\frac{R_0}{f^{-1}(\mathrm{SIR}_{\mathrm{Target}})}\right)^{-\alpha},\tag{8}$$
assuming that the inverse function $f^{-1}(\cdot)$ exists over an acceptable range of $\mathrm{SIR}_{\mathrm{Target}}$. If the interference power measured at the transmitter is lower than the carrier sensing threshold $I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}}$ in (8), then the desired discovery range $R_0$ can be satisfied, assuming that the devices are placed in a hexagonal pattern.

Figure 2: Interference with a hexagonal distribution.

Let $D_{\mathrm{Device}}$ be the device density (the number of devices per unit area) of all devices participating in a discovery process, assuming that the devices are uniformly distributed. Each device selects an RB and transmits a discovery message over the selected RB. Assuming that the devices are uniformly allocated over the $N_{RB,0}$ RBs, the density of transmitting devices for each RB, denoted by $D_{\mathrm{Device/RB},0}$, can be written as $D_{\mathrm{Device/RB},0}=D_{\mathrm{Device}}/N_{RB,0}$. In particular, for a hexagonal distribution of the devices allocated on the same RB, it can be expressed as
$$D_{\mathrm{Device/RB},0}=\frac{1}{(\sqrt{3}/2)\,d_0^2}=\frac{D_{\mathrm{Device}}}{N_{RB,0}}\tag{9}$$
and, using (6) and (9), the discovery range $R_0$ can be expressed as follows:
$$R_0=d_0\,f^{-1}(\mathrm{SIR}_{\mathrm{Target}})=\sqrt{\frac{2}{\sqrt{3}\,D_{\mathrm{Device/RB},0}}}\,f^{-1}(\mathrm{SIR}_{\mathrm{Target}})=\sqrt{\frac{2N_{RB,0}}{\sqrt{3}\,D_{\mathrm{Device}}}}\,f^{-1}(\mathrm{SIR}_{\mathrm{Target}}).\tag{10}$$
In practice, the devices using the same RB may not be placed in a hexagonal pattern even with carrier sensing, and thus (10) may not be very accurate.
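To make (7)–(10) concrete, the sketch below evaluates the reconstructed $f(x)$, inverts it numerically by bisection (on $(0,1)$, $f$ decreases from very large values toward zero), and computes the hexagonal-case discovery range. The path-loss exponent and the densities follow the paper's simulation settings, while the 9 dB target SIR is only an example value.

```python
ALPHA = 4  # path-loss exponent, as in the paper's simulations

def f(x: float) -> float:
    """Worst-case SIR for the hexagonal layout, reconstructed eq. (7)."""
    denom = ((1 - x) ** -ALPHA + (1 + x) ** -ALPHA
             + 2 * (0.75 + (0.5 - x) ** 2) ** (-ALPHA / 2)
             + 2 * (0.75 + (0.5 + x) ** 2) ** (-ALPHA / 2))
    return x ** -ALPHA / denom

def f_inv(sir: float, lo: float = 1e-6, hi: float = 1 - 1e-6) -> float:
    """Invert f by bisection; f is decreasing in x on (0, 1)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > sir else (lo, mid)
    return 0.5 * (lo + hi)

# Discovery range from eq. (10) for an example target SIR of 9 dB and
# 0.005 devices/m^2 spread uniformly over 8 RBs.
sir_target = 10 ** (9 / 10)   # 9 dB in linear units
d_per_rb = 0.005 / 8          # D_Device / N_RB,0
x = f_inv(sir_target)         # ratio R0 / d0
r0 = (2 / (3 ** 0.5 * d_per_rb)) ** 0.5 * x
print(f"R0/d0 = {x:.3f}, R0 = {r0:.1f} m")
```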
Although (10) was derived for the hexagonal case, we can conjecture that the distance between two devices is inversely proportional to the square root of the density for other distributions as well. This can be justified by the following argument. Consider a distribution of devices, as shown in Figure 3(a), with density $D_{\mathrm{Device},0}$. Let $d_{ij}$ be the distance between device $i$ and device $j$, and let $N_{\mathrm{Device},i}(r)$ denote the number of devices included in the circle with radius $r$ centered at device $i$. Then, the device density can be expressed as
$$D_{\mathrm{Device},0}=\lim_{r\to\infty}\frac{N_{\mathrm{Device},i}(r)}{\pi r^2}.\tag{11}$$
Consider a $\beta$-times $(\beta>0)$ expanded or shrunk version of the distribution, as shown in Figure 3(b), where the distance between device $i$ and device $j$, denoted by $d_{ij}^{\mathrm{New}}$, is now given as $d_{ij}^{\mathrm{New}}=\beta d_{ij}$. Then, the number of devices included in the circle centered at device $i$ with radius $r$, denoted by $N_{\mathrm{Device},i}^{\mathrm{New}}(r)$, can be written as $N_{\mathrm{Device},i}^{\mathrm{New}}(r)=N_{\mathrm{Device},i}(r/\beta)$. Hence, the new device density, denoted by $D_{\mathrm{Device},1}$, can be expressed as
$$D_{\mathrm{Device},1}=\lim_{r\to\infty}\frac{N_{\mathrm{Device},i}^{\mathrm{New}}(r)}{\pi r^2}=\lim_{r\to\infty}\frac{N_{\mathrm{Device},i}^{\mathrm{New}}(\beta r)}{\pi(\beta r)^2}=\lim_{r\to\infty}\frac{1}{\beta^2}\,\frac{N_{\mathrm{Device},i}(r)}{\pi r^2}=\frac{1}{\beta^2}\,D_{\mathrm{Device},0}\tag{12}$$
and thus
$$\beta=\sqrt{\frac{D_{\mathrm{Device},0}}{D_{\mathrm{Device},1}}},\tag{13}$$
which means that the distance between two devices is inversely proportional to the square root of the density. Let us therefore assume that the discovery range $R_0$ is inversely proportional to the square root of $D_{\mathrm{Device/RB},0}$. Then, it can be expressed as
$$R_0=\frac{g(\mathrm{SIR}_{\mathrm{Target}})}{\sqrt{D_{\mathrm{Device/RB},0}}}=g(\mathrm{SIR}_{\mathrm{Target}})\sqrt{\frac{N_{RB,0}}{D_{\mathrm{Device}}}},\tag{14}$$
where the function $g(\mathrm{SIR}_{\mathrm{Target}})$ can be found by simulations that account for the actual distributions of the transmitting devices on an RB; for the special case of a hexagonal distribution, $g(\mathrm{SIR}_{\mathrm{Target}})$ is given as
$$g(\mathrm{SIR}_{\mathrm{Target}})=\sqrt{\frac{2}{\sqrt{3}}}\,f^{-1}(\mathrm{SIR}_{\mathrm{Target}}).\tag{15}$$

Figure 3: Device density with varying distance between two devices. (a) Low device density. (b) High device density.

### 3.3. Discovery Rate

Let us define the *discovery rate* as the number of discoverable devices per unit time. The discovery rate of a discovery process can determine the number of discoverable devices for a given discovery time and device density. Conversely, it can determine the discovery time for a given discovery range and device density.

Suppose that all devices inside the discovery range $R_0$ are discoverable and that devices outside it are not. The density of receiving devices inside the circle can be written as $D_{\mathrm{Device}}$, since there can be no other transmitting device within the discovery range. The discovery rate per unit distance at distance $r$ from the transmitter, denoted by $\rho_{\mathrm{Distance},0}(r)$, can be written as
$$\rho_{\mathrm{Distance},0}(r)=\begin{cases}\dfrac{2\pi r\,D_{\mathrm{Device}}}{T_{\mathrm{Period},0}} & \text{if } r<R_0,\\[4pt] 0 & \text{otherwise.}\end{cases}\tag{16}$$
Hence, the discovery rate, denoted by $\rho_0$, can be determined by integrating (16) as follows:
$$\rho_0=\int_0^\infty \rho_{\mathrm{Distance},0}(r)\,dr=\int_0^{R_0}\frac{2\pi r\,D_{\mathrm{Device}}}{T_{\mathrm{Period},0}}\,dr=\frac{\pi R_0^2\,D_{\mathrm{Device}}}{T_{\mathrm{Period},0}}=\frac{\pi g^2(\mathrm{SIR}_{\mathrm{Target}})\,N_{RB,0}\,D_{\mathrm{Device}}\,C(\mathrm{SIR}_{\mathrm{Target}})}{N_{RB,0}\,L_{\mathrm{Message}}\,D_{\mathrm{Device}}}=\frac{\pi}{L_{\mathrm{Message}}}\,g^2(\mathrm{SIR}_{\mathrm{Target}})\,C(\mathrm{SIR}_{\mathrm{Target}}).\tag{17}$$
Note that the discovery rate is a function of the target SIR $\mathrm{SIR}_{\mathrm{Target}}$, while it is independent of the other discovery parameters, such as the discovery period $T_{\mathrm{Period},0}$ and the device density $D_{\mathrm{Device}}$.
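The independence claimed in (17) is easy to check numerically. The sketch below computes $\rho$ the long way, via $R_0$ and $T_{\mathrm{Period},0}$, and shows that the result does not change when the number of RBs or the device density changes; the value used for $f^{-1}(\mathrm{SIR}_{\mathrm{Target}})$ is an assumed placeholder.

```python
from math import pi, log2

B = 10e3     # bandwidth in Hz (Table 1)
L_MSG = 100  # message length in bits (Table 1)

def g(x_inv: float) -> float:
    """g(SIR_Target) for the hexagonal case, eq. (15),
    given x_inv = f^{-1}(SIR_Target)."""
    return (2 / 3 ** 0.5) ** 0.5 * x_inv

def discovery_rate(x_inv: float, sir: float, n_rb: int, density: float) -> float:
    """rho_0 of eq. (17), computed the long way via R_0 and T_Period,0."""
    c = B * log2(1 + sir)                     # eq. (2)
    t_period = n_rb * L_MSG / c               # eq. (3)
    r0 = g(x_inv) * (n_rb / density) ** 0.5   # eq. (14)
    return pi * r0 ** 2 * density / t_period  # eq. (17)

# With an assumed f^{-1}(SIR_Target) = 0.4 at 9 dB, the rate is identical
# for very different discovery periods and device densities.
x_inv, sir = 0.4, 10 ** (9 / 10)
print(discovery_rate(x_inv, sir, n_rb=8, density=0.005))
print(discovery_rate(x_inv, sir, n_rb=32, density=0.001))  # same value
```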
If there is no specific target discovery range or the device density is very low, the minimum discovery period can be used. On the other hand, if the device density becomes too high for a given target discovery range, then the discovery period may need to be increased. If the device density is hard to estimate, the carrier sensing process may be used to determine whether the discovery period needs to be increased. If the interference measured at the transmitter is above a predefined threshold (e.g., $I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}}$ in (8)) for all RBs, then a greater number of RBs need to be assigned so that a smaller number of devices are assigned to each RB. If the number of RBs within the discovery period is increased to $N_{RB,1}$ $(>N_{RB,0})$, then the new discovery period can be written as
$$T_{\mathrm{Period},1}=\frac{N_{RB,1}\,L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})}\tag{18}$$
and the corresponding discovery range can be represented as follows:
$$R_1=g(\mathrm{SIR}_{\mathrm{Target}})\sqrt{\frac{N_{RB,1}}{D_{\mathrm{Device}}}}.\tag{19}$$
Note that the discovery range can be enlarged by increasing the discovery period, assuming still interference-limited environments. The discovery rate per unit distance at distance $r$ from the transmitter with the increased discovery period, denoted by $\rho_{\mathrm{Distance},1}(r)$, can be written as
$$\rho_{\mathrm{Distance},1}(r)=\begin{cases}\dfrac{2\pi r\,D_{\mathrm{Device}}}{T_{\mathrm{Period},1}} & \text{if } r<R_1,\\[4pt] 0 & \text{otherwise,}\end{cases}\tag{20}$$
and the discovery rate with the increased discovery period, denoted by $\rho_1$, can be expressed as follows:
$$\rho_1=\int_0^\infty\rho_{\mathrm{Distance},1}(r)\,dr=\int_0^{R_1}\frac{2\pi r\,D_{\mathrm{Device}}}{T_{\mathrm{Period},1}}\,dr=\frac{\pi R_1^2\,D_{\mathrm{Device}}}{T_{\mathrm{Period},1}}=\frac{\pi}{L_{\mathrm{Message}}}\,g^2(\mathrm{SIR}_{\mathrm{Target}})\,C(\mathrm{SIR}_{\mathrm{Target}})=\rho_0.\tag{21}$$
The discovery rate is thus independent of the discovery period. If the discovery period increases with a fixed device density, the device density per RB decreases and the discovery range can be extended. This results in an increased number of discoverable devices, but it also takes more time to discover neighbors due to the increased discovery period. Similarly, the discovery rate is independent of the device density. If the discovery period is fixed, a high device density can result in a reduced discovery range. However, the discovery rate remains unchanged, since the device density within the reduced discovery range is increased and the number of discoverable devices does not change.
## 4. Optimization Framework

Since the discovery rate $\rho$ is independent of the other discovery parameters, including the discovery range $R$, the device density $D_{\mathrm{Device}}$, and the discovery period $T_{\mathrm{Period}}$, it is important to maximize the discovery rate first. For a given bandwidth and message length, the discovery rate depends on the target SIR and on the distribution of the devices allocated on the same RB. Hence, we first need to find the target SIR that maximizes the discovery rate:
$$\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}}=\arg\max_{\mathrm{SIR}_{\mathrm{Target}}}\rho=\arg\max_{\mathrm{SIR}_{\mathrm{Target}}}g^2(\mathrm{SIR}_{\mathrm{Target}})\,C(\mathrm{SIR}_{\mathrm{Target}}).\tag{22}$$
The target SIR found by (22) determines the MCS of a discovery message and the length (in seconds) of an RB. For example, if a hexagonal distribution is assumed for the devices using the same RB and (2) is used to calculate the data rate of the discovery message, then the optimal target SIR is found as follows:
$$\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}}=\arg\max_{\mathrm{SIR}_{\mathrm{Target}}}\left[f^{-1}(\mathrm{SIR}_{\mathrm{Target}})\right]^2\log_2\left(1+\mathrm{SIR}_{\mathrm{Target}}\right)\approx 9\ \mathrm{dB}.\tag{23}$$
The other parameters can be determined subsequently. For example, if the target discovery range $R$ and the device density $D_{\mathrm{Device}}$ are given, then the number of RBs within the discovery period can be determined as
$$N_{RB}=\max\left(N_{RB,0},\ \frac{D_{\mathrm{Device}}\,R^2}{g^2(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})}\right)\tag{24}$$
and the discovery period can be obtained as
$$T_{\mathrm{Period}}=N_{RB}\,T_{RB}=\max\left(T_{\mathrm{Period},0},\ \frac{L_{\mathrm{Message}}\,D_{\mathrm{Device}}\,R^2}{g^2(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})\,C(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})}\right).\tag{25}$$
If the target discovery range $R$ is given but the device density $D_{\mathrm{Device}}$ is unknown, carrier sensing can be performed with the corresponding sensing threshold. If a device is unable to find an RB satisfying the carrier sensing threshold, the discovery period needs to be increased. If the target discovery range $R$ is not specified, the discovery range is simply determined by the device density.

The proposed procedure for discovery parameter determination is summarized in Figure 4. In the figure, the solid rectangular boxes represent the determination of discovery parameters using (1), (8), (19), and (22). First, the target SIR ($\mathrm{SIR}_{\mathrm{Target}}$) at the receivers is optimized by (22), which in turn determines the MCS of a discovery message and the length of an RB ($T_{RB}$) through (1). Note that these are static system parameters, which are hardly modified at run time. If there is no target discovery range, there are no more discovery parameters to determine, and the discovery range is simply determined by the device density. If the target discovery range is given and the device density can be estimated by the BS, the discovery period can be determined by adjusting the number of RBs in a single discovery period. However, it may not be easy to estimate the device density. In that case, the BS simply provides a carrier sensing threshold, which guarantees a minimum distance between two adjacent devices allocated on the same RB.

Figure 4: Optimization framework for the discovery process.

In practice, the discovery range is not clearly defined, due to fading, shadowing, and irregular distributions of transmitting devices. Hence, some simulations might be required to obtain more accurate values of the discovery parameters under a precise definition of the discovery range.
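A self-contained numerical sketch of (23)–(25) follows: it grid-searches the target SIR that maximizes $[f^{-1}(\mathrm{SIR})]^2\log_2(1+\mathrm{SIR})$, landing near the 9 dB reported in the paper for $\alpha=4$, and then derives $N_{RB}$ and $T_{\mathrm{Period}}$ for an assumed target range and density.

```python
from math import log2, ceil

ALPHA = 4  # path-loss exponent, as in the paper's simulations

def f(x):
    """Worst-case SIR for the hexagonal layout, reconstructed eq. (7)."""
    return x ** -ALPHA / ((1 - x) ** -ALPHA + (1 + x) ** -ALPHA
                          + 2 * (0.75 + (0.5 - x) ** 2) ** (-ALPHA / 2)
                          + 2 * (0.75 + (0.5 + x) ** 2) ** (-ALPHA / 2))

def f_inv(sir, lo=1e-6, hi=1 - 1e-6):
    """Invert f by bisection; f is decreasing in x on (0, 1)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > sir else (lo, mid)
    return 0.5 * (lo + hi)

def rate_metric(sir_db):
    """Objective of eq. (23): [f^{-1}(SIR)]^2 * log2(1 + SIR)."""
    sir = 10 ** (sir_db / 10)
    return f_inv(sir) ** 2 * log2(1 + sir)

# Coarse grid search over the target SIR from -10 dB to 30 dB.
best_db = max((db / 10 for db in range(-100, 301)), key=rate_metric)
print(f"optimal target SIR ~ {best_db:.1f} dB")  # close to 9 dB

def period(r, density, n_rb0=8, b=10e3, l_msg=100, sir_db=9.0):
    """Eqs. (24)-(25): discovery period for an assumed target range r (m)
    and device density (devices/m^2); other defaults follow Table 1."""
    sir = 10 ** (sir_db / 10)
    g_opt = (2 / 3 ** 0.5) ** 0.5 * f_inv(sir)               # eq. (15)
    n_rb = max(n_rb0, ceil(density * r ** 2 / g_opt ** 2))   # eq. (24)
    return n_rb * l_msg / (b * log2(1 + sir))                # eq. (25)

print(f"T_Period = {period(r=50, density=0.005):.3f} s")
```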
Even then, the procedure described in Figure 4 remains applicable for determining the discovery parameters, with the simulations supplying the required values.

Notice that the optimal SIR obtained in (23) is not very low, even when a large (but still interference-limited) discovery range is required. Although a low target SIR helps a message survive severe interference, a low MCS increases the average time length of one RB ($T_{RB}$) and decreases the number of RBs ($N_{RB}$) in a single discovery period. Hence, the density of transmitting devices per RB ($D_{\mathrm{Device/RB}}$) is increased, and the discovery range might even be reduced due to the increased interference. A low MCS does not imply a large discovery range if the discovery process is performed in a heavily populated area.

## 5. Simulation Results

In this section, we obtain discovery rates by Monte Carlo simulations, in which devices are randomly distributed over a wide square area of 1000 m × 1000 m and interference is generated with a wrap-around pattern. In the simulations, the length of an RB is determined by (2) for a given target SIR, and resource allocation for each device is performed sequentially based on carrier sensing results. Each device selects the RB with the least amount of interference and transmits a discovery message if the given carrier sensing threshold is satisfied. The initial (and minimum) number of discovery RBs in the discovery period is 8, and the number of RBs can be increased if there is no RB satisfying the carrier sensing threshold. Some simulation parameters were chosen so as to produce substantially differently shaped curves with different discovery ranges. Rayleigh fading is used for the channels, but shadowing is not applied, for simplicity of the simulations. The detailed simulation parameters are summarized in Table 1.

Table 1: Simulation parameters.

| Parameter | Value |
| --- | --- |
| Simulation region | 1000 m × 1000 m (wrap-around) |
| Device density ($D_{\mathrm{Device}}$) | Figures 5 and 8: 0.005/m²; Figure 6: 0.001~0.004/m²; Figure 7: 0.002/m² |
| Path loss exponent ($\alpha$) | 4 |
| Shadowing | Not applied |
| Fading | Flat Rayleigh fading |
| Bandwidth ($B$) | 10 kHz |
| Message length ($L_{\mathrm{Message}}$) | 100 bits |
| Initial number of RBs in discovery period ($N_{RB,0}$) | 8 |
| Target SIR ($\mathrm{SIR}_{\mathrm{Target}}$) | Figures 5 and 6: 0 dB; Figure 7: −10~20 dB; Figure 8: 7 dB |

Figure 5(a) shows the discovery rates per unit distance at distance $r$ from the transmitting device for four different sensing thresholds ($\Gamma_1$, $\Gamma_2$, $\Gamma_3$, and $\Gamma_4$), with a target SIR of 0 dB. The sensing thresholds were chosen to produce substantially differently shaped curves with different discovery ranges. While $\Gamma_1$ is a high value that allows large interference, $\Gamma_4$ is low so as to maintain a large discovery range at the expense of an increased discovery time. The area under each curve in Figure 5(a) is the discovery rate, which is redrawn as a bar graph in Figure 5(b) for easy comparison. The shapes of the four curves in Figure 5(a) are quite different, indicating different discovery ranges. However, their areas, which represent the discovery rates, are very similar, as shown in Figure 5(b).

Figure 5: Discovery rates with varying sensing thresholds. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 6 illustrates the discovery rates with varying device densities ($D_{\mathrm{Device}}$ = 0.001, 0.002, 0.003, and 0.004/m²). The device densities were chosen to produce substantially differently shaped curves. The target SIR is set to 0 dB, and no specific carrier sensing threshold is used.
If there is no specific carrier sensing threshold, a large discovery range is obtained at a low device density, and the discovery range decreases as the device density increases. From the figures, we can see that the discovery rate is also independent of the device density. While very different values of the discovery range can be obtained by changing other discovery parameters, the discovery rates do not vary significantly. Hence, the discovery rate can be used as a performance metric for comparing discovery schemes with different discovery parameters. We can say that similar discovery performances are obtained among the four schemes in Figure 5 (or among the four schemes in Figure 6), although they have quite different discovery ranges.

Figure 6: Discovery rates with varying device densities. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 7(a) shows the discovery rates per unit distance for several different target SIR values (−10 dB, 0 dB, 10 dB, and 20 dB) at the receivers, when there is no specific sensing threshold. As expected, the discovery range depends on the corresponding target SIR, and a low target SIR achieves a longer discovery range. However, unlike in Figures 5(a) and 6(a), the curves in Figure 7(a) have substantially different areas. Figure 7(b) presents the discovery rates for target SIR values from −10 dB to 20 dB. In the figure, the discovery rate is maximized at a target SIR of 7 dB, which is close to the theoretical optimal target SIR (9 dB) found by (23) under the assumption of a hexagonal distribution of the devices allocated on the same RB. Discovery schemes with the optimal target SIR provide the best performance, and the other parameters can be determined subsequently.

Figure 7: Discovery rates according to target SIR values. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 8 shows the results for a target SIR of 7 dB (the optimal target SIR found from Figure 7(b)). In order to obtain discovery ranges similar to those of Figure 5, the threshold values $\Gamma_i/\mathrm{SIR}_{\mathrm{Target}}$ $(i=1,2,3,4)$ are used. The other simulation parameters are the same as those for Figure 5. Note that the discovery rates are considerably improved compared with those in Figure 5, where 0 dB was used for the target SIR. If we want to obtain a reasonably long discovery range at a high device density, we should use a long discovery period or a low carrier sensing threshold together with the optimal target SIR, instead of using a low target SIR.

Figure 8: Discovery rates with the optimal target SIR. (a) Discovery rates per unit distance. (b) Discovery rates.

When the device density is very low, we can use a low MCS to maximize the discovery range. However, for discovery processes in an urban area, where the device density can be very high, it is better not to use too low an MCS, since it may eventually reduce the discovery range under a discovery time limit. The discovery rate (the number of discoverable devices per unit time) of a discovery process is analogous to the received data rate (the amount of successfully received data per unit time) of data transmissions. A very low MCS is not desirable, since only a small amount of data can be transmitted per unit time, and a very high MCS is also not recommended, since the receivers become too vulnerable to interference from other transmitters. An appropriate MCS needs to be used to maximize the system performance for data transmissions. The same is true for device discovery, and the discovery rate can be maximized with an appropriate MCS.
## 6. Conclusion

The discovery rate, defined as the number of discoverable devices per unit time, does not depend on the device density, the discovery range, or the discovery period, assuming interference-limited environments. Hence, it can be used as a performance metric for comparing discovery methods with different discovery parameters. While the discovery rate is independent of many other discovery parameters, its value can vary significantly with the target SIR at the receivers. Hence, the MCS of a discovery message should be optimized first.

While a low MCS for the discovery message is considered in many previous works, this paper shows that a low MCS can be harmful when the device density is high. The number of discovery RBs is reduced with a lower MCS for given discovery resources, and a greater number of devices may be assigned to each RB. This increases the interference from other transmitting devices, and the discovery range might eventually be reduced. The discovery rate of the discovery process is analogous to the received data rate in data transmissions. While a very high MCS makes receivers vulnerable to interference, a very low MCS is also not desirable, since only a small amount of data can be transferred per unit time.

The analysis given in this paper may be extended by considering a clearer definition of the discovery range, more realistic channel models with fading and shadowing, more practical distributions of transmitting devices, and more complicated discovery processes, including multihop discovery, device cooperation, and intelligent network assistance. Also, half-duplexing and adjacent-channel-interference problems need to be considered in the future for orthogonal frequency division multiple access (OFDMA) systems. A slightly lower value of the target SIR might be required under severe adjacent channel interference, since a more robust MCS may be beneficial for mitigating the interference from other OFDMA subchannels.

---

*Source: 102049-2015-04-07.xml*
102049-2015-04-07_102049-2015-04-07.md
40,323
Optimization Framework and Parameter Determination for Proximity-Based Device Discovery in D2D Communication Systems
Minjoong Rim; Seungyeob Chae; Chung G. Kang
International Journal of Antennas and Propagation (2015)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/102049
102049-2015-04-07.xml
--- ## Abstract One of the most important processes in device-to-device communications of cellular devices is that of discovery, which determines the proximity of devices. When a discovery process is performed, there are several parameters to determine, including the discovery range, the discovery period, and the modulation and coding scheme of the discovery messages. In this paper, we address the relationships between these parameters and describe an optimization framework to determine them. In the proposed procedure, it is important to first optimize the discovery rate, which is defined as the number of discoverable devices per unit time. Once the discovery rate is maximized, the discovery period can be determined accordingly based on the device density and the target discovery range. Since the discovery rate is not affected by many of discovery parameters such as the discovery range, the device density, and the discovery period, it can be used as a performance metric for comparing discovery schemes with different discovery ranges or different discovery periods. --- ## Body ## 1. Introduction Cellular-assisted device-to-device (D2D) communications refer to direct communications among mobile devices without transferring data through a base station (BS) [1–6]. One of the most important processes in D2D communications is that of discovery, which finds devices located nearby [7–16]. In order to determine the proximity of devices, each device transmits a discovery message periodically, and other devices check if the message can be received successfully.Consider peer discovery applications such as friend finding in a densely populated urban area. If we have unlimited discovery resources or allow an unlimited discovery time, a resource unit dedicated for discovery message transmissions, simply calledresource block (RB) in this paper, can be occupied by at most one device so as to avoid the interference from other transmitting devices. In this case, the discovery range can be governed by the transmission power and the noise level. However, in practice, the discovery resources are limited and the discovery time should not be too long, while there can be a large number of devices participating in the discovery process. In this paper, we assume that discovery processes need to be performed with a reasonably short discovery time and limited discovery resources, while a large number of devices participate in a limited area, that is, the device density can be high. If the number of devices using the same RB is large, the discovery range might be limited by the interference from other transmitting devices.In interference-limited environments, a transmitting device may perform carrier sensing for each RB available within a single discovery period, before transmitting the discovery message, assuming that the interference conditions remain unchanged for a certain amount of time [17–20]. Then, it can select the RB with the least amount of interference so that the discovery range can be maximized. If there is no specific carrier sensing threshold used for a discovery or the carrier sensing threshold is too high, the discovery range may be determined by the density of transmitting devices per RB, assuming interference-limited environments. If a certain discovery range needs to be maintained, then the corresponding carrier sensing threshold can be used so as to satisfy the target signal-to-interference ratio (SIR) of the neighbor devices within the discovery range. 
If no RB satisfying the carrier sensing threshold can be found due to a high density of transmitting devices, the discovery period needs to be increased so that more RBs can be included in a single period. The discovery period can be adjusted by the centralized control of the network or by the distributed control of mobile devices. An increased discovery period means a large discovery range assuming still interference-limited environments, but it also results in a longer discovery time.Similarly, if a lower modulation and coding scheme (MCS) is used for a discovery message, the discovery range can be extended, since a lower SIR is allowed at the receivers. However, a low MCS may increase the size of each RB, resulting in an increased discovery time as well. We can see that the discovery range is related to the device density, the discovery period, and the target SIR of the discovery messages. In order to determine discovery parameters for a discovery process, it is important to understand the relationships among them.A number of recent works have focused on various issues on the discovery process. Scenarios and requirements for the discovery process have been discussed in [7–10] and some efficient discovery schemes have been proposed in a centralized manner [10] or in a distributed manner [11]. Efficient discovery patterns have been investigated [12] and energy efficiency discovery process has been intensely addressed [13–15]. However, most of these approaches do not adequately address the relationships among these discovery parameters. Above all, most of them simply assume a low MCS for a discovery message to increase the discovery coverage. A low MCS is surely helpful for noise-limited environments where the device density is very low. However, this may not be true with limited discovery resources and a high device density, since a low MCS means a small number of RBs with given discovery resources and thus there can be a large number of transmitting (and thus interfering) devices per RB. In this paper, we discuss how these discovery parameters are related and how they can be determined, assuming a high device density and interference-limited environments. The contribution of the paper includes the following.(a) We address the relationships between the discovery parameters.(b) We describe an optimization framework to determine the discovery parameters.(c) We define a performance metric called discovery rate to compare discovery schemes with different discovery parameters.(d) We show that the MCS should be carefully determined since a low MCS can be harmful when the device density is high.The rest of this paper is organized as follows. Section2 describes the system model considered in this paper and Section 3 discusses the design and performance metrics for the discovery process. Section 4 proposes an optimization framework to determine the discovery parameters. Simulation results are presented in Section 5 and conclusions are drawn in Section 6. ## 2. System Model In order to determine the proximity of devices in D2D communication systems, devices transmit discovery messages periodically so that neighbor devices check if the messages can be received successfully [7–16]. In particular, when the number of devices is large, the network may not be able to take full control of D2D devices. The system model considered in this paper is based on network-assisted but distributed-control D2D communications. 
We assume that devices are all synchronized and provided with discovery resources orthogonal to those for cellular communications. While discovery parameters can be provided by the network, each device selects its own resource for discovery message transmissions.

Figure 1(a) illustrates a part of a D2D frame structure for discovery resources. In practice, there can be separate resources dedicated to cellular communications or D2D data transmissions, which are not shown here. We assume that the total amount of discovery resources is fixed and that discovery resources are partitioned into RBs dedicated to transmitting discovery messages. Each RB can include one discovery message with a fixed length of \(L_{\mathrm{Message}}\) (bits), and its time length, denoted by \(T_{\mathrm{RB}}\) (seconds), depends on the MCS of a discovery message. While \(T_{\mathrm{RB}}\) is relatively static, the discovery period may be adjusted at run time if necessary. If the discovery period is chosen as \(T_{\mathrm{Period},0}\) (seconds), then the number of RBs in a single discovery period is given as \(N_{\mathrm{RB},0} = T_{\mathrm{Period},0}/T_{\mathrm{RB}}\). Each device selects an appropriate RB among the \(N_{\mathrm{RB},0}\) RBs within a single discovery period and transmits a discovery message periodically over the selected RB. During the remaining \(N_{\mathrm{RB},0}-1\) RBs, each device tries to receive discovery messages so as to discover its proximate devices.

Figure 1: Resources for discovery message transmissions with different discovery periods. (a) Initial discovery period. (b) Increased discovery period.

When a device starts the discovery process, it may perform carrier sensing for each RB available in a single discovery period and select the RB with the least amount of interference to maximize the discovery range [17–20]. If the measured interference power at a transmitter for the selected RB is lower than a predefined carrier sensing threshold \(I_{\mathrm{Transmitter}}\), then the RB can be used for periodic discovery message transmissions with the assumption that a desired discovery range \(R\) can be satisfied. Otherwise, the discovery period needs to be increased so that a greater number of RBs can be included in the enlarged discovery period and the density of transmitters per RB can be reduced. Figure 1(b) illustrates an increased discovery period of \(T_{\mathrm{Period},1}\) (seconds), in which \(N_{\mathrm{RB},1}\) RBs are included.

There can be several different scenarios for adjusting the discovery period. For example, devices may voluntarily reduce the number of discovery message transmissions by detecting congestion of the discovery resources, or a BS may adjust the discovery parameters upon the request of devices or using information gathered from devices. Since the locations of devices change continually, devices need to reselect RBs after multiple transmissions of discovery messages. To reselect an RB, a device may stop sending its discovery message, wait for a random time if necessary, and restart the discovery process by performing carrier sensing.

## 3. Design and Performance Metrics

### 3.1. Discovery Period

Consider discovery resources with the initial discovery period shown in Figure 1(a). The average time length (in seconds) of one RB, denoted by \(T_{\mathrm{RB}}\), can be written as

(1) \( T_{\mathrm{RB}} = \frac{L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})}, \)

where \(\mathrm{SIR}_{\mathrm{Target}}\) is the target SIR at the worst position inside the discovery range and \(C(\mathrm{SIR}_{\mathrm{Target}})\) is the data rate achieved with \(\mathrm{SIR}_{\mathrm{Target}}\). \(C(\mathrm{SIR}_{\mathrm{Target}})\) can be determined by an MCS to meet a prespecified frame error rate and roughly predicted by the Shannon capacity as

(2) \( C(\mathrm{SIR}_{\mathrm{Target}}) = B \log_2(1 + \mathrm{SIR}_{\mathrm{Target}}), \)

where \(B\) is the average bandwidth assigned for the discovery process.
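To make (1) and (2) concrete, the following is a minimal numerical sketch (ours, not code from the paper): it shows how the RB length grows as the target SIR, and hence the MCS, is lowered. The bandwidth and message length follow the simulation parameters given later in the paper; the SIR values are assumptions for illustration.

```python
# Minimal sketch of Eqs. (1)-(2): the RB length grows as the target SIR
# (and hence the MCS) is lowered. B and L_MESSAGE follow the paper's
# simulation parameters; the SIR values below are assumed.
import math

B = 10e3           # bandwidth for the discovery process (Hz)
L_MESSAGE = 100.0  # discovery message length (bits)

def capacity(sir_target_db: float) -> float:
    """Eq. (2): Shannon-capacity prediction of the data rate C(SIR_Target)."""
    sir = 10.0 ** (sir_target_db / 10.0)
    return B * math.log2(1.0 + sir)

def rb_length(sir_target_db: float) -> float:
    """Eq. (1): T_RB = L_Message / C(SIR_Target)."""
    return L_MESSAGE / capacity(sir_target_db)

for sir_db in (-10.0, 0.0, 10.0):
    print(f"SIR_Target = {sir_db:5.1f} dB -> T_RB = {rb_length(sir_db)*1e3:6.2f} ms")
```

Running this gives roughly 72.7 ms, 10 ms, and 2.9 ms per RB for target SIRs of −10, 0, and 10 dB, which illustrates the trade-off discussed in the introduction: a more robust (lower) MCS lengthens each RB and so reduces the number of RBs that fit into given discovery resources.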
If there is no specific target discovery range or the device density is very low, the discovery process is performed with the minimum discovery period. The initial (and minimum) discovery period, denoted by \(T_{\mathrm{Period},0}\), can be written as

(3) \( T_{\mathrm{Period},0} = N_{\mathrm{RB},0} T_{\mathrm{RB}} = \frac{N_{\mathrm{RB},0} L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})}, \)

where \(N_{\mathrm{RB},0}\) is the number of RBs included in the initial discovery period \(T_{\mathrm{Period},0}\). With a fixed \(T_{\mathrm{RB}}\), the discovery period can be enlarged by increasing \(N_{\mathrm{RB}}\) if necessary.

### 3.2. Discovery Range

If carrier sensing is performed with a prespecified carrier sensing level, a minimum distance between two transmitting devices over the same RB is guaranteed [19, 20]. Suppose that devices transmitting discovery messages through the same RB are placed in a hexagonal pattern, as shown in Figure 2. Let \(R_0\) be the discovery range and let \(d_0\) be the distance between adjacent devices using the same RB. Consider a transmitter, six neighboring interferers, and a receiver located at the worst position inside the discovery range. The signal power at the receiver can be written as

(4) \( S_{\mathrm{Receiver}} = K_1 R_0^{-\alpha} \)

and the interference power can be expressed as

(5) \( I_{\mathrm{Receiver}} = K_1 \Big[ (d_0 - R_0)^{-\alpha} + (d_0 + R_0)^{-\alpha} + 2\Big( \tfrac{3}{4} d_0^2 + \big( \tfrac{1}{2} d_0 + R_0 \big)^2 \Big)^{-\alpha/2} + 2\Big( \tfrac{3}{4} d_0^2 + \big( \tfrac{1}{2} d_0 - R_0 \big)^2 \Big)^{-\alpha/2} \Big], \)

where \(K_1\) is a constant determined by the system parameters and \(\alpha\) is the path-loss exponent. Assuming interference-limited environments, the SIR at the receiver can be written as a function of \(R_0/d_0\); that is,

(6) \( \mathrm{SIR}_{\mathrm{Target}} = f\Big( \frac{R_0}{d_0} \Big), \)

where the function \(f(x)\) \((0 < x < 1)\) is defined as

(7) \( f(x) \equiv x^{-\alpha} \Big[ (1-x)^{-\alpha} + (1+x)^{-\alpha} + 2\Big( \tfrac{3}{4} + \big( \tfrac{1}{2} + x \big)^2 \Big)^{-\alpha/2} + 2\Big( \tfrac{3}{4} + \big( \tfrac{1}{2} - x \big)^2 \Big)^{-\alpha/2} \Big]^{-1}. \)

Carrier sensing may be performed before the transmitter selects an RB, and the interference power measured at the transmitter can be written as

(8) \( I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}} = 6 K_1 d_0^{-\alpha} = 6 K_1 \Big( \frac{R_0}{f^{-1}(\mathrm{SIR}_{\mathrm{Target}})} \Big)^{-\alpha}, \)

assuming that the inverse function \(f^{-1}(\cdot)\) exists over an acceptable range of \(\mathrm{SIR}_{\mathrm{Target}}\). If the measured interference power at the transmitter is lower than the carrier sensing threshold \(I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}}\) in (8), then the desired discovery range \(R_0\) can be satisfied, assuming that devices are placed in a hexagonal pattern.

Figure 2: Interference with a hexagonal distribution.

Let \(D_{\mathrm{Device}}\) be the device density (the number of devices per unit area) for all devices participating in a discovery process, assuming that devices are uniformly distributed. Each device selects an RB and transmits a discovery message over the selected RB. Assuming that devices are uniformly allocated over the \(N_{\mathrm{RB},0}\) RBs, the density of transmitting devices for each RB, denoted by \(D_{\mathrm{Device/RB},0}\), can be written as \(D_{\mathrm{Device/RB},0} = D_{\mathrm{Device}}/N_{\mathrm{RB},0}\). In particular, for a hexagonal distribution of devices allocated on the same RB, it can be expressed as

(9) \( D_{\mathrm{Device/RB},0} = \frac{1}{(\sqrt{3}/2)\, d_0^2} = \frac{D_{\mathrm{Device}}}{N_{\mathrm{RB},0}} \)

and, using (6) and (9), the discovery range \(R_0\) can be expressed as follows:

(10) \( R_0 = d_0 f^{-1}(\mathrm{SIR}_{\mathrm{Target}}) = \sqrt{\frac{2}{\sqrt{3}\, D_{\mathrm{Device/RB},0}}}\, f^{-1}(\mathrm{SIR}_{\mathrm{Target}}) = \sqrt{\frac{2 N_{\mathrm{RB},0}}{\sqrt{3}\, D_{\mathrm{Device}}}}\, f^{-1}(\mathrm{SIR}_{\mathrm{Target}}). \)

In practice, devices using the same RB may not be placed in a hexagonal pattern even with carrier sensing, and thus (10) may not be very accurate. However, we can conjecture that the distance between two devices is inversely proportional to the square root of the density even for other distributions. This can be justified by the following argument. Consider a distribution of devices as shown in Figure 3(a) with density \(D_{\mathrm{Device},0}\). Let \(d_{ij}\) be the distance between device \(i\) and device \(j\), and let \(N_{\mathrm{Device},i}(r)\) denote the number of devices included in the circle with radius \(r\) centered at device \(i\).
Then, the device density can be expressed as

(11) \( D_{\mathrm{Device},0} = \lim_{r \to \infty} \frac{N_{\mathrm{Device},i}(r)}{\pi r^2}. \)

Consider a \(\beta\)-times \((\beta > 0)\) expanded or shrunk version of the distribution, as shown in Figure 3(b), where the distance between device \(i\) and device \(j\), denoted by \(d_{ij}^{\mathrm{New}}\), is now given as \(d_{ij}^{\mathrm{New}} = \beta d_{ij}\). Then, the number of devices included in the circle centered at device \(i\) with radius \(r\), denoted by \(N_{\mathrm{Device},i}^{\mathrm{New}}(r)\), can be written as \(N_{\mathrm{Device},i}^{\mathrm{New}}(r) = N_{\mathrm{Device},i}(r/\beta)\). Hence, the new device density, denoted by \(D_{\mathrm{Device},1}\), can be expressed as

(12) \( D_{\mathrm{Device},1} = \lim_{r \to \infty} \frac{N_{\mathrm{Device},i}^{\mathrm{New}}(r)}{\pi r^2} = \lim_{r \to \infty} \frac{N_{\mathrm{Device},i}^{\mathrm{New}}(\beta r)}{\pi (\beta r)^2} = \lim_{r \to \infty} \frac{1}{\beta^2} \frac{N_{\mathrm{Device},i}(r)}{\pi r^2} = \frac{1}{\beta^2} D_{\mathrm{Device},0} \)

and thus

(13) \( \beta = \sqrt{\frac{D_{\mathrm{Device},0}}{D_{\mathrm{Device},1}}}, \)

which means that the distance between two devices is inversely proportional to the square root of the density. Let us therefore assume that the discovery range \(R_0\) is inversely proportional to \(\sqrt{D_{\mathrm{Device/RB},0}}\). Then, it can be expressed as

(14) \( R_0 = \frac{g(\mathrm{SIR}_{\mathrm{Target}})}{\sqrt{D_{\mathrm{Device/RB},0}}} = g(\mathrm{SIR}_{\mathrm{Target}}) \sqrt{\frac{N_{\mathrm{RB},0}}{D_{\mathrm{Device}}}}, \)

where the function \(g(\mathrm{SIR}_{\mathrm{Target}})\) can be found by simulations to account for the actual distributions of transmitting devices on an RB, or, for the special case of a hexagonal distribution, \(g(\mathrm{SIR}_{\mathrm{Target}})\) is given as

(15) \( g(\mathrm{SIR}_{\mathrm{Target}}) = \sqrt{\frac{2}{\sqrt{3}}}\, f^{-1}(\mathrm{SIR}_{\mathrm{Target}}). \)

Figure 3: Device density with varying distance between two devices. (a) Low device density. (b) High device density.

### 3.3. Discovery Rate

Let us define the discovery rate as the number of discoverable devices per unit time. The discovery rate of a discovery process can determine the number of discoverable devices for a given discovery time and device density. Alternatively, it can determine the discovery time for a given discovery range and device density.

Suppose that all devices inside the discovery range \(R_0\) are discoverable and other devices are not discoverable. The density of receiving devices inside the circle can be written as \(D_{\mathrm{Device}}\), since there can be no other transmitting device within the discovery range. The discovery rate per unit distance at distance \(r\) from the transmitter, denoted by \(\rho_{\mathrm{Distance},0}(r)\), can be written as

(16) \( \rho_{\mathrm{Distance},0}(r) = \begin{cases} \dfrac{2 \pi r D_{\mathrm{Device}}}{T_{\mathrm{Period},0}} & \text{if } r < R_0 \\ 0 & \text{otherwise}. \end{cases} \)

Hence, the discovery rate, denoted by \(\rho_0\), can be determined by integrating (16) as follows:

(17) \( \rho_0 = \int_0^{\infty} \rho_{\mathrm{Distance},0}(r)\, dr = \int_0^{R_0} \frac{2 \pi r D_{\mathrm{Device}}}{T_{\mathrm{Period},0}}\, dr = \frac{\pi R_0^2 D_{\mathrm{Device}}}{T_{\mathrm{Period},0}} = \frac{\pi g^2(\mathrm{SIR}_{\mathrm{Target}}) (N_{\mathrm{RB},0}/D_{\mathrm{Device}}) D_{\mathrm{Device}} C(\mathrm{SIR}_{\mathrm{Target}})}{N_{\mathrm{RB},0} L_{\mathrm{Message}}} = \frac{\pi}{L_{\mathrm{Message}}}\, g^2(\mathrm{SIR}_{\mathrm{Target}})\, C(\mathrm{SIR}_{\mathrm{Target}}). \)

Note that the discovery rate is a function of the target SIR \(\mathrm{SIR}_{\mathrm{Target}}\), while it is independent of the other discovery parameters such as the discovery period \(T_{\mathrm{Period},0}\) and the device density \(D_{\mathrm{Device}}\).

If there is no specific target discovery range or the device density is very low, the minimum discovery period can be used. On the other hand, if the device density becomes too high for a given target discovery range, then the discovery period may need to be increased. If the device density is hard to estimate, the carrier sensing process may be used to determine whether the discovery period needs to be increased. If the interference measured at the transmitter is above a predefined threshold (e.g., \(I_{\mathrm{Transmitter}}^{\mathrm{Hexagonal}}\) in (8)) for all RBs, then a greater number of RBs needs to be assigned so that a smaller number of devices is assigned to each RB.
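The following is a minimal numerical sketch (ours, not the authors' code) of the hexagonal model above: it implements \(f\) from (7), inverts it by bisection, and evaluates the discovery range (10)/(14) and the discovery rate (17). The scenario values (target SIR, RB count, density, bandwidth) are assumptions taken from or in the spirit of the simulation parameters given later.

```python
# Minimal numerical sketch of the hexagonal model: f from Eq. (7), its
# inverse by bisection, g from Eq. (15), the discovery range from
# Eq. (10)/(14), and the discovery rate from Eq. (17). Scenario values
# below are assumptions.
import math

ALPHA = 4.0                       # path-loss exponent

def f(x: float) -> float:
    """Eq. (7): worst-case SIR as a function of x = R0/d0, 0 < x < 1."""
    interference = ((1.0 - x) ** -ALPHA + (1.0 + x) ** -ALPHA
                    + 2.0 * (0.75 + (0.5 + x) ** 2) ** (-ALPHA / 2.0)
                    + 2.0 * (0.75 + (0.5 - x) ** 2) ** (-ALPHA / 2.0))
    return x ** -ALPHA / interference

def f_inv(sir: float) -> float:
    """Numerical inverse of f; f is decreasing on (0, 1)."""
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(100):          # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > sir:          # SIR still above target: move farther out
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g(sir: float) -> float:
    """Eq. (15): g(SIR) for the hexagonal layout."""
    return math.sqrt(2.0 / math.sqrt(3.0)) * f_inv(sir)

# Assumed scenario: 0 dB target SIR, 8 RBs, 0.005 devices/m^2, B = 10 kHz.
sir = 10.0 ** (0.0 / 10.0)
B, L_message, N_rb0, D_device = 10e3, 100.0, 8, 0.005
R0 = g(sir) * math.sqrt(N_rb0 / D_device)                          # Eq. (14)
rho0 = math.pi * g(sir) ** 2 * B * math.log2(1 + sir) / L_message  # Eq. (17)
print(f"R0 = {R0:.1f} m, rho0 = {rho0:.1f} devices/s")
```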
If the number of RBs within the discovery period is increased to \(N_{\mathrm{RB},1} (> N_{\mathrm{RB},0})\), then the new discovery period can be written as

(18) \( T_{\mathrm{Period},1} = \frac{N_{\mathrm{RB},1} L_{\mathrm{Message}}}{C(\mathrm{SIR}_{\mathrm{Target}})} \)

and the corresponding discovery range can be represented as follows:

(19) \( R_1 = g(\mathrm{SIR}_{\mathrm{Target}}) \sqrt{\frac{N_{\mathrm{RB},1}}{D_{\mathrm{Device}}}}. \)

Note that the discovery range can be enlarged by increasing the discovery period, assuming still interference-limited environments. The discovery rate per unit distance at distance \(r\) from the transmitter with the increased discovery period, denoted by \(\rho_{\mathrm{Distance},1}(r)\), can be written as

(20) \( \rho_{\mathrm{Distance},1}(r) = \begin{cases} \dfrac{2 \pi r D_{\mathrm{Device}}}{T_{\mathrm{Period},1}} & \text{if } r < R_1 \\ 0 & \text{otherwise} \end{cases} \)

and the discovery rate with the increased discovery period, denoted by \(\rho_1\), can be expressed as follows:

(21) \( \rho_1 = \int_0^{\infty} \rho_{\mathrm{Distance},1}(r)\, dr = \int_0^{R_1} \frac{2 \pi r D_{\mathrm{Device}}}{T_{\mathrm{Period},1}}\, dr = \frac{\pi R_1^2 D_{\mathrm{Device}}}{T_{\mathrm{Period},1}} = \frac{\pi}{L_{\mathrm{Message}}}\, g^2(\mathrm{SIR}_{\mathrm{Target}})\, C(\mathrm{SIR}_{\mathrm{Target}}) = \rho_0. \)

The discovery rate is thus independent of the discovery period. If the discovery period increases with a fixed device density, the device density per RB decreases and the discovery range is extended. This results in an increased number of discoverable devices, but it also takes more time to discover neighbors due to the increased discovery period. Similarly, the discovery rate is independent of the device density. If the discovery period is fixed, a high device density results in a reduced discovery range. However, the discovery rate remains unchanged, since the device density within the reduced discovery range is increased and the number of discoverable devices does not change. A quick numerical check of this invariance is given below.
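The cancellation in (21) can be checked numerically. In the sketch below (illustrative values only), `g_sir` and `C_sir` stand for \(g(\mathrm{SIR})\) and \(C(\mathrm{SIR})\) at some fixed target SIR; the discovery range grows with the period, but the rate stays constant.

```python
# Quick numerical check (illustrative values): enlarging the discovery
# period enlarges the discovery range, but the discovery rate of Eq. (21)
# stays constant. g_sir and C_sir stand for g(SIR) and C(SIR) at a fixed
# target SIR; all numbers below are assumptions.
import math

g_sir, C_sir = 0.5, 1.0e4
L_message, D_device = 100.0, 0.005
for n_rb in (8, 16, 32):
    T_period = n_rb * L_message / C_sir           # Eq. (18)
    R = g_sir * math.sqrt(n_rb / D_device)        # Eq. (19)
    rho = math.pi * R ** 2 * D_device / T_period  # Eq. (21)
    print(f"N_RB = {n_rb:2d}: R = {R:6.1f} m, rho = {rho:.1f} devices/s")
```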
## 4. Optimization Framework

Since the discovery rate \(\rho\) is independent of the other discovery parameters, including the discovery range \(R\), the device density \(D_{\mathrm{Device}}\), and the discovery period \(T_{\mathrm{Period}}\), it is important to maximize the discovery rate first. With a given bandwidth and message length, the discovery rate depends on the target SIR and the distribution of devices allocated on the same RB. Hence, we first need to find the target SIR that maximizes the discovery rate, expressed as follows:

(22) \( \mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}} = \arg\max_{\mathrm{SIR}_{\mathrm{Target}}} \rho = \arg\max_{\mathrm{SIR}_{\mathrm{Target}}} g^2(\mathrm{SIR}_{\mathrm{Target}})\, C(\mathrm{SIR}_{\mathrm{Target}}). \)

The target SIR found by (22) determines the MCS of a discovery message and the length (in seconds) of an RB. For example, if a hexagonal distribution is assumed for devices using the same RB and (2) is used to calculate the data rate of the discovery message, then the optimal target SIR is found as follows:

(23) \( \mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}} = \arg\max_{\mathrm{SIR}_{\mathrm{Target}}} \big[ f^{-1}(\mathrm{SIR}_{\mathrm{Target}}) \big]^2 \log_2(1 + \mathrm{SIR}_{\mathrm{Target}}) \approx 9\ \mathrm{dB}. \)

The other parameters can be determined subsequently. For example, if the target discovery range \(R\) and the device density \(D_{\mathrm{Device}}\) are given, then the number of RBs within the discovery period can be determined as

(24) \( N_{\mathrm{RB}} = \max\Big\{ N_{\mathrm{RB},0},\ \frac{D_{\mathrm{Device}} R^2}{g^2(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})} \Big\} \)

and the discovery period can be obtained as

(25) \( T_{\mathrm{Period}} = N_{\mathrm{RB}} T_{\mathrm{RB}} = \max\Big\{ T_{\mathrm{Period},0},\ \frac{L_{\mathrm{Message}} D_{\mathrm{Device}} R^2}{g^2(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})\, C(\mathrm{SIR}_{\mathrm{Target}}^{\mathrm{Optimal}})} \Big\}. \)

If the target discovery range \(R\) is given but the device density \(D_{\mathrm{Device}}\) is unknown, carrier sensing can be performed with the corresponding sensing threshold. If a device is unable to find an RB satisfying the carrier sensing threshold, the discovery period needs to be increased. If the target discovery range \(R\) is not specified, the discovery range is simply determined by the device density.

The proposed procedure for determining the discovery parameters is summarized in Figure 4. In the figure, the solid rectangular boxes represent the determination of discovery parameters using (1), (8), (19), and (22). First, the target SIR (\(\mathrm{SIR}_{\mathrm{Target}}\)) at receivers needs to be optimized by (22), which in turn determines the MCS of a discovery message and the length of an RB (\(T_{\mathrm{RB}}\)) via (1). Note that these are static system parameters, which are rarely modified at run time. If there is no target discovery range, there are no more discovery parameters to determine; the discovery range is simply determined by the device density. If the target discovery range is given and the device density can be estimated by the BS, the discovery period can be determined by adjusting the number of RBs in a single discovery period. However, it may not be easy to estimate the device density. In that case, a BS simply provides a carrier sensing threshold, which can guarantee a minimum distance between two adjacent devices allocated on the same RB.

Figure 4: Optimization framework for the discovery process.

In practice, the discovery range is not clearly defined due to fading, shadowing, and irregular distributions of transmitting devices. Hence, some simulations might be required to obtain more accurate values of the discovery parameters under a precise definition of the discovery range. However, the procedure described in Figure 4 is still applicable for determining the discovery parameters even with simulations.
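The optimization in (23) is one-dimensional, so a simple grid search suffices. The sketch below (ours, with an assumed search range and step) repeats `f` and `f_inv` from the earlier sketch so that it runs on its own; for \(\alpha = 4\) it lands near the 9 dB optimum quoted in (23).

```python
# Minimal grid search for the objective of Eqs. (22)-(23) under the
# hexagonal model. Search range and step are assumptions; f and f_inv
# are repeated from the earlier sketch so the snippet is self-contained.
import math

ALPHA = 4.0

def f(x):
    s = ((1 - x) ** -ALPHA + (1 + x) ** -ALPHA
         + 2 * (0.75 + (0.5 + x) ** 2) ** (-ALPHA / 2)
         + 2 * (0.75 + (0.5 - x) ** 2) ** (-ALPHA / 2))
    return x ** -ALPHA / s

def f_inv(sir):
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(100):                          # bisection; f decreasing
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > sir else (lo, mid)
    return (lo + hi) / 2

best_db = max((d / 10 for d in range(0, 200)),    # 0.0 .. 19.9 dB grid
              key=lambda db: f_inv(10 ** (db / 10)) ** 2
                             * math.log2(1 + 10 ** (db / 10)))
print(f"optimal target SIR ~ {best_db:.1f} dB")   # ~9 dB for alpha = 4
```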
Notice that the optimal SIR obtained in (23) is not very low, even when a large (but still interference-limited) discovery range is required. Although a low target SIR helps a message to be received under severe interference, a low MCS increases the average time length of one RB (\(T_{\mathrm{RB}}\)) and decreases the number of RBs (\(N_{\mathrm{RB}}\)) in a single discovery period. Hence, the density of transmitting devices for each RB (\(D_{\mathrm{Device/RB}}\)) is increased, and the discovery range might even be reduced due to the increased interference. A low MCS does not mean a large discovery range if the discovery process is performed in a heavily populated area.

## 5. Simulation Results

In this section, we find discovery rates by Monte Carlo simulations, in which devices are randomly distributed over a wide square area of 1000 m × 1000 m and interference is generated with a wrap-around pattern. For the simulations, the length of an RB is determined by (2) with a given target SIR, and resource allocation for each device is performed sequentially based on carrier sensing results. Each device selects the RB with the least amount of interference and transmits a discovery message if a given carrier sensing threshold is satisfied; a sketch of this allocation procedure is given after Table 1. The initial (and minimum) number of discovery RBs in the discovery period is 8, and the number of RBs can be increased if there is no RB satisfying the carrier sensing threshold. Some simulation parameters are chosen to show substantially different shapes of curves with different discovery ranges. Rayleigh fading is used for the channels, but shadowing is not applied, for simplicity of simulation. The detailed simulation parameters are summarized in Table 1.

Table 1: Simulation parameters.

| Parameter | Value |
| --- | --- |
| Simulation region | 1000 m × 1000 m (wrap-around) |
| Device density (\(D_{\mathrm{Device}}\)) | Figures 5 and 8: 0.005/m²; Figure 6: 0.001~0.004/m²; Figure 7: 0.002/m² |
| Path-loss exponent (\(\alpha\)) | 4 |
| Shadowing | Not applied |
| Fading | Flat Rayleigh fading |
| Bandwidth (\(B\)) | 10 kHz |
| Message length (\(L_{\mathrm{Message}}\)) | 100 bits |
| Initial number of RBs in discovery period (\(N_{\mathrm{RB},0}\)) | 8 |
| Target SIR (\(\mathrm{SIR}_{\mathrm{Target}}\)) | Figures 5 and 6: 0 dB; Figure 7: −10~20 dB; Figure 8: 7 dB |
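The following is a simplified sketch (our illustration, not the authors' simulator) of the sequential carrier-sensing allocation just described: devices are dropped uniformly at random, each measures the interference on every RB, picks the least-interfered RB, and transmits only if the sensing threshold is met. Fading, wrap-around, and discovery-period growth are omitted, and all numeric values are assumptions.

```python
# Simplified sketch of sequential RB allocation with carrier sensing.
# Omits fading, wrap-around, and period growth; values are assumptions.
import math
import random

AREA, ALPHA, K1 = 1000.0, 4.0, 1.0
N_DEVICES, N_RB, THRESHOLD = 200, 8, 1e-9

random.seed(1)
devices = [(random.uniform(0, AREA), random.uniform(0, AREA))
           for _ in range(N_DEVICES)]
rb_users = [[] for _ in range(N_RB)]       # transmitters already on each RB

def interference(pos, users):
    """Total received power K1 * d^-alpha from current users of one RB."""
    return sum(K1 * math.hypot(pos[0] - x, pos[1] - y) ** -ALPHA
               for (x, y) in users)

for pos in devices:                        # sequential allocation
    levels = [interference(pos, users) for users in rb_users]
    best = min(range(N_RB), key=levels.__getitem__)
    if levels[best] <= THRESHOLD:          # carrier-sensing check
        rb_users[best].append(pos)         # transmit on the selected RB
print("transmitters per RB:", [len(u) for u in rb_users])
```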
Figure 5(a) shows the discovery rates per unit distance at distance \(r\) from the transmitting device, with four different sensing thresholds (\(\Gamma_1\), \(\Gamma_2\), \(\Gamma_3\), and \(\Gamma_4\)), when 0 dB is used for the target SIR. The sensing thresholds are chosen to show substantially different shapes of curves with different discovery ranges. While \(\Gamma_1\) is a high value that allows large interference, \(\Gamma_4\) is low so as to maintain a large discovery range at the expense of an increased discovery time. The area under each curve in Figure 5(a) gives the discovery rate, which has been redrawn as a bar graph in Figure 5(b) for easy comparison. The shapes of the four curves in Figure 5(a) are quite different, indicating different discovery ranges. However, their areas, which represent the discovery rates, are very similar, as shown in Figure 5(b).

Figure 5: Discovery rates with varying sensing thresholds. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 6 illustrates the discovery rates with varying device densities (\(D_{\mathrm{Device}}\) = 0.001, 0.002, 0.003, and 0.004/m²). The device densities are chosen to show substantially different shapes of curves. The target SIR is set to 0 dB and no specific carrier sensing threshold is used. In this case, a large discovery range is obtained with a low device density, and the discovery range decreases as the device density increases. From the figures, we can see that the discovery rate is also independent of the device density.

While very different values of the discovery range can be obtained by changing the other discovery parameters, the discovery rates do not vary significantly. Hence, the discovery rate can be used as a performance metric for comparing discovery schemes with different discovery parameters. We can say that similar discovery performances are obtained among the four schemes in Figure 5 (or among the four schemes in Figure 6), although they have quite different discovery ranges.

Figure 6: Discovery rates with varying device densities. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 7(a) shows the discovery rates per unit distance with several different target SIR values (−10 dB, 0 dB, 10 dB, and 20 dB) at receivers, when there is no specific sensing threshold. As expected, the discovery range also depends on the target SIR, and a low target SIR achieves a longer discovery range. However, unlike Figures 5(a) and 6(a), the curves in Figure 7(a) have substantially different areas. Figure 7(b) presents the discovery rates for target SIR values from −10 dB to 20 dB. In the figure, the discovery rate is maximized with a target SIR of 7 dB, which is close to the theoretical optimal target SIR (9 dB) found by (23) assuming a hexagonal distribution of devices allocated on the same RB. Discovery schemes with the optimal target SIR will provide the best performance, and the other parameters can be determined subsequently.

Figure 7: Discovery rates for different target SIR values. (a) Discovery rates per unit distance. (b) Discovery rates.

Figure 8 shows the results with a target SIR of 7 dB (the optimal target SIR found from Figure 7(b)). In order to obtain discovery ranges similar to those in Figure 5, threshold values \(\Gamma_i/\mathrm{SIR}_{\mathrm{Target}}\) \((i = 1, 2, 3, 4)\) are used. The other simulation parameters are the same as those for Figure 5. Note that the discovery rates are considerably improved compared with those in Figure 5, in which 0 dB is used for the target SIR. If we want to obtain a reasonably long discovery range with a high device density, we need to use a long discovery period or a low carrier sensing threshold together with the optimal target SIR, instead of using a low target SIR.

Figure 8: Discovery rates with the optimal target SIR. (a) Discovery rates per unit distance. (b) Discovery rates.

When the device density is very low, we can use a low MCS to maximize the discovery range. However, if we consider discovery processes in an urban area where the device density can be very high, it is better not to use too low an MCS, since it may eventually reduce the discovery range under a discovery time limit. The discovery rate (the number of discoverable devices per unit time) for the discovery process is analogous to the received data rate (the amount of successfully received data per unit time) for data transmissions. Too low an MCS is not desirable, since only a small amount of data can be transmitted per unit time, and too high an MCS is also not recommended, since receivers become too vulnerable to the interference from other transmitters. An appropriate MCS needs to be used to maximize the system performance for data transmissions. The same is true for device discovery, and the discovery rate can be maximized with an appropriate MCS.

## 6. Conclusion

The discovery rate, which is defined as the number of discoverable devices per unit time, does not depend on the device density, the discovery range, or the discovery period, assuming interference-limited environments.
Hence, it can be used as a performance metric for comparing discovery methods with different discovery parameters. While the discovery rate is independent of many other discovery parameters, its value can vary significantly with the target SIR at receivers. Hence, the MCS of a discovery message should be optimized first.

While a low MCS for a discovery message is assumed in many previous works, this paper shows that a low MCS can be harmful when the device density is high. The number of discovery RBs is reduced with a lower MCS under given discovery resources, and a greater number of devices may be assigned to each RB. This increases the interference from other transmitting devices, and the discovery range might eventually be reduced. The discovery rate for the discovery process is analogous to the received data rate in data transmissions. While too high an MCS makes receivers vulnerable to interference, too low an MCS is also not desirable, since only a small amount of data can be transferred per unit time.

The analysis given in this paper may be extended by considering a clearer definition of the discovery range, more realistic channel models with fading and shadowing, more practical distributions of transmitting devices, and more complicated discovery processes including multihop discovery, device cooperation, and intelligent network assistance. Also, half-duplexing and adjacent-channel-interference problems need to be considered in the future for orthogonal frequency division multiple access (OFDMA) systems. A slightly lower value of the target SIR might be required under severe adjacent-channel interference, since a more robust MCS may help mitigate the interference from other OFDMA subchannels.

---

*Source: 102049-2015-04-07.xml*
2015
# The Healing Effects of Piano Practice-Induced Hand Muscle Injury

**Authors:** Hecheng Yu; Xiaoming Luo

**Journal:** Computational and Mathematical Methods in Medicine (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1020504

---

## Abstract

Background. The muscles involved in piano practice are mainly concentrated in the fingers and upper limbs; muscles in other parts of the body are involved only weakly. Compared with other sports injuries, the injuries caused by piano practice are mainly chronic injuries caused by long-term strain of the upper limbs; acute injuries rarely occur. The purpose of this study was to analyze the therapeutic effect of treatment for hand muscle injury caused by piano practice. Method. A total of 60 patients with hand muscle injury caused by piano practice admitted to our hospital from January 2019 to June 2020 were selected. Using a random number grouping method, they were randomly divided into two groups. There were 30 patients in the observation group, including 20 males and 10 females, aged 24-53 (39.51±7.01) years, with a course of disease of 1-5 (3.24±1.62) months. In the control group, there were 30 patients, including 18 males and 12 females, aged 24-56 (39.62±7.17) years, with a course of disease of 1.5-5 (3.14±1.71) months. If patients in the observation group experienced excessive pain, they took ibuprofen sustained-release capsules; otherwise, they exercised their fingers 2-3 times per day. After the intervention, the wrist joint function score of the observation group was higher than before the intervention. Results. Before treatment, there was no significant difference in pain scores between the two groups (P>0.05). After treatment, the limb pain score in the observation group was lower than that in the control group. The effective rate of hand tendon rehabilitation was 93.33% in the observation group and 70.00% in the control group; the difference was statistically significant (P<0.05). The score of the observation group was significantly higher than that of the control group, with a statistically significant difference (P<0.05). Conclusion. Piano practice can cause hand muscle problems, which can be alleviated by daily finger exercises. Daily finger exercises are simple and not limited by time and place. Piano practitioners can use the spare time between daily training and performance to exercise over the long term, so as to prevent or recover from finger muscle damage caused by piano practice. This has the potential to help pianists avoid hand muscle injuries when practicing while also allowing their music to reach its full potential.

---

## Body

## 1. Introduction

The upper limb muscles move our arms and hands under the command of the brain. Since muscles act by contracting, a muscle can exert force in only one direction. Two muscles or two sets of muscles are required to move a body part in two directions: one to move it one way and the other to move it back. Therefore, muscle movement is unidirectional in character [1, 2]. Piano practice involves the ten fingers, arms, shoulders, and neck muscles; it is an upper limb movement coordinated by many muscle groups [3]. To allow movement, the opposing muscle must relax and extend in response to the contraction of the other muscle.
If this does not occur, that is, if the opposing muscle stays tense so that both muscles contract at the same time, the result is known as cocontraction. Cocontraction is a movement-inhibiting condition that can result in damage [4, 5]. Piano practice is a long-term, fast-paced upper limb activity. During piano practice, the upper limb muscles remain in a high-load contraction state for a long time [6]. Intense exercise causes prolonged muscle contraction, which, coupled with irregular practice habits, is the main source of muscle injury [7]. Many factors affect human muscle health: reasonable rest, adequate sleep, aging, and scientific exercise and prevention [8]. This explains why some pianists can perform for years without incident before suffering an injury in their late thirties or forties. They are not playing any differently, but their bodies are less able to handle the stress. The finger flexor system, which is located in the upper half of the lower arm near the elbow, controls the majority of finger movements [9, 10]. The interosseous muscle system is the second muscular system that regulates finger mobility. The interphalangeal muscles and the dorsal interosseous muscles are interosseous muscles dispersed across the palm [11]. Modern piano playing necessitates playing with the gravitational pull of the upper arm, with a focus on simple, sharp movements of the small muscles of the fingers and synchronized, relaxed motions of the upper body, shoulders, and arms [12, 13]. Piano skills must be developed on the basis of the muscle capabilities of the human body; without this foundation, it is easy to make training errors and permanently injure muscles [14]. The following are the results of 60 cases of hand muscle injury induced by piano practice that were collected and treated in this research. The chapter structure is shown in Figure 1.

Figure 1: Structure of this study.

## 2. Materials and Methods

### 2.1. General Information

A total of 60 patients with hand muscle injury caused by piano practice from January 2019 to June 2020 were selected and divided into two research groups. The observation group included 20 males and 10 females, aged 24-53 (39.51±7.01) years, with a course of disease of 1-5 (3.24±1.62) months. The control group included 18 males and 12 females, aged 24-56 (39.62±7.17) years, with a course of disease of 1.5-5 (3.14±1.71) months. Details of the two groups are shown in Table 1.

Table 1: General information.

| Groups | Cases | Males | Females | Age (years) | Course of disease (months) |
| --- | --- | --- | --- | --- | --- |
| Observation group | 30 | 20 | 10 | 39.51±7.01 | 3.24±1.62 |
| Control group | 30 | 18 | 12 | 39.62±7.17 | 3.14±1.71 |

In this study, we took the wrist and fingers as the overall research region.
Because the muscle injury caused by piano practice has certain particularities, the inclusion and exclusion criteria were formulated on the premise of the diagnostic criteria.

(1) Diagnostic criteria: diagnosis was based on the conscious symptoms of the wrist (positive for any of aching, numbness, or tingling of the finger or wrist) and examination of the signs of the hand (positive for swelling or atrophy of the musculus magnus) [15], and the diagnosis was confirmed according to the relevant diagnostic criteria of practical neurological diseases and therapeutics [16].

(2) Inclusion criteria: ① meeting the diagnostic criteria of hand muscle injury caused by piano practice; ② voluntary subjects who signed informed consent; ③ taking oral medication or applying external traditional Chinese medicine in strict accordance with the doctor's advice; ④ cooperating with the collection of test results, with good compliance.

(3) Exclusion criteria: ① those who had taken oral analgesics or received local physiotherapy before treatment, which would affect the efficacy evaluation; ② local skin rupture; ③ suffering from cervical spondylosis or other diseases that cause shoulder pain, such as intra-shoulder joint fractures, tuberculosis, bone metastases, and other diseases affecting shoulder joint mobility; ④ acute phase of severe liver and kidney dysfunction, heart disease, endocrine disease, and other diseases.

### 2.2. Methods

#### 2.2.1. Control Group

The control group was treated with drugs only, to relieve pain; no other finger and wrist rehabilitation methods were used. Ibuprofen sustained-release capsules produced by the Sino-US joint venture Tianjin Shike Pharmaceutical Co., Ltd. were selected as the analgesic drug (production batch number: National Pharmaceutical Standard H20013062; specification 0.4 g), 1 capsule per dose, taken orally with warm water after meals, once in the morning and once in the evening, with treatment continuing for 15 days. During this process, the time from taking the medicine to pain relief, the degree and duration of pain, and the number of intermittent pain episodes per day (including aching, swelling, and other finger and wrist discomfort) were recorded.

#### 2.2.2. Observation Group

In the observation group, the pain of the participants was first evaluated. If the pain was tolerable, they did not take drugs and used only the finger exercise method proposed in this study. If the pain was too great, they could take ibuprofen sustained-release capsules to relieve it and continue to exercise their fingers 2-3 times a day for six months. The finger exercises include ① arm relaxation exercises, ② finger push-ups, and ③ the wrist spring exercise. The specific hand exercises are shown in Figure 2.

(1) Arm relaxation exercises: stand naturally with shoulders relaxed and dropped. Then, bend the elbows slightly and raise the arms naturally to shoulder height. Hold for 2 or 3 seconds and let the whole arm fall freely. After practicing several times, alternate dropping and lifting the left and right hands. Then, adjust the exercise further: slowly raise the hands to head height, hold for 2 or 3 seconds, and then lower the arms slowly and in a controlled manner; on reaching shoulder height, immediately relax and let them fall [17]. It should be noted that the relaxation should occur as instantaneously as possible and should not be adjusted gradually into a relaxed state.
Combining slight tension control with complete relaxation improves the ability to adjust arm tightness.

(2) Finger push-ups: the training steps are as follows: stand upright facing the wall, about two-thirds of an arm's length away; place each finger separately on the wall, with the knuckles protruding, the palms arched, and the wrists slightly lower than the knuckles [18]. Then, slowly lean forward, bending the elbows and letting the body weight pour into the fingertips and toes to increase their support. After the weight reaches an appropriate level, hold for 2 or 3 seconds, slowly push away until the body is upright again, and then lower and relax the arms a little. Repeat this exercise.

(3) Wrist spring exercise: sit with the body upright, shoulders and arms relaxed, and elbows bent to about 90 degrees close to the body; relax the wrists and open the fingers naturally so that there is about an octave's distance between them, with fingers 2, 3, and 4 slightly raised in an arched hand shape. Then, keeping the wrists and elbows of both hands at their original height with the palms facing the body, spring quickly upward and obliquely with an instantaneous force [19]. After that, relax completely and let the hands fall naturally. This movement is just like the daily action of knocking on a door with the finger joints; the force mainly bounces upward, with the three joints serving as the central fulcrum of the bounce, and the force is quickly concentrated and then instantly released.

Figure 2: Main contents of finger motor rehabilitation.

### 2.3. Selection of Indicators

After 6 months of treatment, the degree of pain, joint mobility, and daily activities of the two groups were compared.

(1) Degree of pain: the NRS [20] (a pain scoring system with a range of 0-10 points, in which 0-3 points indicate mild pain, 4-7 points moderate pain, and 8-10 points severe pain) was used to assess the patients' shoulder pain. The scale uses 0 to 10 points to represent different degrees of pain: 0 indicates no pain, 10 indicates unbearable severe pain, and intermediate values indicate intermediate degrees of pain. It was evaluated once before and once after treatment.

(2) Joint mobility: the total active range of motion (TAM, a scoring system for evaluating the overall function of the hand advocated by the final results committee of the American Society for Surgery of the Hand; TAM is the sum of the angles formed by the maximum flexion of the metacarpophalangeal joint and the proximal and distal interphalangeal joints in the fist grip position, minus the sum of the extension deficits of these joints) [21] was measured: excellent rehabilitation: TAM > 220; good recovery: TAM 200~220; poor rehabilitation effect: TAM < 180.

(3) Ability to perform daily activities: both groups were assessed with the Daily Living Activity Scale (FIM; the main evaluation index of the FIM is self-care activities, including the impact of walking and hand movement on daily life) [22]; there are 7 points for each item, for a total of 42 points.

### 2.4. Statistical Methods

SPSS 19.0 statistical software was used for data analysis. Measurement data consistent with the normal distribution were expressed as X̄±S. Comparisons between groups were performed by t-test, count data were compared using the χ2 test, and P<0.05 indicated a statistically significant difference.
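As an illustration of the two tests named above, here is a minimal sketch (ours, not the authors' SPSS workflow). The study reports only summary statistics, so the per-patient scores below are assumed, normally distributed samples; the 28/30 and 21/30 effective counts in the contingency table mirror those reported in Table 3.

```python
# Minimal sketch of the two tests in Section 2.4, on hypothetical data:
# an independent-samples t-test for measurement data and a chi-squared
# test for count data. Raw per-patient scores are not published, so the
# samples below are assumptions for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observation = rng.normal(loc=2.0, scale=1.1, size=30)  # assumed pain scores
control = rng.normal(loc=3.5, scale=1.3, size=30)

t_stat, p_t = stats.ttest_ind(observation, control)    # between-group t-test
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")              # significant if p < 0.05

# 2x2 contingency table: effective vs. not effective (28/30 vs. 21/30,
# matching the counts reported in Table 3 of this study).
table = np.array([[28, 2], [21, 9]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")
```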
## 3. Results

This section follows the standards proposed in the previous section, collates the data of the two groups according to the three indicators (degree of pain, joint range of motion, and daily activity ability), and compares the results.

### 3.1. Results of Pain Level

There was no significant difference in pain scores between the two groups before treatment (P>0.05). After treatment, the limb pain score of the observation group was lower than that of the control group. The pain results are shown in Table 2.

Table 2: Results of pain level.

| Groups | Cases | Limb pain VAS (before) | Limb pain VAS (after) | Wrist-hand function VAS (before) | Wrist-hand function VAS (after) |
| --- | --- | --- | --- | --- | --- |
| Observation group | 30 | 4.15±1.28 | 2.01±1.09 | 51.02±17.61 | 73.06±15.74 |
| Control group | 30 | 4.20±1.34 | 3.52±1.31 | 50.36±16.22 | 70.05±14.91 |

### 3.2. Results of Joint Range of Motion

The effective rate of hand tendon rehabilitation was 93.33% in the observation group and 70.00% in the control group; the difference was statistically significant (P<0.05). The results for joint range of motion are shown in Table 3.

Table 3: Results of joint range of motion.

| Groups | Cases | Excellent TAM | Good TAM | Poor TAM | Effective rate (%) |
| --- | --- | --- | --- | --- | --- |
| Observation group | 30 | 16 | 12 | 2 | 93.33 |
| Control group | 30 | 9 | 12 | 9 | 70.00 |

### 3.3. Results of Daily Activity Ability

Before treatment, there was no significant difference in daily living ability scores between the two groups (P>0.05). After treatment, the scores for washing, wearing a jacket, going to the toilet, eating, bathing, and wearing pants were improved in both groups. The results for daily activity ability are shown in Table 4.
## 4. Discussion

Piano playing is a highly repetitive activity. Players sit for long periods, their upper limb muscles stay tense and contract throughout training and performance, and their wrists and fingers move fast, all of which creates a substantial hidden risk of hand muscle injury [23]. Some scholars have quantified the repetition: with the metronome set to 120 quarter-note beats per minute, playing continuous sixteenth notes for one hour amounts to 28,800 keystrokes. That is only a single hour; at eight hours a day for more than twenty years, one can imagine how many times the fingers repeat the same work [24, 25]. Piano playing is also a highly technical activity that can be refined only through thousands of repetitions, so "repetition" is unavoidable for every piano learner [26]. Such highly repetitive playing is a precondition for performance-related sports injury. Piano skills training must therefore be grounded in the player's body structure and muscular capacity; different trainees have their own characteristics in hand muscle structure and cannot be trained identically, so training should maximize strengths and avoid weaknesses. At the same time, when practicing the piano, the trainee should take his or her own physical condition into account and avoid prolonged, excessive exercise [27].
Modern piano playing requires the player to coordinate the hand muscles with a relaxed upper body and relaxed shoulder and arm muscles. Finding a way to keep the muscles relaxed while practicing and playing is a challenge that every piano practitioner must face [28]. To achieve a natural, relaxed, and moving performance, players need both macrolevel comprehensive adjustment and microlevel detail control [29]. Macrolevel comprehensive adjustment requires piano players to attend to the prevention of hand muscle injury and to scientific rehabilitation methods after an injury occurs. Players should not rely blindly on drugs for pain relief: analgesics are short-term and temporary, cannot achieve the basic rehabilitation goals, and should not encourage a trust-to-luck mentality [30]. Without external intervention, hand muscle injury caused by piano training rarely heals on its own [31]. If handled improperly, it will not only affect the pianist's performance level but also carries a high risk of sequelae such as finger joint deformation, tenosynovitis, and cervical spondylosis [32, 33]. Microlevel detail control requires piano players to formulate scientific and reasonable practice schedules and methods to prevent hand muscle injury. After a hand muscle injury, the player should go to the hospital for examination in time, formulate scientific health care plans and methods with the help of doctors, and reserve sufficient rest time for the hands. Players should pay attention to relaxing the muscles while playing and training and should do scientific hand exercises between sessions [34].

There was no significant difference in pain level scores between the two groups before treatment (P > 0.05), and the limb pain score in the observation group was lower than that of the control group after treatment. The effective rate of hand tendon rehabilitation was 93.33% in the observation group and 70.00% in the control group, a statistically significant difference (P < 0.05). Before treatment, there was no significant difference in daily living ability scores between the two groups (P > 0.05). After treatment, the scores of washing, wearing a jacket, going to the toilet, eating, bathing, and wearing pants in both groups were all improved; moreover, the observation group's limb pain score was lower than the control group's, and its wrist hand function score was higher.

## 5. Conclusion

From the above research, we draw the following three conclusions:

(1) Hand muscle injury is a common sports injury among piano players, with a high incidence and a degree of inevitability. The rehabilitation period is long, which greatly affects the life of piano practitioners.

(2) The three body movements studied here, namely ① arm relaxation exercises, ② finger push-ups, and ③ wrist spring exercises, are of great help to the rehabilitation of piano players' muscle injuries.

(3) Daily finger exercises are simple and not limited by time or place. Piano practitioners can use the spare time around daily training and performance to exercise over the long term, so as to prevent, or recover from, finger muscle damage caused by piano practice.

--- *Source: 1020504-2022-07-18.xml*
# Performance Analysis of WBAN MAC Protocol under Different Access Periods

**Authors:** Pervez Khan; Niamat Ullah; Md. Nasre Alam; Kyung Sup Kwak

**Journal:** International Journal of Distributed Sensor Networks (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102052

---

## Abstract

IEEE 802.15.6 is a new standard on wireless body area networks (WBANs) for short-range, extremely low power wireless communication with high data rates in the vicinity of, or inside, a human body. The standard defines two contention-based channel access schemes: slotted ALOHA and carrier sense multiple access with collision avoidance (CSMA/CA) using an alternative binary exponential backoff procedure. The standard supports quality of service (QoS) differentiation through user priorities and access phases. In this study, we develop an analytical model for the estimation of performance metrics such as energy consumption, normalized throughput, and mean frame service time, employing a Markov chain model under nonsaturated heterogeneous traffic scenarios, including the different access phases specified in the standard for different user priorities and access methods. We conclude that the deployment of the exclusive access phase (EAP) is not necessary in a typical WBAN using CSMA/CA, because it degrades the overall system throughput, consumes more energy per packet, and results in higher delay for nonemergency nodes.

---

## Body

## 1. Introduction

A WBAN is a logical set composed of small, intelligent wireless medical sensors (worn on the body or implanted into the tissues) and a common hub. These medical sensors are capable of measuring, processing, and forwarding important physiological parameters such as the heart rate, blood pressure, glucose level, body and skin temperature, oxygen saturation, and respiration rate, as well as records such as electrocardiograms and electromyograms. This enables health professionals to predict, diagnose, and react to adverse events earlier than ever. A conceptual view of a medical WBAN is shown in Figure 1. The depicted WBAN includes a few sensors that monitor vital health information across the body and send it to a remote server using a personal digital assistant (PDA) [1]. The IEEE 802.15 Working Group formed Task Group 6 (TG6) in November 2007 to develop a communication standard known as IEEE 802.15.6. The purpose of the group is to establish a communication standard optimized for low-power, short-range in-body/on-body nodes to serve a variety of medical, consumer electronics, and entertainment applications. WBANs must support a combination of reliability, quality of service (QoS), low power, high data rate, and noninterference to address the gamut of WBAN applications. The IEEE 802.15.6 standard was approved in 2012 for wireless communications in WBANs. The standard provides efficient communication solutions for ubiquitous healthcare and telemedicine systems, interactive gaming, military services, and portable audio/video systems.

Figure 1 Abstract view of WBAN and its framework.

The medium access control (MAC) protocol provides a control mechanism that allows packet transmission through a shared wireless channel. IEEE 802.15.6 supports two communication modes: (1) beacon communication mode, where the hub transmits beacons for resource allocation and synchronization, and (2) nonbeacon communication mode, where scheduled/unscheduled allocations and polling are used [2].
In the beacon communication mode, beacons are transmitted at the beginning of each superframe. As illustrated in Figure 2, in beacon communication mode each superframe is divided into different access phases (APs). A superframe includes exclusive access phase 1 (EAP1), random access phase 1 (RAP1), management access phase 1 (MAP1), exclusive access phase 2 (EAP2), random access phase 2 (RAP2), management access phase 2 (MAP2), and an optional B2 frame followed by a contention access phase (CAP). The EAPs are used for life-critical traffic, while the RAPs and CAP are used for regular traffic. Each AP, except RAP1, may have zero length [3].

Figure 2 Layout of access phases with superframe boundaries [15].

In IEEE 802.15.6, the contention-based access methods for obtaining allocations are either carrier sense multiple access with collision avoidance (CSMA/CA), if a narrowband physical layer (PHY) or ultra-wideband (UWB) PHY is chosen, or slotted ALOHA, if the UWB PHY is used [3]. The IEEE 802.15.6 CSMA/CA mechanism differs in important aspects from the CSMA/CA mechanisms of other wireless standards. The backoff mechanism is not binary exponential: the contention window doubles only when the retry counter is an even number. In addition to a busy channel, a node will also lock its backoff counter if it is not allowed to access the medium during the current AP or if the current AP is not long enough for a frame transmission. These differences require changes to the typical discrete-time Markov chains (DTMCs) adopted for the CSMA/CA mechanisms of previous standards, presented in [4–7] for IEEE 802.11, in [8–11] for IEEE 802.11e, in [12, 13] for IEEE 802.15.4, and in [14] for IEEE 802.15.3c.

To employ the CSMA/CA mechanism, as shown in Figure 4, a contending node belonging to a user priority class UP_i, i=0,…,7, sets its backoff counter to a random integer over the interval [1, CW_i]. The contention window (CW) is chosen as follows: (a) if the node has not previously obtained any contended allocation, or if it succeeds in a data frame transmission, it sets the CW to CWmin; (b) if the node fails to transmit, it keeps the CW unchanged if this is the mth consecutive failure with m odd; otherwise, the CW is doubled; (c) if doubling the CW would exceed CWmax for a UP_i node, the node sets the CW to CWmax. After choosing the contention window, the node decrements its backoff counter by one for each idle pCCATime. The node locks the backoff counter whenever it detects a transmission on the channel during pCCATime and unlocks it when the channel has been idle for pSIFS. It also locks the backoff counter if it is not allowed to access the medium during the current AP or if the current AP is not long enough for a frame transmission. The node transmits when the backoff counter reaches zero [3]; a short code sketch of this window-update rule is given below.

In this study, we develop analytical and simulation models to evaluate the CSMA/CA mechanism of the IEEE 802.15.6 MAC by considering a portion of the APs that can easily be extended to the entire superframe. We do not consider the deployment of the EAP2 and RAP2 access phases. Given that the objective is to investigate the performance of the CSMA/CA mechanism, we ignore activities in the contention-free access phases (i.e., MAP1 and MAP2). Our analysis is validated with accurate computer simulations.
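To make the window-update rule concrete, here is a minimal C++ sketch of steps (a)–(c) together with the nonbinary doubling described above. It is our own illustrative reading of the rule as stated in this paper, not code from the standard or from the authors' simulator, and the names (`CwBounds`, `BackoffWindow`, and so on) are ours.

```cpp
#include <algorithm>
#include <cstdlib>

// CWmin/CWmax bounds per user priority, as listed in Table 1.
struct CwBounds { int cwMin; int cwMax; };

class BackoffWindow {
public:
    explicit BackoffWindow(CwBounds b) : bounds_(b), cw_(b.cwMin) {}

    // Rule (a): on success (or before any contended allocation),
    // the contention window returns to CWmin.
    void onSuccess() { cw_ = bounds_.cwMin; failures_ = 0; }

    // Rules (b) and (c): on the m-th consecutive failure, keep CW
    // when m is odd, double it when m is even, clamped at CWmax.
    void onFailure() {
        ++failures_;
        if (failures_ % 2 == 0)
            cw_ = std::min(2 * cw_, bounds_.cwMax);
    }

    // New backoff counter: uniform integer over [1, CW].
    int drawBackoff() const { return 1 + std::rand() % cw_; }

    int cw() const { return cw_; }

private:
    CwBounds bounds_;
    int cw_;
    int failures_ = 0;
};
```

For a UP5 node (CWmin = 4, CWmax = 8, per Table 1 below), successive failures therefore produce the window sequence 4, 4, 8, 8, 8, …, which is precisely what makes the backoff nonbinary exponential.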
The rest of this paper is structured as follows: Section 2 reviews the related studies available in the literature; Section 3 describes the framework of the analytical model and the performance measures; Section 4 presents the experimental results; and Section 5 concludes our study.

## 2. Related Studies

Since the IEEE 802.15.6 standard has only recently been released, there are very few probabilistic works in the literature that analyze its CSMA/CA mechanism. However, many researchers have analyzed the CSMA/CA protocols of various other communication standards. Performance analyses of the CSMA/CA mechanism were presented in [4–7] for IEEE 802.11, in [8–11] for IEEE 802.11e, in [12, 13] for IEEE 802.15.4, and in [14] for IEEE 802.15.3c. Because the IEEE 802.15.6 CSMA/CA mechanism differs from the mechanisms of these other wireless technologies, those analytical models are not appropriate for the IEEE 802.15.6 standard. In [16, 17], the authors study the performance of IEEE 802.15.6 CSMA/CA only under saturation conditions; the results indicate that the medium is accessed predominantly by the high-user-priority nodes, while the other nodes starve. In [18], the authors present numerical formulas to determine the theoretical throughput and delay limits of IEEE 802.15.6-based networks; they aim to optimize the packet size and to determine the upper bounds of IEEE 802.15.6 networks for different WBAN applications, assuming a collision-free network with no user priorities (UPs). The authors in [15] propose an analytical model to evaluate the performance of a contention-based IEEE 802.15.6 CSMA/CA mechanism under saturated conditions for heterogeneous WBAN scenarios; however, in most real-world IEEE 802.15.6 networks the saturation assumption is not likely to hold, and the traffic is mostly nonsaturated. In [19], the authors study the normalized throughput of the IEEE 802.15.6 slotted ALOHA protocol under nonsaturation conditions. In [20], the authors develop an analytical model for performance evaluation of the IEEE 802.15.6 standard under the nonsaturation regime, but they calculate only the mean response time of the data frames in the network. In [21], the authors develop a DTMC model for the analysis of the reliability and throughput of an IEEE 802.15.6 CSMA/CA-based WBAN under saturation conditions. A generalized three-dimensional Markov chain with the backoff stage, backoff counter, and retransmission counter as the stochastic parameters is proposed in [22].

## 3. Performance Analysis

In order to analyze the CSMA/CA performance of the IEEE 802.15.6 MAC protocol, we introduce a DTMC model for the nonsaturated regime, as shown in Figure 3. We adopt the same analytic model as presented in [23]. We consider Poisson packet arrivals at a rate of λ packets per microsecond, and we assume that a sensor node can hold only one packet at a time, so that no new packets are generated while the node has a packet to transmit. The eight user priorities in the WBAN, $UP_i$ with $i\in\{0,1,\ldots,7\}$, are differentiated by $CW_{\min}$ and $CW_{\max}$, as shown in Table 1. UP7 is given an aggressive priority compared with the other UPs; moreover, UP7 nodes also have a separate AP for transmission. The contention window size for a $UP_i$ node during the $j$th backoff stage is $W_{i,j}=2^{\lfloor j/2\rfloor}\,CW_{i,\min}$. We assume a star-topology, single-hop WBAN with $N$ heterogeneous nodes.
The total number of nodes in the network is $N=\sum_{i=0}^{7}n_i$, where $n_i$ is the number of nodes in class $i$; we consider two nodes in each class. In the proposed analytical model, the lengths of EAP2, RAP2, and CAP are set to 0. We assume that transmission errors are due only to collisions, we impose no retry limit, and the nodes access the medium without any RTS/CTS mechanism.

Table 1 Contention window bounds for CSMA/CA.

| User priority | Traffic designation | CWmin | CWmax |
| --- | --- | --- | --- |
| 7 | Emergency or medical implant event report | 1 | 4 |
| 6 | High-priority medical data or network control | 2 | 8 |
| 5 | Medical data or network control | 4 | 8 |
| 4 | Voice (VO) | 4 | 16 |
| 3 | Video (VI) | 8 | 16 |
| 2 | Excellent effort (EE) | 8 | 32 |
| 1 | Best effort (BE) | 16 | 32 |
| 0 | Background (BK) | 16 | 64 |

Figure 3 DTMC model for the CSMA/CA behavior in nonsaturated traffic conditions.

Figure 4 IEEE 802.15.6 CSMA/CA flowchart.

Let $P_{tr}$ be the probability that there is at least one transmission in the time slot under consideration, and let $\beta_i$ be the probability that a node of class $i$ transmits in a generic slot; $P_{tr}$ is given by
$$P_{tr}=1-\prod_{i=0}^{7}\left(1-\beta_i\right)^{n_i}.\tag{1}$$
The collision probability for a class-$i$ node is
$$\gamma_i=1-\left(1-\beta_i\right)^{n_i-1}\prod_{j=0,\,j\neq i}^{7}\left(1-\beta_j\right)^{n_j}.\tag{2}$$
Let $T_s$ and $T_c$ be the average durations for which the medium is sensed busy owing to a successful transmission and a collision, respectively:
$$T_s=T_c=T_{\mathrm{(MAC+PHY)\,overhead}}+T_{\mathrm{Payload}}.\tag{3}$$
Let $E_{\mathrm{state},i}$ be the expected time spent per state of the Markov chain by a tagged node of class $i$:
$$E_{\mathrm{state},i}=\left(1-P_{tr}\right)\delta+\sum_{i=0}^{7}P_{s,i}\,T_s+T_c\left(1-\sum_{i=0}^{7}P_{s,i}\right)+P_{tr}\left(1-\gamma\right)T_{ack},\tag{4}$$
where $\delta$ is the length of a pCSMA slot as defined in the standard.
Let $q$ be the probability that a packet is available to the MAC of a node in a given slot, and let $\lambda$ be the packet arrival rate; then
$$q=1-e^{-\lambda E_{\mathrm{state},i}}.\tag{5}$$
Let $P_{s,i}$ be the probability that a transmission occurring on the medium by a class-$i$ node is successful:
$$P_{s,i}=n_i\,\beta_i\left(1-\beta_i\right)^{n_i-1}\prod_{j=0,\,j\neq i}^{7}\left(1-\beta_j\right)^{n_j}.\tag{6}$$
Let $X_{eap}$ and $X_{rap}$ be the mean numbers of slots in EAP1 and RAP1, respectively:
$$X_{eap}=\frac{eap}{E_{\mathrm{state},i}},\qquad X_{rap}=\frac{rap}{E_{\mathrm{state},i}},\tag{7}$$
where $eap$ and $rap$ are the durations of the access phases EAP1 and RAP1.
In a given pCSMA slot, the backoff counter of a node must be locked until the beginning of the next eligible AP if there is not enough time left for a packet transmission during the current AP. This probability is represented as
$$\hat{e}_i=\begin{cases}\dfrac{1}{rap-T_s}, & 0\leq i\leq 6,\\[1.5ex]\dfrac{1}{eap+rap-T_s}, & i=7.\end{cases}\tag{8}$$
Therefore, for a $UP_i$ node the probability of decrementing the backoff counter during RAP1 is
$$f_i=\frac{\prod_{j=0}^{7}\left(1-\beta_j\right)^{n_j}\left(1-\hat{e}_i\right)}{1-\beta_i},\qquad i=0,\ldots,6.\tag{9}$$
The probability that a $UP_7$ node decrements its backoff counter during EAP1 or RAP1 is
$$f_7=\frac{X_{rap}\prod_{j=0}^{7}\left(1-\beta_j\right)^{n_j}\left(1-\hat{e}_7\right)}{\left(X_{eap}+X_{rap}\right)\left(1-\beta_7\right)}+\frac{X_{eap}\left(1-\beta_7\right)^{n_7}\left(1-\hat{e}_7\right)}{\left(X_{eap}+X_{rap}\right)\left(1-\beta_7\right)}.\tag{10}$$
Let $\eta_i$ be the normalized per-class throughput, defined as the fraction of time for which the medium is used to successfully transmit payload bits:
$$\eta_i=\begin{cases}\dfrac{P_{s,i}\,T_{\mathrm{payload}}\,X_{rap}}{eap+rap}, & 0\leq i\leq 6,\\[1.5ex]\dfrac{P_{s,i}\left(X_{rap}+X_{eap}\right)T_{\mathrm{payload}}}{eap+rap}, & i=7,\end{cases}\tag{11}$$
where $T_{\mathrm{payload}}$ is the mean payload duration. The normalized system throughput is then
$$\eta=\sum_{i=0}^{7}\eta_i.\tag{12}$$
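As a quick numerical illustration of (1), (2), and (6), the C++ snippet below evaluates $P_{tr}$, $\gamma_i$, and $P_{s,i}$ for given per-class transmission probabilities. The sample $\beta_i$ values are arbitrary placeholders of ours, not results from this paper.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Per-class inputs: transmission probability beta[i] and node count n[i].
struct Wban {
    std::vector<double> beta;
    std::vector<int> n;
};

// Eq. (1): probability of at least one transmission in a generic slot.
double atLeastOneTx(const Wban& w) {
    double allIdle = 1.0;
    for (std::size_t i = 0; i < w.beta.size(); ++i)
        allIdle *= std::pow(1.0 - w.beta[i], w.n[i]);
    return 1.0 - allIdle;
}

// Eq. (2): collision probability seen by a tagged class-i node.
double collisionProb(const Wban& w, std::size_t i) {
    double othersIdle = std::pow(1.0 - w.beta[i], w.n[i] - 1);
    for (std::size_t j = 0; j < w.beta.size(); ++j)
        if (j != i) othersIdle *= std::pow(1.0 - w.beta[j], w.n[j]);
    return 1.0 - othersIdle;
}

// Eq. (6): probability that a slot carries a successful class-i
// transmission; (6) collapses to n_i * beta_i * (1 - gamma_i)
// once eq. (2) is substituted, which this sketch exploits.
double successProb(const Wban& w, std::size_t i) {
    return w.n[i] * w.beta[i] * (1.0 - collisionProb(w, i));
}

int main() {
    // Five classes, two nodes each; the beta values are placeholders.
    Wban w{{0.02, 0.03, 0.04, 0.05, 0.08}, {2, 2, 2, 2, 2}};
    std::printf("P_tr = %.4f\n", atLeastOneTx(w));
    for (std::size_t i = 0; i < w.beta.size(); ++i)
        std::printf("class %zu: gamma = %.4f, P_s = %.4f\n",
                    i, collisionProb(w, i), successProb(w, i));
}
```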
Let $T_i$ be the duration between the instant that a packet arrives at the head of the queue of a class-$i$ node and the time when the packet is successfully acknowledged by the receiver. The mean frame service time can be expressed as
$$T_i=\begin{cases}\left[\dfrac{\delta\,\gamma_i}{1-\gamma_i}+T_s+E_{\mathrm{state},i}\displaystyle\sum_{j=0}^{m_i-1}\gamma_i^{\,j}\,\frac{W_{i,j+1}}{2}+\dfrac{E_{\mathrm{state},i}\,\gamma_i^{\,m_i}\,W_{i,m}}{2\left(1-\gamma_i\right)}\right]\dfrac{X_{eap}}{eap+rap}, & 0\leq i\leq 6,\\[2.5ex]\dfrac{\delta\,\gamma_i}{1-\gamma_i}+T_s+E_{\mathrm{state},i}\displaystyle\sum_{j=0}^{m_i-1}\gamma_i^{\,j}\,\frac{W_{i,j+1}}{2}+\dfrac{E_{\mathrm{state},i}\,\gamma_i^{\,m_i}\,W_{i,m}}{2\left(1-\gamma_i\right)}, & i=7.\end{cases}\tag{13}$$
Energy is critical in WBANs; therefore, in addition to the throughput and the mean frame service time, we are also interested in the energy consumption, which we estimate on a per-node, per-packet basis. The mean frame service time $T_i$ in (13) represents the time elapsed from the arrival of the packet until its successful delivery and may contain a number of unsuccessful transmissions with their associated backoff intervals. Denoting by $P_{tx}$, $P_{rx}$, $P_{bo}$, and $P_{sleep}$ the power consumed by the transceiver of a node during transmission, reception, backoff, and sleep, respectively, we estimate the energy consumption $E_{\mathrm{AVG},i}$ of a class-$UP_i$ node on a per-node, per-packet basis as
$$E_{\mathrm{AVG},i}=\begin{cases}\dfrac{1}{\lambda}P_{sleep}+\left[\dfrac{\delta\,\gamma_i}{1-\gamma_i}P_{rx}+T_s\,P_{tx}+P_{bo}\,E_{\mathrm{state},i}\displaystyle\sum_{j=0}^{m_i-1}\gamma_i^{\,j}\,\frac{W_{i,j+1}}{2}+\dfrac{P_{bo}\,E_{\mathrm{state},i}\,\gamma_i^{\,m_i}\,W_{i,m}}{2\left(1-\gamma_i\right)}\right]\dfrac{X_{eap}}{eap+rap}, & 0\leq i\leq 6,\\[2.5ex]\dfrac{1}{\lambda}P_{sleep}+\dfrac{\delta\,\gamma_i}{1-\gamma_i}P_{rx}+T_s\,P_{tx}+P_{bo}\,E_{\mathrm{state},i}\displaystyle\sum_{j=0}^{m_i-1}\gamma_i^{\,j}\,\frac{W_{i,j+1}}{2}+\dfrac{P_{bo}\,E_{\mathrm{state},i}\,\gamma_i^{\,m_i}\,W_{i,m}}{2\left(1-\gamma_i\right)}, & i=7.\end{cases}\tag{14}$$
A sensor node deploying the CSMA/CA mechanism must wait a random backoff time before transmission. Let $b(t)$ be the stochastic process representing the backoff counter of a given sensor node. The backoff counter of each contending node decrements after each successful pCCATime and is stopped when the medium is sensed busy. Since the value of the backoff counter also depends on the node's transmission attempts, each transmission attempt leads the node to a new backoff window, called the backoff stage. Let $s(t)$ be the stochastic process representing the backoff stage of the node at time $t$. The two-dimensional stochastic process $\left(s(t),b(t)\right)$ depicted in Figure 3 can be modeled as a discrete-time Markov chain with the following one-step transition probabilities:
$$\begin{aligned}&\Pr\left\{i,k-1\mid i,k\right\}=f_i, && 1\leq k\leq W_i,\\&\Pr\left\{i,-1\mid i,0\right\}=1, && 1\leq i\leq m,\\&\Pr\left\{i+1,k\mid i,-1\right\}=\frac{\gamma}{W_{i+1}}, && 1\leq i\leq m-1,\ 1\leq k\leq W_{i+1},\\&\Pr\left\{1,k\mid i,-1\right\}=\frac{q\left(1-\gamma\right)}{W_1}, && 1\leq i\leq m,\ 1\leq k\leq W_1,\\&\Pr\left\{l\mid i,-1\right\}=\left(1-\gamma\right)\left(1-q\right), && 1\leq i\leq m,\\&\Pr\left\{l\mid l\right\}=1-q,\\&\Pr\left\{1,k\mid l\right\}=\frac{q}{W_1}, && 1\leq k\leq W_1,\\&\Pr\left\{m,k\mid m,-1\right\}=\frac{\gamma}{W_m}, && 1\leq k\leq W_m,\\&\Pr\left\{i,k\mid i,k\right\}=1-f_i, && 1\leq k\leq W_i.\end{aligned}\tag{15}$$
The first equation in (15) reflects the fact that, after each successful pCCATime, the backoff counter is decremented. The second reflects the fact that, after a transmission, the nodes involved (in a state $(i,0)$) wait for an ACK-timeout period to learn the status (success or collision) of their transmitted packet. Upon an unsuccessful transmission, the node chooses another random backoff value uniformly distributed in the range $1,\ldots,W_{i+1}$, as shown in the third transition probability. The fourth case covers the situation in which, after a successful transmission, another packet is generated and the node takes a new backoff for the new packet. The fifth case models the fact that, after a successful transmission, the node has no packet to transmit and so enters the idle state. The node remains in the idle state until a new packet arrives, at which point it takes a new random backoff value in the range $1,\ldots,W_1$ (first backoff stage); these situations are captured by the sixth and seventh expressions. The second-to-last case models the fact that once the backoff stage reaches the value $m$, it is not increased in subsequent packet retransmissions. Finally, the last case reflects the fact that the backoff counter is locked whenever a node detects a transmission on the channel during pCCATime, when it is not allowed to access the medium during the current access phase, or when the current AP is not long enough for a frame transmission.
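The nine cases of (15) translate directly into a transition-probability function. Below is a compact C++ sketch of that mapping for one tagged node class (class subscripts are dropped; `gamma`, `q`, and `f` stand for the per-class quantities defined above, and the state encoding is our own choice, not from the paper).

```cpp
#include <vector>

// State of the per-node DTMC: backoff stage s in [1, m] with counter
// k in [-1, W[s]] (k = 0 is the transmission state, k = -1 the ACK
// wait), plus a separate idle state l for an empty queue. The caller
// must pass valid states (1 <= k <= W[s] for counting states).
struct State { int s; int k; bool idle; };

struct Params {
    int m;                  // maximum backoff stage
    std::vector<int> W;     // W[s] = window size of stage s (W[0] unused)
    double gamma;           // collision probability, eq. (2)
    double q;               // packet-availability probability, eq. (5)
    double f;               // counter-decrement probability, eq. (9)/(10)
};

// One-step transition probability Pr{to | from} per eq. (15).
double transition(const Params& p, State from, State to) {
    if (from.idle)                                   // idle state l
        return to.idle ? 1.0 - p.q                   // stay idle
             : (to.s == 1 && to.k >= 1) ? p.q / p.W[1] : 0.0;
    if (from.k >= 1) {                               // counting down
        if (!to.idle && to.s == from.s && to.k == from.k - 1) return p.f;
        if (!to.idle && to.s == from.s && to.k == from.k) return 1.0 - p.f;
        return 0.0;
    }
    if (from.k == 0)                                 // transmission state
        return (!to.idle && to.s == from.s && to.k == -1) ? 1.0 : 0.0;
    // from.k == -1: ACK-timeout state; the attempt's outcome is resolved.
    int next = (from.s < p.m) ? from.s + 1 : p.m;    // stage m is capped
    if (!to.idle && to.s == next && to.k >= 1)       // failure: retry
        return p.gamma / p.W[next];
    if (!to.idle && to.s == 1 && to.k >= 1)          // success, new packet
        return p.q * (1.0 - p.gamma) / p.W[1];
    if (to.idle)                                     // success, empty queue
        return (1.0 - p.gamma) * (1.0 - p.q);
    return 0.0;
}
```

A convenient sanity check when assembling the full matrix is that `transition` summed over all valid target states from any source state equals 1.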
A sensor node deployed with a CSMA/CA mechanism must wait for a random backoff time before transmission. Let $b(t)$ be the stochastic process representing the backoff counter of a given sensor node. The backoff counter of each contending node decrements after each successful pCCATime, and the counter is stopped when the medium is sensed busy. Given that the value of the backoff counter also depends on the node's transmission attempts, each transmission attempt moves the node to a new backoff window, called the backoff stage. Let $s(t)$ be the stochastic process representing the backoff stage of the node at time $t$. The two-dimensional stochastic process $(s(t), b(t))$ depicted in Figure 3 can be modeled as a discrete time Markov chain with the following one-step transition probabilities:

(15)
$\Pr\{i,k-1 \mid i,k\} = f_i$, for $1\le k\le W_i$;
$\Pr\{i,-1 \mid i,0\} = 1$, for $1\le i\le m$;
$\Pr\{i+1,k \mid i,-1\} = \gamma/W_{i+1}$, for $1\le i\le m-1$, $1\le k\le W_{i+1}$;
$\Pr\{1,k \mid i,-1\} = q\,(1-\gamma)/W_1$, for $1\le i\le m$, $1\le k\le W_1$;
$\Pr\{l \mid i,-1\} = (1-\gamma)(1-q)$, for $1\le i\le m$;
$\Pr\{l \mid l\} = 1-q$;
$\Pr\{1,k \mid l\} = q/W_1$, for $1\le k\le W_1$;
$\Pr\{m,k \mid m,-1\} = \gamma/W_m$, for $1\le k\le W_m$;
$\Pr\{i,k \mid i,k\} = 1-f_i$, for $1\le k\le W_i$.

The first equation in (15) reflects the fact that, after each successful pCCATime, the backoff counter is decremented. The second reflects the fact that, after a transmission, the nodes involved (at a state $(i,0)$) wait for an ACK timeout period to learn the status (success/collision) of their transmitted packet. Upon an unsuccessful transmission, the node chooses another random backoff value uniformly distributed in the range $1,\dots,W_{i+1}$; this is the third transition probability of (15). The fourth case deals with the situation in which, after a successful transmission, another packet is generated and the node takes a new backoff for the new packet. The fifth case models the fact that, after a successful transmission, the node has no packet to transmit and so enters the idle state. The node remains in the idle state until a new packet arrives, at which point it takes a new random backoff value in the range $1,\dots,W_1$ (first backoff stage); these situations are captured by the sixth and seventh expressions. The second-to-last case models the fact that once the backoff stage reaches the value $m$, it is not increased in subsequent packet retransmissions. Finally, the last case captures the fact that the backoff counter is locked whenever a node detects a transmission on the channel during pCCATime, is not allowed to access the medium during the current access phase, or finds that the current AP is not long enough for a frame transmission.

For mathematical convenience, the abbreviated notation $(i,k)$ is used to represent the pair of random processes $(s(t), b(t))$. The backoff stage $i$ starts at 1 and can reach a maximum value of $m$. Once the backoff stage reaches $m$, it is not increased for subsequent packet retransmissions; a contending node that has reached the maximum backoff stage $m$ continues to attempt transmission at that stage until the packet is successfully transmitted. The counter $k$ is initially chosen uniformly in $[1,W]$, where $W$ is initially set to CWmin; its value then increases in a nonbinary exponential manner, as explained in Section 1. The state $(i,0)$ in our Markov chain is the state of transmission (at backoff stage $i$), which can be either successful or colliding. Using the stationary probabilities $b(i,k)$ and $b(l)$, we now obtain a closed-form solution for the Markov chain depicted in Figure 3. The main quantity of interest is the probability that a node transmits in a generic slot, regardless of the backoff stage. Denoting by $\beta_i$, $i\in\{0,1,\dots,7\}$, the transmission probability of a UP$i$ node, this probability can be expressed as

(16) $\beta_i = \sum_{i=1}^{m} b_{i,0}.$

The stationary probability of being in the ACK timeout state $(i,-1)$ is

(17) $b_{i,-1} = 1\cdot b_{i,0}, \quad 1\le i\le m,$

and therefore (16) can also be written as

(18) $\beta_i = \sum_{i=1}^{m} b_{i,-1}.$

The stationary distribution $\sum_{k=1}^{W_1-1} b(1,k) + b(1,W_1)$ represents the topmost row of the Markov chain and simplifies to

(19) $\sum_{k=1}^{W_1-1} b_{1,k} + b_{1,W_1} = \frac{1}{f_i}(1-\gamma_i)\sum_{j=1}^{m}\gamma_i^{j}(1-\gamma_i)\,\beta_i\,\frac{W_1+1}{2}.$

Similarly, the stationary distribution $\sum_{k=1}^{W_m-1} b(m,k) + b(m,W_m)$ represents the lowermost row of the Markov chain and can be expressed as

(20) $\sum_{k=1}^{W_m-1} b_{m,k} + b_{m,W_m} = \frac{1}{f_i}\,\gamma_i\,(b_{m-1,-1} + b_{m,-1})\,\frac{W_m+1}{2}.$

The stationary distribution $\sum_{i=2}^{m-1}\sum_{k=1}^{W_i-1} b(i,k) + \sum_{i=2}^{m-1} b(i,W_i)$ can be expressed as

(21) $\sum_{i=2}^{m-1}\sum_{k=1}^{W_i-1} b_{i,k} + \sum_{i=2}^{m-1} b_{i,W_i} = \frac{1}{f_i}\,\gamma_i\sum_{i=2}^{m-1} b_{i-1,-1}\,\frac{W_i+1}{2}.$

The sum of the remaining stationary distributions of the Markov chain is given by

(22) $\sum_{i=1}^{m} b_{i,0} + \sum_{i=1}^{m} b_{i,-1} + b_l = \beta_i\left[2 + \frac{(1-q)(1-\gamma_i)}{q}\right],$

where the stationary distribution $b(l)$ accounts for the situation in which the queue of the node is empty and the node is waiting for a packet to arrive. The normalization condition is

(23) $\sum_{i=1}^{m}\sum_{k=-1}^{W_i} b_{i,k} + b_l = 1.$

Summing the stationary distributions of (19), (20), (21), and (22) gives

(24) $\sum_{k=1}^{W_1-1} b_{1,k} + b_{1,W_1} + \sum_{k=1}^{W_m-1} b_{m,k} + b_{m,W_m} + \sum_{i=2}^{m-1}\sum_{k=1}^{W_i-1} b_{i,k} + \sum_{i=2}^{m-1} b_{i,W_i} + \sum_{i=1}^{m} b_{i,0} + \sum_{i=1}^{m} b_{i,-1} + b_l = 1,$

which expands to

(25) $\frac{1}{f_i}(1-\gamma_i)\,\beta_i\,\frac{W_1+1}{2} + \frac{\gamma_i(1-\gamma_i)\,\beta_i}{2 f_i}\left[\sum_{j=1}^{m-1}\gamma_i^{j}(W_{i,j+1}+1) + \gamma_i^{m}(W_{i,m}+1)\right] + \beta_i\left[2 + \frac{(1-q)(1-\gamma_i)}{q}\right] = 1,$

and solving for $\beta_i$ yields

(26) $\beta_i = \left[\,2 + \frac{(1-q)(1-\gamma_i)}{q} + \frac{1-\gamma_i}{f_i}\sum_{j=0}^{m}\gamma_i^{j}\,\frac{W_{i,j+1}+1}{2} + \frac{(1-\gamma_i)\,\gamma_i^{m+1}}{f_i}\,\frac{W_{i,m}+1}{2}\,\right]^{-1}.$

Equations (2) and (26) form a nonlinear coupled system in the 16 unknowns $\gamma_i$ and $\beta_i$, $i=0,\dots,7$, which can be solved by a contraction-mapping (fixed-point) method; here we use MATLAB's fsolve function. The resulting values of $\gamma_i$ and $\beta_i$ are then used to estimate the desired performance metrics, namely, the normalized throughput, the mean frame service time, and the energy consumption, via (11), (13), and (14), respectively.

## 4. Results and Discussion

To validate the accuracy of the developed analytical model, we compared its results with those of an event-driven, custom-made simulation program written in the C++ programming language. The simulator closely follows the behavior of the CSMA/CA mechanism of the IEEE 802.15.6 standard.
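The paper solves (2) and (26) with MATLAB's fsolve; an equivalent SciPy sketch is shown below. For brevity it treats $q$ and $f_i$ as fixed inputs rather than recomputing them from (5) and (9)-(10) inside the loop, so it is a simplified reading of the full fixed-point system; the stage-window rule assumes $W_{i,j}=2^{\lfloor j/2\rfloor}\,CW_{i,min}$ capped at $CW_{i,max}$, following the window-doubling description in Section 1.

```python
import numpy as np
from scipy.optimize import fsolve

CW_MIN = np.array([16, 16, 8, 8, 4, 4, 2, 1])   # Table 1, UP0..UP7
CW_MAX = np.array([64, 32, 32, 16, 16, 8, 8, 4])
n = np.full(8, 2)                                # two nodes per class
m, q, f = 5, 0.3, 0.9                            # placeholder m, q, and f_i

def W(i, j):
    """Contention window of class i at backoff stage j (doubles every
    other failure, capped at CW_max)."""
    return min(2 ** (j // 2) * CW_MIN[i], CW_MAX[i])

def residuals(x):
    beta, gamma = x[:8], x[8:]
    res = np.empty(16)
    for i in range(8):
        # Eq. (2): collision probability implied by the betas.
        others = np.prod(np.delete((1 - beta) ** n, i))
        res[i] = gamma[i] - (1 - (1 - beta[i]) ** (n[i] - 1) * others)
        # Eq. (26): beta implied by gamma (q and f_i held fixed here).
        g = gamma[i]
        s = sum(g ** j * (W(i, j + 1) + 1) / 2 for j in range(m + 1))
        denom = (2 + (1 - q) * (1 - g) / q + (1 - g) / f * s
                 + (1 - g) / f * g ** (m + 1) * (W(i, m) + 1) / 2)
        res[8 + i] = beta[i] - 1.0 / denom
    return res

x0 = np.concatenate([np.full(8, 0.05), np.full(8, 0.3)])
beta, gamma = np.split(fsolve(residuals, x0), 2)
```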
The simulations are performed for a WBAN with five user priorities, two nodes in each class, and a hub. We consider the CSMA/CA MAC mechanism running over the narrowband (NB) PHY, as described by the standard. The NB PHY operates in seven different frequency bands and offers a variable number of channels, bit rates, and modulation schemes. One of these seven frequency bands is used for an implantable WBAN, whereas the other six are used for a wearable WBAN. The focus of this analysis is on the seventh band (the sixth band of a wearable WBAN) of the NB PHY, 2400-2483.5 MHz, because it is a commonly used, license-free Industrial, Scientific, and Medical (ISM) band. The parameter values used to obtain our results, for both the analytical model and the simulation, are summarized in Table 2; they are specified for the narrowband PHY in the IEEE 802.15.6 standard. The packet payload is assumed constant and equal to 1020 bits, the average of the largest allowed payload sizes for the NB PHY. For estimating energy, we used the parameters considered in [24]. In all the plots in this section, standard markers represent data obtained from the simulations and the different types of lines refer to the analytical results.

Table 2: Narrowband "seventh channel" parameters and energy descriptions.

| Parameter | Value |
|---|---|
| Slot time | 145 μs |
| pSIFS | 75 μs |
| pCCA | 105 μs |
| pCSMAMACPHYTime | 40 μs |
| MAC header | 56 bits |
| MAC footer | 16 bits |
| PHY header | 31 bits |
| Payload | 1020 bits |
| PLCP header (data rate) | 91.9 kb/s |
| PSDU (data rate) | 971.4 kb/s |
| Ptx | 29.9 mW |
| Prx | 24.5 mW |
| Pbo | 24.5 mW |
| Psleep | 37 μW |

For a given number of nodes, the throughput of the lower-priority nodes decreases drastically as λ increases: with a low arrival rate, few nodes have packets to transmit, but as the arrival rate grows, the share of successful transmission attempts drops most for the lower-priority nodes. All these curves show that classes with smaller CWmin and CWmax have higher priority in accessing the channel, and hence higher throughput, because smaller CWmin and CWmax values reduce the average backoff time before a transmission attempt. Figures 5 and 6 make clear that IEEE 802.15.6 CSMA/CA with distinct access phases degrades the normalized throughput of all nodes other than UP7: those nodes are unable to transmit during the EAP, and hence their performance degrades. UP7 has the same number of nodes in all experiments, so its performance is unchanged even with the use of an EAP period. In the scenario where the EAP is half the length of the RAP, the EAP lasts 0.3 s and the RAP 0.6 s.

Figure 5: Normalised per-class throughput, considering both EAP and RAP as one RAP.

Figure 6: Normalised per-class throughput, when the EAP length is half of the RAP.

Figures 7 and 8 show the overall network throughput for the two scenarios, that is, without and with access phases, respectively. The network consists of five user priority classes, each with the same number of nodes but a different combination of CWmin and CWmax values.
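As an aside on the Table 2 parameters, one plausible evaluation of Eq. (3) with these numbers is sketched below; the exact split of the overhead between the PLCP and PSDU rates in the authors' model is an assumption on our part.

```python
# Table 2 parameters: field sizes in bits, data rates in b/s (assumed split).
PHY_HEADER, MAC_HEADER, MAC_FOOTER, PAYLOAD = 31, 56, 16, 1020
PLCP_RATE, PSDU_RATE = 91.9e3, 971.4e3

t_phy = PHY_HEADER / PLCP_RATE                 # PHY header at the PLCP rate
t_mac = (MAC_HEADER + MAC_FOOTER) / PSDU_RATE  # MAC overhead at the PSDU rate
t_payload = PAYLOAD / PSDU_RATE                # ~1.05 ms of payload

Ts = Tc = t_phy + t_mac + t_payload            # Eq. (3)
print(f"Ts = Tc ~ {Ts * 1e3:.2f} ms")          # about 1.46 ms
```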
These results show that IEEE 802.15.6 CSMA/CA employing access phases degrades the overall system throughput.

Figure 7: Normalised system throughput, considering both EAP and RAP as one RAP.

Figure 8: Normalised system throughput, when the EAP length is half of the RAP.

The mean frame service time in a nonsaturated heterogeneous scenario is illustrated in Figures 9 and 10 as a function of the arrival rate. For a given UPi, the mean frame service time increases with the arrival rate, and it grows more quickly for the low-priority classes than for the high-priority classes as λ increases, because smaller values of CWmin and CWmax reduce the average backoff time. Figures 9 and 10 show that IEEE 802.15.6 CSMA/CA with distinct access phases increases the mean frame service time of all nodes other than UP7: those nodes cannot transmit during the EAP, so their service times grow. UP7, in contrast, shows almost the same performance in all experiments, even with the use of an EAP period. From these results, the lengths of the access phases and the number of nodes can be optimized to achieve a reasonable delay.

Figure 9: Head-of-line delay, considering both EAP and RAP as one RAP.

Figure 10: Head-of-line delay, when the EAP length is half of the RAP.

Figures 11 and 12 show the average energy consumption of a UPi node on a per-node per-packet basis for the two scenarios, plotted against the arrival rate (packets/microsecond). The energy consumption for a higher user priority is much lower than that for a low user priority, mirroring the mean frame service time in Figures 9 and 10. This is understandable: a longer frame service time reflects longer periods of backoff and unsuccessful transmissions, and the associated energy consumption accumulates until a successful transmission occurs. The higher energy consumption of the lower user-priority classes under distinct access phases is due to their longer mean frame service times.

Figure 11: Energy consumption, considering both EAP and RAP as one RAP.

Figure 12: Energy consumption, when the EAP length is half of the RAP.

## 5. Conclusions

In this study, we developed a discrete time Markov chain to model the backoff procedure of IEEE 802.15.6 CSMA/CA under nonsaturated conditions, considering different access phase lengths. Using the proposed Markov chain model, we evaluated the IEEE 802.15.6 CSMA/CA mechanism to predict the energy consumption, normalized throughput, and mean frame service time of the network. The performance measures obtained from the analytical model were validated against accurate simulation results. Our results show that the IEEE 802.15.6 CSMA/CA mechanism utilizes the medium poorly for low-priority users. In addition, the use of distinct access phases degrades the overall system throughput, resulting in higher delay for nonemergency nodes and hence more energy consumed per packet. The lengths of the access phases can be optimized to achieve better throughput and reasonable delay. In future work, this model will be extended to consider all the APs, an error-prone channel, and multiuser environments. We also intend to fine-tune the lengths of the access phases and the number of nodes for different user priorities, which should lead to comparatively better system throughput and minimal delay.
--- *Source: 102052-2015-10-28.xml*
# Application of Multiple Unsupervised Models to Validate Clusters Robustness in Characterizing Smallholder Dairy Farmers

**Authors:** Devotha G. Nyambo; Edith T. Luhanga; Zaipuna O. Yonah; Fidalis D. N. Mujibi

**Journal:** The Scientific World Journal (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1020521

---

## Abstract

The heterogeneity of smallholder dairy production systems complicates service provision, information sharing, and dissemination of new technologies, especially those needed to maximize productivity and profitability. In order to obtain homogenous groups within which interventions can be made, it is necessary to define clusters of farmers who undertake similar management activities. This paper explores the robustness of production cluster definition using various unsupervised learning algorithms to assess the best approach to defining clusters. Data were collected from 8179 smallholder dairy farms in Ethiopia and Tanzania. From a total of 500 variables, the 35 variables used in defining production clusters and household membership to these clusters were selected by Principal Component Analysis and domain expert knowledge. Three clustering algorithms, K-means, fuzzy, and Self-Organizing Maps (SOM), were compared in terms of their grouping consistency and prediction accuracy. The model with the least household reallocation between clusters for training and testing data was deemed the most robust. Prediction accuracy was obtained by fitting a fixed effects model, including production clusters, on milk yield, sales, and choice of breeding method. Results indicated that, for the Ethiopian dataset, clusters derived from the fuzzy algorithm had the highest predictive power (77% for milk yield and 48% for milk sales), while for the Tanzania data, clusters derived from Self-Organizing Maps performed best. The average cluster membership reallocation was 15%, 12%, and 34% for K-means, SOM, and fuzzy, respectively, for households in Ethiopia. Based on the divergent performance of the various algorithms evaluated, it is evident that, despite similar information being available for the study populations, the uniqueness of the data from each country exerted an overriding influence on cluster robustness and prediction accuracy. The results obtained in this study demonstrate the difficulty of generalizing model application and use across countries and production systems, despite seemingly similar information being collected.

---

## Body

## 1. Introduction

Despite the high potential of livestock keeping, Ethiopia and Tanzania still suffer from low meat and milk production, given that most livestock populations are dominated by low-producing indigenous breeds [1, 2]. Smallholder farmers dominate the livestock keeping enterprise in Africa, accounting for about 50% of the total livestock production [3]. Dairy farming is an important source of income for smallholder farmers, with high potential for daily cash flow [4]. The majority of these smallholder producers have not reached their production potential in terms of yield and commercialization. However, data from a recent large-scale survey provide evidence that some farmers produce at a level well beyond the average production (PEARL data, 2016; unpublished).
There are many constraints that contribute to this unrealized potential, including the lack of appropriate support in technologies and information dissemination. Despite the constraints hindering smallholder dairy productivity, milk obtained from smallholder dairy farmers constitutes the bulk of the supply available for sale in Eastern Africa [4]. Among the factors hindering the provision of appropriate support to the dairy sector, and the evolvement of the dairy farmer beyond subsistence, is the lack of understanding of the production system these farmers operate in. Characterization of farm typologies is a necessary first step in designing appropriate interventions that allow these farmers to improve farm output and performance. The characterization of production systems and the identification of homogenous units that represent contemporary groups in management terms allow us to understand the specific attributes associated with the drivers of productivity. This holds the key to unlocking household evolvement through proper planning, adoption, and utilization of appropriate improved technologies and critical policy support [5]. This study sought to provide a mechanism through which farmers that perform similar production activities, or have similar production system attributes, can be grouped into production clusters that describe their organization, needs, and outputs.

Given the huge diversity of practices seen in smallholder farms, the need to form homogenous units that group farmers with nearly similar characteristics has been addressed in several studies. Primarily, this has been done by domain experts allocating farmers to various predetermined classes that define their place in the production ecosystem, as well as by statistical and machine learning approaches [6-10]. The latter approach involves the use of various supervised and unsupervised algorithms to study, analyze, model, and predict trends in smallholder production systems. Recently, unsupervised learning algorithms have been applied in various studies to understand production systems [11, 12]. Some of the more popular unsupervised algorithms include hierarchical clustering, nonhierarchical clustering (K-means), unsupervised neural network algorithms (Self-Organizing Maps), Naïve Bayes, and fuzzy clustering algorithms. However, despite their frequent use, unsupervised learning approaches suffer greatly from a lack of consistency and predictability [13]. Various attempts have been made to overcome this weakness, including the application of multiple algorithms to cluster farm data and the selection of the one yielding highly homogeneous groups [14, 15].

In this study, three unsupervised machine learning (ML) models were applied to classify and study the characteristics of smallholder dairy production systems, based on data obtained from baseline surveys in Ethiopia and Tanzania. The aim of the study was to identify the most robust approach to accurately assigning diverse dairy farming households to homogenous production units that reflect differences in production practice and performance.

## 2. Methodology

### 2.1. Dataset Preparation and Feature Selection

Data was collected under the PEARL (Program for Emerging Agricultural Research Leaders, funded by the Bill and Melinda Gates Foundation through the Nelson Mandela African Institution of Science and Technology) project from June 2015 to June 2016 in Ethiopia and Tanzania. The total number of households surveyed was 3,500 for Tanzania and 4,679 for Ethiopia.
Data collection was undertaken using questionnaires developed on the Open Data Kit (ODK) platform. Data quality checks included the removal of erroneous data such as negative values, questionnaires whose total collection time was below a defined threshold (16 min), and data collected at night (survey start time beyond 7 pm). The data cleaning process trimmed the datasets to 3317 and 4394 records for Tanzania and Ethiopia, respectively. From a total of 500 unique variables (features) available for analysis, a set of 46 variables was selected for inclusion in the cluster analysis based on their relevance to productivity and farmer evolvement.

Feature Selection. In order to identify the most unique features among the 46 variables, Principal Component Analysis (PCA) was undertaken to eliminate correlated variables. The top 21 features (based on the load score) with the lowest communality were then selected for further analysis. An additional 14 variables related to feeding systems and health management practices, which are known to influence productivity in smallholder dairy farming, were included based on expert domain knowledge, so that a total of 35 features were available for cluster analysis and farm type characterization (Table 1). As a prerequisite for clustering, missing values for continuous variables were identified and replaced with population means, while missing values for categorical variables were replaced with the mode. The effect of location (study site) for each country was removed from the response variables by fitting a linear model (y = μ + study site + error) and extracting adjusted values. Each quantitative variable was tested for normality and scaled to have a mean of zero and unit variance. Additionally, for each variable, outliers were identified as values above or below the bounds estimated using box plots and were removed to minimize bias and misclustering. Specifically, bias was minimized by applying the filters listed after Table 1 (a code sketch of these preprocessing steps also appears after the table).

Table 1: Features used in cluster analysis.
| S/No | Feature name | Type | Range |
|---|---|---|---|
| 1 | Exclusive grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 2 | Exclusive grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 3 | Mainly grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 4 | Mainly grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 5 | Mainly stall feed in dry season | Boolean | 0 (no) or 1 (yes) |
| 6 | Mainly stall feed in rainy season | Boolean | 0 (no) or 1 (yes) |
| 7 | Use of concentrates | Discrete | 1-12 (months) |
| 8 | Watering frequency | Discrete | 0-4 |
| 9 | Distance to water source | Continuous | 0-15 |
| 10 | Total land holding | Continuous | 0-100 |
| 11 | Area under cash cropping | Continuous | 0-10 |
| 12 | Area under food cropping | Continuous | 0-83.25 |
| 13 | Area under fodder production | Continuous | 0-80 |
| 14 | Area under grazing | Continuous | 0-13 |
| 15 | Number of employees | Discrete | 1-10 |
| 16 | Number of casual laborers | Discrete | 1-10 |
| 17 | Vaccination frequency | Discrete | 0-6 |
| 18 | Deworming frequency | Discrete | 0-5 |
| 19 | Self-deworming service | Boolean | 0 (no) or 1 (yes) |
| 20 | Membership in farmer groups | Discrete | 0-5 |
| 21 | Experience in dairy farming | Discrete | 1-50 |
| 22 | Years of schooling | Discrete | 0-21 |
| 23 | Preferred breeding method | Boolean | 0 (bull) or 1 (artificial insemination) |
| 24 | Distance to breeding service provider | Continuous | 0-100 |
| 25 | Frequency of visit by extension officer | Discrete | 1-54 |
| 26 | Herd size | Discrete | 1-50 |
| 27 | Number of milking cows | Discrete | 1-20 |
| 28 | Number of exotic cattle | Discrete | 1-48 |
| 29 | Number of sheep | Discrete | 1-80 |
| 30 | Peak milk production for the best cow | Continuous | 1-40 |
| 31 | Amount of milk sold in bulk | Continuous | 1-100 |
| 32 | Liters of milk sold | Continuous | 1-100 |
| 33 | Distance to milk buyers | Continuous | 1-37 |
| 34 | Total crop sale | Continuous | 0-21000 (Birr), 0-950000 (Tsh) |
| 35 | Distance to market | Continuous | 1-8 |

The total number of cattle owned was restricted to a maximum of 50 per herd for Ethiopian farmers and a maximum of 30 per herd for Tanzanian farmers, based on livestock densities [1, 2]. Some smallholder farmers held land holdings above 100 acres; all such farmers were removed. The maximum amount of milk sold by smallholder farmers was restricted to 100 liters per day, based on expert domain knowledge of the herd sizes and yield per cow. It was assumed that an extension officer could visit a farmer once each week, so any farmer reporting more than 54 visits per year was considered an outlier.
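A minimal pandas/scikit-learn sketch of the Section 2.1 preprocessing (mean/mode imputation, domain filters, and z-scoring) is given below; the DataFrame column names are hypothetical stand-ins for the Table 1 features.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess(df, continuous, categorical):
    """Impute missing values, apply the domain filters, and z-score
    the continuous features, as described in Section 2.1."""
    df = df.copy()
    for c in continuous:
        df[c] = df[c].fillna(df[c].mean())           # population mean
    for c in categorical:
        df[c] = df[c].fillna(df[c].mode().iloc[0])   # mode value
    # Domain filters (Ethiopian herd-size bound shown; column names assumed).
    df = df[(df["herd_size"] <= 50) & (df["total_land"] <= 100)
            & (df["milk_sold"] <= 100) & (df["extension_visits"] <= 54)]
    df[continuous] = StandardScaler().fit_transform(df[continuous])
    return df
```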
### 2.2. Clustering Algorithms

Three unsupervised learning algorithms, fuzzy clustering, Self-Organizing Maps (SOM), and K-means, were used for cluster analysis. In the analysis, the number of groups (K) represented how many farm typologies (clusters) could be defined for each dataset. The number of clusters that best represented the data was determined using the Elbow method (where a bend, or elbow, in a graph showing the decline of the within-cluster sum of squared differences as the number of clusters increases indicates the best solution). Gap statistics and silhouette separation coefficients were used in preliminary analysis to validate the results of the Elbow method [16], while the Euclidean distance was used to assess cluster robustness. The Elbow method was found to be robust and was subsequently used for the rest of the analysis. Given that the selected algorithms have various methods with different convergence rates, two methods for each algorithm were tested and those that minimized convergence time were selected. The final clustering methods used were (i) Fanny for fuzzy clustering [17], (ii) superSOM with batch mode [18], and (iii) Hartigan-Wong [19, 20] for K-means.

Evaluation of the clustering algorithms considered ranking consistency in the testing dataset, the mean distance of observations from the central nodes, and the mean silhouette separation coefficients, as well as the accuracy of predicting observed values of selected response variables using a model fitting the predicted clusters as fixed effects. Data analysis was done using both SAS version 9.2 (SAS Institute Inc., Cary, NC, USA) and R software (Kabacoff, 2011).

### 2.3. Clustering Models

Self-Organizing Maps (SOM) have been used to characterize smallholder farmers owing to their ability to produce accurate typologies, as explained by Nazari et al. [15] and Galluzzo [21]. The SOM algorithm calculates the Euclidean distance using (1), with the best matching unit (BMU) satisfying (2) [21, 22]:

(1) $\mathrm{Distance} = \sum_{i=0}^{n} (v_i - w_i)^2,$

where $v$ and $w$ are vectors in an $n$-dimensional Euclidean space giving the positions of a member and a neuron, respectively, and

(2) $\forall\, n_i \in S:\ \mathrm{diff}(n_{winner}^{weight}, v) \le \mathrm{diff}(n_i^{weight}, v),$

where $v$ is any new weight vector, $n_{winner}^{weight}$ is the current weight of the winning neuron, and $n_i^{weight}$ is the weight of any other $i$th neuron on the map.

The K-means algorithm has been widely used in nonhierarchical clustering and in characterizing smallholder dairy farms [7, 8, 10]. Like SOMs, the algorithm uses Euclidean distance measures to estimate the weights of data records. Its objective is given in (3), which embeds the Euclidean distance of (1):

(3) $J = \sum_{j=1}^{k}\sum_{i=1}^{n} \lVert x_{i}^{(j)} - c_j \rVert^2,$

where $\lVert x_i^{(j)}-c_j\rVert^2$ computes the Euclidean distance as in (1), $k$ is the number of clusters, $n$ the number of observations, $x_i$ the Euclidean vector for any $i$th observation, $c_j$ the cluster center for any $j$th cluster, and the indices $j$ and $i$ run over clusters and observations, respectively.

Fuzzy analysis (the fanny method) was selected based on its relatively short convergence time and good measures of cluster separation [17]. Various methods based on fuzzy models have been used for cluster analysis [23-26]. The fanny method adds a fuzzifier and a membership value to the common K-means objective (see (3)). In addition, the model uses the Dunn coefficient and a silhouette separation coefficient for assessing the fuzziness of the solution and intercluster cohesion, respectively. The general equation for fuzzy clustering [27] is given in (4) and the Dunn definition of partitioning [28] in (5):

(4) $J = \sum_{i=1}^{k}\sum_{j=1}^{n} U_{ij}^{m}\,\lVert x_i - c_j \rVert^2, \quad 1 \le m < \infty,$

where $k$ is the number of clusters, $n$ the number of observations, $U_{ij}^{m}$ the membership coefficient, $x_i$ the Euclidean vector for any $i$th observation, and $c_j$ the cluster center for any $j$th cluster. Given (4), the Dunn definition of partitioning is

(5) $F_k(U) = \frac{1}{n}\sum_{i=1}^{k}\sum_{j=1}^{n} U_{ij}^{m}.$
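To make (4) and (5) concrete, the following NumPy sketch evaluates the fuzzy objective and Dunn's partition coefficient for a membership matrix U; the n x k orientation of U is our convention, not the paper's.

```python
import numpy as np

def fuzzy_objective(X, C, U, m=2.0):
    """Eq. (4): membership-weighted within-cluster sum of squared
    Euclidean distances. X is n x p, C is k x p, U is n x k."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # n x k
    return (U ** m * d2).sum()

def dunn_partition(U, m=2.0):
    """Eq. (5): Dunn's partition coefficient F_k(U); values near 1/k
    indicate a very fuzzy partition, values near 1 a crisp one."""
    return (U ** m).sum() / U.shape[0]
```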
### 2.4. Cluster Validation and Prediction Accuracy

Production clusters output by the clustering algorithms were validated in three ways: (1) assessment of cluster robustness, (2) comparison of cluster membership reallocation (differential allocation of households to clusters between the training and testing datasets), and (3) evaluation of the proportion of variation explained by the clusters.

Validation of cluster robustness was first undertaken by comparing three metrics: the total within-cluster sum of squared differences, the mean Euclidean distance of observations from the cluster nodes, and the silhouette separation coefficients. Based on these parameters, the most suitable clustering model was identified.

In the second stage of validation, the ability of the clustering models to allocate the same group of households to the same clusters in both the training and testing datasets was tested. If all cluster members are colocated in one cluster in the training and testing datasets, the reranking is 0 (the rank correlation between the two clusters is 1), and the model is deemed the most accurate and robust. The parameters considered for evaluation were the correlation coefficient, AIC, and residual deviance.

The third stage of validation involved fitting linear (or, as appropriate, logistic) regression models with a set of fixed effects on milk yield, sales, and choice of breeding method. The first model ((6) and (9)) included the clusters as one of the fixed effects, while the second model did not ((7) and (10)). The difference in variance between the two models represented the proportion of the total variance in the response variable accounted for by the clusters. The logistic model for choice of breeding method was fitted with only the cluster of production (see (8)) for the Ethiopian data, while two models were fitted for Tanzania ((11) and (12)); in preliminary analysis, a model fitted with the cluster of production alone yielded the best fit for the Ethiopia dataset but, as a result of underfitting, very low variances for the Tanzania dataset. The class labels for the logistic regression were 0 and 1 for the bull method and artificial insemination, respectively. For assessing prediction accuracy, one-third of the records for the response variables were removed so that they could be predicted; the predicted values were correlated with the actual values to obtain an estimate of the prediction accuracy. These latter prediction accuracies were compared with those obtained in the previous validation step to help evaluate the algorithms' consistency and the clusters' robustness. The Ethiopia models were

(6) $y_i = x_e * \gamma_e + c_e + e_e,$
(7) $y_i = x_e * \gamma_e + e_e,$

and the logistic model used to predict the choice of breeding method was

(8) $y_j = c_e + e_e.$

For Tanzania, the predictive models were

(9) $y_i = x_t * \gamma_t + l_t + \sigma_t + c_t + e_t,$
(10) $y_i = x_t * \gamma_t + l_t + \sigma_t + e_t,$

and the choice of breeding method was given by

(11) $y_j = x_t + \gamma_t + c_t + e_t,$
(12) $y_j = x_t + \gamma_t + e_t,$

where $y_i$ is milk yield or milk quantity sold and $y_j$ is the choice of breeding method. For the Ethiopia models, $c_e$ is the cluster of production, $e_e$ the error term, $x_e$ experience in dairy farming, and $\gamma_e$ years of schooling. For the Tanzania models, $c_t$ is the cluster of production, $e_t$ the error term, $x_t$ experience in dairy farming, $\gamma_t$ years of schooling, $l_t$ total land size, and $\sigma_t$ the area under fodder production.

For all model validation steps, prediction accuracies were obtained by developing the clustering model on a training dataset (70% of all records) and reapplying the resulting model to a testing dataset (the remaining 30%). The model with the least reallocation of households between clusters for the training and testing datasets was considered the most robust. Rank analysis using the Spearman correlation coefficient was used to evaluate the level of household reallocation between clusters.
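A sketch of the third validation stage is given below, under the assumption that models (6) and (7) are ordinary least squares fits with the cluster as a categorical fixed effect; the statsmodels formulas and column names are hypothetical stand-ins for the study's variables.

```python
import numpy as np
import statsmodels.formula.api as smf

def cluster_variance_explained(df):
    """Variance in milk yield attributable to the clusters: R^2 of the
    model with the cluster effect (cf. (6)) minus R^2 without it (cf. (7))."""
    with_c = smf.ols("milk_yield ~ experience + schooling + C(cluster)",
                     data=df).fit()
    without_c = smf.ols("milk_yield ~ experience + schooling",
                        data=df).fit()
    return with_c.rsquared - without_c.rsquared

def prediction_accuracy(fitted, holdout):
    """Correlate predicted and observed values on the held-out third."""
    pred = fitted.predict(holdout)
    return np.corrcoef(pred, holdout["milk_yield"])[0, 1]
```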
The total number of households surveyed was 3,500 for Tanzania and 4,679 for Ethiopia. Data collection was undertaken using questionnaires developed on the Open Data Kit (ODK) platform. Data quality checks included removal of erroneous data such as negative values, questionnaires whose total collection time was below a defined threshold (16 min), and data collected at night (survey start time beyond 7 pm). The data cleaning process trimmed the datasets to 3317 and 4394 records for Tanzania and Ethiopia, respectively. From a total of 500 unique variables (features) available for analysis, a set of 46 variables was selected for inclusion in the cluster analysis based on their relevance to productivity and farmer evolvement.

Feature Selection. In order to identify the most unique features among the 46 variables, Principal Component Analysis (PCA) was undertaken to eliminate correlated variables. The top 21 features (based on the load score) with the lowest communality were then selected for further analysis. An additional 14 variables related to feeding systems and health management practices, which are known to influence productivity in smallholder dairy farming, were included based on expert domain knowledge, such that a total of 35 features were available for cluster analysis and farm type characterization (Table 1). As a prerequisite for clustering, missing values for continuous variables were identified and replaced with population means, while missing values for categorical variables were replaced with the mode. The effect of location (study site) for each country was removed from the response variables by fitting a linear model ($y = \mu + \text{study site} + \text{error}$) and extracting adjusted values. Each quantitative variable was tested for normality and scaled to have a mean of zero and unit variance. Additionally, for each variable, outliers were identified as values above or below the bounds estimated using box plots. Outliers were removed to minimize bias and misclustering. Specifically, bias was minimized by applying the filters described after Table 1.
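The paper does not give the preprocessing code itself; the following R sketch shows one way the mean/mode imputation, scaling, and box-plot outlier screening described above could be implemented. The data frame `farms` and its column types are hypothetical stand-ins for the survey records.

```r
# Minimal preprocessing sketch; `farms` and its columns are hypothetical.
preprocess <- function(df) {
  for (col in names(df)) {
    if (is.numeric(df[[col]])) {
      # Impute missing continuous values with the population mean
      df[[col]][is.na(df[[col]])] <- mean(df[[col]], na.rm = TRUE)
      # Scale to a mean of zero and unit variance
      df[[col]] <- as.numeric(scale(df[[col]]))
      # Drop records outside the box-plot whisker bounds
      out <- boxplot.stats(df[[col]])$out
      df <- df[!(df[[col]] %in% out), ]
    } else {
      # Impute missing categorical values with the mode
      mode_val <- names(which.max(table(df[[col]])))
      df[[col]][is.na(df[[col]])] <- mode_val
    }
  }
  df
}

# cleaned <- preprocess(farms)
```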
Table 1. Features used in cluster analysis.

| S/No | Feature name | Type | Range |
|---|---|---|---|
| 1 | Exclusive grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 2 | Exclusive grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 3 | Mainly grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 4 | Mainly grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 5 | Mainly stall feed in dry season | Boolean | 0 (no) or 1 (yes) |
| 6 | Mainly stall feed in rainy season | Boolean | 0 (no) or 1 (yes) |
| 7 | Use of concentrates | Discrete | 1–12 (months) |
| 8 | Watering frequency | Discrete | 0–4 |
| 9 | Distance to water source | Continuous | 0–15 |
| 10 | Total land holding | Continuous | 0–100 |
| 11 | Area under cash cropping | Continuous | 0–10 |
| 12 | Area under food cropping | Continuous | 0–83.25 |
| 13 | Area under fodder production | Continuous | 0–80 |
| 14 | Area under grazing | Continuous | 0–13 |
| 15 | Number of employees | Discrete | 1–10 |
| 16 | Number of casual labors | Discrete | 1–10 |
| 17 | Vaccination frequency | Discrete | 0–6 |
| 18 | Deworming frequency | Discrete | 0–5 |
| 19 | Self-deworming service | Boolean | 0 (no) or 1 (yes) |
| 20 | Membership in farmer groups | Discrete | 0–5 |
| 21 | Experience in dairy farming | Discrete | 1–50 |
| 22 | Years of schooling | Discrete | 0–21 |
| 23 | Preferred breeding method | Boolean | 0 (bull) or 1 (artificial insemination) |
| 24 | Distance to breeding service provider | Continuous | 0–100 |
| 25 | Frequency of visit by extension officer | Discrete | 1–54 |
| 26 | Herd size | Discrete | 1–50 |
| 27 | Number of milking cows | Discrete | 1–20 |
| 28 | Number of exotic cattle | Discrete | 1–48 |
| 29 | Number of sheep | Discrete | 1–80 |
| 30 | Peak milk production for the best cow | Continuous | 1–40 |
| 31 | Amount of milk sold in bulk | Continuous | 1–100 |
| 32 | Liters of milk sold | Continuous | 1–100 |
| 33 | Distance to milk buyers | Continuous | 1–37 |
| 34 | Total crop sale | Continuous | 0–21000 (Birr), 0–950000 (Tsh) |
| 35 | Distance to market | Continuous | 1–8 |

The total number of cattle owned was restricted to a maximum of 50 per herd for Ethiopian farmers and a maximum of 30 per herd for Tanzanian farmers based on livestock densities [1, 2]. Some smallholder farmers held land holdings above 100 acres; all farmers with land holdings greater than 100 acres were removed. The maximum amount of milk sold by smallholder farmers was restricted to 100 liters per day, based on expert domain knowledge of the herd sizes and yield per cow. It was assumed that an extension officer could visit a farmer once each week; any farmer who reported more than 54 visits per year was considered an outlier.

## 2.2. Clustering Algorithms

Three unsupervised learning algorithms, fuzzy clustering, Self-Organizing Maps (SOM), and K-means, were used for cluster analysis. In the analysis, the number of groups (K) represented how many farm typologies (clusters) could be defined for each dataset. The number of clusters that best represented the data was determined using the Elbow method, where a bend, or elbow, in a graph of the decline in within-cluster sum of squares as the number of clusters increases indicates the best solution. Gap statistics and silhouette separation coefficients were used in preliminary analysis to validate the results from the Elbow method [16], while the Euclidean distance was used to assess cluster robustness. The Elbow method was found to be robust and was subsequently used for the rest of the analysis. Given that the selected algorithms have various methods with different convergence rates, two methods for each algorithm were tested and those that minimized convergence time were selected. The final clustering methods used were (i) Fanny for fuzzy clustering [17], (ii) superSOM with batch mode [18], and (iii) Hartigan-Wong [19, 20] for K-means.
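As an illustration of the Elbow heuristic described above (toy data, not the study's own code), the within-cluster sum of squares can be traced over candidate values of K using base R's `kmeans()`:

```r
set.seed(1)
X <- scale(matrix(rnorm(500 * 6), ncol = 6))  # toy stand-in for the scaled features

# Total within-cluster sum of squares for K = 1..10
wss <- sapply(1:10, function(k)
  kmeans(X, centers = k, algorithm = "Hartigan-Wong", nstart = 25)$tot.withinss)

# The "elbow" is the bend where additional clusters stop reducing the WSS much
plot(1:10, wss, type = "b",
     xlab = "Number of clusters K", ylab = "Within-cluster sum of squares")
```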
Evaluation of the clustering algorithms was done by considering ranking consistency in the testing dataset, mean distance of observations from central nodes, and mean silhouette separation coefficients, as well as accuracy of predicting observed values of selected response variables using a model fitting the predicted clusters as fixed effects. Data analysis was done using both SAS version 9.2 (SAS Institute Inc., Cary, NC, USA) and R software (Kabacoff, 2011).

## 2.3. Clustering Models

Self-Organizing Maps (SOM) have been used to characterize smallholder farmers due to their ability to produce accurate typologies, as explained by Nazari et al. [15] and Galluzzo [21]. The SOM algorithm calculates the Euclidean distance in (1) and selects the best matching unit (BMU) satisfying (2) [21, 22]:

$$\text{Distance} = \sqrt{\sum_{i=0}^{n} \left(v_i - w_i\right)^2}, \tag{1}$$

where $v$ and $w$ are vectors in an $n$-dimensional Euclidean space giving the positions of a member and a neuron, respectively, and

$$\forall n_i \in S:\ \operatorname{diff}\left(n_{\text{winner}}.\text{weight},\, v\right) \le \operatorname{diff}\left(n_i.\text{weight},\, v\right), \tag{2}$$

where $v$ is any new weight vector, $n_{\text{winner}}.\text{weight}$ is the current weight of the winning neuron, and $n_i.\text{weight}$ is the weight of any other $i$th neuron on the map.

The K-means algorithm has been widely used in nonhierarchical clustering and in characterizing smallholder dairy farms [7, 8, 10]. Similar to SOMs, the algorithm uses Euclidean distance measures to estimate weights of data records. The algorithm is given in (3), with the Euclidean distance computed as in (1):

$$J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^2, \tag{3}$$

where $\| x_i^{(j)} - c_j \|^2$ is the squared Euclidean distance as in (1), $k$ is the number of clusters, $n$ is the number of observations, $j$ indexes clusters, $i$ indexes observations, $x_i^{(j)}$ is the feature vector of the $i$th observation assigned to cluster $j$, and $c_j$ is the center of the $j$th cluster.

Fuzzy analysis (the fanny method) was selected based on its relatively short convergence time and good measures of cluster separation [17]. Various methods based on fuzzy models have been used for cluster analysis [23–26]. The fanny method adds a fuzzifier and membership coefficients to the common K-means objective (see (3)). In addition, the model uses the Dunn coefficient and a silhouette separation coefficient for assessing the solution's fuzziness and intercluster cohesion, respectively. The general equation for fuzzy clustering [27] is given in (4), and the Dunn definition of partitioning [28] is given in (5):

$$J = \sum_{i=1}^{n} \sum_{j=1}^{k} U_{ij}^{m} \left\| x_i - c_j \right\|^2, \quad 1 \le m < \infty, \tag{4}$$

where $k$ is the number of clusters, $n$ is the number of observations, $i$ indexes observations, $j$ indexes clusters, $U_{ij}$ is the membership coefficient of observation $i$ in cluster $j$, $m$ is the fuzzifier, $x_i$ is the feature vector of the $i$th observation, and $c_j$ is the center of the $j$th cluster. Given (4), the Dunn definition of partitioning is

$$F_k(U) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{k} U_{ij}^{m}. \tag{5}$$
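To make these models concrete, the sketch below pairs a hand-rolled best-matching-unit search following (1) and (2) with calls to base R's `kmeans()` and `cluster::fanny()`. The toy data and parameter values (e.g., `memb.exp = 1.5`) are illustrative assumptions rather than the study's settings.

```r
library(cluster)  # provides fanny()

set.seed(42)
X <- scale(matrix(rnorm(300 * 5), ncol = 5))  # toy stand-in for the 35 features

# Best matching unit per (1)-(2): the neuron whose weight vector is closest to v
bmu <- function(v, W) which.min(sqrt(rowSums(sweep(W, 2, v)^2)))
W <- matrix(runif(6 * 5), nrow = 6)  # six toy neuron weight vectors
bmu(X[1, ], W)

# K-means minimizing (3), using the Hartigan-Wong variant chosen in this study
km <- kmeans(X, centers = 4, algorithm = "Hartigan-Wong", nstart = 25)

# Fuzzy clustering with fanny(); memb.exp plays the role of the fuzzifier m in (4)
fz <- fanny(X, k = 4, memb.exp = 1.5)
head(fz$membership)  # membership coefficients U_ij
fz$coeff             # includes Dunn's partition coefficient, cf. (5)
```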
## 2.4. Cluster Validation and Prediction Accuracy

Production clusters output by the clustering algorithms were validated in three ways: (1) assessment of cluster robustness, (2) comparison of cluster membership reallocation (differential allocation of households to clusters for training and testing datasets), and (3) evaluation of the proportion of variation explained by the clusters.

Validation of cluster robustness was first undertaken by comparing three metrics: total within sum of square differences, mean Euclidean distance of observations from the cluster nodes, and the silhouette separation coefficients. Based on these parameters, the most suitable clustering model was identified.

In the second stage of validation, the ability of the clustering models to allocate the same group of households into clusters in both training and testing datasets was tested. If all cluster members are colocated in one cluster in the training and testing datasets, the reranking is 0 (the rank correlation between the two clusterings is 1), and the model would be deemed the most accurate and robust. Parameters considered for evaluation were the correlation coefficient, AIC, and residual deviance.

The third stage of validation involved fitting linear (or logistic, as appropriate) regression models with a set of fixed effects on milk yield, sales, and choice of breeding method. The first model (see (6) and (9)) included the clusters as one of the fixed effects, while a second model did not include the clusters (see (7) and (10)). The difference in variance between the two models represented the proportion of total variance in the response variable accounted for by the clusters. The logistic model for choice of breeding method was fitted with only the cluster of production (see (8)) for the Ethiopian data, while two models were fitted for Tanzania (see (11) and (12)). In preliminary analysis, a model fitted with cluster of production yielded the best fit for the Ethiopia dataset but very low variances, as a result of underfitting, for the Tanzania dataset. For that reason, two models were fitted for Tanzania and one for Ethiopia to predict the binary variable. Class labels for the logistic regression were 0 and 1 for choice of the bull method and artificial insemination, respectively. For assessing prediction accuracy, one-third of the records for the response variables were removed so that they could be predicted. The predicted values were correlated with the actual values to obtain an estimate of the prediction accuracy. These latter prediction accuracies were compared with those obtained in the previous validation step to help evaluate the algorithms' consistency and the clusters' robustness. For Ethiopia, the predictive models were

$$y_i = x_e * \gamma_e + c_e + e_e, \tag{6}$$

$$y_i = x_e * \gamma_e + e_e. \tag{7}$$

The logistic model used to predict choice of breeding method is

$$y_j = c_e + e_e. \tag{8}$$

For Tanzania, the predictive models were given by

$$y_i = x_t * \gamma_t + l_t + \sigma_t + c_t + e_t, \tag{9}$$

$$y_i = x_t * \gamma_t + l_t + \sigma_t + e_t, \tag{10}$$

and choice of breeding method was given by (11) and (12):

$$y_j = x_t + \gamma_t + c_t + e_t, \tag{11}$$

$$y_j = x_t + \gamma_t + e_t, \tag{12}$$

where $y_i$ is milk yield or milk quantity sold and $y_j$ is choice of breeding method. For the Ethiopia models, $c_e$ is cluster of production, $e_e$ is the error term, $x_e$ is experience in dairy farming, and $\gamma_e$ is years of schooling. For the Tanzania models, $c_t$ is cluster of production, $e_t$ is the error term, $x_t$ is experience in dairy farming, $\gamma_t$ is years of schooling, $l_t$ is total land size, and $\sigma_t$ is area under fodder production.

For all model validation steps, prediction accuracies were obtained by developing the clustering model in a training dataset (70% of all records) and reapplying the resulting model to a testing dataset (the remaining 30%). The model with the least reallocation of households between clusters for the training and testing datasets was considered the most robust. Rank analysis using the Spearman correlation coefficient was used to evaluate the level of household reallocation between clusters.
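A hedged sketch of this validation workflow follows. The data frame `eth` and its columns are hypothetical stand-ins for the study's variables, and reading the asterisk in (6) as an R interaction term is an assumption on our part.

```r
# `eth` is a hypothetical Ethiopia data frame with columns milk_yield,
# experience, schooling, cluster (factor), and breed_ai (0 = bull, 1 = AI).
set.seed(7)
test_idx <- sample(nrow(eth), floor(nrow(eth) / 3))  # hold out one-third
train <- eth[-test_idx, ]
test  <- eth[test_idx, ]

# Models (6) and (7): with and without the cluster-of-production fixed effect
m_with    <- lm(milk_yield ~ experience * schooling + cluster, data = train)
m_without <- lm(milk_yield ~ experience * schooling, data = train)

# Model (8): logistic model for choice of breeding method
m_breed <- glm(breed_ai ~ cluster, family = binomial, data = train)

# Prediction accuracy: correlation between predicted and observed values
cor(predict(m_with, newdata = test), test$milk_yield)
```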
## 3. Results

### 3.1. Clustering

Based on the Elbow method, a four-cluster solution was found to be optimal for the Ethiopia dataset and was fitted in the clustering models (Figure 1). The SOM and K-means algorithms clustered the households in the Ethiopia dataset into four groups, while the fuzzy model assigned all households into three clusters, with no members in the fourth cluster. Table 2 shows the cluster densities for each algorithm. For Tanzania, six clusters were defined based on the Elbow method (Figure 2). However, at K = 6, the fuzzy model produced highly fuzzy cluster memberships of 0.09 and 0.18 for each member. Such low membership values imply an unstable cluster solution. The fuzzy model was therefore discarded for the Tanzania dataset, and analysis proceeded with the K-means and Self-Organizing Maps (SOM) algorithms. Cluster densities associated with the six clusters are provided in Table 3.

Table 2. Cluster densities (number of households allocated to the cluster) for the Ethiopia dataset.

| Cluster | K-means model | SOM model | Fuzzy model |
|---|---|---|---|
| 1 | 342 | 487 | 2673 |
| 2 | 875 | 2084 | 411 |
| 3 | 2689 | 1217 | 1309 |
| 4 | 487 | 605 | – |

Table 3. Cluster densities (number of households allocated to the cluster) for the Tanzania dataset.

| Cluster | K-means model | SOM model | Fuzzy model |
|---|---|---|---|
| 1 | 811 | 1180 | 2506 |
| 2 | 452 | 952 | 811 |
| 3 | 374 | 203 | – |
| 4 | 616 | 295 | – |
| 5 | 372 | 516 | – |
| 6 | 692 | 171 | – |

Figure 1. Graph showing four optimal clusters for the Ethiopia dataset.

Figure 2. Graph showing six optimal clusters for the Tanzania dataset.

For the Ethiopian data, the cluster densities given in Table 2 indicate the presence of one unchanging cluster for both the K-means and SOM models (with the exact same list of 487 members). The number of members in the other clusters varied, indicating households being reassigned to different clusters. Figures 3, 4, and 5 show the cluster visualization for each algorithm in the Ethiopia dataset. Clusters obtained using K-means were well separated and showed significant intracluster cohesion (Figure 3), while the spatial distribution of the SOM clusters (Figure 4) indicated significant overlap between two of the four clusters (clusters in red). Cluster densities for Tanzania are displayed in Table 3.

Figure 3. Household allocation to four clusters using the K-means model for Ethiopia dairy farmers.

Figure 4. Node counts for household clusters derived using the SOM model for Ethiopia (a) and dendrogram for super clusters (b).

Figure 5. Household allocation into three clusters using the fuzzy model for Ethiopia dairy farmers.

Figures 4(a) and 4(b) are a heatmap representation of cluster densities and a dendrogram from the SOM model, respectively. Figure 4(a) shows counts of households within clusters, while Figure 4(b) indicates cluster relationship and separation. The numbers on the colored plane indicate the number of members in each cluster. Two clusters had an equal number of farmers (shown in red) and are categorized as clusters 1 and 4 on the dendrogram. These two clusters seemingly had few differentiating features, since they originate from the same parent node. This phenomenon can also be observed in Figure 3 for the K-means model (clusters 2 and 4). These clusters appear to be joined into one cluster in the fuzzy model (cluster 3 in Figure 5). The fuzzy model resulted in 3 clusters, each with a significant number of outliers (Figure 5).
The outliers were, however, more pronounced for cluster 2 than for clusters 1 and 3. The presence of outliers and cluster overlap in the fuzzy model was supported by a low value of the Dunn coefficient (0.3014), which corresponds to a high level of fuzziness.

Based on the results obtained, the cluster composition parameters related to intercluster adhesion and intracluster cohesion indicated that clusters from the K-means model were better separated (higher mean silhouette value) and more compact (lower mean distance from the central node) than those from the other models for Ethiopia (Table 4).

Table 4. Cluster composition parameters (intercluster adhesion and intracluster cohesion) for Ethiopian households.

| Model | No. clusters | Within sum of squares | Mean distance from central nodes | Mean silhouette separation |
|---|---|---|---|---|
| K-means model | 4 | 20758 | 0.74 | 0.66 |
| SOM model | 4 | 23178 | 0.92 | 0.51 |
| Fuzzy model | 3 | 21655 | 0.89 | 0.56 |

For Tanzania, the mean silhouette separation coefficients were not significantly different (0.66 and 0.64 for K-means and SOM, respectively), as shown in Table 5. However, there was a tendency for the SOM to have better-defined clusters, given its lower within-cluster sum of squares as well as lower mean distance from the central node. The spatial distribution is illustrated in Figures 6 and 7.

Table 5. Cluster composition parameters (intercluster adhesion and intracluster cohesion) for Tanzania households.

| Model | No. clusters | Within sum of squares | Mean distance from central nodes | Mean silhouette separation |
|---|---|---|---|---|
| K-means model | 6 | 12628 | 2.1 | 0.66 |
| SOM model | 6 | 11772 | 1.7 | 0.64 |

Figure 6. Household allocation into six clusters using the K-means model for Tanzania dairy farmers.

Figure 7. Node counts for household clusters derived using the SOM model for Tanzania (a) and dendrogram for super clusters (b).

For Tanzania, cluster separation and intactness can be observed in Figures 6 and 7. No significant difference can be observed with regard to intercluster adhesion between K-means and SOM (Table 5). Figure 6 shows the cluster visualization from the K-means model for the Tanzania dataset. Clusters 4 and 5 overlap and are in close proximity to cluster 6, indicating that they have few differentiating characteristics. This overlapping is equally observed in the SOM model (Figure 7). The numbers on the colored bar in Figure 7(a) indicate the densities of members in each cluster. There are only four well-separated clusters based on density (from left: red, orange, yellow, and light gold). However, the dendrogram (Figure 7(b)) shows three clusters branching from the same node; these correspond to the overlapping clusters (clusters 4, 5, and 6) in the K-means plot (Figure 6).

### 3.2. Cluster Validation

#### 3.2.1. Cluster Membership Reranking

Ranking correlation was used to study the levels of household relocation between the training and testing datasets. Generally, the clustering models applied to the Ethiopia dataset indicated low membership relocation. Table 6 summarizes the results for Ethiopia where, despite a lower Akaike Information Criterion (AIC) estimate, the fuzzy model had the highest number of members reallocated to other clusters (32%) compared to K-means and SOM. The high correlation coefficients for SOM and K-means indicate lower reallocation of cluster members. In contrast, results from Tanzania indicated very high reranking of cluster membership between the training and testing datasets (Table 7).
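A minimal sketch of this reranking check, assuming two hypothetical vectors of cluster assignments for the same households under the training-fitted and testing-fitted models:

```r
# Hypothetical cluster assignments for the same households
train_cluster <- c(1, 1, 2, 3, 4, 2, 3, 1)
test_cluster  <- c(1, 1, 2, 3, 3, 2, 3, 1)

# Spearman rank correlation: 1 means no reranking between the two solutions
cor(train_cluster, test_cluster, method = "spearman")

# Proportion of households reallocated to a different cluster
mean(train_cluster != test_cluster)
```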
Table 6. Cluster model parameters and ranking accuracy (membership reallocation) based on Spearman rank correlation for the Ethiopia dataset.

| Model | AIC | Residual deviance | Ranking accuracy (r) |
|---|---|---|---|
| K-means model | 102 | 2.7 × 10⁻² | 0.85 |
| SOM model | 102 | 2.8 × 10⁻² | -0.88 |
| Fuzzy model | 68.09 | 9.35 × 10⁻² | 0.68 |

Table 7. Cluster model parameters and ranking accuracy (membership reallocation) based on Spearman rank correlation for the Tanzania dataset.

| Model | AIC | Residual deviance | Ranking accuracy (r) |
|---|---|---|---|
| K-means model | 200 | 0.001 | -0.21 |
| SOM model | 200 | 0.006 | 0.39 |

#### 3.2.2. Prediction Accuracy

Tables 8 and 9 summarize the results for predicting missing values for milk yield, sales, and breeding choice. Results for the Ethiopia dataset indicate that the model fitting fixed effects of clusters derived from the fuzzy model had higher accuracies for peak milk yield (0.77), milk sales (0.48), and probability of choosing AI (0.55), as shown in Table 8, while for Tanzania, higher accuracies were obtained for milk production and sales (0.46 and 0.41) when fitting clusters derived from the K-means model (Table 9).

Table 8. Estimates of prediction accuracy (r, 0 ≤ r ≤ 1) for models fitting cluster of production for milk yield, milk sales, and choice of breeding method in Ethiopia.

| Algorithm / Response variable | Milk yield | Milk sold | Preferred breeding method |
|---|---|---|---|
| K-means | 0.68 | 0.40 | 0.54 |
| SOM | 0.66 | 0.38 | 0.54 |
| Fuzzy | 0.77 | 0.48 | 0.55 |

Table 9. Estimates of prediction accuracy (r, 0 ≤ r ≤ 1) for models fitting cluster of production for milk yield, milk sales, and choice of breeding method in Tanzania.

| Algorithm / Response variable | Milk yield | Milk sold | Preferred breeding method |
|---|---|---|---|
| K-means | 0.46 | 0.41 | 0.29 |
| SOM | 0.32 | 0.31 | 0.46 |

For the Tanzania dataset, clusters from the K-means model achieved high prediction accuracies for both milk yield and sales (46% and 41%, respectively). However, the K-means clusters had lower prediction accuracy for choice of breeding method (29%). Clusters from the SOM model performed poorly on the quantitative traits but had a higher probability (46%) of correctly assigning the choice of breeding method.

#### 3.2.3. Cluster Variances

In order to assess whether the clusters defined by the various algorithms reflect differences in production characteristics between households, we evaluated the variance accounted for by these clusters on selected performance measures. For Ethiopia, the total variance was 1.015 and 0.988 for milk yield and sales, respectively, while for Tanzania the total variance was 1.076 and 1.09 for milk yield and sales, respectively. The differences between the residual variances of the two linear models (see (6) versus (7) for Ethiopia and (9) versus (10) for Tanzania) were significant (p < 0.00001). The results show that, for the Ethiopia data, the fuzzy model clusters accounted for 89% and 70% of the total variance in milk yield and milk sales, respectively. On the other hand, the K-means clusters accounted for 71% and 65% of the total variation in milk yield and milk sales, respectively. Tables 10 and 11 summarize the proportion of variance accounted for by the clusters for each clustering model.
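One plausible reading of the computation behind Tables 10 and 11 is sketched below, reusing the hypothetical fits `m_with` and `m_without` from the earlier sketch; the paper does not spell out the exact formula, so this is an assumption.

```r
# Residual variances from the paired fits in the earlier sketch
rv_with    <- summary(m_with)$sigma^2
rv_without <- summary(m_without)$sigma^2

# Proportion of total variance in the response attributed to the clusters
total_var <- var(train$milk_yield)
(rv_without - rv_with) / total_var
```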
Table 10. Proportion of variance accounted for by cluster of production in Ethiopia.

| Algorithm | Response | Fitted model | Total variance* | Residual variance | -2 log likelihood | P value | Variance accounted for by cluster |
|---|---|---|---|---|---|---|---|
| K-means | Milk yield | With cluster | 1.015 | 0.239 | 1867.4 | <0.00001 | 73% |
| K-means | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| K-means | Milk sales | With cluster | 0.988 | 0.222 | 1770.1 | <0.00001 | 54% |
| K-means | Milk sales | Without cluster | | 0.76 | 3388.6 | | |
| SOM | Milk yield | With cluster | 1.015 | 0.283 | 2091.8 | <0.00001 | 68% |
| SOM | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| SOM | Milk sales | With cluster | 0.988 | 0.258 | 1969.8 | <0.00001 | 51% |
| SOM | Milk sales | Without cluster | | 0.76 | 3388.6 | | |
| Fuzzy | Milk yield | With cluster | 1.015 | 0.074 | 337 | <0.00001 | 89% |
| Fuzzy | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| Fuzzy | Milk sales | With cluster | 0.988 | 0.073 | 319.4 | <0.00001 | 70% |
| Fuzzy | Milk sales | Without cluster | | 0.76 | 3388.6 | | |

*Data scaled to have unit variance and mean of zero.

Table 11. Proportion of variance accounted for by cluster of production in Tanzania.

| Algorithm | Response | Fitted model | Total variance* | Residual variance | -2 log likelihood | P value | Variance accounted for by cluster |
|---|---|---|---|---|---|---|---|
| K-means | Milk yield | With cluster | 1.076 | 0.0027 | -2981 | <0.00001 | 71% |
| K-means | Milk yield | Without cluster | | 0.771 | 2584.2 | | |
| K-means | Milk sales | With cluster | 1.09 | 0.018 | -1084.3 | <0.00001 | 65% |
| K-means | Milk sales | Without cluster | | 0.723 | 2520 | | |
| SOM | Milk yield | With cluster | 1.076 | 0.294 | 1633 | <0.00001 | 44% |
| SOM | Milk yield | Without cluster | | 0.771 | 2584.2 | | |
| SOM | Milk sales | With cluster | 1.09 | 0.228 | 1381.6 | <0.00001 | 45% |
| SOM | Milk sales | Without cluster | | 0.723 | 2520.2 | | |

*Data scaled to have unit variance and mean of zero.
## 4. Discussion

### 4.1. Characterization of Smallholder Farmers

Unsupervised learning models have been used to characterize smallholder farmers despite the fact that these models lack consistency and are highly unpredictable [13]. In this study, the performance of three commonly used algorithms for clustering farming households, namely, K-means, fuzzy, and SOM, was compared, and a set of validation criteria to assess the robustness of the defined clusters is proposed. This approach is seldom used in similar studies.

In Africa, smallholder farming systems have been characterized using common hierarchical and nonhierarchical clustering algorithms. Work done by Mburu et al. [29], Bidogeza et al. [30], Dossa et al. [10], and Kuivanen et al. [7, 8] utilized the Ward and K-means methods to define clusters for smallholder households. In addition to the machine learning approaches, the use of expert knowledge to validate cluster-based characterization is highly recommended [7, 8]. In some studies, local knowledge has been used in a participatory approach to accurately estimate farm types. Furthermore, more complex clustering approaches have also been explored in studying smallholder farm types, as done by Salasya & Stoorvogel [23], Pelcat et al. [31], Galluzzo [21], and Paas & Groot [12]; these studies used fuzzy clustering, Neural Networks, and Naïve Bayes algorithms. Although all clustering assigns farmers to some type, fuzzy clustering is a soft clustering approach in which a farm can belong to more than one farm type or to none [31]. However, in the previous research analyzed, the robustness of clustering models and their ability to predict farm types remain uncharted. Following Goswami et al. [5], the study of smallholder farmers needs to be extended to the formulation of predictive farm types; the evolvement of farmers within homogeneous groups can then be predicted because the clusters' stabilities are known.

### 4.2. Clustering Algorithms Evaluated

The determination of the putative number of clusters (K) that best defines the data is the foremost task in cluster analysis.
Poor estimates of K may result in unstable clusters and many members appearing as outliers. Since the goal is to obtain highly homogeneous groups, the within-group sum of squares is commonly used to evaluate how compact the clusters are. We adopted the recommendations given by Kassambara [16] and employed the Elbow, Gap statistic, and average silhouette methods to assess the best K for the datasets. The Elbow and Gap statistic methods estimate a value of K that minimizes the within-group sum of squares (WSS) such that any addition to the estimated value of K does not significantly change the WSS. Since the study goal was to arrive at highly homogeneous groups, the within sum of squares measure seemed most important. A common alternative for estimating the optimal number of clusters in other studies is to try different values of K while observing the silhouette separation, or to manually inspect the dendrogram produced in hierarchical clustering [15, 16]. While the Elbow method and Gap statistic use the within-group sum of squares, the silhouette method compares the average cluster separation.

The application of the three separate algorithms revealed differences in their performance based on data type and structure. Where observations were highly identical, soft clustering (the fuzzy model) failed to categorize the records into the appropriate number of clusters. The fuzzy model allocated households into only 3 clusters despite four clusters being determined as appropriate for the Ethiopia dataset (Figure 5); the other models converged at 4 clusters (Figures 3 and 4). Similarly, for the Tanzanian dataset, the fuzzy model could not converge even after many iterations. It would appear that the fuzzy model is best suited to situations where the data are highly heterogeneous; otherwise it does not lend itself well to cluster identification.

Balakrishnan (1994) compared the K-means and SOM algorithms in cluster identification under specific criteria of intracluster similarity and intercluster differences. In addition, the dataset had known cluster solutions, so the only aim was to identify performance differences between the two algorithms. Results indicated that the K-means algorithm outperformed the SOM algorithm. Mingoti & Lima [32] compared the performance of K-means and SOM models using smallholder farm data, and their results indicated that K-means was more robust. In this study, the SOM performed poorly compared to the fuzzy and K-means models for the Ethiopia dataset, having higher within-cluster dispersion as well as lower separation between clusters. For the Tanzania dataset, the SOM performed similarly to the K-means algorithm. The performance of the SOM in our study is concordant with that of Nazari et al. [15], who characterized dryland farming systems. In contrast to the observations of Mingoti & Lima [32], the fuzzy model used in this study failed spectacularly for both datasets. This reinforces the observation by Xu [33] that the performance of clustering algorithms is subject to the nature of the data and the area of application. More studies need to be undertaken to see how the fuzzy algorithm can best be adapted to farming datasets.

### 4.3. Cluster Membership Reallocation and Prediction Accuracy

A good clustering model should be able to repeatedly allocate a majority of households into the same clusters, even when the volume of data changes.
To ensure that our model definitions represented a collection of the most important features describing each cluster, we tested the ability of the models to redefine the same clusters in the training and testing datasets. This strategy aligns with Xu [33], who recommends that a good clustering model should be able to deal with new data cases without the need to relearn. The Spearman rank correlation was used to measure the degree of reranking. For the Tanzania data, the SOM model provided the cluster allocation that minimized reranking; nevertheless, the rank correlations seen in Tanzania were very low for both the K-means and SOM models. Given this premise and the spectacular failure of the fuzzy model in Tanzania, a pattern emerges suggesting a fundamental problem with the Tanzanian dataset rather than issues of model suitability. It is possible that there is no significant differentiation between households in Tanzania, and this extreme homogeneity proves a challenge because each household could be allocated to any cluster. Such a scenario could occur due to flawed data collection strategies. We suspect that, due to requirements to finalize data collection within set timelines, groups of farmers were interviewed collectively while the data were entered as if they were for individual farmers.

The fuzzy model in Ethiopia had the best fit, indicated by the lowest AIC value, despite higher membership reallocation. For a standard prediction problem, this would be the best model for the data; this is corroborated by the fact that the variance accounted for by the clusters was also highest for the fuzzy model. However, given that our intention is to maximize correct reassignment of individuals into clusters, the K-means and SOM models would be preferred for household membership allocation.

Three response variables (milk yield, sales, and choice of breeding method) were selected for the prediction exercise because of their vital role in smallholder dairy farm evolvement; they generally represent the commercial orientation of a smallholder farm. Evaluation of prediction accuracies for the selected response variables indicated a very different scenario from the clustering problem. When the clusters were included in the models to predict milk yield, sales, or breeding method, the fuzzy model-derived clusters had the highest prediction accuracies compared to the K-means and SOM clusters for the Ethiopia data. For the Tanzania data, the SOM model clusters yielded the best prediction accuracy for the binary trait, choice of breeding method, while the K-means model performed best for the quantitative traits. However, the prediction accuracies for the Tanzania data were low, underscoring the earlier assertions about data structure and integrity. In terms of the predictive power of the clusters on selected response variables, the fuzzy clustering model performed best, with its clusters accounting for significantly more variation in the response variables than the other clustering models.

Based on the results from Ethiopia, where all the models could be evaluated, it would seem that model choice depends on the problem that needs to be solved. For a clustering problem, where the intention is to obtain robust membership allocation, the K-means algorithm would be the most appropriate, ensuring maximal homogeneity within clusters. The use of this model would minimize reranking when applying the model to new datasets without the need for new learning.
However, in the event that clusters are to be used in prediction models, the fuzzy algorithm would be the best for cluster definition.
## 5. Conclusion

The goal of the reported study was to identify the most robust approach to correctly classifying diverse households into homogeneous groups of farmers with similar production systems and management activities. The purpose of the characterization was to use the defined groups to design interventions and strategies that facilitate the evolvement of smallholder dairy farmers beyond subsistence in Ethiopia and Tanzania. The results from this study demonstrate the use of unsupervised learning models in cluster definition for smallholder dairy farmers, as well as strategies to assess the models' suitability and cluster robustness. Performance varied across the tested models, underscoring the need to find an appropriate method depending on the data structure and the questions being answered. The results obtained from this study are a necessary first step in understanding smallholder farmer production systems and in the study of household evolvement from subsistence to full commercial orientation.

---

*Source: 1020521-2019-01-02.xml*
# Application of Multiple Unsupervised Models to Validate Clusters Robustness in Characterizing Smallholder Dairy Farmers

**Authors:** Devotha G. Nyambo; Edith T. Luhanga; Zaipuna O. Yonah; Fidalis D. N. Mujibi

**Journal:** The Scientific World Journal (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1020521
---

## Abstract

The heterogeneity of smallholder dairy production systems complicates service provision, information sharing, and dissemination of new technologies, especially those needed to maximize productivity and profitability. In order to obtain homogenous groups within which interventions can be made, it is necessary to define clusters of farmers who undertake similar management activities. This paper explores the robustness of production cluster definition using various unsupervised learning algorithms to assess the best approach to defining clusters. Data were collected from 8179 smallholder dairy farms in Ethiopia and Tanzania. From a total of 500 variables, the 35 variables used in defining production clusters and household membership to these clusters were selected by Principal Component Analysis and domain expert knowledge. Three clustering algorithms, K-means, fuzzy, and Self-Organizing Maps (SOM), were compared in terms of their grouping consistency and prediction accuracy. The model with the least household reallocation between clusters for training and testing data was deemed the most robust. Prediction accuracy was obtained by fitting a fixed effects model, including production clusters, on milk yield, sales, and choice of breeding method. Results indicated that, for the Ethiopian dataset, clusters derived from the fuzzy algorithm had the highest predictive power (77% for milk yield and 48% for milk sales), while for the Tanzania data, clusters derived from Self-Organizing Maps were the best performing. The average cluster membership reallocation was 15%, 12%, and 34% for K-means, SOM, and fuzzy, respectively, for households in Ethiopia. Based on the divergent performance of the various algorithms evaluated, it is evident that, despite similar information being available for the study populations, the uniqueness of the data from each country had an overriding influence on cluster robustness and prediction accuracy. The results obtained in this study demonstrate the difficulty of generalizing model application and use across countries and production systems, despite seemingly similar information being collected.

---

## Body

## 1. Introduction

Despite the high potential of livestock keeping, Ethiopia and Tanzania still suffer from low meat and milk production, given that most livestock populations are dominated by low-producing indigenous breeds [1, 2]. Smallholder farmers dominate the livestock-keeping enterprise in Africa, accounting for about 50% of total livestock production [3]. Dairy farming is an important source of income for smallholder farmers, with high potential for daily cash flow [4]. The majority of these smallholder producers have not reached their production potential in terms of yield and commercialization. However, data from a recent large-scale survey provide evidence that some farmers produce at a level well beyond the average (PEARL data, 2016; unpublished). Many constraints contribute to this unrealized potential, including a lack of appropriate support in technologies and information dissemination.

Despite the constraints hindering smallholder dairy productivity, milk obtained from smallholder dairy farmers constitutes the bulk of the supply available for sale in Eastern Africa [4]. Among the factors hindering the provision of appropriate support to the dairy sector, and the evolvement of dairy farmers beyond subsistence, is a lack of understanding of the production systems in which these farmers operate.
Characterization of farm typologies is a necessary first step in designing appropriate interventions that allow these farmers to improve farm output and performance. The characterization of production systems and the identification of homogenous units that represent contemporary groups in management terms allow us to understand the specific attributes associated with drivers of productivity. This holds the key to unlocking the ingredients of household evolvement through proper planning, adoption, and utilization of appropriate improved technologies and critical policy support [5]. This study sought to provide a mechanism through which farmers that perform similar production activities, or have similar production system attributes, can be grouped together into production clusters that describe their organization, needs, and outputs.

Given the huge diversity of practices seen in smallholder farms, the need to form homogenous units that group farmers with near-similar characteristics has been addressed in several studies. Primarily, this has been done either by domain experts allocating farmers to predetermined classes that define their place in the production ecosystem, or by statistical and machine learning approaches [6–10]. The latter approach involves the use of various supervised and unsupervised algorithms to study, analyze, model, and predict trends in smallholder production systems. Recently, unsupervised learning algorithms have been applied in various studies to understand production systems [11, 12]. Some of the more popular unsupervised algorithms include hierarchical clustering, nonhierarchical clustering (K-means), unsupervised neural network algorithms (Self-Organizing Maps), Naïve Bayes, and fuzzy clustering algorithms. However, despite their frequent use, unsupervised learning approaches suffer greatly from lack of consistency and predictability [13]. Various attempts have been made to overcome this weakness, including the application of multiple algorithms to cluster farm data and selection of the one yielding highly homogeneous groups [14, 15].

In this study, three unsupervised machine learning (ML) models were applied to classify and study the characteristics of smallholder dairy production systems based on data obtained from baseline surveys in Ethiopia and Tanzania. The aim of the study was to identify the most robust approach to accurately assign diverse dairy farming households into homogenous production units that reflect the differences in production practice and performance.

## 2. Methodology

### 2.1. Dataset Preparation and Feature Selection

Data was collected under the PEARL (Program for Emerging Agricultural Research Leaders, funded by the Bill and Melinda Gates Foundation through the Nelson Mandela African Institution of Science and Technology) project from June 2015 to June 2016 in Ethiopia and Tanzania. The total number of households surveyed was 3,500 for Tanzania and 4,679 for Ethiopia. Data collection was undertaken using questionnaires developed on the Open Data Kit (ODK) platform. Data quality checks included removal of erroneous data such as negative values, questionnaires whose total collection time was below a defined threshold (16 min), and data collected at night (survey start time beyond 7 pm). The data cleaning process trimmed the datasets to 3317 and 4394 records for Tanzania and Ethiopia, respectively.
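The quality checks described above translate directly into simple dataframe filters. The following is a minimal sketch, assuming hypothetical column names (`duration_min` for total collection time and `start_hour` for the survey start time); it is an illustration, not the project's actual cleaning script.

```python
import pandas as pd

def quality_filter(df: pd.DataFrame, numeric_cols: list) -> pd.DataFrame:
    """Apply the data-quality checks described above (column names assumed)."""
    df = df[df["duration_min"] >= 16]             # drop questionnaires under 16 minutes
    df = df[df["start_hour"] < 19]                # drop surveys started after 7 pm
    df = df[(df[numeric_cols] >= 0).all(axis=1)]  # drop erroneous negative values
    return df
```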
From a total of 500 unique variables (features) available for analysis, a set of 46 variables was selected for inclusion in the cluster analysis based on their relevance to productivity and farmer evolvement.

**Feature Selection.** In order to identify the most distinctive features among the 46 variables, Principal Component Analysis (PCA) was undertaken to eliminate correlated variables. The top 21 features (based on the loading score) with the lowest communality were then selected for further analysis. An additional 14 variables related to feeding systems and health management practices, which are known to influence productivity in smallholder dairy farming, were included based on expert domain knowledge, so that a total of 35 features were available for cluster analysis and farm type characterization (Table 1).

Table 1: Features used in cluster analysis.

| S/No | Feature name | Type | Range |
|---|---|---|---|
| 1 | Exclusive grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 2 | Exclusive grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 3 | Mainly grazing in dry season | Boolean | 0 (no) or 1 (yes) |
| 4 | Mainly grazing in rainy season | Boolean | 0 (no) or 1 (yes) |
| 5 | Mainly stall feed in dry season | Boolean | 0 (no) or 1 (yes) |
| 6 | Mainly stall feed in rainy season | Boolean | 0 (no) or 1 (yes) |
| 7 | Use of concentrates | Discrete | 1–12 (months) |
| 8 | Watering frequency | Discrete | 0–4 |
| 9 | Distance to water source | Continuous | 0–15 |
| 10 | Total land holding | Continuous | 0–100 |
| 11 | Area under cash cropping | Continuous | 0–10 |
| 12 | Area under food cropping | Continuous | 0–83.25 |
| 13 | Area under fodder production | Continuous | 0–80 |
| 14 | Area under grazing | Continuous | 0–13 |
| 15 | Number of employees | Discrete | 1–10 |
| 16 | Number of casual laborers | Discrete | 1–10 |
| 17 | Vaccination frequency | Discrete | 0–6 |
| 18 | Deworming frequency | Discrete | 0–5 |
| 19 | Self-deworming service | Boolean | 0 (no) or 1 (yes) |
| 20 | Membership in farmer groups | Discrete | 0–5 |
| 21 | Experience in dairy farming | Discrete | 1–50 |
| 22 | Years of schooling | Discrete | 0–21 |
| 23 | Preferred breeding method | Boolean | 0 (bull) or 1 (artificial insemination) |
| 24 | Distance to breeding service provider | Continuous | 0–100 |
| 25 | Frequency of visit by extension officer | Discrete | 1–54 |
| 26 | Herd size | Discrete | 1–50 |
| 27 | Number of milking cows | Discrete | 1–20 |
| 28 | Number of exotic cattle | Discrete | 1–48 |
| 29 | Number of sheep | Discrete | 1–80 |
| 30 | Peak milk production for the best cow | Continuous | 1–40 |
| 31 | Amount of milk sold in bulk | Continuous | 1–100 |
| 32 | Liters of milk sold | Continuous | 1–100 |
| 33 | Distance to milk buyers | Continuous | 1–37 |
| 34 | Total crop sale | Continuous | 0–21000 (Birr), 0–950000 (Tsh) |
| 35 | Distance to market | Continuous | 1–8 |

As a prerequisite for clustering, missing values for continuous variables were identified and replaced with population means, while missing values for categorical variables were replaced with the mode. The effect of location (study site) for each country was removed from the response variables by fitting a linear model ($y = \mu + \text{study site} + \text{error}$) and extracting adjusted values. Each quantitative variable was tested for normality and scaled to have a mean of zero and unit variance. Additionally, for each variable, outliers were identified as values above or below the bounds estimated using box plots. Outliers were removed to minimize bias and misclustering.
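A minimal sketch of the imputation and standardization steps just described, assuming a pandas DataFrame and illustrative column groupings (not the authors' code):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess(df: pd.DataFrame, continuous_cols, categorical_cols) -> pd.DataFrame:
    df = df.copy()
    # Mean imputation for continuous features, mode imputation for categorical ones.
    df[continuous_cols] = df[continuous_cols].fillna(df[continuous_cols].mean())
    for col in categorical_cols:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
    # Scale quantitative variables to zero mean and unit variance.
    df[continuous_cols] = StandardScaler().fit_transform(df[continuous_cols])
    return df
```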
Specifically, bias was minimized by applying the following filters. The total number of cattle owned was restricted to a maximum of 50 per herd for Ethiopian farmers and a maximum of 30 per herd for Tanzanian farmers, based on livestock densities [1, 2]. Some smallholder farmers reported land holdings above 100 acres; all farmers with land holdings greater than 100 acres were removed. The maximum amount of milk sold by smallholder farmers was restricted to 100 liters per day, based on expert domain knowledge of the herd sizes and yield per cow. It was assumed that an extension officer could visit a farmer once each week, so any farmer who reported more than 54 visits per year was considered an outlier.

### 2.2. Clustering Algorithms

Three unsupervised learning algorithms, fuzzy clustering, Self-Organizing Maps (SOM), and K-means, were used for cluster analysis. In the analysis, the number of groups (K) represented how many farm typologies (clusters) could be defined for each dataset. The number of clusters that best represented the data was determined using the Elbow method, in which the bend, or elbow, in a graph of the decline in within-cluster sum of squares as the number of clusters increases indicates the best solution. Gap statistics and silhouette separation coefficients were used in preliminary analysis to validate the results from the Elbow method [16], while the Euclidean distance was used to assess cluster robustness. The Elbow method was found to be robust and was subsequently used for the rest of the analysis. Given that the selected algorithms have various methods with different convergence rates, two methods for each algorithm were tested and those that minimized convergence time were selected. The final clustering methods used were (i) Fanny for fuzzy clustering [17], (ii) superSOM with batch mode [18], and (iii) Hartigan-Wong [19, 20] for K-means. Evaluation of the clustering algorithms considered ranking consistency in the testing dataset, mean distance of observations from central nodes, and mean silhouette separation coefficients, as well as the accuracy of predicting observed values of select response variables using a model fitting the predicted clusters as fixed effects. Data analysis was done using both SAS version 9.2 (SAS Institute Inc., Cary, NC, USA) and R software (Kabacoff, 2011).
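As one way to illustrate the Elbow method, the within-cluster sum of squares (WSS) can be traced as K grows and the bend located by eye. The sketch below uses scikit-learn's K-means as a stand-in (an assumption for illustration; the study itself used R and SAS):

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def elbow_curve(X, k_max=10):
    """Plot WSS (inertia) against K; the 'elbow' suggests the number of clusters."""
    ks = range(1, k_max + 1)
    wss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]
    plt.plot(list(ks), wss, marker="o")
    plt.xlabel("Number of clusters K")
    plt.ylabel("Within-cluster sum of squares")
    plt.show()
    return wss
```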
### 2.3. Clustering Models

Self-Organizing Maps (SOM) have been used to characterize smallholder farmers due to their ability to produce accurate typologies, as explained by Nazari et al. [15] and Galluzzo [21]. The SOM algorithm calculates the Euclidean distance using (1), with the best matching unit (BMU) satisfying (2) [21, 22]:

$$\mathrm{Distance}=\sum_{i=0}^{n}\left(v_i-w_i\right)^2 \tag{1}$$

where $v$ and $w$ are vectors in an $n$-dimensional Euclidean space giving the positions of a member and a neuron, respectively, and

$$\forall\, n_i \in S:\ \operatorname{diff}\!\left(n_{\mathrm{winner}}^{\mathrm{weight}},\, v\right) \le \operatorname{diff}\!\left(n_i^{\mathrm{weight}},\, v\right) \tag{2}$$

where $v$ is any new weight vector, $n_{\mathrm{winner}}^{\mathrm{weight}}$ is the current weight of the winning neuron, and $n_i^{\mathrm{weight}}$ is the weight of any other $i$th neuron on the map.

The K-means algorithm has been widely used in nonhierarchical clustering and in characterizing smallholder dairy farms [7, 8, 10]. Similar to SOMs, the algorithm uses Euclidean distance measures to estimate weights of data records. Its objective is given in (3), with the Euclidean distance computed as in (1):

$$J=\sum_{j=1}^{k}\sum_{i=1}^{n}\left\|x_i^{(j)}-c_j\right\|^2 \tag{3}$$

where $\left\|x_i^{(j)}-c_j\right\|^2$ is the squared Euclidean distance as in (1), $k$ is the number of clusters, $n$ is the number of observations, $x_i$ is the feature vector of the $i$th observation, and $c_j$ is the center of the $j$th cluster.

Fuzzy analysis (the fanny method) was selected based on its relatively short convergence time and good measures of cluster separation [17]. Various methods based on fuzzy models have been used for cluster analysis [23–26]. The fanny method adds a fuzzifier and membership values to the common K-means algorithm (see (3)). In addition, the model uses the Dunn coefficient and a silhouette separation coefficient for assessing the solution's fuzziness and intercluster cohesion, respectively. The general equation for fuzzy clustering [27] is given in (4) and the Dunn definition of partitioning [28] in (5):

$$J=\sum_{i=1}^{n}\sum_{j=1}^{k}U_{ij}^{\,m}\left\|x_i-c_j\right\|^2,\qquad 1\le m<\infty \tag{4}$$

where $k$ is the number of clusters, $n$ is the number of observations, $U_{ij}$ is the membership coefficient of observation $i$ in cluster $j$, $m$ is the fuzzifier, $x_i$ is the feature vector of the $i$th observation, and $c_j$ is the center of the $j$th cluster. Given (4), the Dunn definition of partitioning is

$$F_k(U)=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{k}U_{ij}^{\,m} \tag{5}$$
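Equations (3)–(5) can be transcribed almost literally. The sketch below, a hedged illustration rather than the authors' implementation, assumes `X` is the n × p feature matrix, `centers` the k × p cluster centers, and `U` an n × k membership matrix:

```python
import numpy as np

def kmeans_objective(X, centers, labels):
    """Equation (3): sum of squared Euclidean distances to assigned centers."""
    return float(sum(np.sum((X[labels == j] - c) ** 2) for j, c in enumerate(centers)))

def fuzzy_objective(X, centers, U, m=2.0):
    """Equation (4): J = sum_ij U_ij^m * ||x_i - c_j||^2, with 1 <= m < inf."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # n x k distances
    return float((U ** m * d2).sum())

def partition_coefficient(U, m=2.0):
    """Equation (5): F_k(U) = (1/n) * sum_ij U_ij^m."""
    return float((U ** m).sum() / U.shape[0])
```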
### 2.4. Cluster Validation and Prediction Accuracy

Production clusters output by the clustering algorithms were validated in three ways: (1) assessment of cluster robustness, (2) comparison of cluster membership reallocation (differential allocation of households to clusters between training and testing datasets), and (3) evaluation of the proportion of variation explained by the clusters.

Validation of cluster robustness was first undertaken by comparing three metrics: total within sum of squares differences, mean Euclidean distance of observations from the cluster nodes, and the silhouette separation coefficients. Based on these parameters, the most suitable clustering model was identified. In the second stage of validation, the ability of the clustering models to allocate the same group of households into clusters in both training and testing datasets was tested. If all cluster members are colocated in one cluster in both the training and testing datasets, the reranking is 0 (the rank correlation between the two clusters is 1), and the model would be deemed the most accurate and robust. Parameters considered for evaluation were the correlation coefficient, AIC, and residual deviance. The third stage of validation involved fitting linear (or logistic, as appropriate) regression models with a set of fixed effects on milk yield, sales, and choice of breeding method. The first model (see (6) and (9)) included the clusters as one of the fixed effects, while the second model did not (see (7) and (10)). The difference in variance between the two models represented the proportion of total variance in the response variable accounted for by the clusters. The logistic model for choice of breeding method was fitted with only the cluster of production (see (8)) for the Ethiopian data, while two models were fitted for Tanzania (see (11) and (12)). In preliminary analysis, a model fitted with cluster of production alone yielded the best fit for the Ethiopia dataset but very low variances, as a result of underfitting, for the Tanzania dataset. For that reason, two models were fitted for Tanzania and one for Ethiopia to predict the binary variable. Class labels for the logistic regression were 0 and 1 for choice of the bull method and artificial insemination, respectively. For assessing prediction accuracy, one-third of the records for the response variables were removed so that they could be predicted. The predicted values were correlated with the actual values to obtain an estimate of the prediction accuracy. These latter prediction accuracies were compared with those obtained in the previous validation step to help evaluate the algorithms' consistency and the clusters' robustness.

$$y_i = x_e \ast \gamma_e + c_e + e_e \tag{6}$$

$$y_i = x_e \ast \gamma_e + e_e \tag{7}$$

The logistic model used to predict choice of breeding method is shown in

$$y_j = c_e + e_e \tag{8}$$

For Tanzania, the predictive models were given by

$$y_i = x_t \ast \gamma_t + l_t + \sigma_t + c_t + e_t \tag{9}$$

$$y_i = x_t \ast \gamma_t + l_t + \sigma_t + e_t \tag{10}$$

and choice of breeding method was given by (see (11) and (12))

$$y_j = x_t + \gamma_t + c_t + e_t \tag{11}$$

$$y_j = x_t + \gamma_t + e_t \tag{12}$$

where $y_i$ is milk yield or milk quantity sold and $y_j$ is choice of breeding method. For the Ethiopia models, $c_e$ is cluster of production, $e_e$ is the error term, $x_e$ is experience in dairy farming, and $\gamma_e$ is years of schooling. For the Tanzania models, $c_t$ is cluster of production, $e_t$ is the error term, $x_t$ is experience in dairy farming, $\gamma_t$ is years of schooling, $l_t$ is total land size, and $\sigma_t$ is area under fodder production.

For all model validation steps, prediction accuracies were obtained by developing the clustering model on a training dataset (70% of all records) and reapplying the resulting model to a testing dataset (the remaining 30%). The model with the least reallocation of households between clusters for the training and testing datasets was considered the most robust. Rank analysis using the Spearman correlation coefficient was used to evaluate the level of household reallocation between clusters.
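One way to mimic the train/test reallocation check is to learn clusters on the 70% split, assign the 30% split to the nearest learned centroid, and compare cluster rankings with the Spearman coefficient. The ranking criterion below (cluster size) is an assumption for illustration, not necessarily the ranking the authors used:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def reallocation_check(X, k, seed=0):
    """Spearman correlation between cluster-size rankings of the two splits."""
    X_train, X_test = train_test_split(X, test_size=0.30, random_state=seed)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_train)
    test_labels = km.predict(X_test)
    train_sizes = np.bincount(km.labels_, minlength=k)
    test_sizes = np.bincount(test_labels, minlength=k)
    r, _ = spearmanr(train_sizes, test_sizes)  # Spearman ranks internally
    return r
```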
## 3. Results

### 3.1. Clustering

Based on the Elbow method, a four-cluster solution was found to be optimal for the Ethiopia dataset and was fitted in the clustering models (Figure 1). The SOM and K-means algorithms clustered the households in the Ethiopia dataset into four groups, while the fuzzy model assigned all households to three clusters, with no members in the fourth cluster. Table 2 shows the cluster densities for each algorithm. For Tanzania, six clusters were defined based on the Elbow method (Figure 2). However, at K = 6, the fuzzy model had highly fuzzy cluster memberships of 0.09 and 0.18 for each member. Such low membership values imply an unstable cluster solution. The fuzzy model was therefore discarded for the Tanzania dataset and analysis proceeded with the K-means and Self-Organizing Maps (SOM) algorithms. Cluster densities associated with the six clusters are provided in Table 3.

Table 2: Cluster densities (number of households allocated to each cluster) for the Ethiopia dataset.

| Cluster | K-means model | SOM model | Fuzzy model |
|---|---|---|---|
| 1 | 342 | 487 | 2673 |
| 2 | 875 | 2084 | 411 |
| 3 | 2689 | 1217 | 1309 |
| 4 | 487 | 605 | |

Table 3: Cluster densities (number of households allocated to each cluster) for the Tanzania dataset.

| Cluster | K-means model | SOM model | Fuzzy model |
|---|---|---|---|
| 1 | 811 | 1180 | 2506 |
| 2 | 452 | 952 | 811 |
| 3 | 374 | 203 | |
| 4 | 616 | 295 | |
| 5 | 372 | 516 | |
| 6 | 692 | 171 | |

Figure 1: Graph showing four optimal clusters for the Ethiopia dataset.

Figure 2: Graph showing six optimal clusters for the Tanzania dataset.

For the Ethiopian data, the cluster densities in Table 2 indicate the presence of one unchanging cluster for both the K-means and SOM models (with the exact same list of 487 members). The number of members in the other clusters varied, indicating households being reassigned to different clusters. Figures 3, 4, and 5 present the cluster visualization for each algorithm in the Ethiopia dataset. Clusters obtained using K-means were well separated and showed significant intracluster adhesion (Figure 3), while the spatial distribution of the SOM clusters (Figure 4) indicated significant overlap between two of the four clusters (clusters in red).
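The paper fits superSOM from R's kohonen package; as a rough Python analogue (an assumption, not the authors' implementation), the MiniSom library can map households onto a small grid whose nodes then act as clusters:

```python
import numpy as np
from minisom import MiniSom

def som_clusters(X, grid=(2, 2), n_iter=1000, seed=0):
    """Assign each household to its best matching unit (BMU) on a SOM grid."""
    som = MiniSom(grid[0], grid[1], X.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(X, n_iter)
    return np.array([som.winner(x) for x in X])  # (row, col) node per household
```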
Figure 3: Household allocation to four clusters using the K-means model for Ethiopia dairy farmers.

Figure 4: Node counts for household clusters derived using the SOM model for Ethiopia (a) and dendrogram for super clusters (b).

Figure 5: Household allocation into three clusters using the fuzzy model for Ethiopia dairy farmers.

Figures 4(a) and 4(b) are a heatmap representation of cluster densities and a dendrogram from the SOM model, respectively. Figure 4(a) shows counts of households within clusters, while Figure 4(b) indicates cluster relationship and separation. The numbers on the colored plane indicate the number of members in each cluster. Two clusters had an equal number of farmers (shown in red) and on the dendrogram these are categorized as clusters 1 and 4. These two clusters seemingly had few differentiating features, since they originate from the same parent node. This phenomenon can also be observed in Figure 3 for the K-means model (clusters 2 and 4). These clusters appear to be joined into one cluster in the fuzzy model (cluster 3 in Figure 5). The fuzzy model resulted in three clusters, each with a significant number of outliers (Figure 5); the outliers were more pronounced for cluster 2 than for clusters 1 and 3. The presence of the outliers and cluster overlap in the fuzzy model was supported by a low value of the Dunn coefficient (0.3014), which corresponds to a high level of fuzziness.

Based on the results obtained, the cluster composition parameters related to intercluster adhesion and intracluster cohesion indicated that clusters from the K-means model were better separated (higher mean silhouette value) and more compact (lower mean distance from central node) than in the other models for Ethiopia (Table 4).

Table 4: Cluster composition parameters (intercluster adhesion and intracluster cohesion) for Ethiopian households.

| Model | No. of clusters | Within sum of squares | Mean distance from central nodes | Mean silhouette separation |
|---|---|---|---|---|
| K-means model | 4 | 20758 | 0.74 | 0.66 |
| SOM model | 4 | 23178 | 0.92 | 0.51 |
| Fuzzy model | 3 | 21655 | 0.89 | 0.56 |

For Tanzania, the mean silhouette separation coefficients were not significantly different (0.66 and 0.64 for K-means and SOM, respectively), as shown in Table 5. However, there was a tendency for the SOM to have better-defined clusters, given its lower within-cluster sum of squares as well as lower mean distance from the central node. The spatial distribution is illustrated in Figures 6 and 7.

Table 5: Cluster composition parameters (intercluster adhesion and intracluster cohesion) for Tanzania households.

| Model | No. of clusters | Within sum of squares | Mean distance from central nodes | Mean silhouette separation |
|---|---|---|---|---|
| K-means model | 6 | 12628 | 2.1 | 0.66 |
| SOM model | 6 | 11772 | 1.7 | 0.64 |
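The composition metrics in Tables 4 and 5 can be reproduced from any hard cluster assignment; a minimal sketch, assuming labels are consecutive integers starting at 0:

```python
import numpy as np
from sklearn.metrics import silhouette_score

def composition_metrics(X, labels):
    """Within-cluster SS, mean distance to the cluster node, mean silhouette."""
    centers = np.array([X[labels == j].mean(axis=0) for j in np.unique(labels)])
    dists = np.linalg.norm(X - centers[labels], axis=1)
    return {
        "within_ss": float((dists ** 2).sum()),
        "mean_dist_to_node": float(dists.mean()),
        "mean_silhouette": float(silhouette_score(X, labels)),
    }
```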
Figure 6: Household allocation into six clusters using the K-means model for Tanzania dairy farmers.

Figure 7: Node counts for household clusters derived using the SOM model for Tanzania (a) and dendrogram for super clusters (b).

For Tanzania, the clusters' separation and intactness can be observed in Figures 6 and 7. No significant difference can be observed with regard to intercluster adhesion between K-means and SOM (Table 5). Figure 6 shows the cluster visualization from the K-means model for the Tanzania dataset. Clusters 4 and 5 overlap and are in close proximity to cluster 6, indicating that they have few differentiating characteristics. This overlap is equally observed in the SOM model (Figure 7). The numbers on the colored bar in Figure 7(a) indicate the densities of members in each cluster. There are only four well-separated clusters based on density (from left: red, orange, yellow, and light gold). However, the dendrogram (Figure 7(b)) shows three clusters branching from the same node; these are also seen as the overlapping clusters (clusters 4, 5, and 6) in the K-means plot (Figure 6).

### 3.2. Cluster Validation

#### 3.2.1. Cluster Membership Reranking

Ranking correlation was used to study the levels of household relocation between the training and testing datasets. Generally, the clustering models applied to the Ethiopia dataset indicated low membership relocation. Table 6 summarizes the results for Ethiopia, where, despite a lower Akaike Information Criterion (AIC) estimate, the fuzzy model had the highest proportion of members reallocated to other clusters (32%) compared to the K-means and SOM. The high correlation coefficients for SOM and K-means indicate lower reallocation of cluster members. In contrast, results from Tanzania indicated very high reranking of cluster membership between training and testing datasets (Table 7).

Table 6: Cluster model parameters and ranking accuracy (membership reallocation) based on Spearman rank correlation for the Ethiopia dataset.

| Model | AIC | Residual deviance | Ranking accuracy (r) |
|---|---|---|---|
| K-means model | 102 | 2.7e-2 | 0.85 |
| SOM model | 102 | 2.8e-2 | -0.88 |
| Fuzzy model | 68.09 | 9.35e-2 | 0.68 |

Table 7: Cluster model parameters and ranking accuracy (membership reallocation) based on Spearman rank correlation for the Tanzania dataset.

| Model | AIC | Residual deviance | Ranking accuracy (r) |
|---|---|---|---|
| K-means model | 200 | 0.001 | -0.21 |
| SOM model | 200 | 0.006 | 0.39 |

#### 3.2.2. Prediction Accuracy

Tables 8 and 9 summarize the results for predicting missing values for milk yield, sales, and breeding choice. Results for the Ethiopia dataset indicate that the model fitting fixed effects of clusters derived from the fuzzy model had the highest accuracies for peak milk yield (0.77), milk sales (0.48), and probability of choosing AI (0.55), as shown in Table 8, while for Tanzania, the highest accuracies for milk production and sales (0.46 and 0.41) were obtained when fitting clusters derived from the K-means model (Table 9).

Table 8: Estimates of prediction accuracy (r, 0 ≤ r ≤ 1) for models fitting cluster of production for milk yield, milk sales, and choice of breeding method in Ethiopia.

| Algorithm | Milk yield | Milk sold | Preferred breeding method |
|---|---|---|---|
| K-means | 0.68 | 0.40 | 0.54 |
| SOM | 0.66 | 0.38 | 0.54 |
| Fuzzy | 0.77 | 0.48 | 0.55 |

Table 9: Estimates of prediction accuracy (r, 0 ≤ r ≤ 1) for models fitting cluster of production for milk yield, milk sales, and choice of breeding method in Tanzania.

| Algorithm | Milk yield | Milk sold | Preferred breeding method |
|---|---|---|---|
| K-means | 0.46 | 0.41 | 0.29 |
| SOM | 0.32 | 0.31 | 0.46 |

For the Tanzania dataset, clusters from the K-means model achieved high prediction accuracies for both milk yield and sales (46% and 41%, respectively). However, the K-means clusters had lower prediction accuracy for choice of breeding method (29%). Clusters from the SOM model performed poorly on the quantitative traits but had a higher probability (46%) of correctly assigning the choice of breeding method.
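The prediction-accuracy exercise can be sketched as fitting a linear model with cluster fixed effects (one-hot dummies) plus covariates on two-thirds of the records and correlating predictions with the held-out third. Column names here are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def cluster_prediction_accuracy(df, y_col, covariates, cluster_col="cluster"):
    """Correlation between observed and predicted values of the response."""
    X = pd.get_dummies(df[covariates + [cluster_col]], columns=[cluster_col])
    X_tr, X_te, y_tr, y_te = train_test_split(X, df[y_col],
                                              test_size=1 / 3, random_state=0)
    y_hat = LinearRegression().fit(X_tr, y_tr).predict(X_te)
    return float(np.corrcoef(y_te, y_hat)[0, 1])  # prediction accuracy r
```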
#### 3.2.3. Cluster Variances

In order to assess whether the clusters defined by the various algorithms reflect differences in production characteristics between households, we evaluated the variance accounted for by these clusters on select performance measures. For Ethiopia, the total variance was 1.015 and 0.988 for milk yield and sales, respectively, while in Tanzania, the total variance was 1.076 and 1.09 for milk yield and sales, respectively. The differences between the residual variances of the two linear models (see (6) versus (7) for Ethiopia and (9) versus (10) for Tanzania) were significant (p < 0.00001). Results show that, for the Ethiopia data, the fuzzy model clusters accounted for 89% and 70% of the total variance in milk yield and milk sales, respectively. By comparison, the K-means clusters accounted for 71% and 65% of the total variation in milk yield and milk sales, respectively. Tables 10 and 11 summarize the proportion of variance accounted for by the clusters for each clustering model.

Table 10: Proportion of variance accounted for by cluster of production in Ethiopia.

| Algorithm | Response | Fitted model | Total variance* | Residual variance | -2 log likelihood | P value | Variance accounted for by cluster |
|---|---|---|---|---|---|---|---|
| K-means | Milk yield | With cluster | 1.015 | 0.239 | 1867.4 | <0.00001 | 73% |
| K-means | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| K-means | Milk sales | With cluster | 0.988 | 0.222 | 1770.1 | <0.00001 | 54% |
| K-means | Milk sales | Without cluster | | 0.76 | 3388.6 | | |
| SOM | Milk yield | With cluster | 1.015 | 0.283 | 2091.8 | <0.00001 | 68% |
| SOM | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| SOM | Milk sales | With cluster | 0.988 | 0.258 | 1969.8 | <0.00001 | 51% |
| SOM | Milk sales | Without cluster | | 0.76 | 3388.6 | | |
| Fuzzy | Milk yield | With cluster | 1.015 | 0.074 | 337 | <0.00001 | 89% |
| Fuzzy | Milk yield | Without cluster | | 0.977 | 3718.4 | | |
| Fuzzy | Milk sales | With cluster | 0.988 | 0.073 | 319.4 | <0.00001 | 70% |
| Fuzzy | Milk sales | Without cluster | | 0.76 | 3388.6 | | |

*Data scaled to have unit variance and a mean of zero.

Table 11: Proportion of variance accounted for by cluster of production in Tanzania.

| Algorithm | Response | Fitted model | Total variance* | Residual variance | -2 log likelihood | P value | Variance accounted for by cluster |
|---|---|---|---|---|---|---|---|
| K-means | Milk yield | With cluster | 1.076 | 0.0027 | -2981 | <0.00001 | 71% |
| K-means | Milk yield | Without cluster | | 0.771 | 2584.2 | | |
| K-means | Milk sales | With cluster | 1.09 | 0.018 | -1084.3 | <0.00001 | 65% |
| K-means | Milk sales | Without cluster | | 0.723 | 2520 | | |
| SOM | Milk yield | With cluster | 1.076 | 0.294 | 1633 | <0.00001 | 44% |
| SOM | Milk yield | Without cluster | | 0.771 | 2584.2 | | |
| SOM | Milk sales | With cluster | 1.09 | 0.228 | 1381.6 | <0.00001 | 45% |
| SOM | Milk sales | Without cluster | | 0.723 | 2520.2 | | |

*Data scaled to have unit variance and a mean of zero.
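The variance accounted for by the clusters compares the residual variance of the models with and without cluster effects ((6) versus (7), (9) versus (10)). One hedged reading of that computation, under the same assumed column conventions as above:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def residual_variance(X, y):
    """Variance of the residuals of an ordinary least-squares fit."""
    model = LinearRegression().fit(X, y)
    return float(np.var(y - model.predict(X)))

def variance_from_clusters(df, y_col, covariates, cluster_col="cluster"):
    """Drop in residual variance when cluster dummies are added to the model."""
    X_full = pd.get_dummies(df[covariates + [cluster_col]], columns=[cluster_col])
    X_base = df[covariates]
    y = df[y_col]
    return residual_variance(X_base, y) - residual_variance(X_full, y)
```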
## 4. Discussion

### 4.1. Characterization of Smallholder Farmers

Unsupervised learning models have been used to characterize smallholder farmers despite the fact that these models lack consistency and are highly unpredictable [13]. In this study, the performance of three commonly used algorithms for clustering farming households, namely, K-means, fuzzy, and SOM, was compared, and a set of validation criteria to assess the robustness of the defined clusters is proposed. Such an approach is seldom used in similar studies.

In Africa, smallholder farming systems have been characterized using common hierarchical and nonhierarchical clustering algorithms. Work done by Mburu et al. [29], Bidogeza et al. [30], Dossa et al. [10], and Kuivanen et al. [7, 8] utilized the Ward and K-means methods to define clusters for smallholder households. In addition to the machine learning approaches, the use of expert knowledge to validate cluster-based characterization is highly recommended [7, 8]; in some studies, local knowledge has been used in a participatory approach to accurately estimate farm types. More complex clustering approaches have also been explored in studying smallholder farm types, as done by Salasya & Stoorvogel [23], Pelcat et al. [31], Galluzzo [21], and Paas & Groot [12]; these studies present the use of fuzzy clustering, neural networks, and naïve Bayes algorithms, respectively. Although all clustering assigns farmers into some types, fuzzy clustering is a soft clustering approach in which a farm can belong to more than one farm type or to none [31]. However, across these previous studies, the robustness of the clustering models and their ability to predict farm types remain uncharted. Following up on Goswami et al.
[5], the study of smallholder farmers needs to be extended to the formulation of predictive farm types; the evolvement of farmers within homogeneous groups can then be predicted because the clusters' stabilities are known.

### 4.2. Clustering Algorithms Evaluated

The determination of the putative number of clusters that best defines the data (K) is the foremost need in cluster analysis. Bad estimates of K may result in unstable clusters and many members appearing as outliers. Since the goal is to obtain highly homogeneous groups, the within-group sum of squared differences is commonly used to evaluate how compact the clusters are. We adopted the recommendations of Kassambara [16] and employed the Elbow, Gap statistic, and average silhouette methods to assess the best K for the datasets. The Elbow and Gap statistic methods estimate a value of K that minimizes the within-group sum of squares (WSS) such that any addition to the estimated value of K does not significantly change the WSS. Since the study goal was to arrive at highly homogeneous groups, the within-group sum of squared differences seemed the most important measure. However, a common approach in other studies is to try out different values of K while observing the silhouette separation, or to manually inspect the dendrogram produced in hierarchical clustering [15, 16]. While the Elbow method and Gap statistic use the within-group sum of squared differences, the silhouette method compares the average separation between clusters (a minimal sketch of this K-selection procedure is given at the end of this subsection).

The application of the three separate algorithms revealed differences in their performance based on data type and structure. Where observations were highly identical, soft clustering (the fuzzy model) failed to categorize the records into the appropriate number of clusters: the fuzzy model allocated households into only 3 clusters despite four clusters being determined as appropriate for the Ethiopia dataset (Figure 5), whereas the other models converged at 4 clusters (Figures 3 and 4). Similarly, for the Tanzanian dataset, the fuzzy model could not converge even after many iterations. It would appear that the fuzzy model is best suited to situations where the data are highly heterogeneous; otherwise, it does not lend itself well to cluster identification.

Balakrishnan (1994) compared the K-means and SOM algorithms in cluster identification under specific criteria of intracluster similarity and intercluster difference; the dataset had known cluster solutions, so the only target was to find performance differences between the two algorithms. Results indicated that the K-means algorithm outperformed the SOM algorithm. Mingoti & Lima [32] compared the performance of K-means and SOM models using smallholders' farm data, and their results also indicated that K-means was more robust. In this study, the SOM performed poorly compared to the fuzzy and K-means models for the Ethiopia dataset, having higher within-cluster dispersion as well as lower separation between clusters; for the Tanzania dataset, the SOM performed similarly to the K-means algorithm. The performance of SOM in our study is concordant with that of Nazari et al. [15], who characterized dryland farming systems. In contrast to the observations of Mingoti & Lima [32], the fuzzy model used in our study failed spectacularly for both datasets. This reinforces the observation of Xu [33], who concluded that the performance of clustering algorithms is subject to the nature of the data and the area of application. More studies need to be undertaken to see how the fuzzy algorithm can best be adapted to farming datasets.
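To make the K-selection procedure discussed at the start of this subsection concrete, here is a minimal sketch using scikit-learn. It is an illustration on assumed synthetic data, not the study's pipeline: it traces the within-group sum of squares (the Elbow criterion) and the average silhouette over candidate values of K.

```python
# Hypothetical sketch of Elbow (WSS) and average-silhouette K selection;
# synthetic data stands in for the standardized household survey features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Four synthetic "farm types", 50 households each, 3 features apiece.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 3)) for c in (0, 2, 4, 6)])

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss = km.inertia_                      # within-group sum of squares (Elbow)
    sil = silhouette_score(X, km.labels_)  # average between-cluster separation
    print(f"K={k}: WSS={wss:8.1f}, silhouette={sil:.2f}")
```

On data like this, the WSS curve flattens and the silhouette peaks at K = 4, mirroring how four clusters were judged appropriate for the Ethiopia dataset.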
### 4.3. Cluster Membership Reallocation and Prediction Accuracy

A good clustering model should be able to repeatedly allocate the majority of households to the same clusters, even when the volume of data changes. To be sure that our model definitions represented a collection of the most important features describing each cluster, we tested the ability of the models to redefine the same clusters between the training and testing datasets. This strategy aligns well with Xu [33], who recommends that a good clustering model be able to deal with new data cases without the need to relearn. The Spearman rank correlation was used to measure the degree of reranking (a minimal sketch of this check, and of the prediction-accuracy check below, follows this discussion). For the Tanzania data, the SOM model provided the cluster allocation that minimized reranking, yet the rank correlations seen in Tanzania were very low for both the K-means and SOM models. Given this, together with the spectacular failure of the fuzzy model in Tanzania, a pattern emerges suggesting a fundamental problem with the Tanzanian dataset rather than issues of model suitability. It is possible that there is no significant differentiation between households in Tanzania, and this extreme homogeneity proves a challenge because each household can be allocated to any cluster. Such a scenario could arise from flawed data collection strategies: we suspect that, under the requirement to finalize data collection within set timelines, groups of farmers were interviewed collectively while the data were entered as if they were for individual farmers.

The fuzzy model in Ethiopia had the best fit, indicated by the lowest AIC value, despite higher membership reallocation. For a standard prediction problem, this would be the best model for the data, which is corroborated by the fact that the variance accounted for by the clusters was also highest for the fuzzy model. However, given that our intention is to maximize correct reassignment of individuals into clusters, the K-means and SOM models would be preferred for household membership allocation.

Three response variables (milk yield, sales, and choice of breeding method) were selected for the prediction exercise because of their vital role in smallholder dairy farm evolvement; they broadly represent the commercial orientation of a smallholder farm. Evaluation of the prediction accuracies for the selected response variables indicated a very different scenario from the clustering problem. When the clusters were included in the models to predict milk yield, sales, or breeding method, the fuzzy-model-derived clusters had the highest prediction accuracies compared with the K-means and SOM clusters for the Ethiopia data. For the Tanzania data, the SOM model clusters yielded the best prediction accuracy for the binary trait, choice of breeding method, while the K-means model performed best for the quantitative traits. However, the prediction accuracies for the Tanzania data were low, underscoring the earlier assertions about data structure and integrity. Judged on the predictive power of the clusters on selected response variables, the fuzzy clustering model performed best, with its clusters accounting for significantly more variation in the response variables than those of the other clustering models.

Based on the results from Ethiopia, where all the models could be evaluated, it would seem that model choice depends on the problem that needs to be solved.
For a clustering problem, where the intention is to obtain robust membership allocation, the K-means algorithm would be the most appropriate, ensuring maximal homogeneity within clusters; the use of this model would minimize reranking when applying it to new datasets, without the need for new learning. However, in the event that clusters are to be used in prediction models, the fuzzy algorithm would be the best choice for cluster definition.
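As forecast in Section 4.3, the sketch below illustrates, on hypothetical labels and responses, the two checks used there: membership reranking via Spearman rank correlation (as in Tables 6 and 7, where the paper applies rank correlation to cluster assignments) and prediction accuracy as the correlation r between observed responses and cluster-based predictions (as in Tables 8 and 9). It is an assumption-laden illustration, not the study's code.

```python
# Hypothetical sketch of the two validation checks in Section 4.3.
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(1)
n = 300
train_labels = rng.integers(0, 4, size=n)        # cluster IDs from first fit

# (1) Reranking: second fit reallocates ~20% of households at random.
test_labels = train_labels.copy()
moved = rng.random(n) < 0.2
test_labels[moved] = rng.integers(0, 4, size=moved.sum())
rho, _ = spearmanr(train_labels, test_labels)
print(f"ranking accuracy (r) = {rho:.2f}")       # near 1 -> little reallocation

# (2) Prediction accuracy: predict a response (e.g., milk yield) from the
# cluster means learned on a holdout split, then correlate with observations.
milk = 5 + 2 * train_labels + rng.normal(0, 1.5, n)   # synthetic response
train = np.arange(n) < 240
means = {c: milk[train & (train_labels == c)].mean() for c in range(4)}
pred = np.array([means[c] for c in train_labels[~train]])
r, _ = pearsonr(milk[~train], pred)
print(f"prediction accuracy (r) = {r:.2f}")
```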
## 5. Conclusion

The goal of the reported study was to identify the most robust approach to correctly classify diverse households into homogeneous groups of farmers with similar production systems and management activities. The purpose of the characterization was to use the defined groups to design interventions and strategies that facilitate the evolvement of smallholder dairy farmers beyond subsistence in Ethiopia and Tanzania.
Results from this study demonstrate the use of unsupervised learning models in cluster definition for smallholder dairy farmers, as well as strategies to assess the models' suitability and the robustness of the resulting clusters. Performance varied across the tested models, underscoring the need to choose an appropriate method for the data structure and the questions being answered. The results obtained from this study are a necessary first step in understanding smallholder production systems and in studying household evolvement from subsistence to full commercial orientation.

---

*Source: 1020521-2019-01-02.xml*
# Plasma Lactate Levels Increase during Hyperinsulinemic Euglycemic Clamp and Oral Glucose Tolerance Test

**Authors:** Feven Berhane; Alemu Fite; Nour Daboul; Wissam Al-Janabi; Zaher Msallaty; Michael Caruso; Monique K. Lewis; Zhengping Yi; Michael P. Diamond; Abdul-Badi Abou-Samra; Berhane Seyoum

**Journal:** Journal of Diabetes Research (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102054

---

## Abstract

Insulin resistance, which plays a central role in the pathogenesis of type 2 diabetes (T2D), is an early indicator that heralds the occurrence of T2D. It is imperative to understand the metabolic changes that occur at the cellular level in the early stages of insulin resistance. The objective of this study was to determine the pattern of circulating lactate levels during the oral glucose tolerance test (OGTT) and the hyperinsulinemic euglycemic clamp (HIEC) in normal nondiabetic subjects. Lactate and glycerol were determined every 30 minutes during OGTT and HIEC in 22 participants. Lactate progressively increased throughout the HIEC study period (P < 0.001). Participants with BMI < 30 had significantly higher mean M-values compared to those with BMI ≥ 30 at baseline (P < 0.05); this trend also continued throughout the OGTT. In addition, those with impaired glucose tolerance (IGT) had significantly higher mean lactate levels compared to those with normal glucose tolerance (P < 0.001). In conclusion, we found that lactate increased during the HIEC study, which is a state of hyperinsulinemia similar to the metabolic milieu seen during the early stages in the development of T2D.

---

## Body

## 1. Introduction

Type 2 diabetes mellitus (T2D) is a serious global problem that is closely related to the rise in obesity. Insulin resistance plays a central role in the pathogenesis of T2D [1] and is an early marker for the disease. Therefore, it is imperative to have a simple method to identify insulin resistance in its early stages in order to implement preventative measures. The gold standard test for insulin resistance is the hyperinsulinemic euglycemic clamp (HIEC); however, this technique is cumbersome and is restricted to research settings. Other methods, such as the minimal model approximation of the metabolism of glucose (MMAMG) [2, 3] and the homeostasis model assessment of insulin resistance (HOMA-IR), can also be used; however, they are indirect derivatives computed from fasting insulin and glucose determinations.

Not much work has been done to understand the early metabolic changes at the cellular level during the stages of insulin resistance in the pathogenesis of diabetes. Understanding the early changes could help identify insulin resistance before it is clinically apparent. Part of the metabolic change that occurs at the cellular level is an increase in lactate production. Epidemiologic studies have reported that lactate could predict the occurrence of diabetes [4, 5]. The purpose of this study is to examine lactate changes during HIEC and OGTT; the HIEC artificially creates a state of hyperinsulinemia that resembles the hyperinsulinemia seen during the early stages of insulin resistance.

Insulin stimulates glycolysis by activating the rate-limiting enzymes phosphofructokinase and pyruvate dehydrogenase [6]. Patients with diabetes and insulin resistance have increased glycolytic activity [7, 8].
The increase in glycolysis leads to increased production of NADH and pyruvate and decreased NAD+ levels. NAD+ is generated from NADH in a redox reaction when pyruvate is converted to lactate by lactate dehydrogenase (Figure 1). This reaction might be exaggerated in insulin resistance, as there is increased glycolysis driven by hyperinsulinemia. The conversion of pyruvate to lactate partly replenishes NAD+. Lactate is an important cellular metabolite in the glycolytic pathway, and it may reflect the state of cellular metabolism; a high lactate concentration may be an early signal of the beginning of insulin resistance. The purpose of this study was to examine lactate levels during OGTT and HIEC.

Figure 1: Generation of NAD+ from pyruvate.

## 2. Methods

### 2.1. Study Subjects

A total of 22 healthy nondiabetic volunteers (16 males and 6 females) were included in this study. The mean age was 41 ± 12.4 years and the average BMI was 27.8 ± 4.8 kg/m². Participants were first screened over the phone and provided with a brief explanation of the study; prequalified subjects were scheduled for the on-site screening visit. All studies were conducted in the MOTT Clinical Research Center (MCRC) on the Wayne State University medical campus and began at approximately 08:30 (time −60 min) after a minimum 10-hour overnight fast. The purpose, nature, and potential risks of the study were explained to all participants, and written consent was obtained before their participation. After written informed consent was obtained, comprehensive screening tests were performed, including vitals, body mass index (BMI), urinalysis, pregnancy test (females only), ECG, body composition, medical/health history, the international physical activity questionnaire, and complete blood chemistry, CBC, HbA1c, and lipid profile. None of the participants had reported medical problems, and none of them was engaged in any heavy exercise. All participants were instructed to stop any form of exercise for at least 2 days before the study. The institutional review board of Wayne State University approved the protocol.

### 2.2. Experimental Strategy

OGTT. Patients reported to the Mott Clinical Research Center (MCRC) at 8:00 AM in a fasting state. A catheter was placed in the antecubital vein for repeated blood draws and covered with a heating pad (60°C) for sampling of arterialized venous blood. The IV line was kept open with an infusion of normal saline (0.9% NaCl; pH 7.4). Baseline blood samples for glucose, insulin, and lactate measurements were drawn at time 0. Patients were then given 300 mL of an aqueous solution containing 75 grams of glucose with orange flavor, to be drunk at one time. Glucose, lactate, and glycerol concentrations were determined at 30 min intervals for 2 hours after glucose ingestion. Glucose was measured using a YSI glucose analyzer (Beckman Instruments, Fullerton, California, USA).

Hyperinsulinemic Euglycemic Clamp (HIEC). The clamp study was performed at the MCRC. Subjects were admitted to the MCRC at 8:00 AM after an overnight fast. An antecubital catheter was placed on the right hand for repeated blood draws and covered with a heating pad (60°C) for sampling of arterialized venous blood. The IV line was kept open with an infusion of normal saline (0.9% NaCl; pH 7.4). A second antecubital catheter with ports for insulin and glucose was inserted on the left arm.
Continuous infusion of human regular insulin (Humulin R; Eli Lilly, Indianapolis, IN, USA) was started at a rate of 80 mU m−2 minute−1 and continued for 120 minutes. Plasma glucose was measured with a YSI glucose analyzer at 5-minute intervals throughout the clamp. Euglycemia was targeted at 90 mg/dL by variable infusion of 20% D-glucose [9]. Insulin-stimulated glucose disposal rates (M-values) were calculated as the average value during the final 30 minutes of insulin infusion; the M-value is the glucose infusion rate per kg per minute (mg/kg/min). Lactate and glycerol levels were determined before the initiation of insulin infusion at −30 minutes and then every 30 minutes during the hyperinsulinemic conditions (0, 30, 60, 90, and 120 minutes).

Lactate Assay. Fasting plasma lactate levels were determined using a commercially available enzymatic kit (Eton Bioscience Inc., San Diego, CA). The lactate determination technique relies on the conversion of lactate to pyruvate by lactate dehydrogenase. In the process, NAD+ is reduced to NADH, which is ultimately coupled with the tetrazolium salt 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyl tetrazolium (INT) to produce a formazan, which exhibits absorbance at 490 nm. The assay is specific for lactate, and the intraassay and interassay variations are 6% and 15%, respectively. Compared to other studies [5, 10, 11], this method showed better precision; the coefficient of determination computed from the lactate standard curve was 0.996.

Glycerol Assay. Cayman's assay kit was used for plasma glycerol determination (Cayman Chemical Company, Ann Arbor, MI, USA). The assay employs a coupled enzymatic reaction system in 96-well plates whereby plasma glycerol is converted by glycerol kinase, producing glycerol-3-phosphate (G3P) and adenosine-5′-diphosphate (ADP). The G3P is oxidized by glycerol phosphate oxidase, producing dihydroxyacetone phosphate and hydrogen peroxide (H2O2). The H2O2 reacts with 4-aminoantipyrine (4-AAP) and N-ethyl-N-(3-sulfopropyl)-m-anisidine (ESPA) under catalysis by horseradish peroxidase (HRP). Briefly, 10 μL of undiluted plasma was added to the plates, and the reaction was initiated by adding a total of 150 μL of the buffer and enzyme mixture solutions provided with the kit. After an appropriate incubation period, the final colored product was read using a spectrophotometer (Molecular Devices LLC, CA, USA) at an absorbance maximum of λ = 540 nm. The final concentration of plasma glycerol was determined from a standard curve of known glycerol concentrations run with each analysis.

### 2.3. Statistical Analysis

We used SPSS statistical software (SPSS Inc., Chicago, IL) to test for significant differences in lactate, glycerol levels, OGTT, and M-values. We also performed statistical correlations between these variables, in addition to BMI, considering all the subjects as one population, to examine trends and positive and negative correlations among these variables. We analyzed whether or not there was a rise in lactate or glycerol over time for both OGTT and HIEC. We then grouped the participants into those with normal and impaired OGTT. Lastly, we grouped the participants into those with M-values ≤ 4 and those with M-values > 4 and compared the overall mean lactate levels for these two groups during OGTT and HIEC.
These analyses allowed us to examine our hypothesis that lactate increases in individuals with clinical parameters suggestive of insulin resistance (IGT, low M-values).
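Because the M-value drives the groupings used in the analysis, the following minimal sketch illustrates how an M-value could be computed from a clamp's glucose infusion record and how the M ≤ 4 versus M > 4 lactate comparison might be run. The data values, the 5-minute sampling grid, and the use of a t-test are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: M-value as the mean glucose infusion rate (mg/kg/min)
# over the final 30 minutes of a 120-minute clamp, then a two-group
# comparison of mean lactate (M <= 4 vs M > 4) as in Section 2.3.
import numpy as np
from scipy.stats import ttest_ind

def m_value(times_min, gir_mg_kg_min):
    """Average glucose infusion rate over the final 30 min of infusion."""
    times = np.asarray(times_min)
    gir = np.asarray(gir_mg_kg_min)
    return gir[times >= times.max() - 30].mean()

times = np.arange(0, 125, 5)             # assumed 5-min sampling grid
gir = np.linspace(2.0, 9.0, times.size)  # synthetic GIR rising as insulin acts
print(f"M-value = {m_value(times, gir):.1f} mg/kg/min")

# Synthetic per-subject mean lactate (mM) against M-values.
m_vals = np.array([3.1, 3.8, 4.9, 6.5, 8.2, 9.0])
lactate = np.array([5.5, 5.0, 4.4, 4.1, 3.9, 3.7])
low, high = lactate[m_vals <= 4], lactate[m_vals > 4]
t, p = ttest_ind(low, high)
print(f"mean lactate: M<=4 {low.mean():.1f} mM vs M>4 {high.mean():.1f} mM (p={p:.2f})")
```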
## 3. Results

There were a total of 22 subjects: 16 males and 6 females. The mean age was 41.9 ± 2.6 years (SEM), ranging from 20 to 59 years, and the mean BMI was 27.7 ± 1.0 kg/m².

### 3.1. OGTT and HIEC

All subjects were able to complete both OGTT and HIEC without any difficulties. The mean fasting blood glucose during OGTT was 86.6 ± 1.6 mg/dL, and the overall mean M-value during HIEC was 8.2 ± 1.0. There was no correlation between fasting blood glucose and M-values.
Seven participants (31.8%) were found to have impaired glucose tolerance (IGT).

### 3.2. Lactate and Glycerol during HIEC

During HIEC there was a linear rise in plasma lactate levels, which increased significantly and progressively over the entire duration of the clamp (P < 0.001, Figure 2). The progressive increase in lactate during HIEC was seen in all participants. There was no statistically significant correlation with M-values, nor was there any relationship when M-values were divided into quartiles. However, when M-values were divided into those above and below 10, the mean lactate level at time 0 (when HIEC was initiated) was significantly elevated among participants with M-values below 10 (P < 0.03).

Figure 2: Lactate levels during HIEC. Plasma lactate was measured as described in Study Subjects. The figure shows lactate levels increasing over time.

The level of plasma glycerol was high before the initiation of the clamp, that is, at time −30 (30 minutes before HIEC started), and remained high until time 0, when HIEC was initiated. It then dropped precipitously and remained suppressed during the whole clamp period (Figure 3).

Figure 3: Glycerol levels during HIEC. Plasma glycerol was measured as described in Study Subjects. (a) shows glycerol levels dropping between 0 min and 30 min and then remaining steady. (b) shows that glycerol at 30 min is significantly lower than at 0 min (P < 0.001).

During OGTT, subjects with M-values ≤ 4 had slightly higher lactate values at baseline, maintained throughout the OGTT period, than those with M-values > 4. The difference showed a strong trend but did not reach statistical significance (Figure 4(a)). However, the mean total lactate among subjects with M-values ≤ 4 was significantly higher than among those with M-values > 4 (5.2 ± 2.5 mM versus 4.2 ± 1.9 mM, P < 0.01) (Figure 4(b)).

Figure 4: Plasma lactate values from OGTT according to M-value. OGTT was based on 75 g oral glucose tested over 120 min. M-values represent the insulin resistance index at time 120 min of HIEC. Plasma lactate was measured as described in Study Subjects. (a) shows a progressive increase in lactate during OGTT among patients with M-values ≤ 4; the difference is not statistically significant but shows a trend. (b) shows the mean total lactate among subjects with M-values above or below 4; participants with M-values above 4 had significantly lower lactate levels (P < 0.01).

Additionally, the fasting lactate level was significantly higher among patients with IGT than among those with NGT, and the difference continued during the entire period of OGTT. For those with IGT, the lactate level increased progressively, whereas for those with NGT, the lactate level was stable throughout the test (Figure 5(a)). Likewise, lactate was significantly higher among those with IGT than those with NGT (5.7 ± 2.0 mM versus 3.7 ± 1.7 mM, P < 0.001) (Figure 5(b)).

Figure 5: Lactate levels during OGTT according to glucose tolerance. OGTT was based on 75 g oral glucose tested over 120 min. Normal glucose tolerance (NGT) is defined by glucose levels less than or equal to 140 mg/dL at time 120 min, and impaired glucose tolerance (IGT) by glucose levels of 140–200 mg/dL at time 120 min. Plasma lactate was measured as described in Study Subjects. (a) shows significantly higher levels of lactate among participants with IGT; the difference continued throughout the period of OGTT.
(b) shows mean total lactate levels during OGTT; the cumulative lactate level among participants with IGT was significantly higher than among those with NGT (P < 0.001).

### 3.3. Lipid Profile

The mean total cholesterol, LDL cholesterol, HDL cholesterol, and triglyceride levels were 183.7 ± 10.9, 102.7 ± 10.3, 49.6 ± 3.2, and 131.7 ± 19.6 mg/dL, respectively. There was no correlation between LDL and M-values; however, there was a strong negative correlation between triglycerides and M-values (r = −0.50, P = 0.01). Participants with lower M-values had higher triglycerides (Figure 6). By contrast, HDL showed a trend toward a positive correlation, and those with low M-values had low HDL. There was also a strong negative correlation between total cholesterol and M-values (r = −0.50, P = 0.01).

Figure 6: Correlation of triglycerides and mean M-values. Triglyceride levels correlated negatively with M-values.
## 4. Discussion

This study demonstrated for the first time that lactate production progressively and proportionally increases during the entire HIEC study in normal and obese subjects, a situation that mimics the hyperinsulinemic state seen in the early stages of diabetes, before insulin resistance becomes clinically apparent and before the patient presents with diabetes. It is intriguing that, in similar previous studies, increased lactate levels were observed during the early periods of diabetes and prediabetes, the stage at which there is hyperinsulinemia. Lactate is not only increased in the early stages of diabetes but has also been shown to predict its occurrence in the future [4, 5]. For the first time, we have demonstrated a progressive increase in lactate and suppression of glycerol production during the entire period of HIEC. Furthermore, based on our study, we cautiously state that lactate could be used to identify a state of insulin resistance.

Traditionally, an increase in lactate has been used as an indicator of energy imbalance related to vigorous exercise and hypoxia [12]. In our setting, however, lactate increased not as a result of exercise but in reaction to the hyperinsulinemic state artificially created by the HIEC. What, then, is the connecting thread between hyperinsulinemia and the increased lactate?
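For reference in the explanation that follows, the lactate dehydrogenase step depicted in Figure 1, which regenerates cytosolic NAD+, can be written as:

$$\text{Pyruvate} + \text{NADH} + \text{H}^{+} \;\rightleftharpoons\; \text{Lactate} + \text{NAD}^{+}$$

Reading the equilibrium left to right, a high NADH/NAD+ ratio pushes pyruvate toward lactate, which is the redox argument developed below.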
A plausible explanation for the increased lactate levels during hyperinsulinemia could be an inadequate oxidative capacity at the cellular level. Indeed, several studies have reported defective oxidative phosphorylation during the development of insulin resistance [8, 13, 14]. It is our belief that, at this stage, it is the defective oxidative capacity that causes the significant increase in circulating lactate. The hyperinsulinemic state artificially created during the HIEC reflects the early stage of hyperinsulinemia seen during the pathogenesis of type 2 diabetes. Our data can neither confirm nor refute the presence of insulin resistance during the period of progressive lactate increase. One can, however, safely say that insulin resistance goes along with increased insulin secretion, without entering the age-old chicken-and-egg dilemma: we cannot conclude from this study whether the insulin resistance or the hyperinsulinemia comes first. Nonetheless, the end result is a high level of lactate, seen concurrently with hyperinsulinemia. High levels of circulating insulin stimulate glycolysis, the anaerobic metabolism of glucose, producing excess pyruvate that is further converted to lactate, leading to high circulating lactate levels (Figure 7). Studies have shown that glycolysis increases in patients with diabetes [7, 8].

Figure 7: Consumption and regeneration of NAD+.

During the stage of hyperinsulinemia, the additional stimulus for the body to convert pyruvate to lactate, instead of converting it to acetyl-CoA for the Krebs cycle, is the apparent defective oxidative capacity at the cellular level. This defect manifests as a high NADH/NAD+ ratio: the increase in glycolytic activity depletes the cellular concentration of NAD+ and increases the NADH concentration, tilting the balance toward a high NADH/NAD+ ratio. The resulting high NADH level creates cellular reductive stress that drives the conversion of pyruvate to lactate to quickly regenerate NAD+ (Figure 7). A further reason for the cell to recover NAD+ is to allow glycolysis to continue, since an adequate supply of NAD+ is mandatory for glycolysis. Under normal conditions, NAD+ is regenerated through the Krebs cycle in the mitochondria. However, during the early stage of diabetes, when there is hyperinsulinemia and excess calorie intake, glycolytic activity is increased; the cellular level of NAD+ is depleted, stimulating mechanisms to generate NAD+ quickly outside the mitochondria. The fast metabolic pathway to generate NAD+ in the cytosol is to oxidize NADH back to NAD+ through the conversion of pyruvate to lactate by lactate dehydrogenase (see Figure 1). Additionally, the mitochondrial dysfunction prevailing in insulin resistance is a further impetus for the cell to generate NAD+ in the cytoplasm by converting pyruvate to lactate [14]. Thus, the findings of this pilot study indicate that lactate increases significantly in subjects with hyperinsulinemia, a condition that can reasonably be taken as an indicator of a state of insulin resistance.

Fasting lactate levels were significantly higher among patients with IGT than among those with NGT, and the gap widened with time (Figure 5). Robertson et al. [15] observed similar results, showing a rise in lactate and insulin levels in subjects with IGT.
The increase in lactate among patients with IGT was seen even more clearly in an experiment by Krentz et al. [16], who demonstrated significant elevation of lactate among healthy, nonobese subjects with IGT who were matched for age, gender, and body mass index. The subjects who had IGT and elevated lactate also had hyperinsulinemia despite normal fasting glucose, a scenario commonly seen during the early stages of insulin resistance. Furthermore, Robertson et al. [17] demonstrated that lactate levels correlate positively with insulin levels: as insulin levels increased, lactate concentration increased. Again, lactate was shown to increase in a metabolic situation where there is insulin resistance. These findings are in line with our results that lactate rises during hyperinsulinemia, and they support the hypothesis that an elevated lactate level may herald insulin resistance.

Along with the increase in lactate, different investigators have also demonstrated a similar increase in fatty acids [15–17], a common finding among patients with metabolic syndrome. Among the defining characteristics of metabolic syndrome are high triglycerides and low HDL. Similarly, we found a significant negative correlation between triglycerides and M-values (Figure 6), with a trend toward a positive correlation with HDL. The positive correlation of lactate with triglycerides is an additional factor supporting our hypothesis that lactate increases in insulin resistance.

In conclusion, we demonstrated a linear rise of plasma lactate concentration during HIEC and in IGT. The increased plasma lactate concentration during hyperinsulinemia may be relevant to the increased lactate levels in insulin-resistant subjects with hyperinsulinemia; thus, we suggest that an increase in lactate could herald the early stages of insulin resistance long before patients are diagnosed with diabetes mellitus.

---

*Source: 102054-2015-04-19.xml*
102054-2015-04-19_102054-2015-04-19.md
33,451
Plasma Lactate Levels Increase during Hyperinsulinemic Euglycemic Clamp and Oral Glucose Tolerance Test
Feven Berhane; Alemu Fite; Nour Daboul; Wissam Al-Janabi; Zaher Msallaty; Michael Caruso; Monique K. Lewis; Zhengping Yi; Michael P. Diamond; Abdul-Badi Abou-Samra; Berhane Seyoum
Journal of Diabetes Research (2015)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/102054
102054-2015-04-19.xml
--- ## Abstract Insulin resistance, which plays a central role in the pathogenesis of type 2 diabetes (T2D), is an early indicator that heralds the occurrence of T2D. It is imperative to understand the metabolic changes that occur at the cellular level in the early stages of insulin resistance. The objective of this study was to determine the pattern of circulating lactate levels during oral glucose tolerance test (OGTT) and hyperinsulinemic euglycemic clamp (HIEC) study in normal nondiabetic subjects. Lactate and glycerol were determined every 30 minutes during OGTT and HIEC on 22 participants. Lactate progressively increased throughout the HIEC study period (P < 0.001). Participants with BMI < 30 had significantly higher mean M-values compared to those with BMI ≥ 30 at baseline (P < 0.05). This trend also continued throughout the OGTT. In addition, those with impaired glucose tolerance test (IGT) had significantly higher mean lactate levels compared to those with normal glucose tolerance (P < 0.001). In conclusion, we found that lactate increased during HIEC study, which is a state of hyperinsulinemia similar to the metabolic milieu seen during the early stages in the development of T2D. --- ## Body ## 1. Introduction Type 2 diabetes mellitus (T2D) is a serious global problem that is closely related to the rise in obesity. Insulin resistance plays a central role in the pathogenesis of T2D [1] and is an early marker for the disease. Therefore, it is imperative to have a simple method to identify insulin resistance in the early stages in order to implement preventative measures. The gold standard test for insulin resistance is the hyperinsulinemic euglycemic clamp (HIEC); however, this technique is cumbersome and is restricted for research. Other methods, such as the minimal model approximation of the metabolism of glucose (MMAMG) [2, 3] and homeostasis model assessment to estimate insulin resistance (HOMA-IR), can be also used; however, they are indirect derivatives computed from fasting insulin and glucose determinations.Not much work has been done to understand the early metabolic changes at the cellular level during the stages of insulin resistance in the pathogenesis of diabetes. Understanding the early changes could help identify insulin resistance before it is clinically apparent. Part of the metabolic changes that occur at the cellular level is the increase in lactate production. In epidemiologic studies, it has been reported that lactate could predict the occurrence of diabetes [4, 5]. The purpose of this study is to examine lactate changes during HIEC and OGTT. HIEC is a state of hyperinsulinemia artificially created that resembles the hyperinsulinemia seen during the early stages of insulin resistance.Insulin stimulates glycolysis by activating the rate limiting enzymes, phosphofructokinase and pyruvate dehydrogenase [6]. Patients with diabetes and insulin resistance have increased activity of glycolysis [7, 8]. The increase in glycolysis leads to increased production of NADH and pyruvate and decreased NAD+ levels. NAD+ is generated from NADH in a redox reaction when pyruvate is converted to lactate by lactate dehydrogenase (Figure 1). This reaction might be exaggerated in insulin resistance as there is increased glycolysis driven by hyperinsulinemia. The conversion of pyruvate to lactate partly replenishes NAD+. Lactate is an important cellular metabolite in the glycolytic pathway and it may reflect the state of the cellular metabolism. 
This reaction might be exaggerated in insulin resistance, as there is increased glycolysis driven by hyperinsulinemia. The conversion of pyruvate to lactate partly replenishes NAD+. Lactate is an important cellular metabolite in the glycolytic pathway, and it may reflect the state of cellular metabolism. Its high concentration may be an early signal of the beginning of insulin resistance. The purpose of this study was to examine lactate levels during OGTT and HIEC. Figure 1 Generation of NAD+ from pyruvate. ## 2. Methods ### 2.1. Study Subjects A total of 22 healthy nondiabetic volunteers (16 males and 6 females) were included in this study. The mean age was 41 ± 12.4 years and the average BMI was 27.8 ± 4.8 kg/m2. Participants were first screened over the phone and given a brief explanation of the study. Prequalified subjects were scheduled for an on-site screening visit. All studies were conducted in the MOTT Clinical Research Center (MCRC) on the Wayne State University medical campus and began at approximately 08:30 (time −60 min) after a minimum 10-hour overnight fast. The purpose, nature, and potential risks of the study were explained to all participants, and written consent was obtained before their participation. After written informed consent was obtained, comprehensive screening tests were performed, including vitals, body mass index (BMI), urinalysis, pregnancy test (females only), ECG, body composition, medical/health history, the international physical activity questionnaire, complete blood chemistry, CBC, HbA1c, and lipid profile. None of the participants had reported medical problems, and none was engaged in any heavy exercise. All participants were instructed to stop any form of exercise for at least 2 days before the study. The institutional review board of Wayne State University approved the protocol. ### 2.2. Experimental Strategy OGTT. Patients reported to the Mott Clinical Research Center (MCRC) at 8:00 AM in a fasting state. A catheter was placed in the antecubital vein for repeated blood draws, and the site was covered with a heating pad (60°C) for sampling of arterialized venous blood. The IV line was kept open with an infusion of normal saline (0.9% NaCl; pH 7.4). Baseline blood samples for glucose, insulin, and lactate measurements were drawn at time 0. Patients were then given 300 mL of an orange-flavored aqueous solution containing 75 grams of glucose to drink at one time. Glucose, lactate, and glycerol concentrations were determined at 30 min intervals for 2 hours after glucose ingestion. Glucose was measured using a YSI glucose analyzer (Beckman Instruments, Fullerton, California, USA). Hyperinsulinemic Euglycemic Clamp (HIEC). The clamp study was performed at the MCRC. Subjects were admitted to the MCRC at 8:00 AM after an overnight fast for the hyperinsulinemic euglycemic clamp. An antecubital catheter was placed in the right arm for repeated blood draws, and the site was covered with a heating pad (60°C) for sampling of arterialized venous blood. The IV line was kept open with an infusion of normal saline (0.9% NaCl; pH 7.4). A second antecubital catheter with ports for insulin and glucose was inserted in the left arm. Continuous infusion of human regular insulin (Humulin R; Eli Lilly, Indianapolis, IN, USA) was started at a rate of 80 mU m−2 minute−1 and continued for 120 minutes. Plasma glucose was measured with a YSI glucose analyzer at 5-minute intervals throughout the clamp. Euglycemia was targeted at 90 mg/dL by variable infusion of 20% D-glucose [9]. The insulin-stimulated glucose disposal rate (M-value) was calculated as the average value during the final 30 minutes of insulin infusion; the M-value is the glucose infusion rate per kg per minute (mg/kg/min).
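To make the arithmetic concrete, the sketch below averages a glucose infusion rate (GIR) trace over the final 30 minutes of the clamp, matching the 5-minute sampling interval described above; the GIR values themselves are hypothetical, not study data.

```python
# Minimal sketch of the M-value computation described above, assuming
# hypothetical 5-minute readings of the glucose infusion rate (GIR).
# M-value = average GIR (mg/kg/min) over the final 30 min of the clamp.

def m_value(gir_mg_per_kg_min, sample_interval_min=5, window_min=30):
    """Average glucose infusion rate over the last `window_min` minutes."""
    n_samples = window_min // sample_interval_min
    final_window = gir_mg_per_kg_min[-n_samples:]
    return sum(final_window) / len(final_window)

# Hypothetical GIR trace for a 120-minute clamp (25 readings at 5-min intervals).
gir = [0.0, 1.2, 2.5, 3.8, 4.9, 5.8, 6.5, 7.0, 7.4, 7.8,
       8.0, 8.2, 8.3, 8.4, 8.5, 8.5, 8.6, 8.6, 8.7, 8.7,
       8.8, 8.8, 8.8, 8.9, 8.9]
print(f"M-value = {m_value(gir):.1f} mg/kg/min")  # average of the last 6 readings
```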
Lactate and glycerol levels were determined before the initiation of insulin infusion at −30 minutes and then every 30 minutes during the hyperinsulinemic conditions (0, 30, 60, 90, and 120 minutes). Lactate Assay. Fasting plasma lactate levels were determined using a commercially available enzymatic kit (Eton Bioscience Inc., San Diego, CA). The lactate determination technique relies on the lactate dehydrogenase conversion of lactate to pyruvate. In the process, NAD+ is reduced to NADH, which is ultimately coupled with the tetrazolium salt 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyl tetrazolium (INT) to produce a formazan, which exhibits absorbance at 490 nm. The assay is specific for lactate, and the intraassay and interassay variations are 6% and 15%, respectively. Compared to other studies [5, 10, 11], this method showed better precision. The coefficient of determination computed from the lactate standard curve was 0.996. Glycerol Assay. Cayman's assay kit was used for plasma glycerol determination (Cayman Chemical Company, Ann Arbor, MI, USA). The assay employs a coupled enzymatic reaction system in 96-well plates whereby plasma glycerol is phosphorylated by glycerol kinase, producing glycerol-3-phosphate (G3P) and adenosine-5′-diphosphate (ADP). The G3P is oxidized by glycerol phosphate oxidase, producing dihydroxyacetone phosphate and hydrogen peroxide (H2O2). The H2O2 reacts with 4-aminoantipyrine (4-AAP) and N-ethyl-N-(3-sulfopropyl)-m-anisidine (ESPA) under the catalysis of horseradish peroxidase (HRP). Briefly, 10 μL of undiluted plasma was added to the plates. The reaction was initiated by adding a total of 150 μL of the buffer and enzyme mixture solutions provided with the kit. After an appropriate incubation period, the final colored product was read using a spectrophotometer (Molecular Devices LLC, CA, USA) at the absorbance maximum of λ = 540 nm. The final plasma glycerol concentration was determined from a standard curve of known glycerol concentrations run with each analysis. ### 2.3. Statistical Analysis We used SPSS statistical software (SPSS Inc., Chicago, IL) to test for significant differences in lactate and glycerol levels, OGTT results, and M-values. We also computed correlations among these variables and BMI, considering all subjects as one population, to examine for trends and positive or negative correlations. We analyzed whether or not there was a rise in lactate or glycerol over time for both OGTT and HIEC. Then, we grouped the participants according to those with normal and impaired OGTT. Lastly, we grouped the participants into those with M-values ≤ 4 and those with M-values > 4 and compared the overall mean lactate levels for these two groups during OGTT and HIEC. These analyses allowed us to examine our hypothesis that lactate increases in individuals with clinical parameters suggestive of insulin resistance (IGT, low M-values).
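The grouping-and-comparison logic described here can be sketched as follows. The original analysis was run in SPSS, so this Python snippet is only an illustrative equivalent; the DataFrame columns (`m_value`, `lactate_mean`) and their values are hypothetical.

```python
# Illustrative re-creation of the grouping analysis described above,
# assuming a hypothetical DataFrame with one row per participant.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "m_value":      [3.1, 5.8, 2.9, 9.4, 4.0, 11.2],  # hypothetical M-values (mg/kg/min)
    "lactate_mean": [5.6, 4.1, 5.9, 3.8, 5.0, 3.5],   # hypothetical mean lactate (mM)
})

# Group participants by insulin sensitivity (M-value <= 4 vs > 4).
low = df.loc[df["m_value"] <= 4, "lactate_mean"]
high = df.loc[df["m_value"] > 4, "lactate_mean"]

# Compare overall mean lactate between the two groups.
t_stat, p_value = stats.ttest_ind(low, high)
print(f"low-M mean = {low.mean():.2f} mM, high-M mean = {high.mean():.2f} mM, P = {p_value:.3f}")

# Correlation across all subjects treated as one population.
r, p_corr = stats.pearsonr(df["m_value"], df["lactate_mean"])
print(f"r = {r:.2f}, P = {p_corr:.3f}")
```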
## 3. Results There were a total of 22 subjects: 16 males and 6 females. The mean age was 41.9 ± 2.6 years (SEM), ranging from 20 to 59 years, and the mean BMI was 27.7 ± 1.0 kg/m2. ### 3.1. OGTT and HIEC All subjects were able to complete both the OGTT and the HIEC without any difficulties. The mean fasting blood glucose during OGTT was 86.6 ± 1.6 mg/dL, and the overall mean M-value during HIEC was 8.2 ± 1.0. There was no correlation between fasting blood glucose and M-values. Seven participants (31.8%) were found to have an impaired OGTT (IGT). ### 3.2. Lactate and Glycerol during HIEC During HIEC there was a linear rise in plasma lactate levels, which increased significantly and progressively over the entire duration of the clamp (P < 0.001, Figure 2). The progressive increase in lactate during HIEC was seen in all participants. There was no statistically significant correlation with the M-values, nor was there any relationship when M-values were divided into quartiles.
However, when the M-values were split above and below 10, the mean lactate level at time 0 (when the HIEC was initiated) was significantly elevated among participants with M-values below 10 (P < 0.03). Figure 2 Lactate levels during HIEC. Plasma lactate was measured as described in Methods. The figure shows that lactate levels increase over time. The level of plasma glycerol was high before the initiation of the clamp, that is, at time −30 (30 minutes before HIEC started), and remained high until time 0, when HIEC was initiated. It then dropped precipitously and remained suppressed during the whole clamp period (Figure 3). Figure 3 Glycerol levels during HIEC. Plasma glycerol was measured as described in Methods. (a) shows glycerol levels dropping between 0 min and 30 min and then remaining steady. (b) shows that glycerol at 30 min is significantly lower than at 0 min (P < 0.001). During OGTT, subjects with M-values ≤ 4 had slightly higher lactate values at baseline than those with M-values > 4, a difference that was maintained throughout the OGTT. The difference showed a strong trend but did not reach statistical significance (Figure 4(a)). However, the mean total lactate among subjects with M-values below 4 was significantly higher than among those with M-values above 4 (5.2 ± 2.5 mM versus 4.2 ± 1.9 mM, P < 0.01) (Figure 4(b)). Figure 4 Plasma lactate values from OGTT according to M-value. OGTT was based on 75 g of oral glucose tested over 120 min. M-values represent the insulin resistance index at time 120 min of HIEC. Plasma lactate was measured as described in Methods. (a) shows a progressive increase in lactate during OGTT among patients with M-values ≤ 4; the difference is not statistically significant but shows a trend. (b) shows the mean total lactate among subjects with M-values greater or less than 4; participants with M-values above 4 had significantly lower lactate levels (P < 0.01). Additionally, the fasting lactate level was significantly higher among patients with IGT than among those with NGT, and the difference continued during the entire period of the OGTT. For those with IGT the lactate level increased progressively, whereas for those with NGT the lactate level was stable throughout the test (Figure 5(a)). Likewise, the mean lactate was significantly higher among those with IGT than among those with NGT (5.7 ± 2.0 mM versus 3.7 ± 1.7 mM, P < 0.001) (Figure 5(b)). Figure 5 Lactate levels during OGTT according to glucose tolerance. OGTT was based on 75 g of oral glucose tested over 120 min. Normal glucose tolerance (NGT) is based on glucose levels less than or equal to 140 mg/dL at time 120 min, and impaired glucose tolerance (IGT) is based on glucose levels of 140–200 mg/dL at time 120 min. Plasma lactate was measured as described in Methods. (a) shows significantly higher levels of lactate among participants with IGT; the difference continued throughout the period of OGTT. (b) shows mean total lactate levels during OGTT; the cumulative lactate level among participants with IGT was significantly higher than among participants with NGT (P < 0.001). ### 3.3. Lipid Profile The mean total cholesterol, LDL cholesterol, HDL cholesterol, and triglyceride levels were 183.7 ± 10.9, 102.7 ± 10.3, 49.6 ± 3.2, and 131.7 ± 19.6 mg/dL, respectively. There was no correlation between LDL and M-values; however, there was a strong negative correlation between triglycerides and M-values (r = −0.50, P = 0.01).
Participants with lower M-values had higher triglycerides (Figure 6). In contrast, with regard to HDL there was a trend toward a positive correlation; those with low M-values had low HDL. There was also a strong negative correlation between total cholesterol and M-values (r = −0.50, P = 0.01). Figure 6 Correlation of triglycerides and mean M-values. Triglyceride levels negatively correlated with M-values.
## 4. Discussion This study demonstrated for the first time that lactate production progressively and proportionally increases during the entire HIEC study in normal and obese subjects, a situation that mimics the hyperinsulinemic state seen in the early stages of diabetes, before insulin resistance becomes clinically apparent and before the patient presents with diabetes. In similar previous studies, increased lactate levels were observed during the early periods of diabetes and prediabetes, the stage at which hyperinsulinemia is present. Lactate is not only increased in the early stages of diabetes but has also been shown to predict its occurrence in the future [4, 5]. For the first time we have demonstrated a progressive increase in lactate and suppression of glycerol production during the entire period of HIEC. Furthermore, based on our study we cautiously suggest that lactate could be used to identify a state of insulin resistance. Traditionally, an increase in lactate has been used as an indicator of energy imbalance related to vigorous exercise and hypoxia [12]. In our setting, however, lactate is not increased as a result of exercise but in reaction to the hyperinsulinemic state artificially created by the HIEC. What, then, is the connecting thread between hyperinsulinemia and the increased lactate? A plausible explanation for the increased lactate levels during hyperinsulinemia could be an inadequate oxidative capacity at the cellular level. Indeed, several studies have reported defective oxidative phosphorylation during the development of insulin resistance [8, 13, 14]. It is our belief that at this stage it is the defective oxidative capacity that causes the significant increase in circulating lactate.
The hyperinsulinemic stage that we artificially created during the HIEC reflects the early stages of hyperinsulinemia seen during the pathogenesis of type 2 diabetes. Our data cannot confirm or refute the presence of insulin resistance during the period of progressive lactate increase, and we cannot conclude from this study whether the insulin resistance or the hyperinsulinemia comes first. One can safely say, however, that insulin resistance goes along with increased insulin secretion, without entering the age-old dilemma of which comes first, the chicken or the egg. Nonetheless, the end result is a high level of lactate, which is seen concurrently with hyperinsulinemia. The high levels of circulating insulin stimulate glycolysis, the anaerobic metabolism of glucose, producing excessive pyruvate that is further converted to lactate, leading to high circulating lactate levels (Figure 7). Studies have shown that glycolysis increases in patients with diabetes [7, 8]. Figure 7 Consumption and regeneration of NAD+. During the stage of hyperinsulinemia, the additional stimulus for the body to convert pyruvate to lactate, instead of converting it to acetyl CoA and sending it to the Krebs cycle, is the apparently defective oxidative capacity at the cellular level. This defect is manifested as a high NADH/NAD+ ratio. The increase in glycolytic activity depletes the cellular concentration of NAD+ and increases the NADH concentration, tilting the balance toward a high NADH/NAD+ ratio. The resultant high NADH level creates a cellular reductive stress that drives the conversion of pyruvate to lactate to quickly regenerate NAD+ (Figure 7). Moreover, the cell must recover NAD+ in order for glycolysis to continue, since an adequate supply of NAD+ is mandatory for glycolysis. Under normal conditions, NAD+ is regenerated through the Krebs cycle in the mitochondria. However, during the early stage of diabetes, when there is hyperinsulinemia and excess calorie intake, glycolytic activity is increased. In this situation the cellular level of NAD+ is depleted, stimulating the cell to generate NAD+ quickly outside of the mitochondria. The fast metabolic pathway to generate NAD+ in the cytosol is the oxidation of NADH back to NAD+ through the conversion of pyruvate to lactate by lactate dehydrogenase (see Figure 1). Additionally, the prevailing mitochondrial dysfunction seen in insulin resistance is a further impetus for the cell to generate NAD+ in the cytoplasm by converting pyruvate to lactate [14]. Thus, the findings in this pilot study indicate that lactate is increased significantly in subjects with hyperinsulinemia, a condition that we can safely say is a good indicator of a state of insulin resistance. Fasting lactate levels were significantly higher among patients with IGT than among those with NGT, and the gap widened with time (Figure 5). Robertson et al. [15] observed similar results, showing a rise in lactate and insulin levels in subjects with IGT. The increase in lactate among patients with IGT was seen even more clearly in an experiment by Krentz et al. [16], who demonstrated significant elevation of lactate among healthy, nonobese subjects with IGT who were matched for age, gender, and body mass index. The subjects who had IGT and elevated lactate also had hyperinsulinemia despite a normal fasting glucose, a scenario commonly seen during the early stages of insulin resistance.
Furthermore, Robertson et al. [17] demonstrated that lactate levels correlate positively with insulin levels: as insulin levels increased, lactate concentrations increased. Again, lactate was shown to increase in a metabolic situation in which there is insulin resistance. These findings are in line with our results that lactate rises during hyperinsulinemia, and they support the hypothesis that an elevated lactate level may herald insulin resistance. Along with the increase in lactate, different investigators have also demonstrated a similar increase in fatty acids [15–17], a common finding among patients with metabolic syndrome. One of the defining characteristics of metabolic syndrome is high triglycerides and low HDL. Similarly, we found a significant negative correlation between triglycerides and M-values (Figure 6), with a trend toward a positive correlation with HDL. The positive correlation of lactate with triglycerides is an additional factor supporting our hypothesis that lactate increases in insulin resistance. In conclusion, we demonstrated a linear rise of plasma lactate concentration during HIEC and during OGTT in subjects with IGT. The increased plasma lactate concentration during hyperinsulinemia may be relevant to the increased lactate levels seen in insulin-resistant subjects with hyperinsulinemia; thus, we suggest that an increase in lactate could herald the early stages of insulin resistance long before patients are diagnosed with diabetes mellitus. --- *Source: 102054-2015-04-19.xml*
2015
# Deciphering the Effects of Astragaloside IV on AD-Like Phenotypes: A Systematic and Experimental Investigation **Authors:** Xuncui Wang; Feng Gao; Wen Xu; Yin Cao; Jinghui Wang; Guoqi Zhu **Journal:** Oxidative Medicine and Cellular Longevity (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1020614 --- ## Abstract Astragaloside IV (AS-IV) is an active component in Astragalus membranaceus with the potential to treat neurodegenerative diseases, especially Alzheimer's disease (AD). However, its mechanisms are still not known. Herein, we aimed to explore the systematic pharmacological mechanism of AS-IV for treating AD. Drug target prediction, network pharmacology, and functional bioinformatics analyses were conducted. Molecular docking was applied to validate the reliability of the interactions and binding affinities between AS-IV and related targets. Finally, experimental verification was carried out in mice with AβO infusion-produced AD-like phenotypes to investigate the molecular mechanisms. We found that AS-IV works through a multitarget synergistic mechanism involving inflammation, the nervous system, cell proliferation, apoptosis, pyroptosis, calcium ions, and steroids. In docking simulations, AS-IV interacted strongly with PPARγ, caspase-1, GSK3β, PSEN1, and TRPV1; meanwhile, PPARγ interacts with caspase-1, GSK3β, PSEN1, and TRPV1. In vivo experiments showed that AβO infusion produced AD-like phenotypes in mice, including impairment of fear memory, neuronal loss, tau hyperphosphorylation, neuroinflammation, and synaptic deficits in the hippocampus. In particular, the expression of PPARγ, as well as BDNF, was reduced in the hippocampus of AD-like mice. Conversely, AS-IV improved AβO infusion-induced memory impairment, inhibited neuronal loss and the phosphorylation of tau, and prevented the synaptic deficits. AS-IV also prevented the AβO infusion-induced reduction of PPARγ and BDNF. Moreover, the inhibition of PPARγ attenuated the effects of AS-IV on BDNF, neuroinflammation, and pyroptosis in AD-like mice. Taken together, AS-IV could prevent AD-like phenotypes and reduce tau hyperphosphorylation, synaptic deficits, neuroinflammation, and pyroptosis, possibly via regulating PPARγ. --- ## Body ## 1. Introduction Alzheimer's disease (AD) is a neurodegenerative disease characterized by cognitive decline and behavioral impairment. The incidence of AD is increasing as the world population ages. According to the World Alzheimer Report 2018 [1], there are more than 50 million people suffering from AD worldwide, and it is predicted that by 2050 the number of AD patients will increase to 152 million. Currently, the pathogenesis and etiology of AD have not been fully elucidated, and there is no effective treatment for AD [2]. Remarkable efforts have been made to develop strategies that resist the mechanisms leading to neuronal damage, synaptic deficits, neuroinflammation, and cognitive impairment [3–5]. In particular, amyloid-β (1-42) oligomers (AβO) accumulating in AD brains are linked to synaptic failure, neuroinflammation, and memory deficits [2, 6, 7]. Astragaloside IV (AS-IV), one of the major effective components purified from Astragalus membranaceus, has been documented in the treatment of diabetes and diabetic nephropathy [8, 9]. AS-IV has been reported to play a variety of beneficial roles in the prevention and treatment of neurodegenerative diseases with cognitive impairment [10].
Notably, AS-IV, as a selective natural PPARγ agonist, inhibited BACE1 activity by increasing PPARγ expression and subsequently reduced Aβ levels in APP/PS1 mice [11]. In addition, other studies pointed out that AS-IV could inhibit Aβ1-42-induced mitochondrial permeability transition pore opening, oxidative stress, and apoptosis [12, 13]. PPARγ activation regulates the response of microglia to amyloid deposition, thereby increasing phagocytosis of Aβ and reducing cytokine release [14, 15]. In addition, PPARγ agonists are able to improve the memory deficits in AD models [16, 17], which has been further confirmed in clinical trials [18, 19]. In a previous study, we reported that AS-IV prevented AβO-induced hippocampal neuronal apoptosis, probably by promoting the PPARγ/BDNF signaling pathway [20]. However, those findings were limited to in vitro experiments, and the systemic mechanisms had not been clearly disclosed. In this study, we adopted a systematic, multiscale approach to investigate the therapeutic effect of AS-IV on AD, combining drug target prediction, network pharmacology, functional bioinformatics analyses, and molecular docking. Subsequently, experiments were carried out to validate the potential mechanisms centered on the target PPARγ. This study should provide important implications for the treatment of AD. ## 2. Materials and Methods ### 2.1. Target Prediction To obtain the molecular targets of AS-IV, SysDT, a computational model based on random forest (RF) and support vector machine (SVM) algorithms [21] that integrates large-scale information on genomics, chemistry, and pharmacology, was used to predict potential targets, with RF score ≥ 0.8 and SVM score ≥ 0.7 as thresholds. In addition, we combined pharmacophore modeling [22] and structural similarity prediction methods to predict the targets of AS-IV [23]. ### 2.2. Network Construction To visualize and analyze the relationship between the targets of AS-IV and their related biological functions, we screened the relevant functions corresponding to the targets, imported them into Cytoscape, and constructed the networks. In this section, three networks, namely compound-target (C-T), compound-target-function (C-T-F), and protein-protein interaction (PPI) [24], were constructed to reveal the multitarget and multifunction therapeutic effect of AS-IV in combating AD (Figure 1). Figure 1 Experimental design. First, we screened the relevant targets via a comprehensive procedure; second, compound-target (C-T) and compound-target-function (C-T-F) networks were established to reveal the underlying molecular mechanisms; third, protein-protein interaction (PPI) network analysis and Gene Ontology (GO) enrichment analysis were performed to predict related targets; fourth, we studied the regulatory effect and specific mechanism of AS-IV on AD via molecular docking and dynamics simulation; last, we investigated the effects of AS-IV on AD phenotypes in AβO-infused mice and further assessed the potential mechanisms. The mice were intragastrically administered AS-IV daily (10, 20, and 40 mg/kg) for 7 days, followed by intrahippocampal infusion of AβO or vehicle (aCSF), and then received AS-IV intragastrically, AS-IV plus GW9662 (1 mg/kg) intraperitoneally, donepezil (5 mg/kg) intragastrically, or the same volume of saline intragastrically once per day for 28 continuous days. After that, the behavioral tests were conducted in the following week; the animals were then sacrificed, and brain samples were collected. GW9662 (1 mg/kg) was coadministered with 20 mg/kg of AS-IV or vehicle in the AβO-treated or aCSF-treated animals.
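As an illustration of the network construction step, the sketch below assembles a toy compound-target (C-T) network. The study used Cytoscape, so this networkx snippet is only a programmatic analogue, and the five-target list is a hypothetical subset of the 64 predicted targets.

```python
# Illustrative construction of a compound-target (C-T) network of the kind
# described above. The authors used Cytoscape; this networkx sketch uses a
# small hypothetical subset of the predicted targets.
import networkx as nx

predicted_targets = ["PPARG", "CASP1", "GSK3B", "PSEN1", "TRPV1"]  # subset, for illustration

g = nx.Graph()
g.add_node("AS-IV", kind="compound")
for target in predicted_targets:
    g.add_node(target, kind="target")
    g.add_edge("AS-IV", target)  # one compound-target edge per predicted interaction

# Degree of the compound node = number of predicted targets in the network.
print(f"AS-IV degree: {g.degree('AS-IV')}")  # 5 in this toy example (64 in the paper)
```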
### 2.3. Gene Ontology (GO) Enrichment Analysis To further investigate the vital biological processes connected with the AS-IV-related targets, we mapped these targets to DAVID for analysis of their biological meaning. GO biological process terms were used to represent gene function. Finally, those GO terms with P ≤ 0.05 and FDR ≤ 0.05 were selected for subsequent research.
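Both screening steps described so far (target prediction in Section 2.1 and GO term selection here) are simple threshold filters. The sketch below illustrates them on hypothetical score tables; the column names and values are illustrative only.

```python
# Sketch of the two threshold-based screening steps described above, using
# hypothetical score tables. Thresholds follow the text: RF score >= 0.8 and
# SVM score >= 0.7 for targets; P <= 0.05 and FDR <= 0.05 for GO terms.
import pandas as pd

targets = pd.DataFrame({
    "target": ["PPARG", "CASP1", "ABCB1"],
    "rf":     [0.91, 0.85, 0.62],
    "svm":    [0.88, 0.74, 0.91],
})
kept_targets = targets[(targets["rf"] >= 0.8) & (targets["svm"] >= 0.7)]

go_terms = pd.DataFrame({
    "term": ["inflammatory response", "apoptotic process", "locomotion"],
    "p":    [0.001, 0.020, 0.300],
    "fdr":  [0.010, 0.045, 0.600],
})
kept_terms = go_terms[(go_terms["p"] <= 0.05) & (go_terms["fdr"] <= 0.05)]

print(kept_targets["target"].tolist())  # ['PPARG', 'CASP1']
print(kept_terms["term"].tolist())      # ['inflammatory response', 'apoptotic process']
```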
### 2.4. Molecular Docking To validate the C-T network, AS-IV was docked to its predicted targets (PPARγ, caspase-1, GSK3β, PSEN1, and TRPV1) using the AutoDock version 4.1 package with default settings, which is based on a powerful genetic algorithm [25]. The X-ray crystal structures of the targets (5GTN, 5IRX, 6IYC, 6PZP, and 6GN1) were taken from the RCSB Protein Data Bank. Each protein was prepared by adding polar hydrogens, assigning partial charges, and defining the rotatable bonds. Finally, the results were analyzed in AutoDock Tools. ### 2.5. Drugs and Reagents Astragaloside IV (purity >98% by HPLC), GW9662, and Aβ1-42 were purchased from Sigma-Aldrich. ELISA kits for Aβ1-42, IL-1β, IL-6, and TNF-α were obtained from Shanghai Jianglai Biotechnology. Antibodies against microtubule-associated protein tau (tau), p-tau, PPARγ, postsynaptic density 95 (PSD95), synaptophysin (SYN), growth-associated protein 43 (GAP43), glial fibrillary acidic protein (GFAP), NOD-like receptor protein 3 (NLRP3), cleaved IL-1β, cleaved caspase-1, and GAPDH were obtained from Cell Signaling Technology. The antibody against activity-regulated cytoskeleton-associated protein (ARC) was obtained from Synaptic Systems. Antibodies against BDNF and microtubule-associated protein 2 (MAP-2) were obtained from Novus Biologicals. Alexa 488- or 594-labeled fluorescent secondary antibodies for immunofluorescence and 4,6-diamidino-2-phenylindole (DAPI) were obtained from Thermo Fisher Scientific. Prestained Protein Ladder was obtained from Thermo Fermentas. SuperSignal chemiluminescence reagents were obtained from Pierce. ### 2.6. Animals and Treatments Male C57BL/6 mice (5-6 weeks old, 20-25 g) were obtained from the Beijing Weishang Lituo Technology Co., Ltd (SCXK (Beijing) 2016-0009). The mice were housed in groups of six per cage with controlled room temperature and humidity, under a 12 h light/dark cycle, with free access to food and water. The mice were adapted for one week before administration. All protocols were approved by the Animal Ethics Committee of Anhui University of Chinese Medicine (approval No. AHUCM-mouse-2019015), and the procedures involving animal research were in compliance with the Animal Ethics Procedures and Guidelines of the People's Republic of China. The mice were randomly divided into the following groups (N = 8 per group) (Figure 1): a sham group, an AβO group, AβO plus AS-IV (10, 20, and 40 mg/kg/day, i.g.) groups, an AβO plus donepezil (5 mg/kg/day, i.g.) group, and an AβO plus AS-IV (20 mg/kg/day, i.g.) with GW9662 (1 mg/kg/day, i.p.) group. The mice received the drugs once per day for one week, followed by intrahippocampal infusion of AβO, and then continued to receive AS-IV once per day for another four weeks. The doses of AS-IV and GW9662 were selected and modified based upon a previous study [11]. ### 2.7. Preparation and Infusion of AβO AβO were prepared from synthetic Aβ1-42 incubated at 37°C for 1 week in a stock solution of 10 μg/μL, then routinely characterized by size-exclusion chromatography, as previously described [26, 27], and stored in aliquots at -80°C until use. AβO were infused at a final concentration of 2.5 μg/μL in aCSF. For intrahippocampal infusion of AβO, mice were anesthetized with 5% isoflurane using a vaporizer system (RWD Life Science Co., Ltd, Shenzhen, China) and maintained at 1% during the injection procedure, as previously described [26, 28]. AβO (5 μg per site) were bilaterally delivered into the hippocampal CA1 region (stereotaxic coordinates relative to bregma: 2.3 mm anteroposterior, ±1.8 mm mediolateral, and 2.0 mm dorsoventral). Injections were performed in a volume of 2 μL infused over 5 min, and the needle was left in place for 1 min to prevent backflow. Then, the mice were treated with penicillin to prevent infection. After the operation, the mice were kept under standard conditions with free access to food and water. Mice that showed signs of misplaced injections or any sign of hemorrhage were excluded from further analysis. Seven days before the AβO infusions, AS-IV (10, 20, and 40 mg/kg, once/day) was administered intragastrically. Behavioral and pathological studies were performed 4 weeks after AβO injection. ### 2.8. Fear Conditioning Fear conditioning (FC) was evaluated as previously described [29]. On the adaptation day, mice were allowed to freely explore the conditioning chamber (Ugo Basile, Gemonio, Italy), equipped with a camera connected to the ANY-Maze™ software (Stoelting, NJ, USA, RRID:SCR_014289), for 5 min. On the conditioning day, mice were placed into the same test chamber, and an 80 dB audio tone (conditioned stimulus: CS) was presented for 30 s, coterminating with a 1.0 mA, 2 s foot shock (unconditioned stimulus: US); this pairing was repeated three times at 73 s intervals. Then, mice were removed from the chamber. The next day (contextual test), mice were put back into the conditioning chamber for 5 min, but without any audio tone or foot shock. On day 4 (cued test), the cover of the back and side chamber walls was removed. The mice were returned to the chamber, and three CS presentations (without a foot shock) of 30 s each followed. The freezing time was recorded for each test using the software.
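Tracking software such as ANY-Maze reports freezing as bouts of immobility, and a freezing score is then typically summarized as the fraction of the test spent immobile. The sketch below shows only that summary step, with hypothetical (start, end) bout times; it is not the ANY-Maze API.

```python
# Hedged sketch of summarizing a freezing score from tracking-software
# output; the bout data below are hypothetical (start, end) times in seconds.

def percent_freezing(bouts, test_duration_s):
    """Total freezing time as a percentage of the test duration."""
    frozen = sum(end - start for start, end in bouts)
    return 100.0 * frozen / test_duration_s

contextual_bouts = [(12.0, 25.5), (40.2, 71.0), (110.3, 150.8)]  # hypothetical
print(f"{percent_freezing(contextual_bouts, 300):.1f}% freezing")  # 5-min contextual test
```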
### 2.9. Preparation of Hippocampal Tissue Twenty-four hours after the behavioral tests, some mice were anesthetized with 5% isoflurane and decapitated, and the hippocampi were then rapidly dissected on ice and snap-frozen in liquid nitrogen before storage at -80°C for biochemical tests. Others received transcardial perfusion with 4% paraformaldehyde (PFA), and then the hippocampi were rapidly dissected and postfixed with 4% PFA overnight at 4°C, followed by immersion in a solution containing 30% sucrose at 4°C for graded dehydration. Parts of the hippocampi were then cut into serial coronal frozen slices (20 μm) for immunofluorescence assays, and other hippocampus samples were sliced into 4 μm thick coronal slices for histopathological analysis. ### 2.10. Hematoxylin and Eosin (HE) Staining After fixation in 4% paraformaldehyde for 24 h at room temperature, the hippocampal tissues were embedded in paraffin and coronally cut into 4 μm thick slices (three slices per mouse). The tissues were dewaxed and successively rehydrated with alcohol (70%, 85%, 95%, and 100%), and the slices were then stained with hematoxylin solution for 3 min followed by eosin solution for 2 min at room temperature. The slices were finally dehydrated with graded alcohol, cleared with xylenes, sealed with neutral gum, and mounted. Representative photographs were captured by a light microscope with the DP70 software. ### 2.11. Enzyme-Linked Immunosorbent Assay Hippocampal tissues were collected and homogenized in ice-cold saline supplemented with protease and phosphatase inhibitor cocktails. The supernatants were collected for further analysis. The levels of endogenous Aβ1-42, IL-1β, IL-6, and TNF-α were determined using ELISA kits according to the manufacturer's instructions. The absorbance was recorded at 450 nm using a microplate reader (SpectraMax M2/M2e; Molecular Devices, Sunnyvale, CA, USA), and the concentrations of Aβ1-42, IL-1β, IL-6, and TNF-α were calculated from standard curves. Results were expressed as picograms per milliliter. Data were generated from 6-8 mice per group.
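The standard-curve step can be sketched as follows. Commercial kits are usually fitted with a four-parameter logistic model; for brevity this snippet uses linear interpolation, and both the standards and the sample readings are illustrative only.

```python
# Sketch of reading concentrations off an ELISA standard curve, as described
# above. Real kits typically use a 4-parameter logistic fit; linear
# interpolation over hypothetical standards is shown here for brevity.
import numpy as np

std_conc = np.array([0.0, 15.6, 31.25, 62.5, 125.0, 250.0])  # pg/mL, hypothetical
std_od450 = np.array([0.05, 0.12, 0.21, 0.38, 0.71, 1.32])   # absorbance at 450 nm

sample_od = np.array([0.30, 0.55, 0.95])
# np.interp expects increasing x values, so interpolate OD -> concentration directly.
sample_conc = np.interp(sample_od, std_od450, std_conc)
print(sample_conc.round(1))  # concentrations in pg/mL
```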
### 2.12. Immunofluorescence Mice were sacrificed, and the hippocampi were snap-frozen in optimal cutting temperature (OCT) compound (Sakura Finetechnical, Japan). For immunofluorescence staining, the OCT-embedded hippocampi were cut into serial coronal 20 μm thick slices and mounted on adhesive microscope slides. The slices were fixed with ice-cold acetone for 10 min and then blocked in 10% goat serum (containing 0.04% Triton X-100) for 90 min at room temperature. Subsequently, the slices were incubated with primary antibodies to MAP-2 (1 : 200), PSD95 (1 : 200), SYN (1 : 400), GAP43 (1 : 200), and GFAP (1 : 200) overnight at 4°C, followed by incubation with Alexa-conjugated secondary antibodies (Thermo Fisher Scientific) for 2 h at room temperature. After counterstaining with DAPI solution in the dark, fluorescent images of the slices were acquired using a confocal scanning microscope (FV1000, Olympus, Japan). At least six representative images were taken from each mouse for analysis with the Image J software (NIH, USA, RRID:SCR_003070). ### 2.13. Immunohistochemistry Hippocampal slices were deparaffinized and rehydrated as described above. After antigen retrieval, slices were incubated with 3% H2O2 for 15 min and blocked in goat serum (containing 0.1% Triton X-100) for 30 min, followed by incubation overnight at 4°C with primary antibodies to PPARγ (1 : 200) and BDNF (1 : 200). Then, the slices were washed three times with PBS and incubated with a horseradish peroxidase- (HRP-) conjugated goat anti-rabbit or anti-mouse IgG (1 : 100) secondary antibody for 2 h at room temperature, followed by incubation with 50 μL of 3,3′-diaminobenzidine (DAB) substrate (DAKO, Denmark) at room temperature for 10 min. The number of immunoreactive cells in the hippocampus was assessed using light microscopy (DP70; Olympus, Japan). At least three different fields (200×200 μm) per slice were randomly selected for visualization. The mean optical density in the hippocampus region was calculated and used to determine PPARγ and BDNF expression levels. ### 2.14. Golgi-Cox Staining Golgi-Cox staining was performed to assess changes in dendrites and dendritic spines within hippocampal neurons using the FD Rapid GolgiStain™ Kit (FD NeuroTechnologies, USA) according to the manufacturer's instructions. Briefly, mice were anaesthetized with 5% isoflurane and decapitated, and the brains were rapidly removed and immersed in the impregnation solution (A:B = 1:1, total 2 mL/mouse) at room temperature in the dark; the solution was replaced with fresh impregnation solution after 2 days. Two weeks later, brains were transferred into solution C and stored at 4°C for three days and then rinsed 3 times with PBST (containing 0.3% Triton X-100). Brains were then cut serially into 100 μm coronal slices on a vibrating microtome, and each slice was transferred to a gelatin-coated slide with solution C and then dried at room temperature in the dark for up to 3 days. Then, the slices were placed in a mixture of solution D, solution E, and distilled water (1 : 1 : 2) for 15 min, followed by a dehydration series of 50%, 70%, 85%, 95%, and 100% ethanol, for 3 applications of 5 min each. The slices were then cleared with xylenes and sealed with neutral gum for light microscopic observation. At least 3-5 dendritic segments of apical dendrites per neuron were randomly selected in each slice, and 5 pyramidal neurons were analyzed per mouse. For each group, the number of spines per dendritic segment in at least 3 mice was analyzed using the Image J software (NIH, USA, RRID:SCR_003070). Results are expressed as the mean number of spines per 10 μm. ### 2.15. Transmission Electron Microscopy The hippocampi were rapidly dissected and placed in 2.5% glutaraldehyde at 4°C for 4 h, followed by fixation with 1% osmium tetroxide for 1.5 h. After a series of graded ethanol dehydrations, the tissues were immersed in propylene oxide for 30 min and then infiltrated with a mixture of propylene oxide and epoxy resin overnight. Then, the tissues were embedded in epoxy resin, placed in an oven at 60°C for 48 h, cut into serial ultrathin slices (70 nm thickness), and stained with 4% uranyl acetate for 20 min followed by 0.5% lead citrate for 5 min. The synaptic ultrastructures were observed under TEM (HT7700; Hitachi, Tokyo, Japan). In this study, at least 10 micrographs were randomly taken from each mouse, and analysis of synaptic density was performed using the Image J software (NIH, USA, RRID:SCR_003070). ### 2.16. Immunoblotting Hippocampi were collected and homogenized in RIPA buffer containing protease and phosphatase inhibitor cocktails, and the protein concentration was determined by the bicinchoninic acid method (Pierce Biotechnology, Inc., USA). Then, 25 μg of total protein from each sample was resolved by 8-15% sodium dodecyl sulfate polyacrylamide gel electrophoresis at room temperature and electroblotted onto a nitrocellulose membrane (GE Healthcare, USA) at 4°C for 2 h. Membranes were blocked with 5% nonfat milk dissolved in Tris-buffered saline with Tween (TBST) at room temperature for 2.5 h. Primary antibodies against PSD95 (1 : 1000), SYN (1 : 1000), GAP43 (1 : 1000), ARC (1 : 1000), PPARγ (1 : 1000), GFAP (1 : 500), NLRP3 (1 : 1000), cleaved IL-1β (1 : 1000), cleaved caspase-1 (1 : 1000), and GAPDH (1 : 1000) were diluted in blocking solution and incubated with the membranes overnight at 4°C. After incubation with secondary anti-mouse or anti-rabbit IgGs (1 : 10000 in TBST) at room temperature for 90 min, membranes were washed in TBST buffer, developed with SuperSignal chemiluminescence substrate (Thermo Fisher Scientific, MA), and imaged with a chemiluminescence detector (FluorChem FC3; ProteinSimple, USA).
The protein expression was quantified with the Quantity One software (Bio-Rad, Hercules, CA, USA, RRID:SCR_014280), and the densitometric values were normalized to the intensity of GAPDH. ### 2.17. Statistical Analysis All analyses were performed with the GraphPad Prism 5.0 software (GraphPad Prism, San Diego, CA, USA, RRID:SCR_002798), and data were expressed as mean ± standard deviation (SD). The statistical significance of differences between groups was evaluated using one-way ANOVA followed by Tukey's test. P values < 0.05 were considered statistically significant.
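For reference, the same test can be reproduced outside Prism. The sketch below runs a one-way ANOVA with Tukey's post hoc comparison on hypothetical per-group values (e.g., relative PPARγ band intensities) using SciPy and statsmodels; the numbers are illustrative only.

```python
# Sketch of the one-way ANOVA + Tukey post hoc comparison described above.
# The authors used GraphPad Prism; this Python equivalent uses hypothetical
# per-group measurements (e.g., relative PPARgamma band intensity).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham  = [1.00, 0.95, 1.08, 1.02, 0.97, 1.05]
abo   = [0.55, 0.61, 0.49, 0.58, 0.52, 0.60]
as_iv = [0.88, 0.92, 0.79, 0.85, 0.90, 0.83]

f_stat, p_val = f_oneway(sham, abo, as_iv)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4f}")

values = np.concatenate([sham, abo, as_iv])
groups = ["sham"] * 6 + ["AbO"] * 6 + ["AbO+AS-IV"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise group comparisons
```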
The X-ray crystal structures of targets (5GTN, 5IRX, 6IYC, 6PZP, and 6GN1) were taken from the RCSB Protein Data Bank. Each protein was prepared using methods such as adding polar hydrogens, partial charges, and defining the rotatable bonds. Finally, the results were analyzed in the AutoDock Tools. ## 2.5. Drugs and Reagents Astragaloside IV (purity: HPLC>98%), GW9662, and Aβ1-42 were purchased from Sigma-Aldrich. ELISA kits for Aβ1-42, IL-1β, IL-6, and TNF-α were obtained from Shanghai Jianglai Biotechnology. Antibodies against microtubule-associated protein tau (tau), p-tau, PPARγ, postsynaptic density 95 (PSD95), synaptophysin (SYN), growth-associated protein 43 (GAP43), glial fibrillary acidic protein (GFAP), NOD-like receptor protein 3 (NLRP3), cleaved IL-1β, cleaved caspase-1, and GAPDH were obtained from Cell Signaling Technology. The antibody against activity-regulated cytoskeleton-associated protein (ARC) was obtained from Synaptic System. Antibodies against BDNF and microtubule-associated protein 2 (MAP-2) were obtained from Novus Biologicals. Alexa 488 or 594-labeled fluorescent secondary antibodies for immunofluorescence and 4,6-diamidino-2-phenylindole (DAPI) were obtained from Thermo Fisher Scientific. Prestained Protein Ladder was obtained from Thermo Fermentas. SuperSignal chemiluminescence reagents were obtained from Pierce. ## 2.6. Animals and Treatments Male C57BL/6 mice (5-6 weeks old, 20-25 g) were obtained from the Beijing Weishang Lituo Technology Co., Ltd (SCXK (Beijing) 2016-0009). The mice were housed in groups of six per cage with controlled room temperature and humidity, under a 12 h light/dark cycle, with free access to food and water. The mice were adapted for one week before administration. All protocols were approved by the Animal Ethics Committee of Anhui University of Chinese Medicine (approval No. AHUCM-mouse-2019015), and the procedures involving animal research were in compliance with the Animal Ethics Procedures and Guidelines of the People’s Republic of China.The mice were randomly divided into the following groups (N=8 per group) (Figure 1): a sham group, an AβO group, AβO plus AS-IV (10, 20, and 40 mg/kg/day, i.g.) groups, an AβO plus donepezil (5 mg/kg/day, i.g.) group, and an AβO plus AS-IV (20 mg/kg/day, i.g.) with GW9662 (1 mg/kg/day, i.p.) group. Drugs were administered once per day for one week followed by intrahippocampal infusion of AβO and continuously received AS-IV once per day for another four weeks. The dose of AS-IV and GW9662 was selected and modified based upon a previous study [11]. ## 2.7. Preparation and Infusion of AβO AβO were prepared from synthetic Aβ1-42 and incubated at 37°C for 1 week in a stock solution of 10 μg/μL, then routinely characterized by size-exclusion chromatography, as previously described [26, 27], and stored at -80°C until use after subpackaging. AβO were perfused at a final concentration of 2.5 μg/μL in aCSF.For intrahippocampal infusion of AβO, mice were anesthetized with 5% isoflurane using a vaporizer system (RWD life Science Co., Ltd, Shenzhen, China) and maintained at 1% during the injection procedure, as previously described [26, 28]. AβO (5 μg per site) were bilaterally delivered into the hippocampal CA1 region (stereotaxical coordinates relative to bregma: 2.3 mm anteroposterior, ±1.8 mm mediolateral, and 2.0 mm dorsoventral). Injections were performed in a volume of 2 μL infused over 5 min, and the needle was left in place for 1 min to prevent backflow. 
Then, the mice were treated with penicillin to prevent infection. After the operation, the mice were kept under standard conditions with eating and drinking freely. Mice that showed signs of misplaced injections or any sign of hemorrhage were excluded from further analysis. Seven days before the AβO infusions, AS-IV (10, 20, and 40 mg/kg, once/day) was administered intragastrically in mice. Behavioral and pathological studies were performed 4 weeks postinjections of AβO. ## 2.8. Fear Conditioning FC was evaluated as previously described [29]. On adaption day, mice were allowed to freely explore the conditioning chamber (UgoBasile, Gemonio, Italy) with a camera that was connected to the ANY-Maze™ software (Stoelting, NJ, USA, RRID:SCR_014289) for 5 min. On conditioning day, mice were placed into the same test chamber, and then, an 80 dB audiotone (conditioned stimulus: CS) was presented for 30 s with a coterminating 1.0 mA, 2 s long foot shock (unconditioned stimulus: US) three times at a 73 s interval. Then, mice were removed from the cage. The next day (contextual test), mice were put back into the conditioning chamber for 5 min, but without any audiotone or foot shock. On day 4 (cued test), the cover of the back and side chamber walls was removed. The mice were returned to the chamber followed by three CS (without a foot shock) that were presented for 30 s each. The freezing time was recorded for each test using the software. ## 2.9. Preparation of Hippocampal Tissue Twenty-four hours after behavioral tests, some mice were anesthetized with 5% isoflurane and decapitated, and the hippocampi were then rapidly dissected on ice and snap-frozen in liquid nitrogen before storing at -80°C for biochemical tests. Others received transcardial perfusion with 4% paraformaldehyde (PFA), and then, the hippocampi were rapidly dissected and postfixed with 4% PFA overnight at 4°C followed by immersions in a solution containing 30% sucrose at 4°C for graded dehydration. Parts of the hippocampi were then cut into serial coronal frozen slices (20μm) for immunofluorescence assay, and other hippocampus samples were sliced into 4 μm thick coronal slices for histopathological analysis. ## 2.10. Hematoxylin and Eosin (HE) Staining After fixed in 4% paraformaldehyde for 24 h at room temperature, the hippocampal tissues were embedded in paraffin and coronally cut into 4μm thick slices (three slices per mouse). The tissues were dewaxed and successively rehydrated with alcohol (70%, 85%, 95%, and 100%), and then, the slices were stained with hematoxylin solution for 3 min followed by eosin solution for 2 min at room temperature. The slices were finally mounted by following dehydration with gradient alcohol and hyaline with xylenes and sealed with neutral gum. Representative photographs were captured by a light microscope with the DP70 software. ## 2.11. Enzyme-Linked Immunosorbent Assay Hippocampal tissues were collected and homogenized with ice-colded saline, supplemented with protease and phosphatase inhibitor cocktails. The supernatants were collected for further analysis. The levels of endogenous Aβ1-42, IL-1β, IL-6, and TNF-α were determined using ELISA kits according to the manufacturer’s instructions. The absorbance was recorded at 450 nm using a microplate reader (SpectraMax M2/M2e; Molecular Devices, Sunnyvale, CA, USA), and the concentrations of Aβ1-42, IL-1β, IL-6, and TNF-α were calculated from standard curves. Results were expressed as picograms per milliliter. 
## 2.12. Immunofluorescence

Mice were sacrificed, and the hippocampi were snap-frozen in optimal cutting temperature (OCT) compound (Sakura Finetechnical, Japan). For immunofluorescence staining, the OCT-embedded hippocampi were cut into serial coronal 20 μm thick slices and mounted on adhesive microscope slides. The slices were fixed with ice-cold acetone for 10 min and then blocked in 10% goat serum (containing 0.04% Triton X-100) for 90 min at room temperature. Subsequently, the slices were incubated with primary antibodies against MAP-2 (1:200), PSD95 (1:200), SYN (1:400), GAP43 (1:200), and GFAP (1:200) overnight at 4°C, followed by incubation with Alexa-conjugated secondary antibodies (Thermo Fisher Scientific) for 2 h at room temperature. After counterstaining with DAPI solution in the dark, fluorescent images of the slices were acquired using a confocal scanning microscope (FV1000, Olympus, Japan). At least six representative images were taken from each mouse and analyzed with ImageJ software (NIH, USA, RRID:SCR_003070).

## 2.13. Immunohistochemistry

Hippocampal slices were deparaffinized and rehydrated as described above. After antigen retrieval, the slices were incubated with 3% H2O2 for 15 min and blocked in goat serum (containing 0.1% Triton X-100) for 30 min, followed by incubation overnight at 4°C with primary antibodies against PPARγ (1:200) and BDNF (1:200). The slices were then washed three times with PBS and incubated with horseradish peroxidase- (HRP-) conjugated goat anti-rabbit or anti-mouse IgG (1:100) secondary antibody for 2 h at room temperature, followed by incubation with 50 μL 3,3′-diaminobenzidine (DAB) substrate (DAKO, Denmark) at room temperature for 10 min. The number of immunoreactive cells in the hippocampus was assessed using light microscopy (DP70; Olympus, Japan). At least three fields (200×200 μm) per slice were randomly selected for visualization. The mean optical density in the hippocampal region was calculated and used to determine PPARγ and BDNF expression levels.

## 2.14. Golgi-Cox Staining

Golgi-Cox staining was performed to assess changes in dendrites and dendritic spines of hippocampal neurons using the FD Rapid GolgiStain™ Kit (FD NeuroTechnologies, USA) according to the manufacturer's instructions. Briefly, mice were anaesthetized with 5% isoflurane and decapitated, and the brains were rapidly removed and immersed in the impregnation solution (A:B = 1:1, 2 mL per mouse in total) at room temperature in the dark; the impregnation solution was replaced after 2 days. Two weeks later, the brains were transferred into solution C, stored at 4°C for three days, and then rinsed 3 times with PBST (containing 0.3% Triton X-100). The brains were then cut serially into 100 μm coronal slices on a vibrating microtome, and each slice was transferred with solution C onto a gelatin-coated slide and dried at room temperature in the dark for up to 3 days. The slices were then placed in a mixture of solution D, solution E, and distilled water (1:1:2) for 15 min, followed by a dehydration series of 50%, 70%, 85%, 95%, and 100% ethanol (three changes, 5 min each). The slices were then cleared with xylene and sealed with neutral gum for light microscopic observation. At least 3-5 apical dendritic segments per neuron were randomly selected in each slice, and 5 pyramidal neurons were analyzed per mouse. For each group, the number of spines per dendritic segment was counted in at least 3 mice using ImageJ software (NIH, USA, RRID:SCR_003070). Results are expressed as the mean number of spines per 10 μm.
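The spines-per-10-μm metric above is a simple per-length normalization of the raw counts. A minimal sketch of that calculation, with hypothetical counts and segment lengths:

```python
# Minimal sketch of the spines-per-10-μm normalization described above;
# the counts and segment lengths are hypothetical illustrations.
import statistics

# (spine count, dendritic segment length in μm) for one mouse's segments
segments = [(14, 18.2), (9, 12.5), (21, 25.0), (11, 15.4), (16, 20.1)]

densities = [count / length * 10.0 for count, length in segments]  # spines per 10 μm
print(f"Mean spine density: {statistics.mean(densities):.2f} ± "
      f"{statistics.stdev(densities):.2f} spines/10 μm")
```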
## 2.15. Transmission Electron Microscopy

The hippocampi were rapidly dissected and placed in 2.5% glutaraldehyde at 4°C for 4 h, followed by fixation with 1% osmium tetroxide for 1.5 h. After a series of graded ethanol dehydrations, the tissues were immersed in propylene oxide for 30 min and then infiltrated with a mixture of propylene oxide and epoxy resin overnight. The tissues were then embedded in epoxy resin, cured in an oven at 60°C for 48 h, cut into serial ultrathin slices (70 nm thick), and stained with 4% uranyl acetate for 20 min followed by 0.5% lead citrate for 5 min. The synaptic ultrastructure was observed under TEM (HT7700; Hitachi, Tokyo, Japan). At least 10 micrographs were randomly taken from each mouse, and synaptic density was analyzed using ImageJ software (NIH, USA, RRID:SCR_003070).

## 2.16. Immunoblotting

Hippocampi were collected and homogenized in RIPA buffer containing protease and phosphatase inhibitor cocktails, and the protein concentration was determined by the bicinchoninic acid method (Pierce Biotechnology, Inc., USA). Then, 25 μg of total protein from each sample was resolved by 8-15% sodium dodecyl sulfate polyacrylamide gel electrophoresis at room temperature and electroblotted onto nitrocellulose membranes (GE Healthcare, USA) at 4°C for 2 h. Membranes were blocked with 5% nonfat milk dissolved in Tris-buffered saline with Tween (TBST) at room temperature for 2.5 h. Primary antibodies against PSD95 (1:1000), SYN (1:1000), GAP43 (1:1000), ARC (1:1000), PPARγ (1:1000), GFAP (1:500), NLRP3 (1:1000), cleaved IL-1β (1:1000), cleaved caspase-1 (1:1000), and GAPDH (1:1000) were diluted in blocking solution and incubated with the membranes overnight at 4°C. After incubation with secondary anti-mouse or anti-rabbit IgG (1:10000 in TBST) at room temperature for 90 min, the membranes were washed in TBST, developed with SuperSignal chemiluminescence substrate (Thermo Fisher Scientific, MA), and imaged with a chemiluminescence detector (FluorChem FC3; ProteinSimple, USA). Protein expression was quantified with the Quantity One software (Bio-Rad, Hercules, CA, USA, RRID:SCR_014280), and densitometric values were normalized to the intensity of GAPDH.

## 2.17. Statistical Analysis

All analyses were performed with the GraphPad Prism 5.0 software (GraphPad Software, San Diego, CA, USA, RRID:SCR_002798), and data are expressed as mean ± standard deviation (SD). The statistical significance of differences between groups was evaluated using one-way ANOVA followed by the Tukey test. P values of <0.05 were considered statistically significant.
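The analyses above were run in GraphPad Prism; for readers who prefer scripting, an equivalent one-way ANOVA with Tukey's post hoc comparisons can be sketched in Python as below. The freezing-time values are invented placeholders, and scipy and statsmodels are assumed to be installed.

```python
# Equivalent of the one-way ANOVA + Tukey test described above, sketched in
# Python rather than GraphPad Prism; the values are invented placeholders,
# not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([62.1, 58.4, 65.0, 60.3, 59.8, 63.2])      # hypothetical freezing times (s)
abo = np.array([31.5, 28.9, 35.2, 30.1, 33.4, 29.7])
abo_asiv = np.array([48.2, 51.6, 45.9, 50.3, 47.1, 49.8])

f_stat, p_value = f_oneway(sham, abo, abo_asiv)  # omnibus test across all groups
print(f"One-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([sham, abo, abo_asiv])
labels = ["sham"] * 6 + ["AbO"] * 6 + ["AbO+AS-IV"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise group comparisons
```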
## 3. Results

### 3.1. C-T Network

In this study, we used a comprehensive method to screen AS-IV targets. Figure 2(a) shows 64 targets with binding capacity for AS-IV. Together, these observations suggest that AS-IV works through a multitarget synergistic mechanism.

Figure 2 Work scheme of the system pharmacology approach. (a) The C-T network, constructed by linking AS-IV with its potential targets (circles). (b) The C-T-F network, constructed from AS-IV, its functions (octagons), and the corresponding protein targets (circles). Among the targets, octagons of different colors represent nervous system, inflammatory, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid targets, respectively. The circles in the middle are targets shared among the nervous system, inflammatory, cell proliferation, apoptosis, calcium ion, and steroid categories. Node size is proportional to degree. (c) The PPI network of AS-IV. The color and size of each node are proportional to its degree, and the color and thickness of each connecting line are proportional to betweenness centrality. (d) Gene Ontology analysis of AS-IV target genes. (e) Distribution of AS-IV target proteins among the underlying pathways involved in AD. (f) Distribution of AS-IV target proteins across chromosomes.

### 3.2. C-T-F Network

To further explain the pharmacological mechanisms underlying the beneficial effects of AS-IV on AD, we classified the target functions of this compound and constructed the C-T-F network. Figure 2(b) depicts the global view of the C-T-F network, in which the diamond, circle, and hexagon nodes represent AS-IV, the targets, and the corresponding functions of the targets, respectively. Further inspection of this network shows that these 64 targets are related to 7 functions: inflammation, nervous system, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid.

### 3.3. PPI Network

Proteins do not exert their functions independently of each other but interact within the PPI network [30]. Analyzing the topological characteristics of proteins in PPI networks is very helpful for understanding their functions. Here, we constructed the PPI network of the 64 target proteins of AS-IV and calculated the degree of each node. As shown in Figure 2(c), the degrees of ADRA2A, ADRA2B, ADRA2C, CHRM2, S1PR5, S1PR2, DRD3, and HRH3 were the highest (degree = 7), followed by APH1B, PSENEN, PSEN1, PSEN2, and NCSTN (degree = 4), demonstrating that these proteins are hub targets and may be responsible for bridging other proteins in the PPI network.

### 3.4. GO Enrichment Analysis

The GO enrichment analysis (Figures 2(d)–2(f)) related the targets to the following biological processes: the G-protein coupled acetylcholine receptor signaling pathway (count = 2, 3, 6), protein kinase B activity (count = 1), Notch receptor processing (count = 5), protein processing (count = 7), and the inflammatory response (count = 4). These processes are usually related to cell proliferation, gene transcription, differentiation, and development.
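The paper does not state how GO term significance was scored; a common choice is the hypergeometric (one-sided Fisher) test, sketched below with placeholder numbers for the gene universe, term size, and overlap.

```python
# Illustrative hypergeometric enrichment test of the kind commonly used for
# GO analysis; the paper does not specify its scoring method, and all
# numbers here are placeholders.
from scipy.stats import hypergeom

M = 20000  # assumed background gene universe
n = 120    # genes annotated to a hypothetical GO term
N = 64     # AS-IV target genes screened in the study
k = 5      # hypothetical overlap between targets and the GO term

# P(X >= k): probability of seeing at least k annotated genes by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"Enrichment P = {p_value:.3e}")
```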
### 3.5. Molecular Docking

Figures 3(a)–3(l) depict the binding interactions of AS-IV with caspase-1, GSK3β, PSEN1, and TRPV1 after docking simulations. The results showed that hydrophobic and H-bond interactions governed the binding affinity of AS-IV for its target proteins (Figures 3(i)–3(l)). AS-IV was anchored in a hydrophobic pocket in each of caspase-1, GSK3β, PSEN1, and TRPV1. In detail, in the caspase-1 binding pocket, large hydrophobic interactions were formed by residues Trp340, Pro343, and Ala284; in GSK3β, the hydrophobic interactions were formed by residues Val110, Leu188, Ala83, Leu132, Val70, Phe67, and Ile62; in PSEN1, by residues Phe14, Ile408, Ile135, Phe6, Trp404, Leu142, and Ala98; and in TRPV1, by residues Phe543, Phe522, Met547, Val518, Leu515, Ile573, Ala566, Leu553, and Ile569.

Figure 3 Binding conformations of AS-IV and four targets obtained from docking simulations. (a–d) The binding mode of AS-IV to (a) caspase-1, (b) GSK3β, (c) PSEN1, and (d) TRPV1 in the active site. (e–h) Stereoview of the binding mode of AS-IV with its receptors, i.e., (e) caspase-1, (f) GSK3β, (g) PSEN1, and (h) TRPV1, in the binding site; H-bonds are depicted as black dotted lines. (i–l) Detailed views of the 2-D ligand interactions of AS-IV with (i) caspase-1, (j) GSK3β, (k) PSEN1, and (l) TRPV1.

AS-IV interacted with many residues in the active site of caspase-1, forming three H-bond networks (Figure 3). AS-IV forms H-bond networks with GSK3β at Lys85, Val135, Lys60, Tyr134, Arg141, and Asn64; H-bond interactions with PSEN1 at Ala139; and with TRPV1 at Asn551, Thr550, Arg557, and Ser512 (Figure 3). AS-IV is well suited to the receptor binding pocket, as its binding to the amino acids was tight and deep within the cavity. The binding free energies of AS-IV with caspase-1, GSK3β, PSEN1, and TRPV1 were -5.30 kcal/mol, -4.85 kcal/mol, -6.41 kcal/mol, and -6.07 kcal/mol, respectively. These results indicated that AS-IV showed high binding affinities for its targets.

### 3.6. Interaction of PPARγ with Caspase-1, GSK3β, PSEN1, and TRPV1

Figures 4(a)–4(c) depict the binding interactions of AS-IV with PPARγ after docking simulations. For the target PPARγ, AS-IV is directed toward the binding site and stabilized by hydrogen-bonding interactions with Gln343, Cys285, and Ser289. Five critical proteins in the network, namely, PPARγ, caspase-1, GSK3β, PSEN1, and TRPV1, were selected to further validate the PPI. As shown in Figure 4(d), these five proteins interact closely.

Figure 4 Interaction of PPARγ with caspase-1, GSK3β, PSEN1, and TRPV1. (a) The binding mode of AS-IV to PPARγ in the active site. (b) Stereoview of the binding mode of AS-IV with PPARγ in the binding site; H-bonds are depicted as black dotted lines. (c) Detailed view of the 2-D ligand interaction of AS-IV with PPARγ. (d) The PPI network of PPARγ with caspase-1, GSK3β, PSEN1, and TRPV1.
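The reported docking energies can be translated into predicted dissociation constants through the standard thermodynamic relation ΔG = RT ln Kd. The paper itself does not report Kd values, so the numbers produced below are only a back-of-the-envelope illustration at an assumed 298 K.

```python
# Sketch converting the reported docking energies into predicted dissociation
# constants via Kd = exp(ΔG / RT); this relation is standard thermodynamics,
# but the paper does not report Kd values itself.
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.15    # assumed temperature, K

delta_g = {  # binding free energies reported above, kcal/mol
    "caspase-1": -5.30,
    "GSK3beta": -4.85,
    "PSEN1": -6.41,
    "TRPV1": -6.07,
}

for target, dg in delta_g.items():
    kd_molar = math.exp(dg / (R * T))  # ΔG = RT ln(Kd)
    print(f"{target}: predicted Kd ≈ {kd_molar * 1e6:.0f} μM")
```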
### 3.7. Effect of AS-IV on AβO-Induced Memory Impairment and Pathological Changes

The FC task was performed to assess the effects of AS-IV on fear memory in AβO-infused mice by measuring the intensity of freezing in response to the context and the auditory cue. During the adaptation session, there was no difference in freezing time among the experimental groups (data not shown). Upon exposure to the context and the auditory cue, the freezing response was higher in sham mice than in AβO-infused mice (Figures 5(a) and 5(b)). This reduction in freezing time was reversed in AβO-infused mice after administration of AS-IV (10, 20, and 40 mg/kg) or donepezil, a positive control drug. These results suggested that AS-IV prevented AβO-induced contextual and cued fear memory impairments.

Figure 5 Effects of AS-IV on AβO-induced fear memory impairment and pathological changes in mice. (a) The freezing time of contextual memory. (b) The freezing time of cued memory. (c) Representative images of HE staining in the hippocampus (×200). Scale bar: 50 μm. (d) The content of Aβ1-42 in the hippocampus measured by ELISA. (e) The expression of p-tau protein in the hippocampus measured by western blotting. (f) MAP-2 expression in the hippocampus measured by immunofluorescence (×200). Scale bar: 50 μm. Data are expressed as the mean ± SD (N=8 or 6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05 (one-way ANOVA followed by the Tukey test).

HE staining showed that the pyramidal cells in the CA1 region of the hippocampus of sham mice had intact cell bodies and round nuclei with tight arrangement, and no cell loss was found. In AβO-infused mice, however, the pyramidal layer was disintegrated and neuronal loss was observed in the CA1 region. Additionally, neurons with shrunken or irregularly shaped cell bodies and degenerated nuclei were found in the hippocampus of AβO-infused mice (Figure 5(c)). Notably, AS-IV (10, 20, and 40 mg/kg) administration attenuated the structural damage and loss of neurons to some extent relative to AβO-infused mice, indicating a neuroprotective effect of AS-IV.

Next, the levels of Aβ1-42 and phosphorylated tau were measured in the hippocampus. There was no difference in the hippocampal Aβ1-42 level among the experimental groups (Figure 5(d)). Compared with sham mice, phosphorylated tau expression was significantly increased in AβO-infused mice, and AS-IV treatment reduced hippocampal phosphorylated tau expression compared with AβO-infused mice (Figure 5(e)).

We also examined MAP-2 expression in the hippocampus by immunofluorescence. In sham mice, there were a large number of MAP-2+ cells, with a regular arrangement of neurons and distinct neurites arranged in bundles. Compared with sham mice, the number of MAP-2+ cells was remarkably reduced, the arrangement of dendrites was disordered, and the length of the neurites was significantly shortened in the hippocampus of AβO-infused mice. In contrast, AS-IV (20 mg/kg) administration reversed the inhibitory effects of AβO on the growth of MAP-2+ neurites (Figure 5(f)). Based on these findings, AS-IV administration alleviated AβO-induced neuronal injury and reduced tau phosphorylation in the hippocampus but had no effect on the endogenous Aβ1-42 level in AβO-infused mice.

### 3.8. AS-IV Suppresses AβO-Induced Synaptic Deficits in the Hippocampus

The effects of AS-IV on synaptic protein expression were investigated by determining the expression of PSD95, SYN, GAP43, and ARC. Immunofluorescence assays showed that the synaptic proteins PSD95, SYN, and GAP43 were all significantly reduced in hippocampal regions after AβO infusion compared with sham mice. In contrast, AS-IV administration increased the immunoreactivity of PSD95, SYN, and GAP43 compared with AβO-infused mice (Figures 6(a) and 6(b)).

Figure 6 AS-IV suppresses AβO-induced synaptic deficits. (a) PSD95, SYN, and GAP43 expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 5 μm. (b) Fluorescence intensity data of PSD95, SYN, and GAP43 expression. (c) The protein expression of synaptic plasticity markers in the hippocampus measured by western blotting. (d) Relative quantitative data of PSD95, SYN, GAP43, and ARC protein expression. (e) Changes of dendritic spines in the hippocampus measured by Golgi-Cox staining (×1000). Scale bar: 2 μm. (f) Quantitative data of dendritic spines in the hippocampus. (g) Ultrastructural changes of synapses in the hippocampus measured by TEM (×8000). Scale bar: 50 nm. (h) Quantitative data of synapses in the hippocampus. Data are expressed as the mean ± SD (N=6 or 4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05 (one-way ANOVA followed by the Tukey test).
The results from immunoblotting assays also showed a significant decrease in the expression of PSD95, SYN, and GAP43 in response to AβO infusion, while AS-IV administration significantly ameliorated the AβO-induced downregulation of these synaptic proteins in the hippocampus (Figures 6(c) and 6(d)). By contrast, there was no difference among these groups of mice regarding ARC expression (Figures 6(c) and 6(d)).

We next examined the density of dendritic spines in hippocampal neurons by Golgi-Cox staining. The density of dendritic spines in hippocampal neurons of AβO-infused mice was significantly lower than that in sham mice, but these AβO infusion-induced changes in dendritic spine density were significantly ameliorated by AS-IV (20 mg/kg) administration (Figures 6(e) and 6(f)).

We further used transmission electron microscopy to examine the synaptic ultrastructure of hippocampal neurons. AβO infusion resulted in a significant decrease in the number of hippocampal synapses compared with sham mice, whereas AS-IV (20 mg/kg) administration significantly ameliorated this synaptic loss (Figures 6(g) and 6(h)). Overall, these results indicate that AS-IV affords protection against AβO-induced synaptic deficits.

### 3.9. AS-IV Promotes AβO Infusion-Inhibited PPARγ Expression in the Hippocampus

The hippocampus was collected at four time points after AβO infusion (2 h, 1 d, 14 d, and 28 d). The expression of PPARγ significantly decreased at 2 h, 1 d, 14 d, and 28 d after AβO infusion (Figure 7(a)). By contrast, AS-IV attenuated the decrease of PPARγ in AβO-infused mice. The specific PPARγ antagonist GW9662 was used to suppress PPARγ activation in AβO-infused mice; interestingly, the effect of AS-IV was blocked by GW9662 in the hippocampus of AβO-infused mice (Figure 7(b)).

Figure 7 AβO infusion-inhibited PPARγ expression was promoted by AS-IV. (a) The expression of PPARγ protein at 2 h, 1 d, 14 d, and 28 d after AβO infusion in the hippocampus measured by western blotting. (b) The expression of PPARγ protein after AS-IV administration in AβO-infused mice measured by western blotting. Data are expressed as the mean ± SD (N=4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test).

### 3.10. AS-IV Inhibits AβO-Induced BDNF Reduction via Promoting PPARγ Expression in Mouse Hippocampi

To further explore the neuroprotective mechanism of AS-IV in AβO-infused mice, the levels of PPARγ and BDNF in the hippocampus were examined by immunohistochemistry. Compared with the sham group, PPARγ and BDNF immunoreactivity was decreased in the hippocampus of AβO-infused mice, whereas hippocampal immunoreactivity of PPARγ and BDNF was higher in AS-IV-treated mice than in AβO-infused mice (Figures 8(a)–8(f)). Additionally, the effect of AS-IV on the expression of BDNF and PPARγ was blocked by GW9662 in the hippocampus of AβO-infused mice (Figures 8(a)–8(f)).

Figure 8 AS-IV inhibits AβO-induced BDNF reduction via promoting PPARγ expression. (a) PPARγ and BDNF expression measured by immunohistochemistry (×200). Scale bar: 50 μm. (b–f) Quantitative data of PPARγ and BDNF expression in different regions of the hippocampus. Data are expressed as the mean ± SD (N=6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test).
### 3.11. AS-IV Inhibits AβO-Induced Neuroinflammation via Promoting PPARγ Expression

Our data showed significant differences among the experimental groups in the number of astroglia in the DG region of the hippocampus, as detected by immunofluorescence (Figure 9(a)). Infusion of AβO induced a remarkable activation of astroglial responses in the hippocampus, which was prevented by AS-IV (20 mg/kg) administration. Consistently, AβO infusion also increased GFAP expression as determined by immunoblotting, while AS-IV (20 mg/kg) administration significantly suppressed GFAP expression in AβO-infused mice (Figure 9(b)). Furthermore, we asked whether PPARγ mediated the anti-inflammatory effect of AS-IV in AβO-infused mice. Interestingly, PPARγ inhibition by GW9662 blocked the inhibitory effects of AS-IV on GFAP immunoreactivity and expression in the hippocampus of AβO-infused mice (Figures 9(a) and 9(b)).

Figure 9 AS-IV inhibits AβO-induced neuroinflammation via promoting PPARγ expression. (a) GFAP expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 20 μm. (b) The expression of GFAP protein in the hippocampus measured by western blotting. (c) The content of IL-1β, IL-6, and TNF-α in the hippocampus measured by ELISA. Data are expressed as the mean ± SD (N=4 or 6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test).

We measured hippocampal IL-1β, IL-6, and TNF-α levels in AβO-infused mice by ELISA. AβO infusion led to an upregulation of IL-1β, IL-6, and TNF-α levels in the hippocampus compared with sham mice, but AS-IV administration suppressed this upregulation of cytokines. In line with the above findings, this effect of AS-IV was blocked by GW9662 (Figure 9(c)). These results suggest that AS-IV prevented the inflammatory response in the hippocampus via PPARγ.

### 3.12. AS-IV Inhibits AβO-Induced Pyroptotic Cell Death via Promoting PPARγ Expression

As shown in Figures 10(a)–10(c), the protein expression of NLRP3 and cleaved caspase-1 was significantly elevated in the hippocampus of AβO-infused mice compared with sham mice. In contrast, AS-IV (20 mg/kg) administration suppressed the AβO-induced expression of NLRP3 and cleaved caspase-1 in the hippocampus.

Figure 10 AS-IV inhibits AβO-induced pyroptotic cell death via promoting PPARγ expression. (a) The protein expression of pyroptosis markers in the hippocampus measured by western blotting. (b) Relative quantitative data of NLRP3 protein expression. (c) Relative quantitative data of cleaved caspase-1 protein expression. (d) Relative quantitative data of IL-1β protein expression. Data are expressed as the mean ± SD (N=4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test).

As shown in Figures 10(a)–10(d), AβO infusion significantly increased the level of IL-1β in the hippocampus, and this increase was inhibited by AS-IV administration. To further confirm the role of PPARγ in the AS-IV-mediated suppression of AβO-induced pyroptosis, the specific PPARγ antagonist GW9662 was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effects of AS-IV against the AβO-induced expression of NLRP3 and cleaved caspase-1 were blocked by GW9662.
Moreover, the blockade of PPARγ was able to significantly reverse the effect of AS-IV on AβO-induced proinflammatory cytokine IL-1β overexpression (Figures 10(a)–10(d)).
## 4. Discussion

In this study, we applied systematic pharmacology strategies and in vivo experiments to probe the mechanism of AS-IV in the treatment of AD. AS-IV could interact with 64 targets, and these targets have pharmacological properties relevant to the nervous system, inflammation, cell proliferation, apoptosis, pyroptosis, calcium regulation, and steroid signaling. Molecular docking suggested that AS-IV could regulate the AD-like phenotypes by binding to caspase-1, GSK3β, PSEN1, and TRPV1. Furthermore, in vivo experiments demonstrated that AS-IV promoted the expression of PPARγ and BDNF in hippocampal neurons of mice infused with AβO and prevented synaptic deficits, inflammation, and memory impairments in AD-like mice. Consistent with the bioinformatics data, the in vivo data also verified that AS-IV could suppress AβO infusion-induced neuronal pyroptosis. This systematic analysis provides new insights into the therapeutic use of AS-IV for AD.

### 4.1. AS-IV Prevents AD Phenotypes through Multiple Mechanisms

In the present study, we screened 64 related targets of AS-IV; together, these targets play important roles in the pathogenesis of AD, possibly through regulating cell proliferation, calcium dysregulation, inflammation, pyroptosis, and apoptosis [20, 31–33]. Specifically, the G-protein coupled acetylcholine receptor signaling pathway and the protein kinase B/GSK3B axis are involved in AD pathogenesis, resulting in cognitive dysfunction [34–36]. Besides, attenuating the decreased response to hypoxia and the dysregulation of vasoconstriction could effectively ameliorate vascular dementia [37, 38]. Furthermore, the neuroinflammation caused by caspase-1-mediated generation of IL-1β and IL-18 is involved in the development and progression of AD [32]. GSK3β plays an important role in the hyperphosphorylation of tau, one of the pathological hallmarks of AD [35]. PSEN1 mutation is a risk factor for AD [39]. Additionally, TRPV1, a nonselective cation channel, is involved in synaptic plasticity and memory [40]. Our molecular docking results demonstrate that AS-IV could bind to caspase-1, GSK3β, PSEN1, and TRPV1. The binding affinity of AS-IV arises mainly from electrostatic, H-bond, and hydrophobic interactions, supporting the reliability of the docking model. Therefore, AS-IV may improve cognitive impairment by binding to the products of AD-related genes such as caspase-1, TRPV1, PSEN1, and GSK3B, reducing cell death and ultimately inhibiting AD phenotypes.
### 4.2. AS-IV Reduces Tau Hyperphosphorylation in the AD Model

AβO accumulate in the brains of AD patients and induce AD-like cognitive dysfunction [41]. Therefore, AβO-induced AD-like phenotypes may be a promising model for finding treatments [41, 42]. In this study, we investigated the impact of AβO on the mouse brain, confirmed the effect of AS-IV on memory formation in AβO-infused mice, and assessed the underlying mechanisms. Our results demonstrated that intrahippocampal infusion of AβO impaired both contextual and cued fear memory, consistent with a previous study [43]. Conversely, AS-IV prevented AβO-induced contextual and cued fear memory impairment. Considering that the hippocampus is an important brain region involved in the formation and expression of fear memory, our findings suggest that AβO infusion damaged the structure and function of the hippocampus and subsequently blocked the formation of learning and memory, and that this damage can be prevented by AS-IV administration.

Similar to previous studies, our findings showed that AβO infusion induced neuronal loss as well as increased tau phosphorylation, suggesting that the pathological changes of the hippocampus induced by AβO infusion may underlie the AD-like behavioral changes [3, 44]. Conversely, AS-IV inhibited the AβO-induced pathological changes of hippocampal neurons and tau phosphorylation, which may contribute to memory improvement in AD-like mice. It is thought that Aβ pathology in the AD brain arises earlier than tau pathology, and that neurofibrillary tangles develop downstream of Aβ-induced toxicity and eventually lead to neuronal death; moreover, the mutual promotion between the two accelerates the pathogenesis of AD, consistent with previous reports [44–46]. Certainly, we also note that AβO infusion had no effect on the endogenous Aβ1-42 content in the hippocampus, suggesting that it may not cause the increase and accumulation of Aβ or the formation of amyloid plaques in the brain. Bioinformatics prediction indicated that AS-IV binds GSK3B tightly. As GSK3B is largely responsible for the hyperphosphorylation of tau, this tight interaction might contribute to the effect of AS-IV in reducing tau hyperphosphorylation.

### 4.3. AS-IV Prevents AβO-Induced Synaptic Deficits

Consistent with previous studies [47, 48], our findings demonstrated that AβO exert neurotoxicity and synaptic toxicity before plaque formation in the brain, causing brain damage and eventually leading to AD-like behaviors. Given the mounting evidence that AβO cause synaptic deficits [3, 49, 50], elucidating the precise molecular pathways has important implications for treating and preventing the disease. Here, we demonstrate that AβO infusion reduced the immunoreactivity and expression levels of PSD95, GAP43, and SYN, similar to previous results [51, 52]. It has been shown that SYN immunoreactivity density in the brains of transgenic mice is negatively correlated with Aβ levels but unrelated to plaque load, indicating that Aβ exerts synaptic toxicity before plaques form [6]. We further found that AS-IV increased the immunoreactivity and expression levels of PSD95, GAP43, and SYN in the hippocampus of AD-like mice. PSD95, GAP43, and SYN are important markers of synaptic plasticity and are positively correlated with hippocampal learning and memory function [14, 53]. Furthermore, ARC plays a key role in synaptic plasticity and memory consolidation [54, 55].
Surprisingly, we note that there was no significant difference in ARC expression among the experimental mice, suggesting that AβO infusion did not target ARC. Our Golgi-Cox and TEM results further showed that AS-IV increased the density of dendritic spines and the number of synapses in hippocampal neurons, suggesting that AS-IV repaired synaptic structural damage and alleviated synaptic toxicity in the hippocampus of mice infused with AβO.

In a previous study, we reported that AS-IV promoted PPARγ expression in cultured cells and activated the BDNF-TrkB signaling pathway [20]. Our in vivo findings further showed that PPARγ expression in the hippocampus of mice infused with AβO was significantly decreased along with the reduction of BDNF expression, while AS-IV significantly prevented the AβO-induced inhibition of PPARγ and BDNF expression. Considering the important functions of the BDNF-TrkB signaling pathway in synaptic function [29], these data further support that AS-IV prevented AβO-induced synaptic deficits.

### 4.4. AS-IV Prevents AβO-Induced Neuroinflammation and Pyroptosis

Numerous studies have confirmed that neuroinflammation accelerates the pathogenesis of AD [46, 56, 57]. In this study, we found that AβO infusion increased the immunoreactivity and expression of GFAP and upregulated IL-1β, IL-6, and TNF-α levels in the hippocampus, and these changes were reversed by AS-IV. These results suggested that AS-IV inhibited AβO-induced neuroinflammation in the brain, benefiting cognitive function and further supporting the network screening results.

PPARγ plays a neuroprotective role by reducing brain inflammation and Aβ production [58, 59]. Our findings showed that AβO infusion inhibited PPARγ expression in mice, indicating that PPARγ participates in the inflammatory response of AD-like mice. Furthermore, AS-IV blocked the AβO-induced inhibition of PPARγ expression. Pyroptosis is an inflammatory form of programmed cell death that has been reported in neurological pathogenesis [60]. Reducing pyroptosis was shown to alleviate cognitive impairment in AD animal models [61] and the progression of Parkinson's disease [62]. Interestingly, NLRP3 has been reported to initiate neuronal pyroptosis [63, 64]. Indeed, NLRP3 inhibition has been shown to exert neuroprotective effects through the suppression of pyroptosis [65] and to improve neurological functions in a transgenic mouse model of AD [63]. In this study, we demonstrated that AS-IV could inhibit AβO-induced pyroptotic neuronal death, whereas the PPARγ antagonist GW9662 blocked this beneficial effect of AS-IV. In the systematic analyses, we also found that AS-IV had a high binding capacity for caspase-1, which might indicate a potential function of AS-IV in pyroptosis.

### 4.5. AS-IV Reduces Tau Hyperphosphorylation, Synaptic Deficits, Neuroinflammation, and Pyroptosis via Regulating PPARγ

In this study, we found that AβO administration progressively reduced PPARγ expression in the hippocampus from 2 h to one day and kept the PPARγ level relatively low from one day to 28 days. These data suggest that the reduction of PPARγ is an early event after AβO administration. AS-IV could prevent the AβO-induced reduction of PPARγ, and the effects of AS-IV on brain inflammation, pyroptosis, and synaptic deficits in AβO-induced AD phenotypes might be PPARγ-dependent. On the one hand, the PPARγ antagonist blocked the effects of AS-IV on PPARγ expression, brain inflammation, and pyroptosis, as well as on BDNF expression.
On the other hand, the PPI network indicated that PPARγ, caspase-1, GSK3β, PSEN1, and TRPV1 interact closely. The reduced expression of PPARγ induced by AβO administration may contribute to the deregulation of caspase-1, GSK3β, PSEN1, and TRPV1, which might in turn lead to brain inflammation, pyroptosis, and synaptic deficits.
These results suggested that AS-IV inhibited AβO-induced neuroinflammation in the brain, which was beneficial for the improvement of cognitive function and further supported the network-screening predictions. PPARγ plays a neuroprotective role by reducing brain inflammation and Aβ production [58, 59]. Our findings showed that AβO infusion inhibited PPARγ expression in mice, indicating that PPARγ participates in the inflammatory response of AD-like mice. Furthermore, AS-IV blocked the AβO-induced inhibition of PPARγ expression. Pyroptosis is an inflammatory form of programmed cell death that has been reported in neurological pathogenesis [60]. Reducing pyroptosis was shown to alleviate cognitive impairment in AD animal models [61] and the progression of Parkinson's disease [62]. Interestingly, NLRP3 has been reported to initiate neuronal pyroptosis [63, 64]. Indeed, NLRP3 inhibition has been shown to exert neuroprotective effects through the suppression of pyroptosis [65] and to improve neurological functions in a transgenic mouse model of AD [63]. In this study, we demonstrated that AS-IV could inhibit AβO-induced pyroptotic neuronal death, whereas the PPARγ antagonist GW9662 blocked this beneficial effect of AS-IV. In the systematic analyses, we also found that AS-IV had a high binding capacity for caspase-1, which might indicate a potential function of AS-IV in pyroptosis. ## 4.5. AS-IV Reduces Tau Hyperphosphorylation, Synaptic Deficit, Neuroinflammation, and Pyroptosis via Regulating PPARγ In this study, we found that AβO administration progressively reduced PPARγ expression in the hippocampus from 2 h to one day and kept the PPARγ level relatively low from one day to 28 days. These data suggest that PPARγ downregulation is an early event after AβO administration. AS-IV could prevent the AβO-induced reduction of PPARγ. The effects of AS-IV on brain inflammation, pyroptosis, and synaptic deficits in AβO-induced AD phenotypes might therefore be PPARγ-dependent. On the one hand, the PPARγ antagonist blocked the effects of AS-IV on PPARγ expression, brain inflammation, and pyroptosis, as well as on BDNF expression. On the other hand, the PPI analysis indicated that PPARγ, caspase-1, GSK3B, PSEN1, and TRPV1 interact closely. The reduced expression of PPARγ induced by AβO administration contributes to the deregulation of caspase-1, GSK3B, PSEN1, and TRPV1, which might lead to brain inflammation and pyroptosis as well as synaptic deficits. ## 5. Conclusions In summary, our present study indicates that AS-IV could suppress tau hyperphosphorylation, synaptic deficits, neuroinflammation, and pyroptosis to prevent AD-like phenotypes, likely through interactions of PPARγ with caspase-1, GSK3B, PSEN1, and TRPV1. This study offers a novel and reliable strategy for studying traditional Chinese medicine monomers. --- *Source: 1020614-2021-09-26.xml*
# Deciphering the Effects of Astragaloside IV on AD-Like Phenotypes: A Systematic and Experimental Investigation

**Authors:** Xuncui Wang; Feng Gao; Wen Xu; Yin Cao; Jinghui Wang; Guoqi Zhu

**Journal:** Oxidative Medicine and Cellular Longevity (2021)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1020614
--- ## Abstract Astragaloside IV (AS-IV) is an active component in Astragalus membranaceus with the potential to treat neurodegenerative diseases, especially Alzheimer's disease (AD). However, its mechanisms are still not known. Herein, we aimed to explore the systematic pharmacological mechanism of AS-IV for treating AD. Drug prediction, network pharmacology, and functional bioinformatics analyses were conducted. Molecular docking was applied to validate the reliability of the interactions and binding affinities between AS-IV and related targets. Finally, experimental verification was carried out in a mouse model in which AβO infusion produced AD-like phenotypes, to investigate the molecular mechanisms. We found that AS-IV works through a multitarget synergistic mechanism involving inflammation, the nervous system, cell proliferation, apoptosis, pyroptosis, calcium ions, and steroids. In docking simulations, AS-IV interacted strongly with PPARγ, caspase-1, GSK3B, PSEN1, and TRPV1. Meanwhile, PPARγ interacts with caspase-1, GSK3B, PSEN1, and TRPV1. In vivo experiments showed that AβO infusion produced AD-like phenotypes in mice, including impairment of fear memory, neuronal loss, tau hyperphosphorylation, neuroinflammation, and synaptic deficits in the hippocampus. Notably, the expression of PPARγ, as well as BDNF, was also reduced in the hippocampus of AD-like mice. Conversely, AS-IV improved AβO infusion-induced memory impairment, inhibited neuronal loss and the phosphorylation of tau, and prevented the synaptic deficits. AS-IV also prevented the AβO infusion-induced reduction of PPARγ and BDNF. Moreover, the inhibition of PPARγ attenuated the effects of AS-IV on BDNF, neuroinflammation, and pyroptosis in AD-like mice. Taken together, AS-IV could prevent AD-like phenotypes and reduce tau hyperphosphorylation, synaptic deficits, neuroinflammation, and pyroptosis, possibly via regulating PPARγ. --- ## Body ## 1. Introduction Alzheimer's disease (AD) is a neurodegenerative disease characterized by cognitive decline and behavioral impairment. The incidence of AD is increasing as the world population ages. According to the World Alzheimer Report 2018 [1], more than 50 million people suffer from AD worldwide, and it is predicted that by 2050 the number of AD patients will increase to 152 million. Currently, the pathogenesis and etiology of AD have not been fully elucidated, and there is no effective treatment for AD [2]. Remarkable efforts have been made to develop strategies against the mechanisms that lead to neuronal damage, synaptic deficits, neuroinflammation, and cognitive impairment [3–5]. In particular, amyloid-β (1-42) oligomers (AβO) accumulating in AD brains are linked to synaptic failure, neuroinflammation, and memory deficits [2, 6, 7]. Astragaloside IV (AS-IV), one of the major effective components purified from Astragalus membranaceus, has been documented in the treatment of diabetes and diabetic nephropathy [8, 9]. AS-IV has been reported to play a variety of beneficial roles in the prevention and treatment of neurodegenerative diseases with cognitive impairment [10]. In particular, AS-IV, a selective natural PPARγ agonist, inhibited BACE1 activity by increasing PPARγ expression and subsequently reduced Aβ levels in APP/PS1 mice [11].
In addition, other studies have pointed out that AS-IV can inhibit Aβ1-42-induced mitochondrial permeability transition pore opening, oxidative stress, and apoptosis [12, 13]. PPARγ activation regulates the response of microglia to amyloid deposition, thereby increasing phagocytosis of Aβ and reducing cytokine release [14, 15]. In addition, PPARγ agonists are able to improve the memory deficits in AD models [16, 17], a finding further confirmed in clinical trials [18, 19]. In a previous study, we reported that AS-IV prevented AβO-induced hippocampal neuronal apoptosis, probably by promoting the PPARγ/BDNF signaling pathway [20]. However, those findings were limited to in vitro experiments, and the systemic mechanisms had not been clearly disclosed. In this study, we adopted a systematic, multiscale approach to investigate the therapeutic effect of AS-IV on AD, combining drug target prediction, network pharmacology, functional bioinformatics analyses, and molecular docking. Subsequently, experiments were carried out to validate the potential mechanisms, starting from the target PPARγ. This study should provide important implications for the treatment of AD. ## 2. Materials and Methods ### 2.1. Target Prediction To obtain the molecular targets of AS-IV, SysDT, a computational model based on random forest (RF) and support vector machine (SVM) algorithms [21] that integrates large-scale genomic, chemical, and pharmacological information, was used to predict potential targets, with RF score ≥ 0.8 and SVM score ≥ 0.7 as thresholds (a minimal sketch of this filtering step follows the Figure 1 caption below). In addition, we combined pharmacophore modeling [22] and structural similarity prediction [23] to predict the targets of AS-IV. ### 2.2. Network Construction To visualize and analyze the relationship between the targets of AS-IV and their related biological functions, we screened the relevant functions corresponding to the targets, imported them into Cytoscape, and constructed the networks. In this section, three networks, namely compound-target (C-T), compound-target-function (C-T-F), and protein-protein interaction (PPI) [24], were constructed to reveal the multitarget, multifunction therapeutic effect of AS-IV in combating AD (Figure 1). Figure 1 Experimental design. First, we screened the relevant targets via a comprehensive procedure; second, compound-target (C-T) and compound-target-function (C-T-F) networks were established to reveal the underlying molecular mechanisms; third, protein-protein interaction (PPI) network analysis and Gene Ontology (GO) enrichment analysis were performed to predict related targets; fourth, we studied the regulatory effect and specific mechanism of AS-IV on AD via molecular docking and dynamics simulation; last, we investigated the effects of AS-IV on AD phenotypes in AβO-infused mice and further assessed the potential mechanisms. The mice were intragastrically administered AS-IV daily (10, 20, and 40 mg/kg) for 7 days followed by intrahippocampal infusion of AβO or vehicle (aCSF) and then received AS-IV intragastrically, AS-IV plus GW9662 (1 mg/kg) intraperitoneally, donepezil (5 mg/kg), or the same volume of saline intragastrically once per day for 28 continuous days. After that, the behavioral tests were conducted over the following week; the animals were then sacrificed, and brain samples were collected. GW9662 (1 mg/kg) was coadministered with 20 mg/kg of AS-IV or vehicle in the AβO-treated or aCSF-treated animals.
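To make the dual-score threshold in Section 2.1 concrete, here is a minimal Python sketch of the filter: a candidate target is kept only if both its RF and SVM scores pass the stated cutoffs. All candidate names and scores below are hypothetical; the actual SysDT model is not reproduced here.

```python
# Minimal sketch of the dual-score filter described in Section 2.1:
# a candidate target is kept only if both its random-forest (RF) and
# support-vector-machine (SVM) scores pass the stated cutoffs
# (RF >= 0.8, SVM >= 0.7). All names and scores below are hypothetical.
candidates = [
    {"target": "PPARG", "rf": 0.91, "svm": 0.82},
    {"target": "CASP1", "rf": 0.85, "svm": 0.74},
    {"target": "GSK3B", "rf": 0.88, "svm": 0.71},
    {"target": "TRPV1", "rf": 0.79, "svm": 0.90},  # fails the RF cutoff here
]

RF_CUTOFF, SVM_CUTOFF = 0.8, 0.7

predicted = [c["target"] for c in candidates
             if c["rf"] >= RF_CUTOFF and c["svm"] >= SVM_CUTOFF]
print(predicted)  # ['PPARG', 'CASP1', 'GSK3B']
```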
### 2.3. Gene Ontology (GO) Enrichment Analysis To further investigate the vital biological processes connected with the AS-IV-related targets, we mapped these targets to DAVID to analyze their biological meaning. GO biological process terms were utilized to represent gene function. Finally, GO terms with P ≤ 0.05 and FDR ≤ 0.05 were selected for subsequent research (see the filtering sketch after Section 2.6). ### 2.4. Molecular Docking To validate the C-T network, AS-IV was docked to its predicted targets (PPARγ, caspase-1, GSK3B, PSEN1, and TRPV1) using the AutoDock version 4.1 software package with default settings, which is based on a genetic algorithm [25]. The X-ray crystal structures of the targets (5GTN, 5IRX, 6IYC, 6PZP, and 6GN1) were taken from the RCSB Protein Data Bank. Each protein was prepared by adding polar hydrogens, assigning partial charges, and defining the rotatable bonds. Finally, the results were analyzed in AutoDock Tools. ### 2.5. Drugs and Reagents Astragaloside IV (purity: HPLC > 98%), GW9662, and Aβ1-42 were purchased from Sigma-Aldrich. ELISA kits for Aβ1-42, IL-1β, IL-6, and TNF-α were obtained from Shanghai Jianglai Biotechnology. Antibodies against microtubule-associated protein tau (tau), p-tau, PPARγ, postsynaptic density 95 (PSD95), synaptophysin (SYN), growth-associated protein 43 (GAP43), glial fibrillary acidic protein (GFAP), NOD-like receptor protein 3 (NLRP3), cleaved IL-1β, cleaved caspase-1, and GAPDH were obtained from Cell Signaling Technology. The antibody against activity-regulated cytoskeleton-associated protein (ARC) was obtained from Synaptic System. Antibodies against BDNF and microtubule-associated protein 2 (MAP-2) were obtained from Novus Biologicals. Alexa 488- or 594-labeled fluorescent secondary antibodies for immunofluorescence and 4,6-diamidino-2-phenylindole (DAPI) were obtained from Thermo Fisher Scientific. Prestained Protein Ladder was obtained from Thermo Fermentas. SuperSignal chemiluminescence reagents were obtained from Pierce. ### 2.6. Animals and Treatments Male C57BL/6 mice (5-6 weeks old, 20-25 g) were obtained from the Beijing Weishang Lituo Technology Co., Ltd (SCXK (Beijing) 2016-0009). The mice were housed in groups of six per cage with controlled room temperature and humidity, under a 12 h light/dark cycle, with free access to food and water. The mice were adapted for one week before administration. All protocols were approved by the Animal Ethics Committee of Anhui University of Chinese Medicine (approval No. AHUCM-mouse-2019015), and the procedures involving animal research were in compliance with the Animal Ethics Procedures and Guidelines of the People's Republic of China. The mice were randomly divided into the following groups (N=8 per group) (Figure 1): a sham group, an AβO group, AβO plus AS-IV (10, 20, and 40 mg/kg/day, i.g.) groups, an AβO plus donepezil (5 mg/kg/day, i.g.) group, and an AβO plus AS-IV (20 mg/kg/day, i.g.) with GW9662 (1 mg/kg/day, i.p.) group. Drugs were administered once per day for one week before intrahippocampal infusion of AβO, and the mice then continued to receive AS-IV once per day for another four weeks. The doses of AS-IV and GW9662 were selected and adjusted based upon a previous study [11].
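As a companion to the GO selection rule in Section 2.3 (P ≤ 0.05 and FDR ≤ 0.05), the sketch below shows the filtering logic on invented terms and statistics; it is illustrative only, not the DAVID output.

```python
# Illustrative version of the GO selection rule in Section 2.3: a GO
# biological-process term is retained only when P <= 0.05 and FDR <= 0.05.
# The term IDs, names, and statistics below are invented for demonstration.
go_terms = [
    # (GO id, name, P value, FDR)
    ("GO:0007213", "G-protein coupled acetylcholine receptor signaling", 0.001, 0.012),
    ("GO:0006954", "inflammatory response", 0.004, 0.030),
    ("GO:0007613", "memory", 0.030, 0.180),  # FDR above the cutoff, dropped
]

selected = [(gid, name) for gid, name, p, fdr in go_terms
            if p <= 0.05 and fdr <= 0.05]
for gid, name in selected:
    print(gid, name)
```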
### 2.7. Preparation and Infusion of AβO AβO were prepared from synthetic Aβ1-42 and incubated at 37°C for 1 week in a stock solution of 10 μg/μL, routinely characterized by size-exclusion chromatography, as previously described [26, 27], and stored in aliquots at -80°C until use. AβO were infused at a final concentration of 2.5 μg/μL in aCSF. For intrahippocampal infusion of AβO, mice were anesthetized with 5% isoflurane using a vaporizer system (RWD Life Science Co., Ltd, Shenzhen, China) and maintained at 1% during the injection procedure, as previously described [26, 28]. AβO (5 μg per site) were bilaterally delivered into the hippocampal CA1 region (stereotaxic coordinates relative to bregma: 2.3 mm anteroposterior, ±1.8 mm mediolateral, and 2.0 mm dorsoventral). Injections were performed in a volume of 2 μL infused over 5 min, and the needle was left in place for 1 min to prevent backflow. Then, the mice were treated with penicillin to prevent infection. After the operation, the mice were kept under standard conditions with free access to food and water. Mice that showed signs of misplaced injections or any sign of hemorrhage were excluded from further analysis. Seven days before the AβO infusions, AS-IV (10, 20, and 40 mg/kg, once/day) was administered intragastrically to the mice. Behavioral and pathological studies were performed 4 weeks after AβO injection. ### 2.8. Fear Conditioning Fear conditioning (FC) was evaluated as previously described [29]. On the adaptation day, mice were allowed to freely explore the conditioning chamber (Ugo Basile, Gemonio, Italy), fitted with a camera connected to the ANY-Maze™ software (Stoelting, NJ, USA, RRID:SCR_014289), for 5 min. On the conditioning day, mice were placed into the same test chamber, and an 80 dB audio tone (conditioned stimulus: CS) was presented for 30 s with a coterminating 1.0 mA, 2 s foot shock (unconditioned stimulus: US) three times at 73 s intervals. Then, mice were removed from the cage. The next day (contextual test), mice were put back into the conditioning chamber for 5 min, but without any audio tone or foot shock. On day 4 (cued test), the cover of the back and side chamber walls was removed. The mice were returned to the chamber and presented with three CS (without a foot shock) of 30 s each. The freezing time was recorded for each test using the software (a simple percentage-based summary of such scores is sketched after Section 2.10). ### 2.9. Preparation of Hippocampal Tissue Twenty-four hours after the behavioral tests, some mice were anesthetized with 5% isoflurane and decapitated, and the hippocampi were then rapidly dissected on ice and snap-frozen in liquid nitrogen before storage at -80°C for biochemical tests. Others received transcardial perfusion with 4% paraformaldehyde (PFA); the hippocampi were then rapidly dissected and postfixed with 4% PFA overnight at 4°C, followed by immersion in a solution containing 30% sucrose at 4°C for graded dehydration. Parts of the hippocampi were then cut into serial coronal frozen slices (20 μm) for immunofluorescence assays, and other hippocampus samples were sliced into 4 μm thick coronal slices for histopathological analysis. ### 2.10. Hematoxylin and Eosin (HE) Staining After fixation in 4% paraformaldehyde for 24 h at room temperature, the hippocampal tissues were embedded in paraffin and coronally cut into 4 μm thick slices (three slices per mouse). The tissues were dewaxed and successively rehydrated through graded alcohols (70%, 85%, 95%, and 100%), and the slices were then stained with hematoxylin solution for 3 min followed by eosin solution for 2 min at room temperature. The slices were finally dehydrated through graded alcohols, cleared with xylene, and sealed with neutral gum. Representative photographs were captured with a light microscope using the DP70 software.
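For readers unfamiliar with fear-conditioning readouts, the sketch below converts a raw freezing time into a percentage of the session, a common derived measure; the function and numbers are illustrative and not the authors' ANY-Maze output.

```python
# Illustrative scoring helper for the fear-conditioning readout in
# Section 2.8. The paper reports raw freezing time from ANY-Maze; a common
# derived measure is the percentage of the session spent freezing. The
# function and the example numbers are hypothetical, not the authors' output.
def percent_freezing(freezing_s: float, session_s: float) -> float:
    """Return time spent freezing as a percentage of the session."""
    if session_s <= 0:
        raise ValueError("session duration must be positive")
    return 100.0 * freezing_s / session_s

# Hypothetical 5-min (300 s) contextual test with 132 s of freezing.
print(f"{percent_freezing(132, 300):.1f}% freezing")  # 44.0% freezing
```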
### 2.11. Enzyme-Linked Immunosorbent Assay Hippocampal tissues were collected and homogenized in ice-cold saline supplemented with protease and phosphatase inhibitor cocktails. The supernatants were collected for further analysis. The levels of endogenous Aβ1-42, IL-1β, IL-6, and TNF-α were determined using ELISA kits according to the manufacturer's instructions. The absorbance was recorded at 450 nm using a microplate reader (SpectraMax M2/M2e; Molecular Devices, Sunnyvale, CA, USA), and the concentrations of Aβ1-42, IL-1β, IL-6, and TNF-α were calculated from standard curves (a standard-curve sketch is given after Section 2.13). Results were expressed as picograms per milliliter. Data were generated from 6-8 mice per group. ### 2.12. Immunofluorescence Mice were sacrificed, and the hippocampi were snap-frozen in optimal cutting temperature (OCT) compound (Sakura Finetechnical, Japan). For immunofluorescence staining, the OCT-embedded hippocampi were cut into serial coronal 20 μm thick slices and mounted on adhesive microscope slides. The slices were fixed with ice-cold acetone for 10 min and then blocked in 10% goat serum (containing 0.04% Triton X-100) for 90 min at room temperature. Subsequently, the slices were incubated with primary antibodies to MAP-2 (1 : 200), PSD95 (1 : 200), SYN (1 : 400), GAP43 (1 : 200), and GFAP (1 : 200) overnight at 4°C, followed by incubation with Alexa-conjugated secondary antibodies (Thermo Fisher Scientific) for 2 h at room temperature. After counterstaining with DAPI solution in the dark, fluorescent images of the slices were acquired using a confocal scanning microscope (FV1000, Olympus, Japan). At least six representative images were taken from each mouse for analysis with the Image J software (NIH, USA, RRID:SCR_003070). ### 2.13. Immunohistochemistry Hippocampal slices were deparaffinized and rehydrated as described above. After antigen retrieval, slices were incubated with 3% H2O2 for 15 min and blocked in goat serum (containing 0.1% Triton X-100) for 30 min, followed by incubation overnight at 4°C with primary antibodies to PPARγ (1 : 200) and BDNF (1 : 200). Then, the slices were washed three times with PBS and incubated with a horseradish peroxidase- (HRP-) conjugated goat anti-rabbit or anti-mouse IgG (1 : 100) secondary antibody for 2 h at room temperature, followed by incubation with 50 μL of 3,3′-diaminobenzidine (DAB) substrate (DAKO, Denmark) at room temperature for 10 min. The number of immunoreactive cells in the hippocampus was assessed using light microscopy (DP70; Olympus, Japan). At least three different fields (200×200 μm) per slice were randomly selected for visualization. The mean optical density in the hippocampal region was calculated and used to determine PPARγ and BDNF expression levels.
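The sketch below illustrates how sample concentrations are read off an ELISA standard curve as in Section 2.11; plain linear interpolation over hypothetical standards is used for simplicity, whereas kit protocols often fit a four-parameter logistic curve.

```python
import numpy as np

# Hedged sketch of reading cytokine concentrations off an ELISA standard
# curve (Section 2.11). Kit protocols often fit a four-parameter logistic
# model; plain linear interpolation over the standards is used here for
# simplicity. Standard concentrations and absorbances are hypothetical.
std_conc = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])  # pg/mL
std_od = np.array([0.05, 0.11, 0.19, 0.35, 0.66, 1.21, 2.10])         # A450

sample_od = np.array([0.42, 0.88, 1.50])
# np.interp expects increasing x values; absorbance rises with concentration.
sample_conc = np.interp(sample_od, std_od, std_conc)
print(np.round(sample_conc, 1))  # estimated concentrations in pg/mL
```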
### 2.14. Golgi-Cox Staining Golgi-Cox staining was performed to assess changes in dendrites and dendritic spines within hippocampal neurons using the FD Rapid GolgiStain™ Kit (FD NeuroTechnologies, USA) according to the manufacturer's instructions. Briefly, mice were anaesthetized with 5% isoflurane and decapitated, and the brains were rapidly removed and immersed in the impregnation solution (A:B = 1:1, total 2 mL/mouse) at room temperature in the dark, with replacement by fresh impregnation solution after 2 days. Two weeks later, brains were transferred into solution C, stored at 4°C for three days, and then rinsed 3 times with PBST (containing 0.3% Triton X-100). Brains were then cut serially into 100 μm coronal slices on a vibrating microtome, and each slice was transferred to a gelatin-coated slide with solution C and dried at room temperature in the dark for up to 3 days. The slices were then placed in a mixture of solution D, solution E, and distilled water (1 : 1 : 2) for 15 min, followed by a dehydration series of 50%, 70%, 85%, 95%, and 100% ethanol, with 3 applications of 5 min each. The slices were then cleared with xylene and sealed with neutral gum for light microscopic observation. At least 3-5 dendritic segments of apical dendrites per neuron were randomly selected in each slice, and 5 pyramidal neurons were analyzed per mouse. For each group, the number of spines per dendritic segment from at least 3 mice was analyzed using the Image J software (NIH, USA, RRID:SCR_003070). Results are expressed as the mean number of spines per 10 μm. ### 2.15. Transmission Electron Microscopy The hippocampi were rapidly dissected and placed in 2.5% glutaraldehyde at 4°C for 4 h, followed by fixation with 1% osmium tetroxide for 1.5 h. After a series of graded ethanol dehydrations, the tissues were immersed in propylene oxide for 30 min and then infiltrated with a mixture of propylene oxide and epoxy resin overnight. The tissues were then embedded in epoxy resin, placed in an oven at 60°C for 48 h, cut into serial ultrathin slices (70 nm thick), and stained with 4% uranyl acetate for 20 min followed by 0.5% lead citrate for 5 min. The synaptic ultrastructures were observed under TEM (HT7700; Hitachi, Tokyo, Japan). In this study, at least 10 micrographs were randomly taken from each mouse, and analysis of synaptic density was performed using the Image J software (NIH, USA, RRID:SCR_003070). ### 2.16. Immunoblotting Hippocampi were collected and homogenized in RIPA buffer containing protease and phosphatase inhibitor cocktails, and the protein concentration was determined by the bicinchoninic acid method (Pierce Biotechnology, Inc., USA). Then, 25 μg of total protein from each sample was resolved by 8-15% sodium dodecyl sulfate polyacrylamide gel electrophoresis at room temperature and electroblotted onto a nitrocellulose membrane (GE Healthcare, USA) at 4°C for 2 h. Membranes were blocked with 5% nonfat milk dissolved in Tris-buffered saline with Tween (TBST) at room temperature for 2.5 h. Primary antibodies against PSD95 (1 : 1000), SYN (1 : 1000), GAP43 (1 : 1000), ARC (1 : 1000), PPARγ (1 : 1000), GFAP (1 : 500), NLRP3 (1 : 1000), cleaved IL-1β (1 : 1000), cleaved caspase-1 (1 : 1000), and GAPDH (1 : 1000) were diluted in blocking solution and incubated with the membranes overnight at 4°C. After incubation with secondary anti-mouse or anti-rabbit IgGs (1 : 10000 in TBST) at room temperature for 90 min, membranes were washed in TBST buffer, developed with SuperSignal chemiluminescence substrate (Thermo Fisher Scientific, MA), and imaged with a chemiluminescence detector (FluorChem FC3; ProteinSimple, USA). Protein expression was quantified with the Quantity One software (Bio-Rad, Hercules, CA, USA, RRID:SCR_014280), and the densitometric results were normalized to the intensity of GAPDH. ### 2.17. Statistical Analysis All analyses were performed with the GraphPad Prism 5.0 software (GraphPad Prism, San Diego, CA, USA, RRID: SCR_002798), and data were expressed as mean ± standard deviation (SD). The statistical significance of differences between groups was evaluated using one-way ANOVA followed by the Tukey test. P values of <0.05 were considered statistically significant.
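The following sketch reproduces the statistical workflow of Section 2.17 (one-way ANOVA followed by the Tukey test) in Python on simulated data; the authors used GraphPad Prism, so this is a stand-in, not their analysis script.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hedged re-creation of the workflow in Section 2.17 (one-way ANOVA followed
# by the Tukey test) on simulated data. The authors used GraphPad Prism, so
# this is a stand-in, not their analysis script.
rng = np.random.default_rng(0)
sham = rng.normal(60, 8, 8)      # hypothetical per-group measurements, N = 8
abo = rng.normal(35, 8, 8)
abo_asiv = rng.normal(52, 8, 8)

f_stat, p_value = stats.f_oneway(sham, abo, abo_asiv)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([sham, abo, abo_asiv])
groups = ["sham"] * 8 + ["AbO"] * 8 + ["AbO+AS-IV"] * 8
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```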
## 3. Results ### 3.1. C-T Network In this study, we used a comprehensive method to screen AS-IV targets. Figure 2(a) shows that there are 64 targets with binding capacity for AS-IV. In this network, all these observations provide strong evidence that AS-IV works through a multitarget synergistic mechanism. Figure 2 Work scheme of the system pharmacology approach. (a) C-T network was constructed by linking AS-IV with its potential targets (circles). (b) C-T-F network was constructed from AS-IV and its functions (octagons) and corresponding protein targets (circles). Among the targets, octagons with different colors represent the nervous system, inflammatory, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid targets, respectively. The circles in the middle are the nervous system, inflammatory, cell proliferation, apoptosis, calcium ion, and steroid overlapped targets. Node size is proportional to its degree. (c) The PPI network of AS-IV. The color and size of the node are proportional to the degree, and the color and thickness of the connecting line are proportional to betweenness centrality. (d) Gene Ontology analysis of AS-IV target genes. (e) Distribution of AS-IV target proteins in the underlying pathways involved in AD. (f) Distribution of AS-IV target proteins in chromosomes. (a)(b)(c)(d)(e)(f)
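The degree and betweenness-centrality statistics that drive the node and edge styling in Figure 2(c) can be computed as in the sketch below; the toy graph uses a handful of target names mentioned in the text, with illustrative edges only.

```python
import networkx as nx

# Toy version of the statistics behind Figure 2(c): node degree (node size)
# and betweenness centrality (edge styling in the figure is derived from it).
# The real PPI network has 64 targets; the edges below are illustrative only,
# using a few target names mentioned in the text.
ppi = nx.Graph()
ppi.add_edges_from([
    ("PSEN1", "PSEN2"), ("PSEN1", "NCSTN"), ("PSEN1", "APH1B"),
    ("PSEN1", "PSENEN"), ("ADRA2A", "ADRA2B"), ("ADRA2A", "CHRM2"),
])

degree = dict(ppi.degree())
betweenness = nx.betweenness_centrality(ppi)
for node in sorted(degree, key=degree.get, reverse=True):
    print(f"{node}: degree = {degree[node]}, betweenness = {betweenness[node]:.2f}")
```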
### 3.2. C-T-F Network In order to further explain the pharmacological mechanisms of the beneficial effects of AS-IV on AD, we classified the target functions of this compound and constructed the C-T-F network. Figure 2(b) depicts the global view of the C-T-F network, in which the diamond, circle, and hexagon nodes represent AS-IV, the targets, and the corresponding functions of the targets, respectively. Further observation of this network shows that these 64 targets are related to 7 functions, including inflammation, the nervous system, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid. ### 3.3. PPI Network Proteins do not exert their functions independently of each other but interact together in the PPI network [30]. Analyzing the topological characteristics of proteins in PPI networks is very helpful for understanding their functions. Here, we constructed the PPI network of the 64 target proteins obtained for AS-IV and calculated the degree of each node. As shown in Figure 2(c), the degree of ADRA2A, ADRA2B, ADRA2C, CHRM2, S1PR5, S1PR2, DRD3, and HRH3 was the highest (degree = 7), followed by APH1B, PSENEN, PSEN1, PSEN2, and NCSTN (degree = 4), demonstrating that these proteins are hub targets and may be responsible for bridging other proteins in the PPI network. ### 3.4. GO Enrichment Analysis Through the GO enrichment analysis (Figures 2(d)–2(f)), the targets were related to the following biological processes: G-protein coupled acetylcholine receptor signaling pathway (count=2,3,6), protein kinase B activity (count=1), Notch receptor processing (count=5), protein processing (count=7), and inflammatory response (count=4). These processes are usually related to cell proliferation, gene transcription, differentiation, and development. ### 3.5. Molecular Docking Figures 3(a)–3(l) depict the binding interactions of AS-IV with caspase-1, GSK3B, PSEN1, and TRPV1 after docking simulations. The results showed that hydrophobic and H-bond interactions influenced the binding affinity of AS-IV to its target proteins (Figures 3(i)–3(l)). AS-IV was anchored into a hydrophobic pocket in caspase-1, GSK3B, PSEN1, and TRPV1. In detail, for the binding pocket of caspase-1 with its ligand, there were large hydrophobic interactions formed by residues Trp340, Pro343, and Ala284; with respect to GSK3B, the hydrophobic interactions were formed by residues Val110, Leu188, Ala83, Leu132, Val70, Phe67, and Ile62. Additionally, in PSEN1, the pocket was formed by residues Phe14, Ile408, Ile135, Phe6, Trp404, Leu142, and Ala98, and in TRPV1 by residues Phe543, Phe522, Met547, Val518, Leu515, Ile573, Ala566, Leu553, and Ile569. Figure 3 Binding conformations of AS-IV and four targets obtained from docking simulation. (a–d) The binding mode of AS-IV to (a) caspase-1, (b) GSK3B, (c) PSEN1, and (d) TRPV1 in the active site. (e–h) Stereoview of the binding mode of AS-IV with its receptors, i.e., (e) caspase-1, (f) GSK3B, (g) PSEN1, and (h) TRPV1 in the binding site, where the H-bonds are depicted as black dotted lines.
(i–l) The detailed view of the 2-D ligand interactions between AS-IV and (i) caspase-1, (j) GSK3B, (k) PSEN1, and (l) TRPV1. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l) AS-IV interacted with many residues in the active site of caspase-1, and three H-bond networks were formed (Figure 3). AS-IV forms H-bond networks with GSK3B at Lys85, Val135, Lys60, Tyr134, Arg141, and Asn64. AS-IV forms H-bond interactions with PSEN1 at Ala139, and with TRPV1 at Asn551, Thr550, Arg557, and Ser512 (Figure 3). AS-IV is well suited to the receptor binding pocket, as its binding to the surrounding amino acids was tight and reached deep into the cavity. The binding free energies of AS-IV with caspase-1, GSK3B, PSEN1, and TRPV1 were -5.30 kcal/mol, -4.85 kcal/mol, -6.41 kcal/mol, and -6.07 kcal/mol, respectively (see the worked conversion below). These results indicated that AS-IV shows high binding affinities for its targets. ### 3.6. Interaction of PPARγ with Caspase-1, GSK3B, PSEN1, and TRPV1 Figures 4(a)–4(c) depict the binding interactions of AS-IV with PPARγ after docking simulations. For the target PPARγ, AS-IV is directed toward the binding site and stabilized by hydrogen-bonding interactions with Gln343, Cys285, and Ser289. Five critical proteins in the network, including PPARγ, caspase-1, GSK3B, PSEN1, and TRPV1, were selected to further validate the PPI. As shown in Figure 4(d), these five proteins showed close interactions. Figure 4 Interaction of PPARγ with caspase-1, GSK3B, PSEN1, and TRPV1. (a) The binding mode of AS-IV to PPARγ in the active site. (b) Stereoview of the binding mode of AS-IV with PPARγ in the binding site, where the H-bonds are depicted as black dotted lines. (c) The detailed view of the 2-D ligand interaction between AS-IV and PPARγ. (d) The PPI network of PPARγ with caspase-1, GSK3B, PSEN1, and TRPV1. (a)(b)(c)(d)
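As a worked example, the docking energies reported in Section 3.5 can be converted into approximate dissociation constants through the standard relation ΔG = RT ln Kd; this conversion is added here for context and is not part of the authors' analysis.

```python
import math

# Worked conversion of the docking energies reported in Section 3.5 into
# approximate dissociation constants via the standard relation
# dG = RT*ln(Kd). The energies are from the text; the conversion itself is
# added for context and is not part of the authors' analysis.
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

binding_energies = {   # kcal/mol
    "caspase-1": -5.30,
    "GSK3B": -4.85,
    "PSEN1": -6.41,
    "TRPV1": -6.07,
}

for target, dg in binding_energies.items():
    kd_molar = math.exp(dg / (R * T))      # Kd in mol/L
    print(f"{target}: dG = {dg} kcal/mol -> Kd ~= {kd_molar * 1e6:.0f} uM")
```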
### 3.7. Effect of AS-IV on AβO-Induced Memory Impairment and Pathological Changes The FC task was performed to assess the effects of AS-IV on fear memory in AβO-infused mice by measuring the intensity of freezing to the context and to the auditory cue. During the adaptation session, there was no difference in freezing time among the experimental groups (data not shown). Upon exposure to the context and the auditory cue, freezing responses were higher in sham mice than in AβO-infused mice (Figures 5(a) and 5(b)). The reduction in freezing time in AβO-infused mice was attenuated by administration of AS-IV (10, 20, and 40 mg/kg) or donepezil, a positive control drug. These results suggested that AS-IV prevented AβO-induced contextual and cued fear memory impairments. Figure 5 Effects of AS-IV on AβO-induced fear memory impairment and pathological changes in mice. (a) The freezing time of contextual memory. (b) The freezing time of cued memory. (c) Representative images of HE staining in the hippocampus (200x). Scale bar: 50 μm. (d) The content of Aβ1-42 in the hippocampus measured by ELISA assay. (e) The expression of p-tau protein in the hippocampus measured by western blotting. (f) MAP-2 expression in the hippocampus measured by IF (×200). Scale bar: 50 μm. Data are expressed as the mean±SD (N=8 or 6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b)(c)(d)(e)(f) HE staining showed that the pyramidal cells in the CA1 region of the hippocampus of sham mice had intact cell bodies and round nuclei with a tight arrangement, and no cell loss was found. In AβO-infused mice, however, the pyramidal layer was disintegrated, and neuronal loss was observed in the CA1 region. Additionally, neurons with shrunken or irregularly shaped cell bodies and degenerated nuclei were also found in the hippocampus of AβO-infused mice (Figure 5(c)). It is worth mentioning that AS-IV (10, 20, and 40 mg/kg) administration attenuated the structural damage and loss of neurons to some extent relative to AβO-infused mice, indicating a neuroprotective effect of AS-IV. Next, the levels of Aβ1-42 and phosphorylated tau were measured in the hippocampus. Results showed that there was no difference in the hippocampal Aβ1-42 level among the experimental groups (Figure 5(d)). Compared with sham mice, phosphorylated tau expression was increased significantly in AβO-infused mice. Compared with AβO-infused mice, AS-IV treatment reduced hippocampal phosphorylated tau expression (Figure 5(e)). We also observed MAP-2 expression in the hippocampus of mice by immunofluorescence assay. Results showed that there was a large number of MAP-2+ cells, with a regular arrangement of neurons and obvious neurites arranged in bundles, in the hippocampus of sham mice. Compared with sham mice, the number of MAP-2+ cells was remarkably reduced, the arrangement of dendrites was disordered, and the length of the neurites was significantly shortened in the hippocampus of AβO-infused mice. In contrast, AS-IV (20 mg/kg) administration reversed the inhibitory effects of AβO on the growth of MAP-2+ neurites (Figure 5(f)). Based on these findings, AS-IV administration alleviated AβO-induced neuronal injury and reduced tau phosphorylation in the hippocampus, but had no effect on the endogenous Aβ1-42 level in AβO-infused mice. ### 3.8. AS-IV Suppresses AβO-Induced Synaptic Deficit in the Hippocampus The effects of AS-IV on synaptic protein expression were investigated by determining the expression of PSD95, SYN, GAP43, and ARC. Results from immunofluorescence assays showed that the synaptic proteins PSD95, SYN, and GAP43 were all significantly reduced in hippocampal regions after AβO infusion when compared with sham mice. In contrast, AS-IV administration increased the immunoreactivity of PSD95, SYN, and GAP43 as compared to AβO-infused mice (Figures 6(a) and 6(b)). Figure 6 AS-IV suppresses AβO-induced synaptic deficits. (a) PSD95, SYN, and GAP43 expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 5 μm. (b) Fluorescence intensity data of PSD95, SYN, and GAP43 expression. (c) The protein expression of synaptic plasticity markers in the hippocampus measured by western blotting. (d) Relative quantitative data of PSD95, SYN, GAP43, and ARC protein expression. (e) Changes of dendritic spines in the hippocampus measured by Golgi-Cox staining (1000x). Scale bar: 2 μm. (f) Quantitative data of dendritic spines in the hippocampus. (g) Ultrastructural changes of synapses in the hippocampus measured by TEM (8000x). Scale bar: 50 nm. (h) Quantitative data of synapses in the hippocampus. Data are expressed as the mean±SD (N=6 or 4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b)(c)(d)(e)(f)(g)(h) The results from immunoblotting assays also showed a significant decrease in the expression of PSD95, SYN, and GAP43 in response to AβO infusion, while AS-IV administration significantly ameliorated the AβO-induced downregulation of these synaptic proteins in the hippocampus (Figures 6(c) and 6(d)).
By contrast, there was no difference among these groups of mice regarding ARC expression (Figures 6(c) and 6(d)). We next examined the density of dendritic spines in hippocampal neurons among the experimental groups by Golgi-Cox staining. Results showed that the density of dendritic spines in hippocampal neurons of AβO-infused mice was significantly lower than that in sham mice, but these AβO infusion-induced changes in dendritic spine densities were significantly ameliorated by AS-IV (20 mg/kg) administration (Figures 6(e) and 6(f)). We further used transmission electron microscopy to examine the synaptic ultrastructure of hippocampal neurons. Our data showed that AβO infusion resulted in a significant decrease in the number of hippocampal synapses compared with sham mice, whereas AS-IV (20 mg/kg) administration significantly ameliorated this synaptic loss (Figures 6(g) and 6(h)). Overall, the results indicate that AS-IV affords protection against AβO-induced synaptic deficits. ### 3.9. AS-IV Promotes AβO Infusion-Inhibited PPARγ Expression in the Hippocampus The hippocampus was collected at four time points after AβO infusion (2 h, 1 d, 14 d, and 28 d). The expression of PPARγ significantly decreased at 2 h, 1 d, 14 d, and 28 d after AβO infusion (Figure 7(a)). By contrast, AS-IV attenuated the decrease of PPARγ in AβO-infused mice. A specific PPARγ antagonist, GW9662, was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effect of AS-IV was blocked by GW9662 in the hippocampus of AβO-infused mice (Figure 7(b)). Figure 7 AβO infusion-inhibited PPARγ expression was promoted by AS-IV. (a) The expression of PPARγ protein at 2 h, 1 d, 14 d, and 28 d after AβO infusion in the hippocampus measured by western blotting. (b) The expression of PPARγ protein after AS-IV administration in AβO-infused mice measured by western blotting. Data are expressed as the mean±SD (N=4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b) ### 3.10. AS-IV Inhibits AβO-Induced BDNF Reduction via Promoting PPARγ Expression in Mouse Hippocampi To further explore the underlying neuroprotective mechanism of AS-IV in AβO-infused mice, the levels of PPARγ and BDNF in the hippocampus were detected by immunohistochemistry. Compared with the sham group, PPARγ and BDNF immunoreactivity was decreased in the hippocampus of AβO-infused mice, whereas hippocampal immunoreactivity of PPARγ and BDNF was higher in AS-IV-treated mice than in AβO-infused mice (Figures 8(a)–8(f)). Additionally, the effect of AS-IV on the expression of BDNF and PPARγ was blocked by GW9662 in the hippocampus of AβO-infused mice (Figures 8(a)–8(f)). Figure 8 AS-IV inhibits AβO-induced BDNF reduction via promoting PPARγ expression. (a) PPARγ and BDNF expression measured by immunohistochemistry (×200). Scale bar: 50 μm. (b–f) Quantitative data of PPARγ and BDNF expression in different regions of the hippocampus. Data are expressed as the mean±SD (N=6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b)(c)(d)(e)(f)
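The immunoblot quantification used in Sections 2.16 and 3.9–3.10 (band intensity normalized to GAPDH, then expressed relative to sham) can be sketched as follows; all intensity values are hypothetical.

```python
# Hedged sketch of the immunoblot quantification used in Sections 2.16 and
# 3.9-3.10: each band intensity is normalized to GAPDH from the same lane,
# then expressed relative to the sham group. All intensities are hypothetical.
bands = {                 # group: (target-band intensity, GAPDH intensity)
    "sham": (1850, 2100),
    "AbO": (920, 2050),
    "AbO+AS-IV": (1610, 2080),
}

ratios = {grp: target / gapdh for grp, (target, gapdh) in bands.items()}
sham_ratio = ratios["sham"]
for grp, ratio in ratios.items():
    print(f"{grp}: PPARg/GAPDH = {ratio:.2f}, fold of sham = {ratio / sham_ratio:.2f}")
```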
Infusion of AβO induced a remarkable activation of astroglial responses in the hippocampus of mice, which was prevented by AS-IV (20 mg/kg) administration. Consistently, infusion of AβO also increased GFAP expression as determined with immunoblotting assay, while AS-IV (20 mg/kg) administration significantly suppressed GFAP expression in AβO-infused mice (Figure 9(b)). Furthermore, we asked whether PPARγ mediated the beneficial effect of AS-IV on anti-inflammatory response in AβO-infused mice. Interestingly, PPARγ inhibition by GW9662 blocked the inhibitory effects of AS-IV on GFAP immunoreactivity and expression in the hippocampus of AβO-infused mice (Figures 9(a) and 9(b)).Figure 9 AS-IV inhibits AβO-induced neuroinflammation via promoting PPARγ expression. (a) GFAP expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 20 μm. (b) The expression of GFAP protein in the hippocampus measured by western blotting. (c) The content of IL-1β, IL-6, and TNF-α in the hippocampus measured by ELISA. Data are expressed as the mean±SD (N=4 or 6 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b)(c)We measured the hippocampal IL-1β, IL-6, and TNF-α level in AβO-infused mice by ELISA. Results showed that AβO infusion led to an upregulation of IL-1β, IL-6, and TNF-α level in the hippocampus compared with sham mice, but AS-IV administration suppressed the upregulation of cytokines following AβO infusion. In line with the above findings, this effect of AS-IV was blocked by GW9662 (Figure 9(c)). These results suggest that AS-IV prevented the inflammatory response in the hippocampus via PPARγ. ### 3.12. AS-IV Inhibits AβO-Induced Pyroptotic Cell Death via Promoting PPARγ Expression As shown in Figures10(a)–10(c), the protein expression of NLRP3 and cleaved caspase-1 was significantly elevated in the hippocampus of AβO-infused mice compared with sham mice. In contrast, AS-IV (20 mg/kg) administration suppressed AβO-induced expression of NLRP3, as well as cleaved caspase-1 in the hippocampus of AβO-infused mice.Figure 10 AS-IV inhibits AβO-induced pyroptotic cell death via promoting PPARγ expression. (a) The protein expression of pyroptosis markers in the hippocampus measured by western blotting. (b) Relative quantitative data of NLRP3 protein expression. (c) Relative quantitative data of cleaved caspase-1 protein expression. (d) Relative quantitative data of IL-1β protein expression. Data are expressed as the mean±SD (N=4 per group). Compared with sham, ∗P<0.05; compared with AβO, #P<0.05; compared with AS-IV+AβO, &P<0.05 (one-way ANOVA followed by the Tukey test). (a)(b)(c)(d)As shown in Figures10(a)–10(d), AβO infusion significantly increased the levels of IL-1β in the hippocampus, which was inhibited by AS-IV administration. In order to further confirm the role of PPARγ in AS-IV-mediated suppression of AβO-induced pyroptosis, specific PPARγ antagonist, GW9662, was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effects of AS-IV against AβO-induced expression of NLRP3 and cleaved caspase-1 were blocked by GW9662. Moreover, the blockade of PPARγ was able to significantly reverse the effect of AS-IV on AβO-induced proinflammatory cytokine IL-1β overexpression (Figures 10(a)–10(d)). ## 3.1. C-T Network In this study, we used a comprehensive method to screen AS-IV targets. Figure2(a) shows that there are 64 targets with the combining capacity to AS-IV. 
Taken together, these observations provide strong evidence that AS-IV works through a multitarget synergistic mechanism.

Figure 2: Work scheme of the systems pharmacology approach. (a) C-T network constructed by linking AS-IV with its potential targets (circles). (b) C-T-F network constructed from AS-IV, its functions (octagons), and the corresponding protein targets (circles). Among the targets, octagons with different colors represent the nervous system, inflammatory, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid targets, respectively. The circles in the middle are targets shared among the nervous system, inflammatory, cell proliferation, apoptosis, calcium ion, and steroid categories. Node size is proportional to degree. (c) The PPI network of AS-IV. The color and size of a node are proportional to its degree, and the color and thickness of a connecting line are proportional to betweenness centrality. (d) Gene Ontology analysis of AS-IV target genes. (e) Distribution of AS-IV target proteins in the underlying pathways involved in AD. (f) Distribution of AS-IV target proteins across chromosomes.

## 3.2. C-T-F Network

In order to further explain the pharmacological mechanisms underlying the beneficial effects of AS-IV on AD, we classified the target functions of this compound and constructed the C-T-F network. Figure 2(b) depicts the global view of the C-T-F network, in which the diamond, circle, and hexagon nodes represent AS-IV, the targets, and the corresponding functions of the targets, respectively. Further observation of this network shows that these 64 targets are related to 7 functions: inflammation, nervous system, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid.

## 3.3. PPI Network

Proteins do not exert their functions independently of each other but interact within the PPI network [30]. Analyzing the topological characteristics of proteins in PPI networks is therefore very helpful for understanding their functions. Here, we constructed the PPI network of the 64 target proteins of AS-IV and calculated the degree of each node. As shown in Figure 2(c), the degree of ADRA2A, ADRA2B, ADRA2C, CHRM2, S1PR5, S1PR2, DRD3, and HRH3 was the highest (degree = 7), followed by APH1B, PSENEN, PSEN1, PSEN2, and NCSTN (degree = 4), demonstrating that these proteins are hub targets and may be responsible for bridging other proteins in the PPI network.
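The degree and betweenness analysis described above is straightforward to reproduce. The following is a minimal sketch; the edge list is hypothetical (a few γ-secretase-complex and adrenoceptor edges for illustration), not the published 64-protein network:

```python
import networkx as nx

# Hypothetical edge list standing in for the AS-IV target PPI network.
edges = [
    ("PSEN1", "PSEN2"), ("PSEN1", "NCSTN"), ("PSEN1", "APH1B"),
    ("PSEN1", "PSENEN"), ("PSEN2", "NCSTN"), ("APH1B", "PSENEN"),
    ("ADRA2A", "ADRA2B"), ("ADRA2A", "CHRM2"),
]
G = nx.Graph(edges)

# Degree flags hub targets; betweenness centrality measures bridging,
# mirroring the node size/edge thickness encoding of Figure 2(c).
betweenness = nx.betweenness_centrality(G)
for node, deg in sorted(G.degree(), key=lambda nd: -nd[1]):
    print(f"{node}: degree={deg}, betweenness={betweenness[node]:.3f}")
```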
## 3.4. GO Enrichment Analysis

Through the GO enrichment analysis (Figures 2(d)–2(f)), the targets were related to the following biological processes: the G-protein coupled acetylcholine receptor signaling pathway (count = 2, 3, 6), protein kinase B activity (count = 1), Notch receptor processing (count = 5), protein processing (count = 7), and the inflammatory response (count = 4). These processes are usually related to cell proliferation, gene transcription, differentiation, and development.

## 3.5. Molecular Docking

Figures 3(a)–3(l) depict the binding interactions of AS-IV with caspase-1, GSK3Β, PSEN1, and TRPV1 after docking simulations. The results showed that hydrophobic and H-bond interactions influenced the binding affinity of AS-IV for its target proteins (Figures 3(i)–3(l)). AS-IV was anchored into a hydrophobic pocket in caspase-1, GSK3Β, PSEN1, and TRPV1. In detail, in the binding pocket of caspase-1, large hydrophobic interactions with the ligand were formed by residues Trp340, Pro343, and Ala284; in GSK3Β, the hydrophobic interactions were formed by residues Val110, Leu188, Ala83, Leu132, Val70, Phe67, and Ile62; in PSEN1, by residues Phe14, Ile408, Ile135, Phe6, Trp404, Leu142, and Ala98; and in TRPV1, by residues Phe543, Phe522, Met547, Val518, Leu515, Ile573, Ala566, Leu553, and Ile569.

Figure 3: Binding conformations of AS-IV and four targets obtained from docking simulation. (a–d) The binding mode of AS-IV to (a) caspase-1, (b) GSK3Β, (c) PSEN1, and (d) TRPV1 in the active site. (e–h) Stereoview of the binding mode of AS-IV with its receptors, i.e., (e) caspase-1, (f) GSK3Β, (g) PSEN1, and (h) TRPV1, in the binding site, where the H-bonds are depicted as black dotted lines. (i–l) Detailed view of the 2-D ligand interactions of AS-IV with (i) caspase-1, (j) GSK3Β, (k) PSEN1, and (l) TRPV1.

AS-IV interacted with many residues in the active site of caspase-1, and three H-bond networks were formed (Figure 3). AS-IV forms H-bond networks with GSK3Β via Lys85, Val135, Lys60, Tyr134, Arg141, and Asn64; it forms an H-bond interaction with PSEN1 via Ala139, and with TRPV1 via Asn551, Thr550, Arg557, and Ser512 (Figure 3). AS-IV is well suited to the receptor binding pockets, as its binding to the amino acids was tight and reached deep into the cavity. The binding free energies of AS-IV with caspase-1, GSK3Β, PSEN1, and TRPV1 were -5.30 kcal/mol, -4.85 kcal/mol, -6.41 kcal/mol, and -6.07 kcal/mol, respectively. These results indicate that AS-IV shows high binding affinities for its targets.

## 3.6. Interaction of PPARγ with Caspase-1, GSK3Β, PSEN1, and TRPV1

Figures 4(a)–4(c) depict the binding interactions of AS-IV with PPARγ after docking simulations. For the target PPARγ, AS-IV is directed toward the binding site and stabilized by hydrogen-bonding interactions with Gln343, Cys285, and Ser289. Five critical proteins in the network, including PPARγ, caspase-1, GSK3Β, PSEN1, and TRPV1, were selected to further validate the PPI. As shown in Figure 4(d), these five proteins show a close interaction.

Figure 4: Interaction of PPARγ with caspase-1, GSK3Β, PSEN1, and TRPV1. (a) The binding mode of AS-IV to PPARγ in the active site. (b) Stereoview of the binding mode of AS-IV with PPARγ in the binding site, where the H-bonds are depicted as black dotted lines. (c) Detailed view of the 2-D ligand interaction of AS-IV with PPARγ. (d) The PPI network of PPARγ with caspase-1, GSK3Β, PSEN1, and TRPV1.
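For orientation, the free energies reported in Section 3.5 can be translated into approximate dissociation constants via the standard thermodynamic relation ΔG = RT ln(Kd). The conversion below is our own rough illustration, not part of the paper's docking protocol:

```python
import math

# Back-of-the-envelope conversion of reported docking energies to Kd.
R = 1.987e-3          # gas constant, kcal/(mol*K)
T = 298.15            # temperature, K
energies = {"caspase-1": -5.30, "GSK3B": -4.85, "PSEN1": -6.41, "TRPV1": -6.07}
for target, dG in energies.items():
    Kd = math.exp(dG / (R * T))            # dissociation constant, mol/L
    print(f"{target}: dG = {dG} kcal/mol -> Kd ~ {Kd * 1e6:.0f} uM")
```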
## 3.7. Effect of AS-IV on AβO-Induced Memory Impairment and Pathological Changes

A fear conditioning (FC) task was performed to assess, via the intensity of freezing to the context and to the auditory cue, the effects of AS-IV on fear memory in AβO-infused mice. During the adaptation session, there was no difference in freezing time among the experimental groups (data not shown). Upon exposure to the context and the auditory cue, freezing responses were higher in sham mice than in AβO-infused mice (Figures 5(a) and 5(b)). The reduction in freezing time of AβO-infused mice was reversed by administration of AS-IV (10, 20, and 40 mg/kg) or donepezil, a positive control drug. These results suggest that AS-IV prevented AβO-induced contextual and cued fear memory impairments.

Figure 5: Effects of AS-IV on AβO-induced fear memory impairment and pathological changes in mice. (a) The freezing time of contextual memory. (b) The freezing time of cued memory. (c) Representative images of HE staining in the hippocampus (200x). Scale bar: 50 μm. (d) The content of Aβ1-42 in the hippocampus measured by ELISA. (e) The expression of p-tau protein in the hippocampus measured by western blotting. (f) MAP-2 expression in the hippocampus measured by immunofluorescence (×200). Scale bar: 50 μm. Data are expressed as the mean ± SD (N = 8 or 6 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05 (one-way ANOVA followed by the Tukey test).

HE staining showed that the pyramidal cells in the CA1 region of the hippocampus of sham mice had intact cell bodies and round nuclei with a tight arrangement, and no cell loss was found. In AβO-infused mice, however, the pyramidal layer was disintegrated and neuronal loss was observed in the CA1 region. Additionally, neurons with shrunken or irregularly shaped cell bodies and degenerated nuclei were also found in the hippocampus of AβO-infused mice (Figure 5(c)). It is worth mentioning that AS-IV (10, 20, and 40 mg/kg) administration attenuated the structural damage and loss of neurons to some extent relative to AβO-infused mice, indicating a neuroprotective effect of AS-IV.

Next, the levels of Aβ1-42 and phosphorylated tau were measured in the hippocampus. There was no difference in the hippocampal Aβ1-42 level among the experimental groups (Figure 5(d)). Compared with sham mice, phosphorylated tau expression was increased significantly in AβO-infused mice, and AS-IV treatment reduced hippocampal phosphorylated tau expression compared with AβO-infused mice (Figure 5(e)).

We also observed MAP-2 expression in the hippocampus by immunofluorescence. In sham mice there were a large number of MAP-2+ cells, with regularly arranged neurons and distinct neurites arranged in bundles. Compared with sham mice, the number of MAP-2+ cells was remarkably reduced, the arrangement of dendrites was disordered, and the length of neurites was significantly shortened in the hippocampus of AβO-infused mice. In contrast, AS-IV (20 mg/kg) administration reversed the inhibitory effects of AβO on the growth of MAP-2+ neurites (Figure 5(f)). Based on these findings, AS-IV administration alleviated AβO-induced neuronal injury and reduced tau phosphorylation in the hippocampus but had no effect on the endogenous Aβ1-42 level in AβO-infused mice.
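All group comparisons in these results (see the figure legends) use a one-way ANOVA followed by Tukey's test. The following is a minimal sketch of that workflow; the freezing-time values are hypothetical, for illustration only, and are not the study's data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical freezing times (% of session), N = 8 per group.
rng = np.random.default_rng(0)
groups = {
    "sham": rng.normal(60, 8, 8),
    "AbO": rng.normal(35, 8, 8),          # AbO = AβO infusion
    "AS-IV+AbO": rng.normal(52, 8, 8),
}

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD identifies which pairs of groups differ.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```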
## 3.8. AS-IV Suppresses AβO-Induced Synaptic Deficit in the Hippocampus

The effects of AS-IV on synaptic protein expression were investigated by determining the expression of PSD95, SYN, GAP43, and ARC. Immunofluorescence assays showed that the synaptic proteins PSD95, SYN, and GAP43 were all significantly reduced in hippocampal regions after AβO infusion compared with sham mice. In contrast, AS-IV administration increased the immunoreactivity of PSD95, SYN, and GAP43 compared with AβO-infused mice (Figures 6(a) and 6(b)).

Figure 6: AS-IV suppresses AβO-induced synaptic deficits. (a) PSD95, SYN, and GAP43 expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 5 μm. (b) Fluorescence intensity data of PSD95, SYN, and GAP43 expression. (c) The protein expression of synaptic plasticity markers in the hippocampus measured by western blotting. (d) Relative quantitative data of PSD95, SYN, GAP43, and ARC protein expression. (e) Changes of dendritic spines in the hippocampus measured by Golgi-Cox staining (1000x). Scale bar: 2 μm. (f) Quantitative data of dendritic spines in the hippocampus. (g) Ultrastructural changes of synapses in the hippocampus measured by TEM (8000x). Scale bar: 50 nm. (h) Quantitative data of synapses in the hippocampus. Data are expressed as the mean ± SD (N = 6 or 4 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05 (one-way ANOVA followed by the Tukey test).

Immunoblotting likewise showed a significant decrease in the expression of PSD95, SYN, and GAP43 in response to AβO infusion, while AS-IV administration significantly ameliorated the AβO-induced downregulation of these synaptic proteins in the hippocampus (Figures 6(c) and 6(d)). By contrast, there was no difference among these groups of mice regarding ARC expression (Figures 6(c) and 6(d)).

We next measured the density of dendritic spines in hippocampal neurons by Golgi-Cox staining. The density of dendritic spines in hippocampal neurons of AβO-infused mice was significantly lower than that in sham mice, but these AβO infusion-induced changes in dendritic spine density were significantly ameliorated by AS-IV (20 mg/kg) administration (Figures 6(e) and 6(f)).

We further used transmission electron microscopy to examine the synaptic ultrastructure of hippocampal neurons. AβO infusion resulted in a significant decrease in the number of hippocampal synapses compared with sham mice, whereas AS-IV (20 mg/kg) administration significantly ameliorated this synaptic loss (Figures 6(g) and 6(h)). Overall, the results indicate that AS-IV affords protection against AβO-induced synaptic deficits.

## 3.9. AS-IV Promotes AβO Infusion-Inhibited PPARγ Expression in the Hippocampus

The hippocampus was collected at four time points after AβO infusion (2 h, 1 d, 14 d, and 28 d). The expression of PPARγ was significantly decreased at all four time points after AβO infusion (Figure 7(a)). By contrast, AS-IV attenuated the decrease of PPARγ in AβO-infused mice. A specific PPARγ antagonist, GW9662, was used to suppress PPARγ activation in AβO-infused mice; interestingly, the effect of AS-IV was blocked by GW9662 in the hippocampus of AβO-infused mice (Figure 7(b)).

Figure 7: AβO infusion-inhibited PPARγ expression was promoted by AS-IV. (a) The expression of PPARγ protein at 2 h, 1 d, 14 d, and 28 d after AβO infusion in the hippocampus, measured by western blotting. (b) The expression of PPARγ protein after AS-IV administration in AβO-infused mice, measured by western blotting. Data are expressed as the mean ± SD (N = 4 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05; compared with AS-IV+AβO, &P < 0.05 (one-way ANOVA followed by the Tukey test).

## 3.10. AS-IV Inhibits AβO-Induced BDNF Reduction via Promoting PPARγ Expression in Mouse Hippocampi

To further explore the neuroprotective mechanism of AS-IV in AβO-infused mice, the levels of PPARγ and BDNF in the hippocampus were examined by immunohistochemistry. Compared with the sham group, PPARγ and BDNF immunoreactivity was decreased in the hippocampus of AβO-infused mice, whereas hippocampal immunoreactivity of PPARγ and BDNF was higher in AS-IV-treated mice than in AβO-infused mice (Figures 8(a)–8(f)). Additionally, the effect of AS-IV on the expression of BDNF and PPARγ was blocked by GW9662 in the hippocampus of AβO-infused mice (Figures 8(a)–8(f)).

Figure 8: AS-IV inhibits AβO-induced BDNF reduction via promoting PPARγ expression. (a) PPARγ and BDNF expression measured by immunohistochemistry (×200). Scale bar: 50 μm. (b–f) Quantitative data of PPARγ and BDNF expression in different regions of the hippocampus. Data are expressed as the mean ± SD (N = 6 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05; compared with AS-IV+AβO, &P < 0.05 (one-way ANOVA followed by the Tukey test).

## 3.11. AS-IV Inhibits AβO-Induced Neuroinflammation via Promoting PPARγ Expression

Our data showed significant differences among the experimental groups with regard to the number of astroglia in the DG region of the hippocampus, as detected by immunofluorescence (Figure 9(a)). Infusion of AβO induced a remarkable activation of astroglial responses in the hippocampus, which was prevented by AS-IV (20 mg/kg) administration. Consistently, infusion of AβO also increased GFAP expression as determined by immunoblotting, while AS-IV (20 mg/kg) administration significantly suppressed GFAP expression in AβO-infused mice (Figure 9(b)). Furthermore, we asked whether PPARγ mediated the anti-inflammatory effect of AS-IV in AβO-infused mice. Interestingly, PPARγ inhibition by GW9662 blocked the inhibitory effects of AS-IV on GFAP immunoreactivity and expression in the hippocampus of AβO-infused mice (Figures 9(a) and 9(b)).

Figure 9: AS-IV inhibits AβO-induced neuroinflammation via promoting PPARγ expression. (a) GFAP expression in the hippocampus measured by immunofluorescence (×400). Scale bar: 20 μm. (b) The expression of GFAP protein in the hippocampus measured by western blotting. (c) The content of IL-1β, IL-6, and TNF-α in the hippocampus measured by ELISA. Data are expressed as the mean ± SD (N = 4 or 6 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05; compared with AS-IV+AβO, &P < 0.05 (one-way ANOVA followed by the Tukey test).

We measured hippocampal IL-1β, IL-6, and TNF-α levels in AβO-infused mice by ELISA. AβO infusion led to an upregulation of IL-1β, IL-6, and TNF-α levels in the hippocampus compared with sham mice, but AS-IV administration suppressed this upregulation of cytokines following AβO infusion. In line with the above findings, this effect of AS-IV was blocked by GW9662 (Figure 9(c)). These results suggest that AS-IV prevented the inflammatory response in the hippocampus via PPARγ.

## 3.12. AS-IV Inhibits AβO-Induced Pyroptotic Cell Death via Promoting PPARγ Expression

As shown in Figures 10(a)–10(c), the protein expression of NLRP3 and cleaved caspase-1 was significantly elevated in the hippocampus of AβO-infused mice compared with sham mice. In contrast, AS-IV (20 mg/kg) administration suppressed the AβO-induced expression of NLRP3, as well as cleaved caspase-1, in the hippocampus of AβO-infused mice.

Figure 10: AS-IV inhibits AβO-induced pyroptotic cell death via promoting PPARγ expression. (a) The protein expression of pyroptosis markers in the hippocampus measured by western blotting. (b) Relative quantitative data of NLRP3 protein expression. (c) Relative quantitative data of cleaved caspase-1 protein expression. (d) Relative quantitative data of IL-1β protein expression. Data are expressed as the mean ± SD (N = 4 per group). Compared with sham, ∗P < 0.05; compared with AβO, #P < 0.05; compared with AS-IV+AβO, &P < 0.05 (one-way ANOVA followed by the Tukey test).

As shown in Figures 10(a)–10(d), AβO infusion significantly increased the levels of IL-1β in the hippocampus, and this increase was inhibited by AS-IV administration. To further confirm the role of PPARγ in the AS-IV-mediated suppression of AβO-induced pyroptosis, the specific PPARγ antagonist GW9662 was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effects of AS-IV against AβO-induced expression of NLRP3 and cleaved caspase-1 were blocked by GW9662. Moreover, the blockade of PPARγ significantly reversed the effect of AS-IV on AβO-induced overexpression of the proinflammatory cytokine IL-1β (Figures 10(a)–10(d)).

## 4. Discussion

In this study, we applied systems pharmacology strategies and in vivo experiments to probe the mechanism of AS-IV in the treatment of AD. AS-IV could interact with 64 targets, and those targets have multiple pharmacological properties relevant to the nervous system, inflammation, cell proliferation, apoptosis, pyroptosis, calcium dysregulation, and steroids. Molecular docking suggested that AS-IV could regulate AD-like phenotypes by binding caspase-1, GSK3Β, PSEN1, and TRPV1. Furthermore, in vivo experiments showed that AS-IV promoted the expression of PPARγ and BDNF in hippocampal neurons of mice infused with AβO and prevented synaptic deficits, inflammation, and memory impairments in AD-like mice. Consistent with the bioinformatics data, the in vivo data also verified that AS-IV could suppress AβO infusion-induced neuronal pyroptosis. This systematic analysis provides new implications for the treatment of AD with AS-IV.

### 4.1. AS-IV Prevents AD Phenotypes through Multiple Mechanisms

In the present study, we screened 64 related targets of AS-IV; together, these targets play important roles in the pathogenesis of AD, possibly through the regulation of cell proliferation, calcium dysregulation, inflammation, pyroptosis, and apoptosis [20, 31–33]. Specifically, the G-protein coupled acetylcholine receptor signaling pathway and the protein kinase B/GSK3B axis are involved in the processes of AD pathogenesis, resulting in cognitive dysfunction [34–36]. Besides, reducing the response to hypoxia and the dysregulation of vasoconstriction could effectively ameliorate vascular dementia [37, 38]. Furthermore, the neuroinflammation caused by the generation of caspase-1-mediated IL-1β and IL-18 is involved in the development and progression of AD [32]. GSK3Β plays an important role in the hyperphosphorylation of tau, one of the pathological hallmarks of AD [35]. PSEN1 mutation is a risk factor for AD [39]. Additionally, TRPV1, a nonselective cation channel, is involved in synaptic plasticity and memory [40]. Our molecular docking results demonstrate that AS-IV can bind caspase-1, GSK3Β, PSEN1, and TRPV1. The binding affinity of AS-IV arises mainly from electrostatic, H-bond, and hydrophobic interactions, supporting the reliability of the docking model. Therefore, AS-IV may improve cognitive impairment by binding to AD-related gene products such as caspase-1, TRPV1, PSEN1, and GSK3Β, reducing cell death, and ultimately inhibiting AD phenotypes.

### 4.2. AS-IV Reduces Tau Hyperphosphorylation in an AD Model

AβO accumulates in the brains of AD patients and induces AD-like cognitive dysfunction [41]. Therefore, AβO-induced AD-like phenotypes may be a promising model for finding treatments [41, 42].
In this study, we investigated the impact of AβO on the brains of mice, confirmed the effect of AS-IV on memory formation in mice infused with AβO, and assessed the underlying mechanisms. Our results demonstrated that intrahippocampal infusion of AβO impaired both contextual and cued fear memory, consistent with a previous study [43]. Conversely, AS-IV prevented AβO-induced contextual and cued fear memory impairment. Considering that the hippocampus is an important brain region involved in the formation and expression of fear memory, our findings suggest that AβO infusion damaged the structure and function of the hippocampus and subsequently impaired the formation of learning and memory, which could be prevented by AS-IV administration.

Similar to previous studies, our findings showed that AβO infusion induced neuronal loss as well as increased tau phosphorylation, suggesting that the pathological changes of the hippocampus induced by AβO infusion may underlie the AD-like behavioral changes [3, 44]. In contrast, AS-IV inhibited the pathological changes of hippocampal neurons and the tau phosphorylation induced by AβO infusion, which may contribute to memory improvement in AD-like mice. It is speculated that Aβ pathology in the AD brain precedes that of tau, and that neurofibrillary tangles develop downstream of Aβ-induced toxicity and eventually lead to neuronal death; moreover, the mutual promotion between the two accelerates the pathogenesis of AD, which is consistent with previous reports [44–46]. We also note that AβO infusion had no effect on the endogenous Aβ1-42 content in the hippocampus, suggesting that it may not cause the increase and accumulation of Aβ or the formation of amyloid plaques in the brain. Bioinformatics prediction indicated that AS-IV can bind GSK3B tightly. As GSK3B is largely responsible for the hyperphosphorylation of tau, this tight interaction might contribute to the effect of AS-IV in reducing tau hyperphosphorylation.

### 4.3. AS-IV Prevents AβO-Induced Synaptic Deficit

Consistent with previous studies [47, 48], our findings demonstrated that AβO exerts neurotoxicity and synaptic toxicity before plaque formation in the brain, causing brain damage and eventually leading to AD-like behaviors. Given the mounting evidence that AβO causes synaptic deficits [3, 49, 50], elucidating the precise molecular pathways has important implications for treating and preventing the disease. Here, we demonstrate that AβO infusion reduced the immunoreactivity and expression levels of PSD95, GAP43, and SYN, similar to previous results [51, 52]. It has been shown that the density of SYN immunoreactivity in the brains of transgenic mice is negatively correlated with Aβ levels but is unrelated to plaque load, indicating that Aβ exerts synaptic toxicity before plaques form [6]. We further found that AS-IV increased the immunoreactivity and expression levels of PSD95, GAP43, and SYN in the hippocampus of AD-like mice. PSD95, GAP43, and SYN are important markers of synaptic plasticity and are positively correlated with hippocampal learning and memory function [14, 53]. Furthermore, ARC plays a key role in synaptic plasticity and memory consolidation [54, 55]. Surprisingly, we noted no significant difference in ARC expression among the experimental mice, suggesting that AβO infusion did not target ARC.
Our Golgi-Cox staining and TEM results further showed that AS-IV increased the density of dendritic spines and the number of synapses in hippocampal neurons, suggesting that AS-IV ameliorated synaptic structural damage and alleviated synaptic toxicity in the hippocampus of mice infused with AβO.

In a previous study, we reported that AS-IV promoted PPARγ expression in cultured cells and activated the BDNF-TrkB signaling pathway [20]. Our in vivo findings further showed that PPARγ expression in the hippocampus of mice infused with AβO was significantly decreased along with the reduction of BDNF expression, while AS-IV significantly prevented the AβO-induced inhibition of PPARγ and BDNF expression. Considering the important role of the BDNF-TrkB signaling pathway in synaptic function [29], these data further support that AS-IV prevents AβO-induced synaptic deficits.

### 4.4. AS-IV Prevents AβO-Induced Neuroinflammation and Pyroptosis

Numerous studies have confirmed that neuroinflammation accelerates the pathogenesis of AD [46, 56, 57]. In this study, we found that AβO infusion increased the immunoreactivity and expression of GFAP and upregulated IL-1β, IL-6, and TNF-α levels in the hippocampus, and these changes were reversed by AS-IV. These results suggest that AS-IV inhibited AβO-induced neuroinflammation in the brain, which was beneficial for the improvement of cognitive function and further confirmed the network screening results.

PPARγ plays a neuroprotective role by reducing brain inflammation and Aβ production [58, 59]. Our findings showed that AβO infusion inhibited PPARγ expression in mice, implying that PPARγ participates in the inflammatory response of AD-like mice. Furthermore, AS-IV blocked the AβO-induced inhibition of PPARγ expression. Pyroptosis is an inflammatory form of programmed cell death that has been implicated in neurological pathogenesis [60]. Reducing pyroptosis was shown to alleviate cognitive impairment in AD animal models [61] and the progression of Parkinson's disease [62]. Interestingly, NLRP3 has been reported to initiate neuronal pyroptosis [63, 64]. Indeed, NLRP3 inhibition has been shown to exert neuroprotective effects through the suppression of pyroptosis [65] and to improve neurological function in a transgenic mouse model of AD [63]. In this study, we demonstrated that AS-IV could inhibit AβO-induced pyroptotic neuronal death, whereas the PPARγ antagonist GW9662 blocked this beneficial effect of AS-IV. In the systematic analyses, we also found that AS-IV had a high binding capacity for caspase-1, which might indicate a potential function of AS-IV in pyroptosis.

### 4.5. AS-IV Reduces Tau Hyperphosphorylation, Synaptic Deficit, Neuroinflammation, and Pyroptosis via Regulating PPARγ

In this study, we found that AβO administration progressively reduced PPARγ expression in the hippocampus from 2 h to 1 day and kept PPARγ at a relatively low level from 1 day to 28 days. These data suggest that the reduction of PPARγ is an early event after AβO administration. AS-IV could prevent the AβO-induced reduction of PPARγ. The effects of AS-IV on brain inflammation, pyroptosis, and synaptic deficits in AβO-induced AD phenotypes might be PPARγ dependent. On the one hand, the PPARγ antagonist blocked the effects of AS-IV on PPARγ expression, brain inflammation, and pyroptosis, as well as on BDNF expression. On the other hand, the PPI network indicated that PPARγ, caspase-1, GSK3Β, PSEN1, and TRPV1 interact closely.
The reduced expression of PPARγ induced by AβO administration contributes to the deregulation of caspase-1, GSK3Β, PSEN1, and TRPV1, which might lead to brain inflammation, pyroptosis, and synaptic deficits.

## 5. Conclusions

In summary, our present study indicates that AS-IV could suppress tau hyperphosphorylation, synaptic deficits, neuroinflammation, and pyroptosis to prevent AD-like phenotypes, likely through interactions of PPARγ with caspase-1, GSK3Β, PSEN1, and TRPV1. This study offers a novel and reliable strategy for studying traditional Chinese medicine monomers.

---
*Source: 1020614-2021-09-26.xml*
# Approximation of Bivariate Functions via Smooth Extensions

**Authors:** Zhihua Zhang
**Journal:** The Scientific World Journal (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102062

---

## Abstract

For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. To solve these problems, in this paper we extend the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations, determined by the bivariate function itself and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series, or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the resulting Fourier or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, our extension methods yield Fourier and wavelet approximations of the bivariate function on the general domain with small error.

---

## Body

## 1. Introduction

In recent decades, various approximation tools have been widely developed [1–14]. For example, a smooth periodic function can be approximated by trigonometric polynomials; a square-integrable smooth function can be expanded into a wavelet series and approximated by partial sums of that series; and a smooth function on a cube can be approximated well by polynomials. However, for a smooth function on a general domain with arbitrary shape, even if it is infinitely many times differentiable, it is difficult to do Fourier approximation or wavelet approximation. In this paper, we will extend a function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. After that, it will be easy to do Fourier approximation or wavelet approximation. For the higher-dimensional case, the method of smooth extension is similar to that in the two-dimensional case, but the representations of the smooth extensions become too complicated. Therefore, in this paper, we mainly consider the smooth extension of a bivariate function on a planar domain. Incidentally, for the one-dimensional case, since the bounded domain reduces to a closed interval, the smooth extension can be regarded as a corollary of the two-dimensional case.

This paper is organized as follows. In Section 2, we state the main theorems on the smooth extension of a function on a general domain and their applications. In Sections 3 and 4, we give a general method of smooth extension and complete the proofs of the main theorems. In Section 5, we use our extension method to discuss two important special cases of smooth extensions.

Throughout this paper, we denote T=[0,1]2 and the interior of T by To, and we always assume that Ω is a simply connected domain. We say that f∈Cq(Ω) if the derivatives (∂i+jf/∂xi∂yj) are continuous on Ω for 0≤i+j≤q. We say that f∈C∞(Ω) if all derivatives (∂i+jf/∂xi∂yj) are continuous on Ω for i,j∈ℤ+.
We say that a function h(x,y) is a γ-periodic function if h(x+γk, y+γl)=h(x,y) ((x,y)∈T; k,l∈ℤ), where γ is an integer. We adopt the conventions that 0!=1 and that the notation [α] denotes the integer part of the real number α.

## 2. Main Theorems and Applications

In this section, we state the main results of smooth extensions and their applications in Fourier analysis and wavelet analysis.

### 2.1. Main Theorems

Our main theorems are stated as follows.

Theorem 1. Let f∈C∞(Ω), where Ω⊂To and the boundary ∂Ω is a piecewise infinitely many times smooth curve. Then, for any r∈ℤ+, there is a function F∈Cr(T) such that
(i) F(x,y)=f(x,y) ((x,y)∈Ω);
(ii) (∂i+jF/∂xi∂yj)(x,y)=0 on the boundary ∂T for 0≤i+j≤r;
(iii) on the complement T∖Ω, F(x,y) can be expressed locally in the forms (1) ∑j=0L ξj(x)yj, or ∑j=0L ηj(y)xj, or ∑i,j=0L cij xi yj, where L is a positive integer and each coefficient cij is constant.

Theorem 2. Let f∈C∞(Ω), where Ω is stated as in Theorem 1. Then, for any r∈ℤ+, there exists a 1-periodic function Fp∈Cr(ℝ2) such that Fp(x,y)=f(x,y) ((x,y)∈Ω).

Theorem 3. Let f∈C∞(Ω), where Ω is stated as in Theorem 1. Then, for any r∈ℤ+, there exists a function Fc∈Cr(ℝ2) with compact support T such that Fc(x,y)=f(x,y) ((x,y)∈Ω).

In Sections 3 and 4, we give constructive proofs of Theorems 1–3. In these three theorems, we assume that f∈C∞(Ω). If f∈Cq(Ω) (q a nonnegative integer), then by arguments similar to those of Theorems 1–3 we can obtain the corresponding results.

### 2.2. Applications

Here we show some applications of these theorems.

#### 2.2.1. Approximation by Polynomials

Let F be the smooth extension of f from Ω to T as stated in Theorem 1. Then F∈Cr(T) and F=f on Ω. By ΔN, denote the set of all bivariate polynomials of the form ∑n1,n2=-NN cn1,n2 xn1yn2. Then (2) infP∈ΔN ∥f-P∥Lp(Ω) ≤ infP∈ΔN ∥F-P∥Lp(T), where ∥·∥Lp(D) is the norm of the space Lp(D). The right-hand side of formula (2) is the best approximation of the extension F in ΔN. By (2), the approximation problem of f by polynomials on a domain Ω is reduced to the well-known approximation problem of its smooth extension F by polynomials on the square T [4, 10].

#### 2.2.2. Fourier Analysis

(i) Approximation by Trigonometric Polynomials. Let Fp be the smooth periodic extension of f as in Theorem 2. Then Fp∈Cr(ℝ2) and Fp=f on Ω. By the well-known results [5, 10], the smooth periodic function Fp can be approximated by bivariate trigonometric polynomials very well. Its approximation error can be estimated by the modulus of continuity of its derivatives of order r.

By ΔN*, denote the set of all bivariate trigonometric polynomials of the form (3) ∑n1,n2=-NN cn1,n2* e2πi(n1x+n2y). By Theorem 2, we have (4) infP*∈ΔN* ∥f-P*∥Lp′(Ω) ≤ minP*∈ΔN* ∥Fp-P*∥Lp′(T).
From this and Theorem 2, we see that the approximation problem of f on Ω by trigonometric polynomials is reduced to a well-known approximation problem for smooth periodic functions [5, 7, 10].

(ii) Fourier Series. We expand Fp into a Fourier series [9]: (5) Fp(x,y)=∑(n1,n2)∈ℤ2 τn1,n2 e2πi(n1x+n2y), where τn1,n2=∫T Fp(x,y) e-2πi(n1x+n2y) dxdy. By Theorem 2, we obtain that, for (x,y)∈Ω, (6) f(x,y)=∑(n1,n2)∈ℤ2 τn1,n2 e2πi(n1x+n2y). Denote the partial sum (7) sn1,n2(x,y)=∑k1=0n1 ∑k2=0n2 τk1,k2 e2πi(k1x+k2y). Then we have (8) ∥f(x,y)-sn1,n2(x,y)∥Lp′(Ω) ≤ ∥Fp(x,y)-sn1,n2(x,y)∥Lp′(T). Since the smooth periodic function Fp can be approximated well by the partial sums of its Fourier series [5, 7, 10], this inequality shows that we have constructed a trigonometric polynomial sn1,n2(x,y) which approximates f on Ω very well.

(iii) Odd (Even) Periodic Extension. Let F be the smooth extension of f from Ω to T as stated in Theorem 1. Define Fo on [-1,1]2 by (9) Fo(x,y)={F(x,y), (x,y)∈[0,1]2; -F(-x,y), (x,y)∈[-1,0]×[0,1]; F(-x,-y), (x,y)∈[-1,0]2; -F(x,-y), (x,y)∈[0,1]×[-1,0]}. Then Fo is an odd function. By Theorem 1, we have Fo∈Cr([-1,1]2) and (∂i+jFo/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Again, performing a 2-periodic extension, we obtain a 2-periodic odd function Fpo with Fpo∈Cr(ℝ2). By the well-known results [5, 7, 10], Fpo can be approximated by sine polynomials very well. Moreover, Fpo can be expanded into a Fourier sine series; that is, (10) Fpo(x,y)=∑n1=1∞ ∑n2=1∞ αn1,n2 sin(πn1x) sin(πn2y), where the coefficients are αn1,n2=4∫T Fpo(x,y) sin(πn1x) sin(πn2y) dxdy [9]. Considering the approximation of Fpo by the partial sum, the Fejér sum, and the Vallée-Poussin sum [7, 14] of the Fourier sine series of Fpo, we obtain the approximation of the original function f on Ω by sine polynomials.

Define Fe on [-1,1]2 as follows: (11) Fe(x,y)={F(x,y), (x,y)∈[0,1]2; F(-x,y), (x,y)∈[-1,0]×[0,1]; F(-x,-y), (x,y)∈[-1,0]2; F(x,-y), (x,y)∈[0,1]×[-1,0]}. Then Fe is an even function on [-1,1]2. By Theorem 1, Fe∈Cr([-1,1]2) and (∂i+jFe/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Again, performing a 2-periodic extension, we obtain a 2-periodic even function Fpe with Fpe∈Cr(ℝ2). By the well-known results [5, 10], Fpe can be approximated by cosine polynomials very well. Moreover, Fpe can be expanded into a Fourier cosine series. Considering the partial sum, the Fejér sum, and the Vallée-Poussin sum [5, 7, 14] of the Fourier cosine series of Fpe, we obtain the approximation of the original function f on Ω by cosine polynomials.
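As a concrete illustration of (5)–(8) (our own sketch, not from the paper), one can sample a smooth 1-periodic stand-in for Fp on a uniform grid, compute its Fourier coefficients with the FFT, and observe the rapid partial-sum error decay that the smoothness of the extension guarantees; a univariate example suffices to show the effect:

```python
import numpy as np

# Sample a smooth 1-periodic function on a uniform grid (stand-in for F_p).
N = 256
x = np.arange(N) / N
F = np.exp(np.sin(2 * np.pi * x))

c = np.fft.fft(F) / N                       # Fourier coefficients tau_n

def partial_sum(n_max):
    """Truncated Fourier series keeping frequencies |n| <= n_max."""
    k = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies
    mask = np.abs(k) <= n_max
    return np.real(np.fft.ifft(np.where(mask, c * N, 0)))

for n_max in (2, 4, 8, 16):
    err = np.max(np.abs(F - partial_sum(n_max)))
    print(f"n_max = {n_max:2d}: max error = {err:.2e}")   # drops very rapidly
```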
#### 2.2.3. Wavelet Analysis

(i) Periodic Wavelet Series. Let Fp∈Cr(ℝ2) be as stated in Theorem 2, and let {ψμ}13 be a bivariate smooth wavelet [2]. Then, under a mild condition, the family (12) Ψper:={1}∪{ψμ,m,nper: μ=1,2,3; m∈ℤ+; n=(n1,n2), n1,n2=0,…,2m-1}, where ψμ,m,nper=∑l∈ℤ2 ψμ,m,n(·+l) and ψμ,m,n=2m ψμ(2m·-n), is a periodic wavelet basis. We expand Fp into a periodic wavelet series [2]: (13) Fp=d0,0+∑μ=13 ∑m=0∞ ∑n1,n2=02m-1 dμ,m,n ψμ,m,nper. From this, we can realize the wavelet approximation of f on Ω; for example, if r=2, the partial sum (14) s2M(Fp)=d0,0+∑μ=13 ∑m=0M-1 ∑n1,n2=02m-1 dμ,m,n ψμ,m,nper satisfies ∥Fp-s2M(Fp)∥L2(T)=O(2-2M). From this and Fp(x,y)=f(x,y) ((x,y)∈Ω), we obtain an estimate of the wavelet approximation for a smooth function f on the domain Ω.

(ii) Wavelet Approximation. Let Fc be the smooth function with compact support as in Theorem 3. Let ψ be a univariate Daubechies wavelet and ϕ the corresponding scaling function [2]. Denoting (15) ψ1(x,y)=ϕ(x)ψ(y), ψ2(x,y)=ψ(x)ϕ(y), ψ3(x,y)=ψ(x)ψ(y), we obtain a smooth tensor product wavelet {ψμ(x,y)}μ=13. We expand Fc into the wavelet series (16) Fc(x,y)=∑μ=13 ∑m∈ℤ ∑n∈ℤ2 cμ,m,n ψμ,m,n(x,y), where ψμ,m,n=2m ψμ(2m·-n) and the wavelet coefficients are (17) cμ,m,n=∫ℝ2 Fc(x,y) ψ¯μ,m,n(x,y) dxdy=∫T F(x,y) ψ¯μ,m,n(x,y) dxdy. Since Fc is a smooth function, the wavelet coefficients cμ,m,n decay fast.

On the other hand, since Fc(x,y)=0 for (x,y)∈ℝ2∖T, a lot of wavelet coefficients vanish. In fact, when m0∈ℤ and n0∈ℤ2 satisfy supp ψμ,m0,n0⊂(ℝ2∖T), we have cμ,m0,n0=0. Besides, by condition (iii) in Theorem 1, we know that F is a univariate or bivariate polynomial on T∖Ω. By the moment theorem [2], we know that even more wavelet coefficients vanish.

For example, let m*∈ℤ and n*=(n1*,n2*)∈ℤ2 satisfy supp ψm*,n2*⊂[0,α*], where α*=inf{g(x): x1≤x≤x2}. Then we have (18) c1,m*,n*=2m*(∫0x1+∫x1x2+∫x21) ϕ¯m*,n1*(x) (∫0α* ψ¯m*,n2*(y) F(x,y) dy) dx=:I1+I2+I3. By Lemma 8, we know that (19) F(x,y)=∑j=0L ξj(x)yj ((x,y)∈E1), where E1={(x,y): x1≤x≤x2, 0≤y≤g(x)} and g(x)≥α* (x1≤x≤x2). So (20) I2=∫x1x2 ϕ¯m*,n1*(x) (∫0α* ψ¯m*,n2*(y) (∑j=0L ξj(x)yj) dy) dx. If the Daubechies wavelet ψ chosen by us is L times smooth, then, by the moment theorem and supp ψm*,n2*⊂[0,α*], we have (21) ∫0α* ψ¯m*,n2*(y) yj dy=∫ℝ ψ¯m*,n2*(y) yj dy=0 (0≤j≤2r+1). So I2=0. Similarly, since F(x,y) is a bivariate polynomial on the rectangles H1 and H3 (see Lemma 11), we have I1=I3=0. Furthermore, by (18), we get c1,m*,n*=0.

Therefore, the partial sum of the wavelet series (16) approximates Fc very well, and few wavelet coefficients are needed to reconstruct Fc. Since Fc=f on Ω, the partial sum of the wavelet series (16) approximates the original function f on the domain Ω very well.
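The vanishing of I2 above rests on the moment theorem: a Daubechies wavelet with p vanishing moments integrates to zero against every polynomial of degree below p. The following is a small numerical check of that fact (our own sketch, assuming the PyWavelets package is available):

```python
import numpy as np
import pywt

# Sample the db4 wavelet function on a fine grid; db4 has 4 vanishing moments.
phi, psi, xs = pywt.Wavelet("db4").wavefun(level=10)

# Integrate psi against monomials: the first 4 moments should be close to 0,
# which is exactly what annihilates the polynomial pieces of F in (20).
for j in range(6):
    moment = np.trapz(psi * xs**j, xs)
    print(f"integral of y^{j} * psi(y) dy ~ {moment:.2e}")
```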
From this and Theorem 2, we see that the approximation problem of f on Ω by trigonometric polynomials is reduced to a well-known approximation problem of smooth periodic functions [5, 7, 10].(ii)Fourier Series. We expand Fp into a Fourier series [9] (5)Fp(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y), where τn1,n2=∫TFp(x,y)e-2πi(n1x+n2y)dxdy. By Theorem 2, we obtain that, for (x,y)∈Ω, (6)f(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y). Denote the partial sum (7)sn1,n2(x,y)=∑k1=0n1∑k2=0n2τk1,k2e2πi(k1x+k2y). Then we have (8)∥f(x,y)-sn1,n2(x,y)∥Lp′(Ω)≤∥Fp(x,y)-sn1,n2(x,y)∥Lp′(T). Since the smooth periodic function Fp can be approximated well by the partial sum of its Fourier series [5, 7, 10], from this inequality, we see that we have constructed a trigonometric polynomial sn1,n2(x,y) which can approximate to f on Ω very well.(iii)Odd (Even) Periodic Extension. Let F be the smooth extension of f from Ω to T which is stated in Theorem 1. Define Fo on [-1,1]2 by (9)Fo(x,y)={F(x,y),(x,y)∈[0,1]2,-F(-x,y),(x,y)∈[-1,0]×[0,1],F(-x,-y),(x,y)∈[-1,0]2,-F(x,-y),(x,y)∈[0,1]×[-1,0]. Then Fo is an odd function. By Theorem 1, we have Fo∈Cr([-1,1]2) and (∂i+jFo/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Again, doing a 2-periodic extension, we obtain a 2-periodic odd function Fpo and Fpo∈Cr(ℝ2). By the well-known results [5, 7, 10], Fpo can be approximated by sine polynomials very well. Moreover, Fpo can be expanded into the Fourier sine series; that is, (10)Fpo(x,y)=∑n1=1∞∑n2=1∞αn1,n2sin⁡(πn1x)sin⁡(πn2y), where the coefficients αn1,n2=4∫TFpo(x,y)sin(πn1x)sin(πn2y)dxdy [9]. Considering the approximation of Fpo by the partial sum, the Fejer sum, and the Vallee-Poussin sum [7, 14] of the Fourier sine series of Fpo, we will obtain the approximation of the original function f on Ω by sine polynomials.DefineFe on [-1,1]2 as follows: (11)Fe(x,y)={F(x,y),(x,y)∈[0,1]2,F(-x,y),(x,y)∈[-1,0]×[0,1],F(-x,-y),(x,y)∈[-1,0]2,F(x,-y),(x,y)∈[0,1]×[-1,0]. Then Fe is an even function on [-1,1]2. By Theorem 1, Fe∈Cr([-1,1]2) and (∂i+jFe/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Again, doing a 2-periodic extension, we obtain a 2-periodic even function Fpe and Fpe∈Cr(ℝ2). By the well-known result [5, 10], Fpe can be approximated by cosine polynomials very well. Moreover, Fpe can be expanded into the Fourier cosine series. Considering the partial sum, the Fejer sum, and the Vallee-Poussin sum [5, 7, 14] of the Fourier cosine series of Fpe, we will obtain the approximation of the original function f on Ω by cosine polynomials. ### 2.2.3. Wavelet Analysis (i)Periodic Wavelet Series. Let Fp∈Cr(ℝ2) be stated in Theorem 2. Let {ψμ}13 be a bivariate smooth wavelet [2]. Then, under a mild condition, the families (12)Ψper:={1}⋃‍{ψμ,m,nper,μ=1,2,3;m∈ℤ+;jjjjjjjjjjjjjjjψμ,m,npern=(n1,n2),n1,n2=0,…,2m-1},whereψμ,m,nper=∑l∈ℤ2ψμ,m,n(·+l),ψμ,m,n=2mψμ(2m·-n) are a periodic wavelet basis. We expand Fp into a periodic wavelet series [2] (13)Fp=d0,0+∑μ=13∑m=0∞∑n1,n2=02m-1dμ,m,nψμ,m,nper. From this, we can realize the wavelet approximation of f on Ω, for example, if r=2, its partial sum (14)s2M(Fp)=d0,0+∑μ=13∑m=0M-1∑n1,n2=02m-1dμ,m,nψμ,m,nper satisfies ∥Fp-s2M(Fp)∥L2(T)=O(2-2M). From this and Fp(x,y)=f(x,y)((x,y)∈Ω), we will obtain an estimate of wavelet approximation for a smooth function f on the domain Ω.(ii)Wavelet Approximation. Let Fc be the smooth function with a compact support as in Theorem 3. Let ψ be a univariate Daubechies wavelet and ϕ be the corresponding scaling function [2]. 
Denoting (15)ψ1(x,y)=ϕ(x)ψ(y),ψ2(x,y)=ψ(x)ϕ(y),gggggggggψ3(x,y)=ψ(x)ψ(y), then {ψμ(x,y)}μ=13 is a smooth tensor product wavelet. We expand Fc into the wavelet series (16)Fc(x,y)=∑μ=13∑m∈ℤ∑n∈ℤ2cμ,m,nψμ,m,n(x,y), where ψμ,m,n=2mψμ(2m·-n) and the wavelet coefficients (17)cμ,m,n=∫ℝ2Fc(x,y)ψ¯μ,m,n(x,y)dxdy=∫TF(x,y)ψ¯μ,m,n(x,y)dxdy.‍ Since Fc is a smooth function, the wavelet coefficients cμ,m,n decay fast.On the other hand, sinceFc(x,y)=0, (x,y)∈ℝ2∖T, a lot of wavelet coefficients vanish. In fact, when m0∈ℤ and n0∈ℤ2 satisfy supp⁡ψμ,m0,n0⊂(ℝ2∖T), we have cμ,m0,n0=0. Besides, by condition (iii) in Theorem 1, we know that F is univariate or bivariate polynomials on T∖Ω. By the moment theorem [2], we know that more wavelet coefficients vanish.For example, letm*∈ℤ and n*=(n1*,n2*)∈ℤ2 satisfy supp⁡ψm*,n2*⊂[0,α*], where α*=inf⁡{g(x),x1≤x≤x2}. Then we have (18)c1,m*,n*=2m(∫0x1‍+∫x1x2‍+∫x21‍)ϕ¯m*,n1*(x)c1,m*,n*=×(∫0α*ψ¯m*,n2*(y)F(x,y)dy)dxc1,m*,1=:I1+I2+I3. By Lemma 8, we know that (19)F(x,y)=∑j=0Lξj(x)yj,(x,y)∈E1, where E1={(x,y):x1≤x≤x2,0≤y≤g(x)} and g(x)≥α*(x1≤x≤x2). So (20)I2=∫x1x2ϕ¯m*,n1*(x)(∫0α*ψ¯m*,n2*(y)(∑j=0Lξj(x)yj)dy‍)dx. If the Daubechies wavelet ψ chosen by us is L time smooth, then, by using the moment theorem and supp⁡ψm*,n2*⊂[0,α*], we have (21)∫0α*ψ¯m*,n2*(y)yjdy‍=∫ℝψ¯m*,n2*(y)yjdy=0,kkkl(0≤j≤2r+1). So I2=0. Similarly, since F(x,y) is bivariate polynomials on rectangles H1 and H3 (see Lemma 11), we have I1=I3=0. Furthermore, by (18), we get c1,m*,n*=0.Therefore, the partial sum of the wavelet series (16) can approximate to Fc very well and few wavelet coefficients can reconstruct Fc. Since Fc=f on Ω, the partial sum of the wavelet series (16) can approximate to the original function f on the domain Ω very well. ## 2.2.1. Approximation by Polynomials LetF be the smooth extension of f from Ω to T which is stated as in Theorem 1. Then F∈Cr(T) and F=f on Ω. By ΔN, denote the set of all bivariate polynomials in the form ∑n1,n2=-NNcn1,n2xn1yn2. Then (2)inf⁡P∈ΔN⁡∥f-P∥Lp(Ω)≤inf⁡P∈ΔN⁡∥F-P∥Lp(T), where ∥·∥Lp(D) is the norm of the space Lp(D). The right-hand side of formula (2) is the best approximation of the extension F in ΔN. By (2), we know that the approximation problem of f by polynomials on a domain Ω is reduced to the well-known approximation problem of its smooth extension F by polynomials on the square T [4, 10]. ## 2.2.2. Fourier Analysis (i)Approximation by Trigonometric Polynomials. Let Fp be the smooth periodic extension of f as in Theorem 2. Then Fp∈Cr(ℝ2) and Fp=f on Ω. By the well-known results [5, 10], we know that the smooth periodic function Fp can be approximated by bivariate trigonometric polynomials very well. Its approximation error can be estimated by the modulus of continuity of its r time derivatives.ByΔN*, denote the set of all bivariate trigonometric polynomials in the form (3)∑n1,n2=-NNcn1,n2*e2πi(n1x+n2y). By Theorem 2, we have (4)inf⁡P*∈ΔN*⁡∥f-P*∥Lp′(Ω)≤min⁡P*∈ΔN*⁡∥Fp-P*∥Lp′(T). From this and Theorem 2, we see that the approximation problem of f on Ω by trigonometric polynomials is reduced to a well-known approximation problem of smooth periodic functions [5, 7, 10].(ii)Fourier Series. We expand Fp into a Fourier series [9] (5)Fp(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y), where τn1,n2=∫TFp(x,y)e-2πi(n1x+n2y)dxdy. By Theorem 2, we obtain that, for (x,y)∈Ω, (6)f(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y). Denote the partial sum (7)sn1,n2(x,y)=∑k1=0n1∑k2=0n2τk1,k2e2πi(k1x+k2y). Then we have (8)∥f(x,y)-sn1,n2(x,y)∥Lp′(Ω)≤∥Fp(x,y)-sn1,n2(x,y)∥Lp′(T). 
## 3. Proofs of the Main Theorems

We first give a partition of the complement T∖Ω.

### 3.1. Partition of the Complement of the Domain Ω in T

Since Ω⊂To and ∂Ω is a piecewise infinitely differentiable curve, without loss of generality we can divide the complement T∖Ω into some rectangles and some trapezoids with a curved side. For convenience of representation, we assume that we can choose four points (xν,yν)∈∂Ω (ν=1,2,3,4) such that T∖Ω can be divided into the four rectangles (22)H1=[0,x1]×[0,y1], H2=[x2,1]×[0,y2], H3=[x3,1]×[y3,1], H4=[0,x4]×[y4,1] and four trapezoids with a curved side (23)E1={(x,y): x1≤x≤x2, 0≤y≤g(x)}, E2={(x,y): h(y)≤x≤1, y2≤y≤y3}, E3={(x,y): x4≤x≤x3, g*(x)≤y≤1}, E4={(x,y): 0≤x≤h*(y), y1≤y≤y4}, where g∈C∞([x1,x2]), h∈C∞([y2,y3]), g*∈C∞([x4,x3]), h*∈C∞([y1,y4]), and (24)0<g(x)<1 (x1≤x≤x2), 0<h(y)<1 (y2≤y≤y3), 0<g*(x)<1 (x4≤x≤x3), 0<h*(y)<1 (y1≤y≤y4). From this, we know that T can be expressed as a disjoint union (25)T=Ω∪(∪14Eν)∪(∪14Hν), where each Eν is a trapezoid with a curved side and each Hν is a rectangle (see Figure 1).

Figure 1: Partition of the complement of the domain Ω.

In Sections 3.2 and 3.3 we extend f to each Eν and then to each Hν so that the resulting extension F satisfies the conditions of Theorem 1.

### 3.2. Smooth Extension to Each Trapezoid Eν with a Curved Side

By (23), the trapezoid E1 with curved side y=g(x) (x1≤x≤x2) is (26)E1={(x,y): x1≤x≤x2, 0≤y≤g(x)}. We define two sequences of functions {ak,1(x,y)}0∞ and {bk,1(x,y)}0∞ as follows: (27)a0,1(x,y)=y/g(x), b0,1(x,y)=(y-g(x))/(-g(x)), ak,1(x,y)=((y-g(x))k/k!)(y/g(x))k+1, bk,1(x,y)=(yk/k!)((y-g(x))/(-g(x)))k+1, k=1,2,…. By (27), we deduce that, for x1≤x≤x2, (28)∂lak,1/∂yl(x,g(x))=0 (0≤l≤k-1), ∂kak,1/∂yk(x,g(x))=1, ∂lak,1/∂yl(x,0)=0 (0≤l≤k); ∂lbk,1/∂yl(x,g(x))=0 (0≤l≤k), ∂lbk,1/∂yl(x,0)=0 (0≤l≤k-1), ∂kbk,1/∂yk(x,0)=1.

On E1, we define a sequence of functions {S1(k)(x,y)}0∞ by induction. Let (29)S1(0)(x,y)=f(x,g(x))a0,1(x,y) (x1≤x≤x2, 0≤y≤g(x)). Then, by (27), (30)S1(0)(x,0)=0, S1(0)(x,g(x))=f(x,g(x)) (x1≤x≤x2). Let (31)S1(1)(x,y)=S1(0)(x,y)+a1,1(x,y)(∂f/∂y(x,g(x))-∂S1(0)/∂y(x,g(x)))-b1,1(x,y)∂S1(0)/∂y(x,0) (x1≤x≤x2, 0≤y≤g(x)). Then, by (27)–(30), we obtain that, for x1≤x≤x2, (32)S1(1)(x,g(x))=f(x,g(x)), ∂S1(1)/∂y(x,g(x))=∂f/∂y(x,g(x)), S1(1)(x,0)=0, ∂S1(1)/∂y(x,0)=0.
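The identities (28) drive the whole construction, and they are quick to confirm symbolically before passing to the general step; in the sketch below (sympy-based), the curve g is an arbitrary smooth positive sample and the order k=3 is an arbitrary choice.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = 1 + x**2 / 2                                   # sample curved side y = g(x) > 0
k = 3
a = (y - g)**k / sp.factorial(k) * (y / g)**(k + 1)        # a_{k,1} from (27)
b = y**k / sp.factorial(k) * ((y - g) / (-g))**(k + 1)     # b_{k,1} from (27)

for l in range(k + 1):
    da, db = sp.diff(a, y, l), sp.diff(b, y, l)
    print(l,
          sp.simplify(da.subs(y, g)),   # 0 for l < k and 1 for l = k
          sp.simplify(da.subs(y, 0)),   # 0 for every l <= k
          sp.simplify(db.subs(y, g)),   # 0 for every l <= k
          sp.simplify(db.subs(y, 0)))   # 0 for l < k and 1 for l = k
# the printed table reproduces (28): the only nonzero entries are the two 1's at l = k
```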
In general, let (33)S1(k)(x,y)=S1(k-1)(x,y)+ak,1(x,y)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-bk,1(x,y)∂kS1(k-1)/∂yk(x,0) (x1≤x≤x2, 0≤y≤g(x)).

Lemma 4. For any k∈ℤ+, one has S1(k)∈C∞(E1) and (34)S1(k)(x,y)=∑j=02k+1 ζj,1(x)yj, (x,y)∈E1.

Proof. Since f∈C∞(Ω), g∈C∞([x1,x2]), and g(x)>0 (x1≤x≤x2), by the above construction we know that S1(k)∈C∞(E1) for any k=0,1,…. For k=0, since (35)S1(0)(x,y)=f(x,g(x))a0,1(x,y)=(f(x,g(x))/g(x))y, (34) holds. We assume that (34) holds for k=l-1; that is, (36)S1(l-1)(x,y)=∑j=02l-1 ζj,1(l-1)(x)yj. This implies that (37)∂lS1(l-1)/∂yl(x,g(x))=∑j=l2l-1 (j!/(j-l)!)ζj,1(l-1)(x)(g(x))j-l, ∂lS1(l-1)/∂yl(x,0)=l!ζl,1(l-1)(x). Again, notice that al,1(x,y) and bl,1(x,y) are polynomials in y of degree 2l+1. From this and (33), it follows that (34) holds for k=l. By induction, (34) holds for all k. Lemma 4 is proved.

Below we compute the derivatives ∂lS1(k)/∂yl(x,y) (0≤l≤k) on the curved side Γ1={(x,g(x)): x1≤x≤x2} and the bottom side Δ1={(x,0): x1≤x≤x2} of E1.

Lemma 5. Let S1(k)(x,y) be as above. For any k∈ℤ+, one has (38)∂lS1(k)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)), ∂lS1(k)/∂yl(x,0)=0 (x1≤x≤x2, 0≤l≤k).

Proof. By (30), (38) holds for k=0. Now we assume that (38) holds for k-1. For x1≤x≤x2, by (33), we have (39)∂lS1(k)/∂yl(x,g(x))=∂lS1(k-1)/∂yl(x,g(x))+∂lak,1/∂yl(x,g(x))(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-∂lbk,1/∂yl(x,g(x))∂kS1(k-1)/∂yk(x,0) (0≤l≤k). For l=0,1,…,k-1, by the induction hypothesis, we have (40)∂lS1(k-1)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)). By (28), we have ∂lak,1/∂yl(x,g(x))=0 and ∂lbk,1/∂yl(x,g(x))=0. So we get (41)∂lS1(k)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)). For l=k, note that ∂kak,1/∂yk(x,g(x))=1 and ∂kbk,1/∂yk(x,g(x))=0. By (39), we get (42)∂kS1(k)/∂yk(x,g(x))=∂kS1(k-1)/∂yk(x,g(x))+(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))=∂kf/∂yk(x,g(x)). The first formula of (38) holds for k. By (33), we have (43)∂lS1(k)/∂yl(x,0)=∂lS1(k-1)/∂yl(x,0)+∂lak,1/∂yl(x,0)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-∂lbk,1/∂yl(x,0)∂kS1(k-1)/∂yk(x,0) (x1≤x≤x2, 0≤l≤k). For l=0,…,k-1, by the induction hypothesis and (28), we have ∂lS1(k-1)/∂yl(x,0)=0 and ∂lak,1/∂yl(x,0)=∂lbk,1/∂yl(x,0)=0. So (44)∂lS1(k)/∂yl(x,0)=0. For l=k, since ∂kak,1/∂yk(x,0)=0 and ∂kbk,1/∂yk(x,0)=1, by (43), we have (45)∂kS1(k)/∂yk(x,0)=∂kS1(k-1)/∂yk(x,0)-∂kS1(k-1)/∂yk(x,0)=0. The second formula of (38) holds. By induction, (38) holds for all k, and Lemma 5 follows.

Now we compute the mixed derivatives of S1(k)(x,y) on the curved side Γ1 and the bottom side Δ1 of E1.

Lemma 6. Let Γ1 and Δ1 be the curved side and the bottom side of E1, respectively. Then, for k∈ℤ+, (i) ∂i+jS1(k)/∂xi∂yj(x,y)=∂i+jf/∂xi∂yj(x,y) ((x,y)∈Γ1); (ii) ∂i+jS1(k)/∂xi∂yj(x,y)=0 ((x,y)∈Δ1), where 0≤i+j≤k.

Proof. Let x1≤x≤x2. Then we have (46)(d/dx)(∂l-1f/∂yl-1(x,g(x)))=∂lf/∂x∂yl-1(x,g(x))+∂lf/∂yl(x,g(x))g′(x) (l≥1). By the Newton-Leibniz formula, we have (47)∂l-1f/∂yl-1(x,g(x))=∂l-1f/∂yl-1(x1,g(x1))+∫x1x(∂lf/∂x∂yl-1(t,g(t))+∂lf/∂yl(t,g(t))g′(t))dt. Similarly, replacing f by S1(k) in this formula, we have (48)∂l-1S1(k)/∂yl-1(x,g(x))=∂l-1S1(k)/∂yl-1(x1,g(x1))+∫x1x(∂lS1(k)/∂x∂yl-1(t,g(t))+∂lS1(k)/∂yl(t,g(t))g′(t))dt (l≥1). From this and Lemma 5, it follows that, for any x1≤x≤x2, (49)∫x1x ∂lS1(k)/∂x∂yl-1(t,g(t))dt=∫x1x ∂lf/∂x∂yl-1(t,g(t))dt (1≤l≤k). Differentiating both sides of this formula, we get (50)∂lS1(k)/∂x∂yl-1(x,g(x))=∂lf/∂x∂yl-1(x,g(x)) (1≤l≤k).
Now we start from the equality (51)(d/dx)(∂l-1f/∂x∂yl-2(x,g(x)))=∂lf/∂x2∂yl-2(x,g(x))+∂lf/∂x∂yl-1(x,g(x))g′(x) (l≥2). Arguing as from (46) to (50), we get (52)∂lS1(k)/∂x2∂yl-2(x,g(x))=∂lf/∂x2∂yl-2(x,g(x)) (2≤l≤k). Continuing this procedure, we deduce that (i) holds for 0<i+j≤k. Letting l=0 in Lemma 5, we have S1(k)(x,g(x))=f(x,g(x)); that is, (i) holds for i=j=0. So we get (i). By Lemma 5, ∂jS1(k)/∂yj(x,0)=0 (0≤j≤k). From this and S1(k)∈C∞(E1), we have (53)∂i+jS1(k)/∂xi∂yj(x,0)=0 (0≤i+j≤k), so (ii) holds. Lemma 6 is proved.

From this, we get the following.

Lemma 7. For any positive integer r, denote lr=r(r+1)(r+2)(r+3). Let (54)F(x,y)={S1(lr)(x,y), (x,y)∈E1; f(x,y), (x,y)∈Ω}. Then (i) F∈Clr(Ω∪E1) and F(x,y)=f(x,y) ((x,y)∈Ω); (ii) ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈E1∩∂T, 0≤i+j≤lr).

Proof. By the assumption f∈C∞(Ω), Lemma 4 (S1(k)∈C∞(E1)), and Lemma 6(i): (55)∂i+jS1(k)/∂xi∂yj(x,y)=∂i+jf/∂xi∂yj(x,y) ((x,y)∈Γ1, 0≤i+j≤k), where Γ1=Ω∩E1, we get (i). By Lemma 6(ii) and E1∩∂T=Δ1, we get (ii). Lemma 7 is proved.

For ν=2,3,4, by a similar method, we define Sν(k)(x,y) on each trapezoid Eν with a curved side. The representations of Sν(k)(x,y) are stated in Section 4.1.

Lemma 8. For any ν=1,2,3,4, let (56)F(x,y)={Sν(lr)(x,y), (x,y)∈Eν; f(x,y), (x,y)∈Ω}, where lr=r(r+1)(r+2)(r+3). Then, for ν=1,2,3,4, one has the following: (i) F∈Clr(Ω∪Eν); (ii) ∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈Eν∩∂T, for 0≤i+j≤lr; (iii) F(x,y) can be expressed in the form (57)F(x,y)=∑j=02lr+1 ζj,1(x)yj, (x,y)∈E1; F(x,y)=∑j=02lr+1 ζj,2(y)xj, (x,y)∈E2; F(x,y)=∑j=02lr+1 ζj,3(x)yj, (x,y)∈E3; F(x,y)=∑j=02lr+1 ζj,4(y)xj, (x,y)∈E4.

Proof. By Lemma 7, we have (58)F∈Clr(Ω∪E1), ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈E1∩∂T, 0≤i+j≤lr). By the argument of Lemma 7, for ν=2,3,4, we likewise have (59)F∈Clr(Ω∪Eν), ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈Eν∩∂T, 0≤i+j≤lr). From this, we get (i) and (ii). The proof of (iii) is similar to the argument of Lemma 4. Lemma 8 is proved.

### 3.3. Smooth Extension to Each Rectangle Hν

We have completed the smooth extension of f to each trapezoid Eν with a curved side. In this subsection we complete the smooth extension of the obtained function F to each rectangle Hν. First we consider the smooth extension of F to H1. We divide this procedure into two steps.

Step 1. By Lemma 8, F(x,y)=S4(lr)(x,y) on E4. Now we construct the smooth extension of S4(lr)(x,y) from E4 to H1, where S4(lr)(x,y) is stated in Section 4.2 and lr=r(r+1)(r+2)(r+3). Let (60)αk,11(y)=((y-y1)k/k!)(y/y1)k+1, βk,11(y)=(yk/k!)((y-y1)/(-y1))k+1, k=0,1,…, and let (61)M1(0)(x,y)=S4(lr)(x,y1)α0,11(y), M1(k)(x,y)=M1(k-1)(x,y)+αk,11(y)(∂kS4(lr)/∂yk(x,y1)-∂kM1(k-1)/∂yk(x,y1))-βk,11(y)∂kM1(k-1)/∂yk(x,0), k=1,2,…,τr ((x,y)∈H1), where τr=r(r+2).

Lemma 9. Let {J1,l}14 be the four sides of the rectangle H1: (62)J1,1={(x,y1), 0≤x≤x1}, J1,2={(0,y), 0≤y≤y1}, J1,3={(x,0), 0≤x≤x1}, J1,4={(x1,y), 0≤y≤y1}. Then one has the following: (i) M1(τr)(x,y)=∑i,j=02lr+1 di,j(1)xiyj, where each di,j(1) is a constant; (ii) ∂i+jM1(τr)/∂xi∂yj(x,y)=∂i+jS4(lr)/∂xi∂yj(x,y) ((x,y)∈J1,1); (iii) ∂i+jM1(τr)/∂xi∂yj(x,y)=0 ((x,y)∈J1,2); (iv) ∂i+jM1(τr)/∂xi∂yj(x,y)=0 ((x,y)∈J1,3), where 0≤i+j≤τr.

Proof. By Lemma 8(iii), we have (63)S4(lr)(x,y)=∑j=02lr+1 ζj,4(y)xj. So ∂kS4(lr)/∂yk(x,y1) is a polynomial of degree 2lr+1 with respect to x. Since ατr,11(y) and βτr,11(y) are both polynomials of degree 2τr+1, (i) follows from (61). Arguing as in Lemma 6, we get (ii) and (iv). Since (0,y1)∈(E4∩∂T), by Lemma 7, we have (64)∂i+jS4(lr)/∂xi∂yj(0,y1)=∂i+jF/∂xi∂yj(0,y1)=0 (0≤i+j≤lr).
By the definition of M1(0) and (64), we have (65)∂i+jM1(0)/∂xi∂yj(0,y)=∂iS4(lr)/∂xi(0,y1)·(djα0,11/dyj)(y)=0 (0≤i+j≤lr, y∈ℝ). We assume that (66)∂i+jM1(k-1)/∂xi∂yj(0,y)=0 (0≤i+j≤lr-(1/2)k(k-1)). By (61), we get (67)∂i+jM1(k)/∂xi∂yj(0,y)=∂i+jM1(k-1)/∂xi∂yj(0,y)+(djαk,11/dyj)(y)(∂k+iS4(lr)/∂xi∂yk(0,y1)-∂k+iM1(k-1)/∂xi∂yk(0,y1))-(djβk,11/dyj)(y)∂k+iM1(k-1)/∂xi∂yk(0,0). For 0≤i+j≤lr-(1/2)k(k+1), we have 0≤i+j≤lr-(1/2)k(k-1) and 0≤i+k≤lr-(1/2)k(k-1). Again, by the induction hypothesis, we get (68)∂i+jM1(k-1)/∂xi∂yj(0,y)=0, ∂k+iM1(k-1)/∂xi∂yk(0,y1)=0. By (64), we have ∂k+iS4(lr)/∂xi∂yk(0,y1)=0. From this and (67), we get (69)∂i+jM1(k)/∂xi∂yj(0,y)=0 (0≤y≤y1, 0≤i+j≤lr-(1/2)k(k+1)). Taking k=τr, we have (70)∂i+jM1(τr)/∂xi∂yj(0,y)=0 (0≤y≤y1, 0≤i+j≤lr-(1/2)τr(τr+1)). Since lr-(1/2)τr(τr+1)≥τr, we get (iii). Lemma 9 is proved.

Step 2. By Lemma 8, F(x,y)=S1(lr)(x,y) on E1. We consider the difference S1(lr)(x,y)-M1(τr)(x,y). It is infinitely differentiable on E1, since M1(τr)(x,y) is a polynomial. Now we construct its smooth extension from E1 to the rectangle H1 as follows. Let (71)αk,14(x)=((x-x1)k/k!)(x/x1)k+1, βk,14(x)=(xk/k!)((x-x1)/(-x1))k+1, k=0,1,…, and let (72)N1(0)(x,y)=(S1(lr)(x1,y)-M1(τr)(x1,y))α0,14(x), N1(k)(x,y)=N1(k-1)(x,y)+αk,14(x)(∂k(S1(lr)-M1(τr))/∂xk(x1,y)-∂kN1(k-1)/∂xk(x1,y))-βk,14(x)∂kN1(k-1)/∂xk(0,y), k=1,2,…,r ((x,y)∈H1). From this, we obtain the following.

Lemma 10. N1(r)(x,y) possesses the following properties: (i) ∂i+jN1(r)/∂xi∂yj(x,y)=∂i+jS1(lr)/∂xi∂yj(x,y)-∂i+jM1(τr)/∂xi∂yj(x,y) on J1,4; (ii) ∂i+jN1(r)/∂xi∂yj(x,y)=0 on J1,2; (iii) ∂i+jN1(r)/∂xi∂yj(x,y)=0 on J1,1; (iv) ∂i+jN1(r)/∂xi∂yj(x,y)=0 on J1,3, where 0≤i+j≤r and {J1,ν}14 are stated in (62); (v) N1(r)(x,y)=∑i,j=02lr+1 τi,j(1)xiyj, where each τi,j(1) is a constant.

Proof. Arguments similar to those for Lemma 9(ii) and (iv) give conclusions (i) and (ii). Now we prove (iii) and (iv). By Lemma 6(i) and Lemma 9(ii), as well as lr≥τr, we get that, for 0≤i+j≤τr, (73)∂i+jS1(lr)/∂xi∂yj(x1,y1)=∂i+jf/∂xi∂yj(x1,y1)=∂i+jS4(lr)/∂xi∂yj(x1,y1)=∂i+jM1(τr)/∂xi∂yj(x1,y1). So we have (74)∂i+jN1(0)/∂xi∂yj(x,y1)=∂j(S1(lr)-M1(τr))/∂yj(x1,y1)·(diα0,14/dxi)(x)=0 (0≤x≤x1, 0≤i+j≤τr). Now we assume that (75)∂i+jN1(k-1)/∂xi∂yj(x,y1)=0 (0≤x≤x1, 0≤i+j≤τr-(1/2)k(k-1)). By (72) and (73), (76)∂i+jN1(k)/∂xi∂yj(x,y1)=∂i+jN1(k-1)/∂xi∂yj(x,y1)+(diαk,14/dxi)(x)(∂k+j(S1(lr)-M1(τr))/∂xk∂yj(x1,y1)-∂k+jN1(k-1)/∂xk∂yj(x1,y1))-(diβk,14/dxi)(x)∂k+jN1(k-1)/∂xk∂yj(0,y1)=0 (0≤x≤x1, 0≤i+j≤τr-(1/2)k(k+1)). By induction, we get (77)∂i+jN1(k)/∂xi∂yj(x,y1)=0 (0≤x≤x1, 0≤i+j≤τr-(1/2)k(k+1)). From this and τr-(1/2)r(r+1)≥r, we get (iii). By Lemma 6(ii) and Lemma 9(iv), we get that (78)∂i+jS1(lr)/∂xi∂yj(x,0)=0 (0≤i+j≤lr), ∂i+jM1(τr)/∂xi∂yj(x,0)=0 (0≤i+j≤τr). From this and (72), by an argument similar to the proof of (iii), we get (iv). By Lemma 8(iii) and Lemma 9(i), we deduce that (S1(lr)-M1(τr))(x1,y) is a polynomial of degree 2lr+1 with respect to y. From this and (72), we get (v). Lemma 10 is proved.

By Lemmas 9 and 10, we obtain that, for 0≤i+j≤r, (79)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=∂i+jS4(lr)/∂xi∂yj(x,y) on J1,1; (80)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=0 on J1,2∪J1,3; (81)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=∂i+jS1(lr)/∂xi∂yj(x,y) on J1,4.

Lemma 11. Let (82)F(x,y)={f(x,y), (x,y)∈Ω; S1(lr)(x,y), (x,y)∈E1; S4(lr)(x,y), (x,y)∈E4; M1(τr)(x,y)+N1(r)(x,y), (x,y)∈H1}.
Then one has (i) F∈Cr(Ω∪E1∪E4∪H1); (ii) ∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E1∪E4∪H1)∩∂T), for 0≤i+j≤r; (iii) F(x,y)=∑i,j=02lr+1 cij(1)xiyj ((x,y)∈H1), where each cij(1) is a constant.

Proof. By Lemma 7, we have F∈Cr(Ω∪E1∪E4). Since S1(lr)∈Cr(E1), (83)M1(τr)+N1(r)∈Cr(H1), E1∩H1=J1,4, by (81), we deduce that F∈Cr(E1∪H1). Since S4(lr)∈Cr(E4), (84)M1(τr)+N1(r)∈Cr(H1), E4∩H1=J1,1, by (79), we deduce that F∈Cr(H1∪E4). So we get (i). By Lemma 8(ii), (85)∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E1∪E4)∩∂T). Since H1∩∂T=J1,2∪J1,3, by (80), we deduce that (86)∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈(H1∩∂T). So we get (ii). From Lemma 9(i), Lemma 10(v), and F(x,y)=M1(τr)(x,y)+N1(r)(x,y) ((x,y)∈H1), we get (iii). Lemma 11 is proved.

For ν=2,3,4, by a similar method, we define F(x,y)=Mν(τr)(x,y)+Nν(r)(x,y) ((x,y)∈Hν); for the representations of Mν(τr)(x,y) and Nν(r)(x,y), see Section 4.2.

### 3.4. The Proofs of the Theorems

Proof of Theorem 1. Let (87)F(x,y)={f(x,y), (x,y)∈Ω; Sν(lr)(x,y), (x,y)∈Eν (ν=1,2,3,4); Mν(τr)(x,y)+Nν(r)(x,y), (x,y)∈Hν (ν=1,2,3,4)}. By (25), F is defined on the whole unit square T. An argument similar to Lemma 11(i)-(ii) shows that (88)F∈Cr(Ω∪E1∪E4∪H1), F∈Cr(Ω∪E1∪E2∪H2), F∈Cr(Ω∪E2∪E3∪H3), F∈Cr(Ω∪E3∪E4∪H4) and that, for 0≤i+j≤r, (89)∂i+jF/∂xi∂yj(x,y)=0 for (x,y) in ((E1∪E4∪H1)∩∂T), ((E1∪E2∪H2)∩∂T), ((E2∪E3∪H3)∩∂T), and ((E3∪E4∪H4)∩∂T). From this and Ω∩∂T=∅, by (25), we have F∈Cr(T) and ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈∂T, 0≤i+j≤r). So we get (i) and (ii). Similar to the argument of Lemma 11(iii), we get (90)F(x,y)=∑i,j=02lr+1 cij(ν)xiyj, (x,y)∈Hν (ν=1,2,3,4), where each cij(ν) is a constant. From this and Lemma 8(iii), we know that, on T∖Ω, F(x,y) can be expressed locally in the form (91)∑j=02lr+1 ξj(x)yj or ∑j=02lr+1 ηj(y)xj or ∑i,j=02lr+1 cijxiyj, so (iii) holds. We have completed the proof of Theorem 1.

The representation of F satisfying the conditions of Theorem 1 is given in Section 4.

Proof of Theorem 2. Let F be the smooth extension of f from Ω to T stated in Theorem 1. Define Fp by (92)Fp(x+k,y+l)=F(x,y) ((x,y)∈T; k,l∈ℤ). Then Fp is a 1-periodic function on ℝ2. By Theorem 1, we know that Fp∈Cr(T) and (93)∂i+jFp/∂xi∂yj(x,y)=0 ((x,y)∈∂T; 0≤i+j≤r). Let Tn1,n2=[n1,n1+1]×[n2,n2+1] (n1,n2∈ℤ). Since Fp is a 1-periodic function, we have Fp∈Cr(Tn1,n2) and, for any n1,n2∈ℤ, (94)∂i+jFp/∂xi∂yj(x,y)=0 ((x,y)∈∂Tn1,n2; 0≤i+j≤r). Noticing that ℝ2=∪n1,n2∈ℤTn1,n2, we have Fp∈Cr(ℝ2). By (92) and Theorem 1(i), we get (95)Fp(x,y)=F(x,y)=f(x,y) ((x,y)∈Ω). Theorem 2 is proved.

Proof of Theorem 3. Let F be the smooth extension of f from Ω to T stated in Theorem 1. Define Fc by (96)Fc(x,y)={F(x,y), (x,y)∈T; 0, (x,y)∈ℝ2∖T}. From Theorem 1(ii), we have (97)∂i+jFc/∂xi∂yj(x,y)=0 ((x,y)∈∂T; 0≤i+j≤r). From this and (96), we get Fc∈Cr(ℝ2). By (96) and Theorem 1(i), we get (98)Fc(x,y)=F(x,y)=f(x,y) ((x,y)∈Ω). Theorem 3 is proved.
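Before turning to the explicit formulas, the induction of Lemmas 5 and 6 can be spot-checked symbolically: run the recursion (33) up to k=2 and verify that every derivative of order at most 2, mixed derivatives included, matches f along the curved side and vanishes on the bottom side. The f and g below are arbitrary smooth samples chosen for this illustration only.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = 1 + x**2 / 2                       # curved side y = g(x), positive
f = x * y**3 + sp.sin(x) * y           # sample smooth f

def a(k): return (y - g)**k / sp.factorial(k) * (y / g)**(k + 1)     # (27)
def b(k): return y**k / sp.factorial(k) * ((y - g) / (-g))**(k + 1)  # (27)

S = f.subs(y, g) * (y / g)             # S_1^{(0)}, cf. (29)
for k in (1, 2):                       # two steps of the recursion (33)
    S = S + a(k) * (sp.diff(f, y, k).subs(y, g) - sp.diff(S, y, k).subs(y, g)) \
          - b(k) * sp.diff(S, y, k).subs(y, 0)

for i in range(3):
    for j in range(3 - i):             # all orders i + j <= 2, cf. Lemma 6
        d = sp.diff(S, x, i, y, j)
        on_curve = sp.simplify(d.subs(y, g) - sp.diff(f, x, i, y, j).subs(y, g))
        on_bottom = sp.simplify(d.subs(y, 0))
        print(i, j, on_curve, on_bottom)   # prints "i j 0 0" in every case
```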
## 4. Representation of the Extension F Satisfying Theorem 1

Let f and Ω be as stated in Theorem 1 and let T be divided as in Section 3.1. The representation of F satisfying the conditions of Theorem 1 is as follows: (99)F(x,y)={f(x,y), (x,y)∈Ω; Sν(lr)(x,y), (x,y)∈Eν (ν=1,2,3,4); Mν(τr)(x,y)+Nν(r)(x,y), (x,y)∈Hν (ν=1,2,3,4)}, where (100)T=Ω∪(∪14Eν)∪(∪14Hν), the rectangles {Hν}14 and the trapezoids {Eν}14 with a curved side are stated in (22) and (23), lr=r(r+1)(r+2)(r+3), and τr=r(r+2).

Below we write out the representations of {Sν(k)(x,y)}14, {Mν(k)(x,y)}14, and {Nν(k)(x,y)}14.

### 4.1. The Construction of Each Sν(k)(x,y)

(i) Denote (101)ak,1(x,y)=((y-g(x))k/k!)(y/g(x))k+1, bk,1(x,y)=(yk/k!)((y-g(x))/(-g(x)))k+1, k=0,1,…. Define S1(k)(x,y) by induction as follows: (102)S1(0)(x,y)=f(x,g(x))a0,1(x,y), S1(k)(x,y)=S1(k-1)(x,y)+ak,1(x,y)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-bk,1(x,y)∂kS1(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈E1).

(ii) Denote (103)ak,2(x,y)=((x-h(y))k/k!)((1-x)/(1-h(y)))k+1, bk,2(x,y)=((x-1)k/k!)((h(y)-x)/(h(y)-1))k+1, k=0,1,…. Define S2(k)(x,y) by induction as follows: (104)S2(0)(x,y)=f(h(y),y)a0,2(x,y), S2(k)(x,y)=S2(k-1)(x,y)+ak,2(x,y)(∂kf/∂xk(h(y),y)-∂kS2(k-1)/∂xk(h(y),y))-bk,2(x,y)∂kS2(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈E2).

(iii) Denote (105)ak,3(x,y)=((y-g*(x))k/k!)((1-y)/(1-g*(x)))k+1, bk,3(x,y)=((y-1)k/k!)((g*(x)-y)/(g*(x)-1))k+1, k=0,1,…. Define S3(k)(x,y) by induction as follows: (106)S3(0)(x,y)=f(x,g*(x))a0,3(x,y), S3(k)(x,y)=S3(k-1)(x,y)+ak,3(x,y)(∂kf/∂yk(x,g*(x))-∂kS3(k-1)/∂yk(x,g*(x)))-bk,3(x,y)∂kS3(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈E3).

(iv) Denote (107)ak,4(x,y)=((x-h*(y))k/k!)(x/h*(y))k+1, bk,4(x,y)=(xk/k!)((x-h*(y))/(-h*(y)))k+1, k=0,1,….
Define S4(k)(x,y) by induction as follows: (108)S4(0)(x,y)=f(h*(y),y)a0,4(x,y), S4(k)(x,y)=S4(k-1)(x,y)+ak,4(x,y)(∂kf/∂xk(h*(y),y)-∂kS4(k-1)/∂xk(h*(y),y))-bk,4(x,y)∂kS4(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈E4).

### 4.2. The Constructions of Each Mν(k)(x,y) and Nν(k)(x,y)

(i) Denote (109)αk,11(y)=((y-y1)k/k!)(y/y1)k+1, βk,11(y)=(yk/k!)((y-y1)/(-y1))k+1, k=0,1,…. Define M1(k)(x,y) by induction as follows: (110)M1(0)(x,y)=S4(lr)(x,y1)α0,11(y), M1(k)(x,y)=M1(k-1)(x,y)+αk,11(y)(∂kS4(lr)/∂yk(x,y1)-∂kM1(k-1)/∂yk(x,y1))-βk,11(y)∂kM1(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈H1). Denote (111)αk,14(x)=((x-x1)k/k!)(x/x1)k+1, βk,14(x)=(xk/k!)((x-x1)/(-x1))k+1, k=0,1,…. Define N1(k)(x,y) by induction as follows: (112)N1(0)(x,y)=(S1(lr)(x1,y)-M1(τr)(x1,y))α0,14(x), N1(k)(x,y)=N1(k-1)(x,y)+αk,14(x)(∂k(S1(lr)-M1(τr))/∂xk(x1,y)-∂kN1(k-1)/∂xk(x1,y))-βk,14(x)∂kN1(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈H1).

(ii) Denote (113)αk,21(x)=((x-x2)k/k!)((1-x)/(1-x2))k+1, βk,21(x)=((x-1)k/k!)((x2-x)/(x2-1))k+1, k=0,1,…. Define M2(k)(x,y) by induction as follows: (114)M2(0)(x,y)=S1(lr)(x2,y)α0,21(x), M2(k)(x,y)=M2(k-1)(x,y)+αk,21(x)(∂kS1(lr)/∂xk(x2,y)-∂kM2(k-1)/∂xk(x2,y))-βk,21(x)∂kM2(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈H2). Denote (115)αk,22(y)=((y-y2)k/k!)(y/y2)k+1, βk,22(y)=(yk/k!)((y-y2)/(-y2))k+1, k=0,1,…. Define N2(k)(x,y) by induction as follows: (116)N2(0)(x,y)=(S2(lr)-M2(τr))(x,y2)α0,22(y), N2(k)(x,y)=N2(k-1)(x,y)+αk,22(y)(∂k(S2(lr)-M2(τr))/∂yk(x,y2)-∂kN2(k-1)/∂yk(x,y2))-βk,22(y)∂kN2(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈H2).

(iii) Denote (117)αk,31(y)=((y-y3)k/k!)((1-y)/(1-y3))k+1, βk,31(y)=((y-1)k/k!)((y3-y)/(y3-1))k+1, k=0,1,…. Define M3(k)(x,y) by induction as follows: (118)M3(0)(x,y)=S2(lr)(x,y3)α0,31(y), M3(k)(x,y)=M3(k-1)(x,y)+αk,31(y)(∂kS2(lr)/∂yk(x,y3)-∂kM3(k-1)/∂yk(x,y3))-βk,31(y)∂kM3(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈H3). Denote (119)αk,32(x)=((x-x3)k/k!)((1-x)/(1-x3))k+1, βk,32(x)=((x-1)k/k!)((x3-x)/(x3-1))k+1, k=0,1,…. Define N3(k)(x,y) by induction as follows: (120)N3(0)(x,y)=(S3(lr)-M3(τr))(x3,y)α0,32(x), N3(k)(x,y)=N3(k-1)(x,y)+αk,32(x)(∂k(S3(lr)-M3(τr))/∂xk(x3,y)-∂kN3(k-1)/∂xk(x3,y))-βk,32(x)∂kN3(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈H3).

(iv) Denote (121)αk,41(x)=((x-x4)k/k!)(x/x4)k+1, βk,41(x)=(xk/k!)((x-x4)/(-x4))k+1, k=0,1,…. Define M4(k)(x,y) by induction as follows: (122)M4(0)(x,y)=S3(lr)(x4,y)α0,41(x), M4(k)(x,y)=M4(k-1)(x,y)+αk,41(x)(∂kS3(lr)/∂xk(x4,y)-∂kM4(k-1)/∂xk(x4,y))-βk,41(x)∂kM4(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈H4). Denote (123)αk,42(y)=((y-y4)k/k!)((1-y)/(1-y4))k+1, βk,42(y)=((y-1)k/k!)((y4-y)/(y4-1))k+1, k=0,1,…. Define N4(k)(x,y) by induction as follows: (124)N4(0)(x,y)=(S4(lr)-M4(τr))(x,y4)α0,42(y), N4(k)(x,y)=N4(k-1)(x,y)+αk,42(y)(∂k(S4(lr)-M4(τr))/∂yk(x,y4)-∂kN4(k-1)/∂yk(x,y4))-βk,42(y)∂kN4(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈H4).
## 5. Corollaries

By using the extension method given in Section 3, we discuss two important special cases.

### 5.1. Smooth Extensions of Functions on a Kind of Domains

Let Ω be a trapezoid with two curved sides: (125)Ω={(x,y): x1≤x≤x2, η(x)≤y≤ξ(x)}, where L1<η(x)<ξ(x)<L2 (x1≤x≤x2) and η,ξ∈Cm([x1,x2]). Denote the rectangle D=[x1,x2]×[L1,L2]. Then D=G1∪Ω∪G2, where G1 and G2 are both trapezoids with a curved side: (126)G1={(x,y): x1≤x≤x2, L1≤y≤η(x)}, G2={(x,y): x1≤x≤x2, ξ(x)≤y≤L2}.

Suppose that f∈Cq(Ω) (q a nonnegative integer). We smoothly extend f from Ω to the trapezoids G1 and G2 with a curved side, as in Section 3.2, so that the extension F is smooth on the rectangle D; moreover, we give a precise formula. It shows that the index of smoothness of F depends not only on the smoothness of f but also on the smoothness of η and ξ.

Denote a0,1(x,y)=(y-L1)/(η(x)-L1) and (127)ak,1(x,y)=((y-η(x))k/k!)((y-L1)/(η(x)-L1))k+1, k=1,2,… (x1≤x≤x2, y∈ℝ). We define {S1(k)(x,y)} on G1 as follows. Let (128)S1(0)(x,y)=f(x,η(x))a0,1(x,y) ((x,y)∈G1), and let k0 be the maximal integer satisfying 1+2+⋯+k0≤q. For k=1,2,…,k0, we define (129)S1(k)(x,y)=S1(k-1)(x,y)+ak,1(x,y)(∂kf/∂yk(x,η(x))-∂kS1(k-1)/∂yk(x,η(x))). Then S1(k)∈Cλk(G1), where λk=min{q-1-2-⋯-k, m}.

Denote a0,2(x,y)=(L2-y)/(L2-ξ(x)) and (130)ak,2(x,y)=((y-ξ(x))k/k!)((L2-y)/(L2-ξ(x)))k+1, k=1,2,… (x1≤x≤x2, y∈ℝ). We define {S2(k)(x,y)} on G2 as follows. Let (131)S2(0)(x,y)=f(x,ξ(x))a0,2(x,y) ((x,y)∈G2). For k=1,2,…,k0, define (132)S2(k)(x,y)=S2(k-1)(x,y)+ak,2(x,y)(∂kf/∂yk(x,ξ(x))-∂kS2(k-1)/∂yk(x,ξ(x))) ((x,y)∈G2). Then S2(k)∈Cλk(G2), where λk is as above.

An argument similar to Lemmas 5 and 6 shows that, for 0≤k≤k0 and 0≤i+j≤min{k,λk}, (133)∂i+jS1(k)/∂xi∂yj(x,η(x))=∂i+jf/∂xi∂yj(x,η(x)), ∂i+jS2(k)/∂xi∂yj(x,ξ(x))=∂i+jf/∂xi∂yj(x,ξ(x)) (x1≤x≤x2). A direct calculation shows that the number (134)τ(q,m)=min{[√(2q+9/4)-3/2], m} is the maximal integer k satisfying k≤λk, where [·] denotes the integral part; indeed, k≤q-(1+2+⋯+k) is equivalent to k(k+3)/2≤q, that is, to k≤√(2q+9/4)-3/2. So τ(q,m)≤λτ(q,m).

By (133), we get that, for 0≤i+j≤τ(q,m), (135)∂i+jS1(τ(q,m))/∂xi∂yj(x,η(x))=∂i+jf/∂xi∂yj(x,η(x)), ∂i+jS2(τ(q,m))/∂xi∂yj(x,ξ(x))=∂i+jf/∂xi∂yj(x,ξ(x)) (x1≤x≤x2). Note that (136)S1(τ(q,m))∈Cλτ(q,m)(G1), S2(τ(q,m))∈Cλτ(q,m)(G2), τ(q,m)≤λτ(q,m)≤q, and recall the assumption f∈Cq(Ω). Now we define a function on D by (137)Fq,m(x,y)={f(x,y), (x,y)∈Ω; S1(τ(q,m))(x,y), (x,y)∈G1; S2(τ(q,m))(x,y), (x,y)∈G2}. From this and (135), we have Fq,m∈Cτ(q,m)(D). This implies the following theorem.

Theorem 12. Let the domain Ω and the rectangle D be as above. If f∈Cq(Ω), then the function Fq,m(x,y) defined in (137) is a smooth extension of f from Ω to D and Fq,m∈Cτ(q,m)(D), where τ(q,m) is given in (134).

In particular, for q=0 and m≥0, we have τ(q,m)=0, so F0,m∈C(D); for q=2 and m≥1, we have τ(q,m)=1, so F2,1∈C1(D); for q=5 and m≥2, we have τ(q,m)=2, so F5,2∈C2(D).
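The closed form (134) can be validated against its defining maximization by brute force; a minimal check:

```python
import math

def tau_closed(q, m):
    # (134): min{ [sqrt(2q + 9/4) - 3/2], m }
    return min(int(math.sqrt(2*q + 2.25) - 1.5), m)

def tau_brute(q, m):
    # the largest k with k <= lambda_k = min{ q - (1 + 2 + ... + k), m }
    best = 0
    for k in range(q + m + 2):
        if k <= min(q - k*(k + 1)//2, m):
            best = k
    return best

assert all(tau_closed(q, m) == tau_brute(q, m)
           for q in range(60) for m in range(12))
print("the closed form (134) agrees with the brute-force definition")
```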
### 5.2. Smooth Extensions of Univariate Functions on Closed Intervals

Let f∈Cq([x1,x2]) and [x1,x2]⊂(0,1). In order to extend f smoothly from [x1,x2] to [0,x1], we construct two polynomials (138)a0(k)(x)=((x-x1)k/k!)(x/x1)k+1, b0(k)(x)=(xk/k!)((x1-x)/x1)k+1, k=0,1,…. Define S0(0)(x)=f(x1)(x/x1) and, for k=1,…,q, (139)S0(k)(x)=S0(k-1)(x)+a0(k)(x)(f(k)(x1)-(S0(k-1))(k)(x1))-b0(k)(x)(S0(k-1))(k)(0) (0≤x≤x1). Then S0(q)(x) is a polynomial of degree ≤2q+1. Similar to the proof of Lemma 5, we get (140)(S0(q))(k)(0)=0, (S0(q))(k)(x1)=f(k)(x1), k=0,1,…,q. These identities can also be checked directly.

To extend f smoothly from [x1,x2] to [x2,1], we construct two polynomials (141)a1(k)(x)=((x-x2)k/k!)((1-x)/(1-x2))k+1, b1(k)(x)=((x-1)k/k!)((x2-x)/(x2-1))k+1, k=0,1,…. Define S1(0)(x)=f(x2)((1-x)/(1-x2)) and, for k=1,…,q, (142)S1(k)(x)=S1(k-1)(x)+a1(k)(x)(f(k)(x2)-(S1(k-1))(k)(x2))-b1(k)(x)(S1(k-1))(k)(1) (x2≤x≤1). Then S1(q)(x) is a polynomial of degree ≤2q+1. Similar to the proof of Lemma 5, we get (143)(S1(q))(k)(x2)=f(k)(x2), (S1(q))(k)(1)=0 (k=0,1,…,q).

Therefore, we obtain the smooth extension F of f from [x1,x2] to [0,1] by (144)F(x)={f(x), x∈[x1,x2]; S0(q)(x), x∈[0,x1]; S1(q)(x), x∈[x2,1]}, where S0(q)(x) and S1(q)(x) are the polynomials of degree ≤2q+1 defined above; then F∈Cq([0,1]) and F(l)(0)=F(l)(1)=0 (l=0,1,…,q). From this, we get the following.

Theorem 13. Let f∈Cq([x1,x2]) and [x1,x2]⊂(0,1). Then there exists a function F∈Cq([0,1]) satisfying F(x)=f(x) (x1≤x≤x2) and F(l)(0)=F(l)(1)=0 (l=0,1,…,q).

Let f∈Cq([x1,x2]) and [x1,x2]⊂(0,1), and let F be the smooth extension of f from [x1,x2] to [0,1] stated in Theorem 13. Let Fp be the 1-periodic extension satisfying Fp(x+n)=F(x) (0≤x≤1, n∈ℤ). Then Fp∈Cq(ℝ) and Fp(x)=f(x) (x∈[x1,x2]). We expand Fp into its Fourier series, which converges fast. From this, we get trigonometric approximation of f∈Cq([x1,x2]).
We may also take the odd or even extension of F from [0,1] to [-1,1] and then extend periodically, obtaining the odd periodic extension Fpo∈Cq(ℝ) or the even periodic extension Fpe∈Cq(ℝ). Expanding Fpo into a sine series and Fpe into a cosine series, we get the sine polynomial approximation and the cosine polynomial approximation of f on [x1,x2]. Alternatively, for F∈Cq([0,1]) we may pad with zero outside [0,1]; the resulting function Fc belongs to Cq(ℝ). We expand Fc into a wavelet series, which converges fast, and, by the moment theorem, a lot of the wavelet coefficients are equal to zero. From this, we get wavelet approximation of f∈Cq([x1,x2]).
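As a concluding illustration, the sketch below (sympy-based; the function f, the interval, and q=3 are arbitrary choices) builds the two boundary polynomials of (144) and verifies the endpoint identities (140) and (143), which is exactly the content of Theorem 13.

```python
import sympy as sp

x = sp.symbols('x')
x1, x2, q = sp.Rational(1, 4), sp.Rational(3, 4), 3
f = x**5 - x + 1                       # sample function on [x1, x2]

def boundary_poly(f, c, e, q):
    """Polynomial matching f^{(k)}(c) for k = 0..q and vanishing to order q
    at e, following (138)-(139) (and (141)-(142) when c = x2, e = 1)."""
    a = lambda k: (x - c)**k / sp.factorial(k) * ((x - e) / (c - e))**(k + 1)
    b = lambda k: (x - e)**k / sp.factorial(k) * ((c - x) / (c - e))**(k + 1)
    S = f.subs(x, c) * (x - e) / (c - e)
    for k in range(1, q + 1):
        S = S + a(k) * (sp.diff(f, x, k).subs(x, c) - sp.diff(S, x, k).subs(x, c)) \
              - b(k) * sp.diff(S, x, k).subs(x, e)
    return sp.expand(S)

S0 = boundary_poly(f, x1, 0, q)        # on [0, x1], cf. (139); degree <= 2q + 1
S1 = boundary_poly(f, x2, 1, q)        # on [x2, 1], cf. (142); degree <= 2q + 1

for k in range(q + 1):                 # the identities (140) and (143)
    assert sp.diff(S0, x, k).subs(x, 0) == 0
    assert sp.simplify(sp.diff(S0, x, k).subs(x, x1) - sp.diff(f, x, k).subs(x, x1)) == 0
    assert sp.simplify(sp.diff(S1, x, k).subs(x, x2) - sp.diff(f, x, k).subs(x, x2)) == 0
    assert sp.diff(S1, x, k).subs(x, 1) == 0
print("F in (144) is C^3 on [0, 1] with F^(l)(0) = F^(l)(1) = 0 for l <= 3")
```

Gluing f with S0(q) and S1(q) as in (144) then gives the Cq extension of Theorem 13 explicitly.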
--- ## Abstract For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. --- ## Body ## 1. Introduction In the recent several decades, various approximation tools have been widely developed [1–14]. For example, a smooth periodic function can be approximated by trigonometric polynomials; a square-integrable smooth function can be expanded into a wavelet series and be approximated by partial sum of the wavelet series; and a smooth function on a cube can be approximated well by polynomials. However, for a smooth function on a general domain with arbitrary shape, even if it is infinitely many time differentiable, it is difficult to do Fourier approximation or wavelet approximation. In this paper, we will extend a function on general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. After that, it will be easy to do Fourier approximation or wavelet approximation. For the higher-dimensional case, the method of smooth extensions is similar to that in the two-dimensional case, but the representations of smooth extensions will be too complicated. Therefore, in this paper, we mainly consider the smooth extension of a bivariate function on a planar domain. By the way, for the one-dimensional case, since the bounded domain is reduced to a closed interval, the smooth extension can be regarded as a corollary of the two-dimensional case.This paper is organized as follows. In Section2, we state the main theorems on the smooth extension of the function on the general domain and their applications. In Sections 3 and 4, we give a general method of smooth extensions and complete the proofs of the main theorems. In Section 5, we use our extension method to discuss two important special cases of smooth extensions.Throughout this paper, we denoteT=[0,1]2 and the interior of T by To and always assume that Ω is a simply connected domain. We say that f∈Cq(Ω) if the derivatives (∂i+jf/∂xi∂yj) are continuous on Ω for 0≤i+j≤q. We say that f∈C∞(Ω) if all derivatives (∂i+jf/∂xi∂yj) are continuous on Ω for i,j∈ℤ+. We say that a function h(x,y) is a γ-periodic function if h(x+γk,y+γl)=h(x,y)((x,y)∈T;k,l∈ℤ), where γ is an integer. We appoint that 0!=1 and the notation [α] is the integral part of the real number α. ## 2. 
## 2. Main Theorems and Applications In this section, we state the main results on smooth extensions and their applications in Fourier analysis and wavelet analysis. ### 2.1. Main Theorems Our main theorems are stated as follows.Theorem 1. Let f∈C∞(Ω), where Ω⊂To and the boundary ∂Ω is a piecewise infinitely smooth curve. Then, for any r∈ℤ+, there is a function F∈Cr(T) such that (i) F(x,y)=f(x,y) ((x,y)∈Ω); (ii) (∂i+jF/∂xi∂yj)(x,y)=0 on the boundary ∂T for 0≤i+j≤r; (iii) on the complement T∖Ω, F(x,y) can be expressed locally in one of the forms (1)∑j=0Lξj(x)yj, or ∑j=0Lηj(y)xj, or ∑i,j=0Lcijxiyj, where L is a positive integer and each coefficient cij is a constant.Theorem 2. Let f∈C∞(Ω), where Ω is stated as in Theorem 1. Then, for any r∈ℤ+, there exists a 1-periodic function Fp∈Cr(ℝ2) such that Fp(x,y)=f(x,y) ((x,y)∈Ω).Theorem 3. Let f∈C∞(Ω), where Ω is stated as in Theorem 1. Then, for any r∈ℤ+, there exists a function Fc∈Cr(ℝ2) with compact support T such that Fc(x,y)=f(x,y) ((x,y)∈Ω).In Sections 3 and 4, we give constructive proofs of Theorems 1–3. In these three theorems, we assume that f∈C∞(Ω). If f∈Cq(Ω) (q a nonnegative integer), similar arguments yield the corresponding results. ### 2.2. Applications Here we show some applications of these theorems. #### 2.2.1. Approximation by Polynomials Let F be the smooth extension of f from Ω to T as in Theorem 1. Then F∈Cr(T) and F=f on Ω. By ΔN, denote the set of all bivariate polynomials of the form ∑n1,n2=-NNcn1,n2xn1yn2. Then (2)infP∈ΔN∥f-P∥Lp(Ω)≤infP∈ΔN∥F-P∥Lp(T), where ∥·∥Lp(D) is the norm of the space Lp(D). The right-hand side of (2) is the best approximation of the extension F in ΔN. By (2), the approximation of f by polynomials on a domain Ω is reduced to the well-known problem of approximating its smooth extension F by polynomials on the square T [4, 10]. #### 2.2.2. Fourier Analysis (i) Approximation by Trigonometric Polynomials. Let Fp be the smooth periodic extension of f as in Theorem 2. Then Fp∈Cr(ℝ2) and Fp=f on Ω. By well-known results [5, 10], the smooth periodic function Fp can be approximated very well by bivariate trigonometric polynomials, and the approximation error can be estimated by the modulus of continuity of its derivatives of order r.By ΔN*, denote the set of all bivariate trigonometric polynomials of the form (3)∑n1,n2=-NNcn1,n2*e2πi(n1x+n2y). By Theorem 2, we have (4)infP*∈ΔN*∥f-P*∥Lp′(Ω)≤infP*∈ΔN*∥Fp-P*∥Lp′(T). From this and Theorem 2, the approximation of f on Ω by trigonometric polynomials is reduced to a well-known approximation problem for smooth periodic functions [5, 7, 10].(ii) Fourier Series. We expand Fp into a Fourier series [9]: (5)Fp(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y), where τn1,n2=∫TFp(x,y)e-2πi(n1x+n2y)dxdy. By Theorem 2, we obtain that, for (x,y)∈Ω, (6)f(x,y)=∑(n1,n2)∈ℤ2τn1,n2e2πi(n1x+n2y). Denote the partial sum (7)sn1,n2(x,y)=∑k1=0n1∑k2=0n2τk1,k2e2πi(k1x+k2y). Then we have (8)∥f(x,y)-sn1,n2(x,y)∥Lp′(Ω)≤∥Fp(x,y)-sn1,n2(x,y)∥Lp′(T). Since the smooth periodic function Fp is approximated well by the partial sums of its Fourier series [5, 7, 10], this inequality shows that we have constructed a trigonometric polynomial sn1,n2(x,y) which approximates f on Ω very well.
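The practical content of (5)–(8) is that the smoother the periodic extension, the faster the coefficients τn1,n2 decay, and the better the partial sums approximate f on Ω. The following minimal numerical sketch (Python with NumPy; a one-dimensional toy analogue using a hand-made C1 blending window rather than the construction of Sections 3 and 4, so all names in it are illustrative) compares the high-frequency Fourier coefficients of a crude zero-extension with those of a smooth extension of the same data.

```python
import numpy as np

# One-dimensional toy analogue of (5)-(8): the data f lives on
# Omega = [0.3, 0.7] inside T = [0, 1].  We compare a crude zero-extension
# (discontinuous at the ends of Omega) with a C^1 extension that blends f
# to zero at the ends of T, by looking at high-frequency Fourier coefficients.
N = 1024
x = np.arange(N) / N
f = np.sin(3 * np.pi * x)                 # the data we care about on Omega
omega = (x >= 0.3) & (x <= 0.7)

crude = np.where(omega, f, 0.0)           # jumps at 0.3 and 0.7

def ramp(t):                              # C^1 "smoothstep": 0 -> 1 on [0, 1]
    t = np.clip(t, 0.0, 1.0)
    return 3 * t**2 - 2 * t**3

# window == 1 on Omega and vanishes, with vanishing derivative, at 0 and 1,
# so smooth = f * window equals f on Omega and is C^1 as a 1-periodic function.
window = ramp(x / 0.3) * ramp((1.0 - x) / 0.3)
smooth = f * window

for name, u in [("crude", crude), ("smooth", smooth)]:
    c = np.abs(np.fft.fft(u)) / N         # Fourier coefficient magnitudes
    print(name, "max |tau_n| for 100 <= n < 200:", c[100:200].max())
```

Running this shows the smooth extension's high-frequency coefficients are orders of magnitude smaller, which is exactly the effect inequality (8) exploits.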
(iii) Odd (Even) Periodic Extension. Let F be the smooth extension of f from Ω to T as in Theorem 1. Define Fo on [-1,1]2 by (9)Fo(x,y)={F(x,y), (x,y)∈[0,1]2; -F(-x,y), (x,y)∈[-1,0]×[0,1]; F(-x,-y), (x,y)∈[-1,0]2; -F(x,-y), (x,y)∈[0,1]×[-1,0]}. Then Fo is an odd function. By Theorem 1, we have Fo∈Cr([-1,1]2) and (∂i+jFo/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Performing a 2-periodic extension, we obtain a 2-periodic odd function Fpo with Fpo∈Cr(ℝ2). By well-known results [5, 7, 10], Fpo can be approximated very well by sine polynomials. Moreover, Fpo can be expanded into a Fourier sine series; that is, (10)Fpo(x,y)=∑n1=1∞∑n2=1∞αn1,n2sin(πn1x)sin(πn2y), where the coefficients are αn1,n2=4∫TFpo(x,y)sin(πn1x)sin(πn2y)dxdy [9]. Considering the approximation of Fpo by the partial sums, the Fejér sums, and the Vallée-Poussin sums [7, 14] of its Fourier sine series, we obtain approximations of the original function f on Ω by sine polynomials.Define Fe on [-1,1]2 as follows: (11)Fe(x,y)={F(x,y), (x,y)∈[0,1]2; F(-x,y), (x,y)∈[-1,0]×[0,1]; F(-x,-y), (x,y)∈[-1,0]2; F(x,-y), (x,y)∈[0,1]×[-1,0]}. Then Fe is an even function on [-1,1]2. By Theorem 1, Fe∈Cr([-1,1]2) and (∂i+jFe/∂xi∂yj)(x,y)=0 on ∂([-1,1]2) for 0≤i+j≤r. Performing a 2-periodic extension, we obtain a 2-periodic even function Fpe with Fpe∈Cr(ℝ2). By well-known results [5, 10], Fpe can be approximated very well by cosine polynomials. Moreover, Fpe can be expanded into a Fourier cosine series. Considering the partial sums, the Fejér sums, and the Vallée-Poussin sums [5, 7, 14] of the Fourier cosine series of Fpe, we obtain approximations of the original function f on Ω by cosine polynomials. #### 2.2.3. Wavelet Analysis (i) Periodic Wavelet Series. Let Fp∈Cr(ℝ2) be stated as in Theorem 2. Let {ψμ}μ=13 be a bivariate smooth wavelet [2]. Then, under a mild condition, the family (12)Ψper:={1}∪{ψμ,m,nper: μ=1,2,3; m∈ℤ+; n=(n1,n2), n1,n2=0,…,2m-1}, where ψμ,m,nper=∑l∈ℤ2ψμ,m,n(·+l) and ψμ,m,n=2mψμ(2m·-n), forms a periodic wavelet basis. We expand Fp into a periodic wavelet series [2]: (13)Fp=d0,0+∑μ=13∑m=0∞∑n1,n2=02m-1dμ,m,nψμ,m,nper. From this, we can realize the wavelet approximation of f on Ω; for example, if r=2, the partial sum (14)s2M(Fp)=d0,0+∑μ=13∑m=0M-1∑n1,n2=02m-1dμ,m,nψμ,m,nper satisfies ∥Fp-s2M(Fp)∥L2(T)=O(2-2M). From this and Fp(x,y)=f(x,y) ((x,y)∈Ω), we obtain an estimate of the wavelet approximation of a smooth function f on the domain Ω.(ii) Wavelet Approximation. Let Fc be the smooth function with compact support as in Theorem 3. Let ψ be a univariate Daubechies wavelet and ϕ the corresponding scaling function [2]. Denoting (15)ψ1(x,y)=ϕ(x)ψ(y), ψ2(x,y)=ψ(x)ϕ(y), ψ3(x,y)=ψ(x)ψ(y), we obtain a smooth tensor product wavelet {ψμ(x,y)}μ=13. We expand Fc into the wavelet series (16)Fc(x,y)=∑μ=13∑m∈ℤ∑n∈ℤ2cμ,m,nψμ,m,n(x,y), where ψμ,m,n=2mψμ(2m·-n) and the wavelet coefficients are (17)cμ,m,n=∫ℝ2Fc(x,y)ψ¯μ,m,n(x,y)dxdy=∫TF(x,y)ψ¯μ,m,n(x,y)dxdy. Since Fc is a smooth function, the wavelet coefficients cμ,m,n decay fast.On the other hand, since Fc(x,y)=0 for (x,y)∈ℝ2∖T, many wavelet coefficients vanish. In fact, when m0∈ℤ and n0∈ℤ2 satisfy suppψμ,m0,n0⊂(ℝ2∖T), we have cμ,m0,n0=0. Besides, by condition (iii) in Theorem 1, F is locally a polynomial in one or both variables on T∖Ω. By the moment theorem [2], still more wavelet coefficients vanish.For example, let m*∈ℤ and n*=(n1*,n2*)∈ℤ2 satisfy suppψm*,n2*⊂[0,α*], where α*=inf{g(x):x1≤x≤x2}. Then we have (18)c1,m*,n*=2m*(∫0x1+∫x1x2+∫x21)ϕ¯m*,n1*(x)(∫0α*ψ¯m*,n2*(y)F(x,y)dy)dx=:I1+I2+I3.
By Lemma 8, we know that (19)F(x,y)=∑j=0Lξj(x)yj, (x,y)∈E1, where E1={(x,y):x1≤x≤x2, 0≤y≤g(x)} and g(x)≥α* (x1≤x≤x2). So (20)I2=∫x1x2ϕ¯m*,n1*(x)(∫0α*ψ¯m*,n2*(y)(∑j=0Lξj(x)yj)dy)dx. If the Daubechies wavelet ψ chosen by us is L times smooth, then, by the moment theorem and suppψm*,n2*⊂[0,α*], we have (21)∫0α*ψ¯m*,n2*(y)yjdy=∫ℝψ¯m*,n2*(y)yjdy=0 (0≤j≤2r+1). So I2=0. Similarly, since F(x,y) is a bivariate polynomial on the rectangles H1 and H3 (see Lemma 11), we have I1=I3=0. Furthermore, by (18), we get c1,m*,n*=0.Therefore, the partial sum of the wavelet series (16) approximates Fc very well, and a few wavelet coefficients suffice to reconstruct Fc. Since Fc=f on Ω, the partial sum of the wavelet series (16) approximates the original function f on the domain Ω very well.
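The vanishing-moment argument used for I2 is easy to check numerically. The sketch below (assuming the PyWavelets package, imported as pywt, which tabulates Daubechies scaling functions and wavelets; db4 has four vanishing moments) approximates the moments ∫ψ(y)yjdy by quadrature: the moments for j=0,…,3 come out at roundoff level, which is exactly why wavelet coefficients taken against locally polynomial pieces of F vanish.

```python
import numpy as np
import pywt  # PyWavelets

# Behind I2 = 0 is the moment theorem: a Daubechies wavelet with N vanishing
# moments satisfies  int psi(y) y**j dy = 0  for 0 <= j < N, so integrating
# psi against a locally polynomial piece of the extension F gives zero.
w = pywt.Wavelet("db4")               # db4 has 4 vanishing moments
phi, psi, y = w.wavefun(level=12)     # sampled scaling function and wavelet

for j in range(6):
    moment = np.trapz(psi * y**j, y)  # quadrature for int psi(y) y**j dy
    print(f"j = {j}: moment ~ {moment:+.2e}")
# Expected output: j = 0..3 at roundoff level; j = 4, 5 visibly nonzero.
```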
## 3. Proofs of the Main Theorems We first give a partition of the complement T∖Ω. ### 3.1. Partition of the Complement of the Domain Ω in T Since Ω⊂To and ∂Ω is a piecewise infinitely smooth curve, without loss of generality, we can divide the complement T∖Ω into some rectangles and some trapezoids with a curved side. For convenience of presentation, we assume that we can choose four points (xν,yν)∈∂Ω (ν=1,2,3,4) such that T∖Ω can be divided into the four rectangles (22)H1=[0,x1]×[0,y1], H2=[x2,1]×[0,y2], H3=[x3,1]×[y3,1], H4=[0,x4]×[y4,1] and four trapezoids with a curved side (23)E1={(x,y):x1≤x≤x2, 0≤y≤g(x)}, E2={(x,y):h(y)≤x≤1, y2≤y≤y3}, E3={(x,y):x4≤x≤x3, g*(x)≤y≤1}, E4={(x,y):0≤x≤h*(y), y1≤y≤y4}, where g∈C∞([x1,x2]), h∈C∞([y2,y3]), g*∈C∞([x4,x3]), and h*∈C∞([y1,y4]), and (24)0<g(x)<1 (x1≤x≤x2), 0<h(y)<1 (y2≤y≤y3), 0<g*(x)<1 (x4≤x≤x3), 0<h*(y)<1 (y1≤y≤y4). From this, we know that T can be expressed as the disjoint union (25)T=Ω⋃(⋃ν=14Eν)⋃(⋃ν=14Hν), where each Eν is a trapezoid with a curved side and each Hν is a rectangle (see Figure 1).Figure 1: Partition of the complement of the domain Ω.In Sections 3.2 and 3.3 we extend f to each Eν and then continue the extension to each Hν so that the resulting function F satisfies the conditions of Theorem 1. ### 3.2. Smooth Extension to Each Trapezoid Eν with a Curved Side By (23), the trapezoid E1 with the curved side y=g(x) (x1≤x≤x2) is represented as (26)E1={(x,y):x1≤x≤x2, 0≤y≤g(x)}. We define two sequences of functions {ak,1(x,y)}0∞ and {bk,1(x,y)}0∞ as follows: (27)a0,1(x,y)=y/g(x), b0,1(x,y)=(y-g(x))/(-g(x)), ak,1(x,y)=((y-g(x))k/k!)(y/g(x))k+1, bk,1(x,y)=(yk/k!)((y-g(x))/(-g(x)))k+1, k=1,2,…. By (27), we deduce that, for x1≤x≤x2, (28)∂lak,1/∂yl(x,g(x))=0 (0≤l≤k-1), ∂kak,1/∂yk(x,g(x))=1, ∂lak,1/∂yl(x,0)=0 (0≤l≤k); ∂lbk,1/∂yl(x,g(x))=0 (0≤l≤k), ∂lbk,1/∂yl(x,0)=0 (0≤l≤k-1), ∂kbk,1/∂yk(x,0)=1.On E1, we define a sequence of functions {S1(k)(x,y)}0∞ by induction.Let (29)S1(0)(x,y)=f(x,g(x))a0,1(x,y) (x1≤x≤x2, 0≤y≤g(x)). Then, by (27), (30)S1(0)(x,0)=0, S1(0)(x,g(x))=f(x,g(x)) (x1≤x≤x2).Let (31)S1(1)(x,y)=S1(0)(x,y)+a1,1(x,y)(∂f/∂y(x,g(x))-∂S1(0)/∂y(x,g(x)))-b1,1(x,y)∂S1(0)/∂y(x,0) (x1≤x≤x2, 0≤y≤g(x)). Then, by (27)–(30), we obtain that, for x1≤x≤x2, (32)S1(1)(x,g(x))=f(x,g(x)), ∂S1(1)/∂y(x,g(x))=∂f/∂y(x,g(x)), S1(1)(x,0)=0, ∂S1(1)/∂y(x,0)=0.
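Before iterating this construction to all orders in (33) below, it may help to see (29)–(32) verified mechanically. The following sympy sketch (our illustration; f and g are sample choices, not from the paper) builds a0,1, b0,1, a1,1, b1,1 and S1(0), S1(1) exactly as above and confirms the boundary conditions (30) and (32) symbolically.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.sin(x + y)          # sample f (illustrative choice)
g = 1 + x**2 / 4           # sample curved side y = g(x) > 0 (illustrative)

# Blending functions (27), for k = 0 and k = 1
a = lambda k: (y - g)**k / sp.factorial(k) * (y / g)**(k + 1)
b = lambda k: y**k / sp.factorial(k) * ((y - g) / (-g))**(k + 1)

fy = sp.diff(f, y)         # the derivative of f in y, needed on the curved side

# (29) and (31)
S0 = f.subs(y, g) * a(0)
S1 = (S0
      + a(1) * (fy.subs(y, g) - sp.diff(S0, y).subs(y, g))
      - b(1) * sp.diff(S0, y).subs(y, 0))

# (30): S1(0) vanishes on the bottom side and matches f on the curved side
assert sp.simplify(S0.subs(y, 0)) == 0
assert sp.simplify(S0.subs(y, g) - f.subs(y, g)) == 0

# (32): S1(1) matches f and df/dy on y = g(x), vanishes to first order at y = 0
assert sp.simplify(S1.subs(y, g) - f.subs(y, g)) == 0
assert sp.simplify(sp.diff(S1, y).subs(y, g) - fy.subs(y, g)) == 0
assert sp.simplify(S1.subs(y, 0)) == 0
assert sp.simplify(sp.diff(S1, y).subs(y, 0)) == 0
print("conditions (30) and (32) verified")
```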
In general, let (33)S1(k)(x,y)=S1(k-1)(x,y)+ak,1(x,y)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-bk,1(x,y)∂kS1(k-1)/∂yk(x,0) (x1≤x≤x2, 0≤y≤g(x)).Lemma 4. For any k∈ℤ+, one has S1(k)∈C∞(E1) and (34)S1(k)(x,y)=∑j=02k+1ζj,1(x)yj, (x,y)∈E1.Proof. Since f∈C∞(Ω), g∈C∞([x1,x2]), and g(x)>0 (x1≤x≤x2), by the above construction we know that S1(k)∈C∞(E1) for any k=0,1,…. For k=0, since (35)S1(0)(x,y)=f(x,g(x))a0,1(x,y)=(f(x,g(x))/g(x))y, (34) holds. We assume that (34) holds for k=l-1; that is, (36)S1(l-1)(x,y)=∑j=02l-1ζj,1(l-1)(x)yj. This implies that (37)∂lS1(l-1)/∂yl(x,g(x))=∑j=l2l-1(j!/(j-l)!)ζj,1(l-1)(x)(g(x))j-l, ∂lS1(l-1)/∂yl(x,0)=l!ζl,1(l-1)(x). Again, notice that al,1(x,y) and bl,1(x,y) are polynomials in y whose degrees are both 2l+1. From this and (33), it follows that (34) holds for k=l. By induction, (34) holds for all k. Lemma 4 is proved.Below we compute the derivatives (∂lS1(k)/∂yl)(x,y) (0≤l≤k) on the curved side Γ1={(x,g(x)):x1≤x≤x2} and the bottom side Δ1={(x,0):x1≤x≤x2} of E1.Lemma 5. Let S1(k)(x,y) be stated as above. For any k∈ℤ+, one has (38)∂lS1(k)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)), ∂lS1(k)/∂yl(x,0)=0 (x1≤x≤x2, 0≤l≤k).Proof. By (30), (38) holds for k=0. Now we assume that (38) holds for k-1. For x1≤x≤x2, by (33), we have (39)∂lS1(k)/∂yl(x,g(x))=∂lS1(k-1)/∂yl(x,g(x))+∂lak,1/∂yl(x,g(x))(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-∂lbk,1/∂yl(x,g(x))∂kS1(k-1)/∂yk(x,0) (0≤l≤k). For l=0,1,…,k-1, by the induction hypothesis, we have (40)∂lS1(k-1)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)). By (28), we have (∂lak,1/∂yl)(x,g(x))=0 and (∂lbk,1/∂yl)(x,g(x))=0. So we get (41)∂lS1(k)/∂yl(x,g(x))=∂lf/∂yl(x,g(x)). For l=k, note that (∂kak,1/∂yk)(x,g(x))=1 and (∂kbk,1/∂yk)(x,g(x))=0. By (39), we get (42)∂kS1(k)/∂yk(x,g(x))=∂kS1(k-1)/∂yk(x,g(x))+(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))=∂kf/∂yk(x,g(x)). The first formula of (38) holds for k. By (33), we have (43)∂lS1(k)/∂yl(x,0)=∂lS1(k-1)/∂yl(x,0)+∂lak,1/∂yl(x,0)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-∂lbk,1/∂yl(x,0)∂kS1(k-1)/∂yk(x,0) (x1≤x≤x2, 0≤l≤k). For l=0,…,k-1, by the induction hypothesis and (28), we have (∂lS1(k-1)/∂yl)(x,0)=0 and (∂lak,1/∂yl)(x,0)=(∂lbk,1/∂yl)(x,0)=0. So (44)∂lS1(k)/∂yl(x,0)=0. For l=k, since (∂kak,1/∂yk)(x,0)=0 and (∂kbk,1/∂yk)(x,0)=1, by (43) we have (45)∂kS1(k)/∂yk(x,0)=∂kS1(k-1)/∂yk(x,0)-∂kS1(k-1)/∂yk(x,0)=0. The second formula of (38) holds. By induction, (38) holds for all k. From this, we get Lemma 5.Now we compute the mixed derivatives of S1(k)(x,y) on the curved side Γ1 and the bottom side Δ1 of E1.Lemma 6. Let Γ1 and Δ1 be the curved side and the bottom side of E1, respectively. Then, for k∈ℤ+, (i) (∂i+jS1(k)/∂xi∂yj)(x,y)=(∂i+jf/∂xi∂yj)(x,y) ((x,y)∈Γ1); (ii) (∂i+jS1(k)/∂xi∂yj)(x,y)=0 ((x,y)∈Δ1), where 0≤i+j≤k.Proof. Let x1≤x≤x2. Then we have (46)(d/dx)(∂l-1f/∂yl-1(x,g(x)))=∂lf/∂x∂yl-1(x,g(x))+∂lf/∂yl(x,g(x))g′(x) (l≥1). By the Newton-Leibniz formula, we have (47)∂l-1f/∂yl-1(x,g(x))=∂l-1f/∂yl-1(x1,g(x1))+∫x1x(∂lf/∂x∂yl-1(t,g(t))+∂lf/∂yl(t,g(t))g′(t))dt. Similarly, replacing f by S1(k) in this formula, we have (48)∂l-1S1(k)/∂yl-1(x,g(x))=∂l-1S1(k)/∂yl-1(x1,g(x1))+∫x1x(∂lS1(k)/∂x∂yl-1(t,g(t))+∂lS1(k)/∂yl(t,g(t))g′(t))dt (l≥1). From this and Lemma 5, it follows that, for any x1≤x≤x2, (49)∫x1x∂lS1(k)/∂x∂yl-1(t,g(t))dt=∫x1x∂lf/∂x∂yl-1(t,g(t))dt (1≤l≤k). Differentiating both sides of this formula, we get (50)∂lS1(k)/∂x∂yl-1(x,g(x))=∂lf/∂x∂yl-1(x,g(x)) (1≤l≤k).
Now we start from the equality (51)(d/dx)(∂l-1f/∂x∂yl-2(x,g(x)))=∂lf/∂x2∂yl-2(x,g(x))+∂lf/∂x∂yl-1(x,g(x))g′(x) (l≥2). Similarly to the argument from (46) to (50), we get (52)∂lS1(k)/∂x2∂yl-2(x,g(x))=∂lf/∂x2∂yl-2(x,g(x)) (2≤l≤k). Continuing this procedure, we deduce that (i) holds for 0<i+j≤k. Letting l=0 in Lemma 5, we have S1(k)(x,g(x))=f(x,g(x)); that is, (i) holds for i=j=0. So we get (i). By Lemma 5, (∂jS1(k)/∂yj)(x,0)=0 (0≤j≤k). From this and S1(k)∈C∞(E1), we have (53)∂i+jS1(k)/∂xi∂yj(x,0)=0 (0≤i+j≤k), so (ii) holds. Lemma 6 is proved.From this, we get the following.Lemma 7. For any positive integer r, denote lr=r(r+1)(r+2)(r+3). Let (54)F(x,y)={S1(lr)(x,y), (x,y)∈E1; f(x,y), (x,y)∈Ω}. Then (i) F∈Clr(Ω⋃E1) and F(x,y)=f(x,y) ((x,y)∈Ω); (ii) (∂i+jF/∂xi∂yj)(x,y)=0 ((x,y)∈(E1⋂∂T), 0≤i+j≤lr).Proof. By the assumption f∈C∞(Ω), Lemma 4 (S1(k)∈C∞(E1)), and Lemma 6(i), that is, (55)∂i+jS1(k)/∂xi∂yj(x,y)=∂i+jf/∂xi∂yj(x,y) ((x,y)∈Γ1, 0≤i+j≤k), where Γ1=Ω⋂E1, we get (i). By Lemma 6(ii) and E1⋂∂T=Δ1, we get (ii). Lemma 7 is proved.For ν=2,3,4, by a similar method, we define Sν(k)(x,y) on each trapezoid Eν with a curved side. The representations of Sν(k)(x,y) are stated in Section 4.1.Lemma 8. For any ν=1,2,3,4, let (56)F(x,y)={Sν(lr)(x,y), (x,y)∈Eν; f(x,y), (x,y)∈Ω}, where lr=r(r+1)(r+2)(r+3). Then, for ν=1,2,3,4, one has the following: (i) F∈Clr(Ω⋃Eν); (ii) (∂i+jF/∂xi∂yj)(x,y)=0, (x,y)∈(Eν⋂∂T), for 0≤i+j≤lr; (iii) F(x,y) can be expressed in the form (57)F(x,y)=∑j=02lr+1ζj,1(x)yj, (x,y)∈E1; F(x,y)=∑j=02lr+1ζj,2(y)xj, (x,y)∈E2; F(x,y)=∑j=02lr+1ζj,3(x)yj, (x,y)∈E3; F(x,y)=∑j=02lr+1ζj,4(y)xj, (x,y)∈E4.Proof. By Lemma 7, we have (58)F∈Clr(Ω⋃E1), ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈(E1⋂∂T), 0≤i+j≤lr). Similarly to the argument of Lemma 7, for ν=2,3,4, we have (59)F∈Clr(Ω⋃Eν), ∂i+jF/∂xi∂yj(x,y)=0 ((x,y)∈(Eν⋂∂T), 0≤i+j≤lr). From this, we get (i) and (ii). The proof of (iii) is similar to the argument of Lemma 4. Lemma 8 is proved. ### 3.3. Smooth Extension to Each Rectangle Hν We have completed the smooth extension of f to each trapezoid Eν with a curved side. In this subsection we complete the smooth extension of the obtained function F to each rectangle Hν. First we consider the smooth extension of F to H1. We divide this procedure into two steps.Step 1. From Lemma 8, we know that F(x,y)=S4(lr)(x,y) on E4. Now we construct the smooth extension of S4(lr)(x,y) from E4 to H1, where S4(lr)(x,y) is stated in Section 4.2 and lr=r(r+1)(r+2)(r+3). Let (60)αk,11(y)=((y-y1)k/k!)(y/y1)k+1, βk,11(y)=(yk/k!)((y-y1)/(-y1))k+1, k=0,1,…, and let (61)M1(0)(x,y)=S4(lr)(x,y1)α0,11(y), M1(k)(x,y)=M1(k-1)(x,y)+αk,11(y)(∂kS4(lr)/∂yk(x,y1)-∂kM1(k-1)/∂yk(x,y1))-βk,11(y)∂kM1(k-1)/∂yk(x,0), k=1,2,…,τr ((x,y)∈H1), where τr=r(r+2).Lemma 9. Let {J1,l}14 be the four sides of the rectangle H1: (62)J1,1={(x,y1), 0≤x≤x1}, J1,2={(0,y), 0≤y≤y1}, J1,3={(x,0), 0≤x≤x1}, J1,4={(x1,y), 0≤y≤y1}. Then one has the following: (i) M1(τr)(x,y)=∑i,j=02lr+1di,j(1)xiyj, where each di,j(1) is a constant; (ii) (∂i+jM1(τr)/∂xi∂yj)(x,y)=(∂i+jS4(lr)/∂xi∂yj)(x,y) ((x,y)∈J1,1); (iii) (∂i+jM1(τr)/∂xi∂yj)(x,y)=0 ((x,y)∈J1,2); (iv) (∂i+jM1(τr)/∂xi∂yj)(x,y)=0 ((x,y)∈J1,3), where 0≤i+j≤τr.Proof. By Lemma 8(iii), we have (63)S4(lr)(x,y)=∑j=02lr+1ζj,4(y)xj. So (∂kS4(lr)/∂yk)(x,y1) is a polynomial of degree 2lr+1 with respect to x. Since ατr,11(y) and βτr,11(y) are both polynomials of degree 2τr+1, (i) follows from (61). Similarly to the argument of Lemma 6, we get (ii) and (iv). Since (0,y1)∈(E4⋂∂T), by Lemma 7, we have (64)∂i+jS4(lr)/∂xi∂yj(0,y1)=∂i+jF/∂xi∂yj(0,y1)=0 (0≤i+j≤lr).
By the definition of M1(0) and (64), we have (65)∂i+jM1(0)/∂xi∂yj(0,y)=∂iS4(lr)/∂xi(0,y1)·djα0,11/dyj(y)=0 (0≤i+j≤lr, y∈ℝ). We assume that (66)∂i+jM1(k-1)/∂xi∂yj(0,y)=0 (0≤i+j≤lr-(1/2)k(k-1)). By (61), we get (67)∂i+jM1(k)/∂xi∂yj(0,y)=∂i+jM1(k-1)/∂xi∂yj(0,y)+djαk,11/dyj(y)(∂k+iS4(lr)/∂xi∂yk(0,y1)-∂k+iM1(k-1)/∂xi∂yk(0,y1))-djβk,11/dyj(y)∂k+iM1(k-1)/∂xi∂yk(0,0). For 0≤i+j≤lr-(1/2)k(k+1), we have 0≤i+j≤lr-(1/2)k(k-1) and 0≤i+k≤lr-(1/2)k(k-1). Again, by the induction hypothesis, we get (68)∂i+jM1(k-1)/∂xi∂yj(0,y)=0, ∂k+iM1(k-1)/∂xi∂yk(0,y1)=0. By (64), we have (∂k+iS4(lr)/∂xi∂yk)(0,y1)=0. From this and (67), we get (69)∂i+jM1(k)/∂xi∂yj(0,y)=0 (0≤y≤y1, 0≤i+j≤lr-(k/2)(k+1)). Taking k=τr, we have (70)∂i+jM1(τr)/∂xi∂yj(0,y)=0 (0≤y≤y1, 0≤i+j≤lr-(τr/2)(τr+1)). Since lr-(τr/2)(τr+1)≥lr/2≥τr, we get (iii). Lemma 9 is proved.Step 2. From Lemma 8, we know that F(x,y)=S1(lr)(x,y) on E1. We consider the difference S1(lr)(x,y)-M1(τr)(x,y). It is infinitely differentiable on E1, since M1(τr)(x,y) is a polynomial. Now we construct its smooth extension from E1 to the rectangle H1 as follows. Let (71)αk,14(x)=((x-x1)k/k!)(x/x1)k+1, βk,14(x)=(xk/k!)((x-x1)/(-x1))k+1, k=0,1,…, and let (72)N1(0)(x,y)=(S1(lr)(x1,y)-M1(τr)(x1,y))α0,14(x), N1(k)(x,y)=N1(k-1)(x,y)+αk,14(x)(∂k(S1(lr)-M1(τr))/∂xk(x1,y)-∂kN1(k-1)/∂xk(x1,y))-βk,14(x)∂kN1(k-1)/∂xk(0,y), k=1,2,…,r ((x,y)∈H1). From this, we obtain the following.Lemma 10. N1(r)(x,y) possesses the following properties: (i) (∂i+jN1(r)/∂xi∂yj)(x,y)=(∂i+jS1(lr)/∂xi∂yj)(x,y)-(∂i+jM1(τr)/∂xi∂yj)(x,y) on J1,4; (ii) (∂i+jN1(r)/∂xi∂yj)(x,y)=0 on J1,2; (iii) (∂i+jN1(r)/∂xi∂yj)(x,y)=0 on J1,1; (iv) (∂i+jN1(r)/∂xi∂yj)(x,y)=0 on J1,3, where 0≤i+j≤r and {J1,ν}14 are stated in (62); (v) N1(r)(x,y)=∑i,j=02lr+1τi,j(1)xiyj, where each τi,j(1) is a constant.Proof. Arguments similar to those for Lemma 9(ii) and (iv) give conclusions (i) and (ii) of this lemma. Now we prove (iii) and (iv). By Lemma 6(i) and Lemma 9(ii), as well as lr≥τr, we get that, for 0≤i+j≤τr, (73)∂i+jS1(lr)/∂xi∂yj(x1,y1)=∂i+jf/∂xi∂yj(x1,y1)=∂i+jS4(lr)/∂xi∂yj(x1,y1)=∂i+jM1(τr)/∂xi∂yj(x1,y1). So we have (74)∂i+jN1(0)/∂xi∂yj(x,y1)=∂j(S1(lr)-M1(τr))/∂yj(x1,y1)·diα0,14/dxi(x)=0 (0≤x≤x1, 0≤i+j≤τr). Now we assume that (75)∂i+jN1(k-1)/∂xi∂yj(x,y1)=0 (0≤x≤x1, 0≤i+j≤τr-(k/2)(k-1)). By (72) and (73), (76)∂i+jN1(k)/∂xi∂yj(x,y1)=∂i+jN1(k-1)/∂xi∂yj(x,y1)+diαk,14/dxi(x)(∂k+j(S1(lr)-M1(τr))/∂xk∂yj(x1,y1)-∂k+jN1(k-1)/∂xk∂yj(x1,y1))-diβk,14/dxi(x)∂k+jN1(k-1)/∂xk∂yj(0,y1)=0 (0≤x≤x1, 0≤i+j≤τr-k(k+1)/2). By induction, we get (77)∂i+jN1(k)/∂xi∂yj(x,y1)=0 (0≤x≤x1, 0≤i+j≤τr-k(k+1)/2). From this and τr-(1/2)r(r+1)≥r, we get (iii). By Lemma 6(ii) and Lemma 9(iii), we get that (78)∂i+jS1(lr)/∂xi∂yj(x,0)=0 (0≤i+j≤lr), ∂i+jM1(τr)/∂xi∂yj(x,0)=0 (0≤i+j≤τr). From this and (72), by an argument similar to the proof of (iii), we get (iv). By Lemma 8(iii) and Lemma 9(i), we deduce that (S1(lr)-M1(τr))(x1,y) is a polynomial of degree 2lr+1 with respect to y. From this and (72), we get (v). Lemma 10 is proved.By Lemmas 9 and 10, we obtain that, for 0≤i+j≤r, (79)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=∂i+jS4(lr)/∂xi∂yj(x,y) on J1,1, (80)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=0 on J1,2⋃J1,3, (81)∂i+j(M1(τr)+N1(r))/∂xi∂yj(x,y)=∂i+jS1(lr)/∂xi∂yj(x,y) on J1,4.Lemma 11. Let (82)F(x,y)={f(x,y), (x,y)∈Ω; S1(lr)(x,y), (x,y)∈E1; S4(lr)(x,y), (x,y)∈E4; M1(τr)(x,y)+N1(r)(x,y), (x,y)∈H1}.
Then one has the following: (i) F∈Cr(Ω⋃E1⋃E4⋃H1); (ii) (∂i+jF/∂xi∂yj)(x,y)=0, (x,y)∈((E1⋃E4⋃H1)⋂∂T), for 0≤i+j≤r; (iii) F(x,y)=∑i,j=02lr+1cij(1)xiyj ((x,y)∈H1), where each cij(1) is a constant.Proof. By Lemma 7, we have F∈Cr(Ω⋃E1⋃E4). Since S1(lr)∈Cr(E1), (83)M1(τr)+N1(r)∈Cr(H1), E1⋂H1=J1,4, by (81), we deduce that F∈Cr(E1⋃H1). Since S4(lr)∈Cr(E4), (84)M1(τr)+N1(r)∈Cr(H1), E4⋂H1=J1,1, by (79), we deduce that F∈Cr(H1⋃E4). So we get (i). By Lemma 8(ii), (85)∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E1⋃E4)⋂∂T). Since H1⋂∂T=J1,2⋃J1,3, by (80), we deduce that (86)∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈(H1⋂∂T). So we get (ii). From Lemma 9(i), Lemma 10(v), and F(x,y)=M1(τr)(x,y)+N1(r)(x,y) ((x,y)∈H1), we get (iii). Lemma 11 is proved.For ν=2,3,4, by a similar method, we define F(x,y)=Mν(τr)(x,y)+Nν(r)(x,y) ((x,y)∈Hν); for the representations of Mν(τr)(x,y) and Nν(r)(x,y), see Section 4.2. ### 3.4. The Proofs of the Theorems Proof of Theorem 1. Let (87)F(x,y)={f(x,y), (x,y)∈Ω; Sν(lr)(x,y), (x,y)∈Eν (ν=1,2,3,4); Mν(τr)(x,y)+Nν(r)(x,y), (x,y)∈Hν (ν=1,2,3,4)}. By (25), F has been defined on the unit square T. An argument similar to Lemma 11(i)-(ii) shows that (88)F∈Cr(Ω⋃E1⋃E4⋃H1), F∈Cr(Ω⋃E1⋃E2⋃H2), F∈Cr(Ω⋃E2⋃E3⋃H3), F∈Cr(Ω⋃E3⋃E4⋃H4), and, for 0≤i+j≤r, (89)∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E1⋃E4⋃H1)⋂∂T); ∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E1⋃E2⋃H2)⋂∂T); ∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E2⋃E3⋃H3)⋂∂T); ∂i+jF/∂xi∂yj(x,y)=0, (x,y)∈((E3⋃E4⋃H4)⋂∂T). From this and Ω⋂∂T=∅, by (25), we have F∈Cr(T) and (∂i+jF/∂xi∂yj)(x,y)=0 ((x,y)∈∂T, 0≤i+j≤r). So we get (i) and (ii). Similarly to the argument of Lemma 11(iii), we get (90)F(x,y)=∑i,j=02lr+1cij(ν)xiyj, (x,y)∈Hν (ν=1,2,3,4), where each cij(ν) is a constant. From this and Lemma 8(iii), we know that, on T∖Ω, F(x,y) can be expressed locally in one of the forms (91)∑j=02lr+1ξj(x)yj or ∑j=02lr+1ηj(y)xj or ∑i,j=02lr+1cijxiyj; that is, (iii) holds. We have completed the proof of Theorem 1.The representation of F satisfying the conditions of Theorem 1 is given in Section 4.Proof of Theorem 2. Let F be the smooth extension of f from Ω to T as in Theorem 1. Define Fp by (92)Fp(x+k,y+l)=F(x,y) ((x,y)∈T; k,l∈ℤ). Then Fp is a 1-periodic function on ℝ2. By Theorem 1, we know that Fp∈Cr(T) and (93)∂i+jFp/∂xi∂yj(x,y)=0 ((x,y)∈∂T; 0≤i+j≤r). Let Tn1,n2=[n1,n1+1]×[n2,n2+1] (n1,n2∈ℤ). Since Fp is a 1-periodic function, we have Fp∈Cr(Tn1,n2) and, for any n1,n2∈ℤ, (94)∂i+jFp/∂xi∂yj(x,y)=0 ((x,y)∈∂Tn1,n2; 0≤i+j≤r). Noticing that ℝ2=⋃n1,n2∈ℤTn1,n2, we have Fp∈Cr(ℝ2). By (92) and Theorem 1(i), we get (95)Fp(x,y)=F(x,y)=f(x,y) ((x,y)∈Ω). Theorem 2 is proved.Proof of Theorem 3. Let F be the smooth extension of f from Ω to T as in Theorem 1. Define Fc by (96)Fc(x,y)={F(x,y), (x,y)∈T; 0, (x,y)∈ℝ2∖T}. From Theorem 1(ii), we have (97)∂i+jFc/∂xi∂yj(x,y)=0 ((x,y)∈∂T; 0≤i+j≤r). From this and (96), we get Fc∈Cr(ℝ2). By (96) and Theorem 1(i), we get (98)Fc(x,y)=F(x,y)=f(x,y) ((x,y)∈Ω). Theorem 3 is proved.
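The gluing step in the proof of Theorem 2 can be sanity-checked numerically in one dimension: if F on T=[0,1] has vanishing derivatives up to order r at the endpoints, then the periodization (92) is Cr across integer points. A minimal sketch, with an illustrative hand-made F in place of the extension from Theorem 1:

```python
import numpy as np

# 1-D check of the gluing step in Theorem 2's proof: F on T = [0,1] is flat
# (value and first two derivatives zero) at both endpoints, so its
# 1-periodization Fp, defined as in (92), is C^2 across the seam at t = 1.
def F(t):
    return (t * (1 - t))**4            # illustrative F, flat at 0 and 1

def Fp(t):
    return F(t - np.floor(t))          # 1-periodic extension

h = 1e-5
# one-sided first and second derivatives at the seam t = 1
fwd1 = (Fp(1 + h) - Fp(1.0)) / h
bwd1 = (Fp(1.0) - Fp(1 - h)) / h
fwd2 = (Fp(1 + 2 * h) - 2 * Fp(1 + h) + Fp(1.0)) / h**2
bwd2 = (Fp(1.0) - 2 * Fp(1 - h) + Fp(1 - 2 * h)) / h**2
print("first-derivative mismatch :", abs(fwd1 - bwd1))   # -> 0 with h
print("second-derivative mismatch:", abs(fwd2 - bwd2))   # -> 0 with h
# Replacing F by a function that is not flat at the endpoints makes these
# mismatches O(1) or worse, which is why conditions (93) are needed.
```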
## 4. Representation of the Extension F Satisfying Theorem 1 Let f and Ω be stated as in Theorem 1 and let T∖Ω be divided as in Section 3.1. The representation of F satisfying the conditions of Theorem 1 is as follows: (99)F(x,y)={f(x,y), (x,y)∈Ω; Sν(lr)(x,y), (x,y)∈Eν (ν=1,2,3,4); Mν(τr)(x,y)+Nν(r)(x,y), (x,y)∈Hν (ν=1,2,3,4)}, where (100)T=Ω⋃(⋃ν=14Eν)⋃(⋃ν=14Hν), the rectangles {Hν}14 and the trapezoids {Eν}14 with a curved side are stated in (22) and (23), lr=r(r+1)(r+2)(r+3), and τr=r(r+2).Below we write out the representations of {Sν(k)(x,y)}14, {Mν(k)(x,y)}14, and {Nν(k)(x,y)}14. ### 4.1. The Construction of Each Sν(k)(x,y) (i) Denote (101)ak,1(x,y)=((y-g(x))k/k!)(y/g(x))k+1, bk,1(x,y)=(yk/k!)((y-g(x))/(-g(x)))k+1, k=0,1,…. Define S1(k)(x,y) by induction as follows: (102)S1(0)(x,y)=f(x,g(x))a0,1(x,y), S1(k)(x,y)=S1(k-1)(x,y)+ak,1(x,y)(∂kf/∂yk(x,g(x))-∂kS1(k-1)/∂yk(x,g(x)))-bk,1(x,y)∂kS1(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈E1). (ii) Denote (103)ak,2(x,y)=((x-h(y))k/k!)((1-x)/(1-h(y)))k+1, bk,2(x,y)=((x-1)k/k!)((h(y)-x)/(h(y)-1))k+1, k=0,1,….
Define S2(k)(x,y) by induction as follows: (104)S2(0)(x,y)=f(h(y),y)a0,2(x,y), S2(k)(x,y)=S2(k-1)(x,y)+ak,2(x,y)(∂kf/∂xk(h(y),y)-∂kS2(k-1)/∂xk(h(y),y))-bk,2(x,y)∂kS2(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈E2). (iii) Denote (105)ak,3(x,y)=((y-g*(x))k/k!)((1-y)/(1-g*(x)))k+1, bk,3(x,y)=((y-1)k/k!)((g*(x)-y)/(g*(x)-1))k+1, k=0,1,…. Define S3(k)(x,y) by induction as follows: (106)S3(0)(x,y)=f(x,g*(x))a0,3(x,y), S3(k)(x,y)=S3(k-1)(x,y)+ak,3(x,y)(∂kf/∂yk(x,g*(x))-∂kS3(k-1)/∂yk(x,g*(x)))-bk,3(x,y)∂kS3(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈E3). (iv) Denote (107)ak,4(x,y)=((x-h*(y))k/k!)(x/h*(y))k+1, bk,4(x,y)=(xk/k!)((x-h*(y))/(-h*(y)))k+1, k=0,1,…. Define S4(k)(x,y) by induction as follows: (108)S4(0)(x,y)=f(h*(y),y)a0,4(x,y), S4(k)(x,y)=S4(k-1)(x,y)+ak,4(x,y)(∂kf/∂xk(h*(y),y)-∂kS4(k-1)/∂xk(h*(y),y))-bk,4(x,y)∂kS4(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈E4). ### 4.2. The Constructions of Each Mν(k)(x,y) and Nν(k)(x,y) (i) Denote (109)αk,11(y)=((y-y1)k/k!)(y/y1)k+1, βk,11(y)=(yk/k!)((y-y1)/(-y1))k+1, k=0,1,…. Define M1(k)(x,y) by induction as follows: (110)M1(0)(x,y)=S4(lr)(x,y1)α0,11(y), M1(k)(x,y)=M1(k-1)(x,y)+αk,11(y)(∂kS4(lr)/∂yk(x,y1)-∂kM1(k-1)/∂yk(x,y1))-βk,11(y)∂kM1(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈H1). Denote (111)αk,14(x)=((x-x1)k/k!)(x/x1)k+1, βk,14(x)=(xk/k!)((x-x1)/(-x1))k+1, k=0,1,…. Define N1(k)(x,y) by induction as follows: (112)N1(0)(x,y)=(S1(lr)(x1,y)-M1(τr)(x1,y))α0,14(x), N1(k)(x,y)=N1(k-1)(x,y)+αk,14(x)(∂k(S1(lr)-M1(τr))/∂xk(x1,y)-∂kN1(k-1)/∂xk(x1,y))-βk,14(x)∂kN1(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈H1). (ii) Denote (113)αk,21(x)=((x-x2)k/k!)((1-x)/(1-x2))k+1, βk,21(x)=((x-1)k/k!)((x2-x)/(x2-1))k+1, k=0,1,…. Define M2(k)(x,y) by induction as follows: (114)M2(0)(x,y)=S1(lr)(x2,y)α0,21(x), M2(k)(x,y)=M2(k-1)(x,y)+αk,21(x)(∂kS1(lr)/∂xk(x2,y)-∂kM2(k-1)/∂xk(x2,y))-βk,21(x)∂kM2(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈H2). Denote (115)αk,22(y)=((y-y2)k/k!)(y/y2)k+1, βk,22(y)=(yk/k!)((y-y2)/(-y2))k+1, k=0,1,…. Define N2(k)(x,y) by induction as follows: (116)N2(0)(x,y)=(S2(lr)-M2(τr))(x,y2)α0,22(y), N2(k)(x,y)=N2(k-1)(x,y)+αk,22(y)(∂k(S2(lr)-M2(τr))/∂yk(x,y2)-∂kN2(k-1)/∂yk(x,y2))-βk,22(y)∂kN2(k-1)/∂yk(x,0), k=1,2,… ((x,y)∈H2). (iii) Denote (117)αk,31(y)=((y-y3)k/k!)((1-y)/(1-y3))k+1, βk,31(y)=((y-1)k/k!)((y3-y)/(y3-1))k+1, k=0,1,…. Define M3(k)(x,y) by induction as follows: (118)M3(0)(x,y)=S2(lr)(x,y3)α0,31(y), M3(k)(x,y)=M3(k-1)(x,y)+αk,31(y)(∂kS2(lr)/∂yk(x,y3)-∂kM3(k-1)/∂yk(x,y3))-βk,31(y)∂kM3(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈H3). Denote (119)αk,32(x)=((x-x3)k/k!)((1-x)/(1-x3))k+1, βk,32(x)=((x-1)k/k!)((x3-x)/(x3-1))k+1, k=0,1,…. Define N3(k)(x,y) by induction as follows: (120)N3(0)(x,y)=(S3(lr)-M3(τr))(x3,y)α0,32(x), N3(k)(x,y)=N3(k-1)(x,y)+αk,32(x)(∂k(S3(lr)-M3(τr))/∂xk(x3,y)-∂kN3(k-1)/∂xk(x3,y))-βk,32(x)∂kN3(k-1)/∂xk(1,y), k=1,2,… ((x,y)∈H3). (iv) Denote (121)αk,41(x)=((x-x4)k/k!)(x/x4)k+1, βk,41(x)=(xk/k!)((x-x4)/(-x4))k+1, k=0,1,…. Define M4(k)(x,y) by induction as follows: (122)M4(0)(x,y)=S3(lr)(x4,y)α0,41(x), M4(k)(x,y)=M4(k-1)(x,y)+αk,41(x)(∂kS3(lr)/∂xk(x4,y)-∂kM4(k-1)/∂xk(x4,y))-βk,41(x)∂kM4(k-1)/∂xk(0,y), k=1,2,… ((x,y)∈H4). Denote (123)αk,42(y)=((y-y4)k/k!)((1-y)/(1-y4))k+1, βk,42(y)=((y-1)k/k!)((y4-y)/(y4-1))k+1, k=0,1,…. Define N4(k)(x,y) by induction as follows: (124)N4(0)(x,y)=(S4(lr)-M4(τr))(x,y4)α0,42(y), N4(k)(x,y)=N4(k-1)(x,y)+αk,42(y)(∂k(S4(lr)-M4(τr))/∂yk(x,y4)-∂kN4(k-1)/∂yk(x,y4))-βk,42(y)∂kN4(k-1)/∂yk(x,1), k=1,2,… ((x,y)∈H4).
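All of the pairs (ak,ν, bk,ν) and (αk,νμ, βk,νμ) in (101)–(123) follow one pattern: on an interval [t0,t1], a polynomial of degree 2k+1 whose derivatives of orders 0,…,k-1 vanish at both endpoints, whose k-th derivative equals 1 at one endpoint and 0 at the other. The sympy sketch below (our notation, not the paper's) states the model pair once and verifies these conditions for small k, which is the content of (28) and of its analogues in Section 4.

```python
import sympy as sp

t, t0, t1 = sp.symbols("t t0 t1")

def blend(k):
    # Model for the pairs in (101)-(123) on an interval [t0, t1]:
    #   alpha_k: derivatives 0..k vanish at t0; derivatives 0..k-1 vanish
    #            at t1 and the k-th derivative equals 1 there.
    #   beta_k : the mirror image, with the roles of t0 and t1 exchanged.
    alpha = (t - t1)**k / sp.factorial(k) * ((t - t0) / (t1 - t0))**(k + 1)
    beta = (t - t0)**k / sp.factorial(k) * ((t - t1) / (t0 - t1))**(k + 1)
    return alpha, beta

for k in range(4):
    alpha, beta = blend(k)
    for l in range(k + 1):
        da = sp.diff(alpha, t, l)
        db = sp.diff(beta, t, l)
        assert sp.simplify(da.subs(t, t0)) == 0
        assert sp.simplify(db.subs(t, t1)) == 0
        target = 1 if l == k else 0
        assert sp.simplify(da.subs(t, t1) - target) == 0
        assert sp.simplify(db.subs(t, t0) - target) == 0
print("degree-(2k+1) blending conditions verified for k = 0, 1, 2, 3")
```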
## 5. Corollaries

By using the extension method given in Section 3, we discuss two important special cases.

### 5.1. Smooth Extensions of Functions on a Kind of Domains

Let $\Omega$ be a trapezoid with two curved sides:

$$\Omega=\{(x,y):\ x_1\le x\le x_2,\ \eta(x)\le y\le\xi(x)\},\tag{125}$$

where $L_1<\eta(x)<\xi(x)<L_2$ $(x_1\le x\le x_2)$ and $\eta,\xi\in C^m([x_1,x_2])$. Denote the rectangle $D=[x_1,x_2]\times[L_1,L_2]$. Then $D=G_1\cup\Omega\cup G_2$, where $G_1$ and $G_2$ are both trapezoids with a curved side:

$$G_1=\{(x,y):\ x_1\le x\le x_2,\ L_1\le y\le\eta(x)\},\qquad G_2=\{(x,y):\ x_1\le x\le x_2,\ \xi(x)\le y\le L_2\}.\tag{126}$$

Suppose that $f\in C^q(\Omega)$ ($q$ a nonnegative integer). We smoothly extend $f$ from $\Omega$ to the trapezoids $G_1$ and $G_2$, respectively, as in Section 3.2, so that the extension $F$ is smooth on the rectangle $D$; moreover, we give a precise formula. It shows that the index of smoothness of $F$ depends not only on the smoothness of $f$ but also on the smoothness of $\eta$ and $\xi$.

Denote $a_{0,1}(x,y)=(y-L_1)/(\eta(x)-L_1)$ and

$$a_{k,1}(x,y)=\frac{(y-\eta(x))^k}{k!}\left(\frac{y-L_1}{\eta(x)-L_1}\right)^{k+1},\qquad k=1,2,\dots\ \ (x_1\le x\le x_2,\ y\in\mathbb{R}).\tag{127}$$

We define $\{S_1^{(k)}(x,y)\}$ on $G_1$ as follows. Let

$$S_1^{(0)}(x,y)=f(x,\eta(x))\,a_{0,1}(x,y)\qquad ((x,y)\in G_1),\tag{128}$$

and let $k_0$ be the maximal integer satisfying $1+2+\cdots+k_0\le q$. For $k=1,2,\dots,k_0$, we define

$$S_1^{(k)}(x,y)=S_1^{(k-1)}(x,y)+a_{k,1}(x,y)\left(\frac{\partial^k f}{\partial y^k}(x,\eta(x))-\frac{\partial^k S_1^{(k-1)}}{\partial y^k}(x,\eta(x))\right).\tag{129}$$

Then $S_1^{(k)}\in C^{\lambda_k}(G_1)$, where $\lambda_k=\min\{q-1-2-\cdots-k,\ m\}$.

Denote $a_{0,2}(x,y)=(L_2-y)/(L_2-\xi(x))$ and

$$a_{k,2}(x,y)=\frac{(y-\xi(x))^k}{k!}\left(\frac{L_2-y}{L_2-\xi(x)}\right)^{k+1},\qquad k=1,2,\dots\ \ (x_1\le x\le x_2,\ y\in\mathbb{R}).\tag{130}$$

We define $\{S_2^{(k)}(x,y)\}$ on $G_2$ as follows. Let

$$S_2^{(0)}(x,y)=f(x,\xi(x))\,a_{0,2}(x,y)\qquad ((x,y)\in G_2).\tag{131}$$

For $k=1,2,\dots,k_0$, define

$$S_2^{(k)}(x,y)=S_2^{(k-1)}(x,y)+a_{k,2}(x,y)\left(\frac{\partial^k f}{\partial y^k}(x,\xi(x))-\frac{\partial^k S_2^{(k-1)}}{\partial y^k}(x,\xi(x))\right)\qquad ((x,y)\in G_2).\tag{132}$$

Then $S_2^{(k)}\in C^{\lambda_k}(G_2)$, where $\lambda_k$ is as above.

An argument similar to Lemmas 5 and 6 shows that, for $0\le k\le k_0$ and $0\le i+j\le\min\{k,\lambda_k\}$,

$$\frac{\partial^{i+j}S_1^{(k)}}{\partial x^i\,\partial y^j}(x,\eta(x))=\frac{\partial^{i+j}f}{\partial x^i\,\partial y^j}(x,\eta(x)),\qquad \frac{\partial^{i+j}S_2^{(k)}}{\partial x^i\,\partial y^j}(x,\xi(x))=\frac{\partial^{i+j}f}{\partial x^i\,\partial y^j}(x,\xi(x))\qquad (x_1\le x\le x_2).\tag{133}$$

A direct calculation shows that the number

$$\tau(q,m)=\min\left\{\left[\sqrt{2q+\tfrac{9}{4}}-\tfrac{3}{2}\right],\ m\right\}\tag{134}$$

is the maximal value of the integers $k$ satisfying $k\le\lambda_k$, where $[\cdot]$ denotes the integral part. So $\tau(q,m)\le\lambda_{\tau(q,m)}$.

By (133), we get that, for $0\le i+j\le\tau(q,m)$,

$$\frac{\partial^{i+j}S_1^{(\tau(q,m))}}{\partial x^i\,\partial y^j}(x,\eta(x))=\frac{\partial^{i+j}f}{\partial x^i\,\partial y^j}(x,\eta(x)),\qquad \frac{\partial^{i+j}S_2^{(\tau(q,m))}}{\partial x^i\,\partial y^j}(x,\xi(x))=\frac{\partial^{i+j}f}{\partial x^i\,\partial y^j}(x,\xi(x))\qquad (x_1\le x\le x_2).\tag{135}$$

Note that

$$S_1^{(\tau(q,m))}\in C^{\lambda_{\tau(q,m)}}(G_1),\qquad S_2^{(\tau(q,m))}\in C^{\lambda_{\tau(q,m)}}(G_2),\qquad \tau(q,m)\le\lambda_{\tau(q,m)}\le q,\tag{136}$$

and recall the assumption $f\in C^q(\Omega)$. Now we define a function on $D$ by

$$F_{q,m}(x,y)=\begin{cases}f(x,y),&(x,y)\in\Omega,\\[2pt] S_1^{(\tau(q,m))}(x,y),&(x,y)\in G_1,\\[2pt] S_2^{(\tau(q,m))}(x,y),&(x,y)\in G_2.\end{cases}\tag{137}$$

From this and (135), we have $F_{q,m}\in C^{\tau(q,m)}(D)$. This implies the following theorem.

Theorem 12. Let the domain $\Omega$ and the rectangle $D$ be as above. If $f\in C^q(\Omega)$, then the function $F_{q,m}(x,y)$ defined in (137) is a smooth extension of $f$ from $\Omega$ to $D$, and $F_{q,m}\in C^{\tau(q,m)}(D)$, where $\tau(q,m)$ is given in (134).

In particular, for $q=0$ and $m\ge0$ we have $\tau(q,m)=0$, so $F_{0,m}\in C(D)$; for $q=2$ and $m\ge1$ we have $\tau(q,m)=1$, so $F_{2,1}\in C^1(D)$; and for $q=5$ and $m\ge2$ we have $\tau(q,m)=2$, so $F_{5,2}\in C^2(D)$.
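The closed form (134) can be checked numerically against the definition of $\tau(q,m)$ as the largest $k$ with $k\le\lambda_k$. The short script below is ours; it compares the two on a grid of $(q,m)$ and reproduces the three special cases quoted above.

```python
# Numeric cross-check (ours) of the closed form (134) for tau(q, m).
import math

def lam(q, m, k):                # lambda_k = min(q - (1 + 2 + ... + k), m)
    return min(q - k * (k + 1) // 2, m)

def tau_brute(q, m):             # definition: the largest k with k <= lambda_k
    k = 0
    while k + 1 <= lam(q, m, k + 1):
        k += 1
    return k

def tau_closed(q, m):            # the closed form (134)
    return min(int(math.sqrt(2 * q + 2.25) - 1.5), m)

for q in range(60):
    for m in range(12):
        assert tau_brute(q, m) == tau_closed(q, m), (q, m)

print(tau_closed(0, 5), tau_closed(2, 5), tau_closed(5, 5))   # 0 1 2, as in the text
```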
### 5.2. Smooth Extensions of Univariate Functions on Closed Intervals

Let $f\in C^q([x_1,x_2])$ and $[x_1,x_2]\subset(0,1)$. In order to extend $f$ smoothly from $[x_1,x_2]$ to $[0,x_1]$, we construct two polynomials

$$a_0^{(k)}(x)=\frac{(x-x_1)^k}{k!}\left(\frac{x}{x_1}\right)^{k+1},\qquad b_0^{(k)}(x)=\frac{x^k}{k!}\left(\frac{x_1-x}{x_1}\right)^{k+1},\qquad k=0,1,\dots.\tag{138}$$

Define $S_0^{(0)}(x)=f(x_1)\,(x/x_1)$ and, for $k=1,\dots,q$,

$$S_0^{(k)}(x)=S_0^{(k-1)}(x)+a_0^{(k)}(x)\left(f^{(k)}(x_1)-\big(S_0^{(k-1)}\big)^{(k)}(x_1)\right)-b_0^{(k)}(x)\,\big(S_0^{(k-1)}\big)^{(k)}(0)\qquad (0\le x\le x_1).\tag{139}$$

Then $S_0^{(q)}(x)$ is a polynomial of degree $\le 2q+1$. Similar to the proof of Lemma 5, we get

$$\big(S_0^{(q)}\big)^{(k)}(0)=0,\qquad \big(S_0^{(q)}\big)^{(k)}(x_1)=f^{(k)}(x_1),\qquad k=0,1,\dots,q.\tag{140}$$

These identities are also easy to check directly.

To extend $f$ smoothly from $[x_1,x_2]$ to $[x_2,1]$, we construct two polynomials

$$a_1^{(k)}(x)=\frac{(x-x_2)^k}{k!}\left(\frac{1-x}{1-x_2}\right)^{k+1},\qquad b_1^{(k)}(x)=\frac{(x-1)^k}{k!}\left(\frac{x_2-x}{x_2-1}\right)^{k+1},\qquad k=0,1,\dots.\tag{141}$$

Define $S_1^{(0)}(x)=f(x_2)\,\dfrac{1-x}{1-x_2}$ and, for $k=1,\dots,q$,

$$S_1^{(k)}(x)=S_1^{(k-1)}(x)+a_1^{(k)}(x)\left(f^{(k)}(x_2)-\big(S_1^{(k-1)}\big)^{(k)}(x_2)\right)-b_1^{(k)}(x)\,\big(S_1^{(k-1)}\big)^{(k)}(1)\qquad (x_2\le x\le 1).\tag{142}$$

Then $S_1^{(q)}(x)$ is a polynomial of degree $\le 2q+1$. Similar to the proof of Lemma 5, we get

$$\big(S_1^{(q)}\big)^{(k)}(x_2)=f^{(k)}(x_2),\qquad \big(S_1^{(q)}\big)^{(k)}(1)=0\qquad (k=0,1,\dots,q).\tag{143}$$

Therefore, we obtain the smooth extension $F$ of $f$ from $[x_1,x_2]$ to $[0,1]$ by

$$F(x)=\begin{cases}f(x),&x\in[x_1,x_2],\\[2pt] S_0^{(q)}(x),&x\in[0,x_1],\\[2pt] S_1^{(q)}(x),&x\in[x_2,1],\end{cases}\tag{144}$$

where $S_0^{(q)}(x)$ and $S_1^{(q)}(x)$ are the polynomials of degree $\le 2q+1$ defined above; then $F\in C^q([0,1])$ and $F^{(l)}(0)=F^{(l)}(1)=0$ $(l=0,1,\dots,q)$. From this, we get the following.

Theorem 13. Let $f\in C^q([x_1,x_2])$ and $[x_1,x_2]\subset(0,1)$. Then there exists a function $F\in C^q([0,1])$ satisfying $F(x)=f(x)$ $(x_1\le x\le x_2)$ and $F^{(l)}(0)=F^{(l)}(1)=0$ $(l=0,1,\dots,q)$.

Let $f\in C^q([x_1,x_2])$ with $[x_1,x_2]\subset(0,1)$, and let $F$ be the smooth extension of $f$ from $[x_1,x_2]$ to $[0,1]$ stated in Theorem 13. Let $F_p$ be the 1-periodic extension satisfying $F_p(x+n)=F(x)$ $(0\le x\le1,\ n\in\mathbb{Z})$. Then $F_p\in C^q(\mathbb{R})$ and $F_p(x)=f(x)$ $(x\in[x_1,x_2])$. We expand $F_p$ into its Fourier series, which converges fast; from this we obtain a trigonometric approximation of $f\in C^q([x_1,x_2])$. We may also form the odd or even extension of $F$ from $[0,1]$ to $[-1,1]$ and then extend periodically, obtaining the odd periodic extension $F_{po}\in C^q(\mathbb{R})$ or the even periodic extension $F_{pe}\in C^q(\mathbb{R})$. Expanding $F_{po}$ or $F_{pe}$ into a sine series or a cosine series, respectively, we obtain sine-polynomial and cosine-polynomial approximations of $f$ on $[x_1,x_2]$. Finally, padding $F$ with zero outside $[0,1]$ yields a function $F_c\in C^q(\mathbb{R})$, which we expand into a wavelet series; this series converges fast and, by the moment theorem, many of its wavelet coefficients vanish. From this, we obtain a wavelet approximation of $f\in C^q([x_1,x_2])$.
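The one-dimensional construction is compact enough to run end to end. The sympy sketch below is ours, with a toy $f$ and a hypothetical endpoint $x_1$: it builds $S_0^{(q)}$ via (138)-(139) and checks the boundary conditions (140) together with the degree bound $2q+1$.

```python
# A sympy sketch (ours) of the univariate extension (138)-(139) and check of (140).
import sympy as sp

x = sp.symbols('x')
x1 = sp.Rational(1, 4)                 # hypothetical left endpoint of [x1, x2]
f = 1 + x + x**3                       # toy f; any C^q function would do
q = 3

def a0(k):                             # a_0^{(k)} of (138)
    return (x - x1)**k / sp.factorial(k) * (x / x1)**(k + 1)

def b0(k):                             # b_0^{(k)} of (138)
    return x**k / sp.factorial(k) * ((x1 - x) / x1)**(k + 1)

S = f.subs(x, x1) * (x / x1)           # S_0^{(0)}
for k in range(1, q + 1):              # the recursion (139)
    S = (S + a0(k) * (sp.diff(f, x, k).subs(x, x1)
                      - sp.diff(S, x, k).subs(x, x1))
           - b0(k) * sp.diff(S, x, k).subs(x, 0))

S = sp.expand(S)
assert sp.degree(S, x) <= 2 * q + 1    # S_0^{(q)} is a polynomial of degree <= 2q+1
for k in range(q + 1):                 # the boundary conditions (140)
    assert sp.diff(S, x, k).subs(x, x1) == sp.diff(f, x, k).subs(x, x1)
    assert sp.diff(S, x, k).subs(x, 0) == 0
print("S0^{(q)} satisfies (140) for q =", q)
```

The second piece $S_1^{(q)}$ of (141)-(143) follows by the same loop with $a_1^{(k)},b_1^{(k)}$ and the endpoints $x_2$ and $1$.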
--- *Source: 102062-2014-02-10.xml*
# The Distribution and Origin of Carbonate Cements in Deep-Buried Sandstones in the Central Junggar Basin, Northwest China

**Authors:** Wang Furong; He Sheng; Hou Yuguang; Dong Tian; He Zhiliang
**Journal:** Geofluids (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1020648

---

## Abstract

Extremely high porosities and permeabilities are commonly discovered in the sandstones of the Xishanyao Formation in the central Junggar Basin at burial depths greater than 5500 m, from which hydrocarbons are currently being produced. A high content of carbonate cements (up to 20%) is also observed in a similar depth range. Our study aimed to improve our understanding of the origin of carbonate cements in the Xishanyao Formation, in order to provide insights into the existence of high-porosity sandstones at greater depths. Integrated analyses, including petrographic, isotopic, fluid-inclusion, and core analyses, were applied to investigate the distribution and origin of carbonate cements and the influence of high fluid pressure on reservoir quality. Textural evidence demonstrates that there are two generations of carbonate cements, precipitated at temperatures of about 90°C and 120°C, respectively. The carbonate cements with low δ13C (PDB), ranging from −19.07 to −8.95‰, occurred dominantly near the overpressure surface and accumulated especially at approximately 100 m below that surface. Our interpretation is that the high content of carbonate cements is significantly influenced by the dissolution and migration of early carbonate cements under overpressure. Dissolution of plagioclase presumably resulted in the development of internal pores, with porosities of as much as 10% at 6500 m depth.

---

## Body

## 1. Introduction

Carbonate cements in sandstones have variable mineralogy, texture, and chemical composition and exhibit significant effects on reservoir properties because they are commonly concentrated rather than uniformly distributed. It is challenging to quantify the influence of concretionary carbonate cements on fluid flow in reservoirs because it is difficult to determine the distribution of diagenetic heterogeneity from subsurface data. If the carbonate cements formed during the early diagenetic stage, they could provide a framework that resists burial compaction and retains primary porosity until decarbonatization at greater burial depth [1–3]. Microlitic carbonate cements formed at the early diagenetic stage can carry part of the overburden load, which slows compaction, and can be dissolved to form secondary pores under favorable geologic conditions. Extensive studies of carbonate cementation-dissolution reactions have been performed over the past 40 years, from the viewpoints of fluid-rock, organic-inorganic, and sandstone-mudstone interactions, by traditional geochemical methods such as stable isotope and major and trace element analysis [4–17] (Tan Jianxiong et al., 1999; dos Anjos et al., 2000; Hendry et al., 2000; Taylor et al., 2000; Fayek et al., 2001; Geoffrey Thyne, 2001; Ni Shijun et al., 2002; Wang Zhizhang et al., 2003; Xie Xinong et al., 2006; Wilkinson et al., 2006; Machent et al., 2007; Cao Jian et al., 2007).

Favorable sandstone reservoirs are developed at depths of 4500~6000 m in the central Junggar Basin. The average porosity is approximately 10% and the average permeability is 1 × 10−3 μm2.
Although many studies have been carried out in this area, including petrographic analysis, formation-water geochemistry, fluid-inclusion analysis, and overpressure characterization [13, 18–23], there is still much debate on the origins and types of porosity. Some studies assumed that primary residual pores are the dominant pore type, while others assumed that secondary pores resulting from the dissolution of carbonate cements contribute more to the formation of favorable reservoirs [24–26]. The negative correlation between porosity and carbonate cement content indicates that the formation of secondary pores and carbonate cement dissolution are probably genetically related. Unexpectedly, high secondary porosity and a high content of carbonate cements coincide at the same depth in these sandstones. In this study, we attempt to investigate the origin of carbonate cements in deep-buried sandstones in the central Junggar Basin by applying a multidisciplinary approach, including petrographic, microthermometric, fluid-inclusion, and geochemical analyses. The main objectives of the study are as follows: (1) to quantify the chemical composition, size, and spatial distribution of carbonate cements and (2) to provide further insights into the effect of carbonate cements on the petrophysical properties of deep-buried reservoirs.

## 2. Geological Setting

The Junggar Basin is one of the most prolific oil basins in China (Jiang and Fowler, 1986), covering an area of 136,000 km2. It is an intramontane basin bounded by multiple orogenic belts, including the Qinggelidi Mountains, the Kelameili Mountains, the Yilinheibiergen Mountains, the Bogda Mountains, and the Zhayier Mountains (Figure 1). The Junggar Basin is Late Palaeozoic-Cenozoic in age and developed on the Junggar terrane, which consists of both Precambrian crystalline basement formed about 800 Ma ago and slightly metamorphosed Palaeozoic basement [28–33]. The study area, operated by SINOPEC, is located in the central depression of the hinterland of the basin and mainly consists of the west segment of the Changji Sag in the south (Figure 1).

Figure 1 Map showing the tectonic units and studied well locations in the central Junggar Basin.

The central depression is one of the important areas for petroleum exploration, and the characteristics of source rocks and reservoirs in the central Junggar Basin have been extensively studied. There are two sets of source rocks: lacustrine-facies Permian shales and coal-bearing swamp-facies Jurassic mudstones. The deeply buried Jurassic sandstones, dominated by fluvial-delta facies, are the main reservoirs and generally have low porosity and permeability; however, relatively high porosity and permeability occur at some depths. At the same time, overpressure is extensively developed over much of the central Junggar Basin, and most of the hydrocarbons generated from the Permian and Jurassic source rocks accumulated in the overpressured system (Wu Hengzhi et al., 2006; Li Pingping et al., 2006; Yang Zhi et al., 2007) (Figure 2).

Figure 2 Stratigraphic column and stratigraphic correlation in the central Junggar Basin (2002).

Figure 3 shows the modelled burial history of well Y1 (Yongjin area) in Block 3 (see Figure 1 for its location). An erosion event generated an unconformity between the Late Jurassic and the Early Cretaceous.
Heat flow is therefore the only variable that needs to be adjusted to match the present-day vitrinite reflectance data. Our modelling results indicate that the paleotemperature gradient decreased gradually from the Permian to the present, consistent with previous studies [34–37]. Drilling data demonstrate that vitrinite reflectance (Ro) ranges within 0.65~0.82% in the Middle Jurassic Xishanyao Formation. The BasinMod simulation software was used to rebuild the geothermal history by matching measured and predicted vitrinite reflectance data. At present, the temperature of the Jurassic strata is approximately 120–150°C, giving a gradient of 2.2°C/100 m. Combined with the homogenization temperature results, the main oil pools in the third central block formed from the end of the Early Cretaceous to the early Paleogene (from 75 Ma to 60 Ma) [38]. In addition, high levels of 25-norhopane were detected in crude oil from well Y1, indicating that an early stage of hydrocarbon charging occurred in the Late Jurassic [38].

Figure 3 Generalized burial and thermal histories of well Y1 in Block 3 in the central Junggar Basin. The location of the example well is marked in Figure 1.

## 3. Samples and Methods

A total of 125 core samples were obtained from six wells (Y1, Y2, Y3, Y6, Y7, and Y8) over a depth range of 5500–6200 m in the Jurassic Xishanyao Formation (Figure 1). The sampling strategy was based on lithology: generally, samples were spaced at one-meter intervals where the core was homogeneous. A ternary plot indicates that the Xishanyao Formation sandstones are dominantly litharenite and feldspathic litharenite, with an average framework composition of Q28F13L59 (Figure 4); a short normalization sketch follows at the end of this section.

Figure 4 Ternary plot showing sandstone compositions according to Folk's (1980) classification scheme.

Carbonate cements were investigated using epoxy-impregnated thin sections, cathodoluminescence, and SEM/EDX images. Conventional core sample and epoxy-impregnated thin section analyses were conducted on MIAS 2000 microscopes at the Experimental Research Center of the Wuxi Research Institute of Petroleum Geology of SINOPEC and the Research Center of the Shengli Oilfield Institute of Geology of SINOPEC. Analyses were performed at a room temperature of 25°C and a relative humidity of 60%.

Cathodoluminescence analysis was conducted on a CL8200 MK5 cathodoluminescence microscope at the Experimental Research Center of the Wuxi Research Institute of Petroleum Geology of SINOPEC, at a room temperature of 27°C and a relative humidity of 40%.

SEM/EDX analysis was performed on a SEM-XL30 with an EDX-INCA scanner at the Experimental Research Center of the Wuxi Research Institute of Petroleum Geology of SINOPEC, at a room temperature of 22°C and a relative humidity of 60%.

Carbon and oxygen isotope analysis was carried out on a MAT253 gas isotope mass spectrometer made by the German company Finnigan, at a sample tray temperature of 72°C, a chromatography temperature of 40°C, and a helium pressure of 100 kPa.

Electron microprobe analysis was conducted on a JXA-8100 electron probe microanalyser at the State Key Laboratory of Geological Processes and Mineral Resources, at a room temperature of 23°C and a relative humidity of 65%.
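As a small aside, the Q28F13L59 figure quoted above is simply a normalization of framework point counts to $Q+F+L=100$. The snippet below is ours; the counts are hypothetical, chosen only to reproduce the stated average composition.

```python
# Toy illustration (ours) of normalizing framework point counts to Folk's Q-F-L.
def qfl_percent(q: int, f: int, l: int) -> tuple[float, float, float]:
    """Return (Q, F, L) as percentages of the framework-grain total."""
    total = q + f + l
    return (100 * q / total, 100 * f / total, 100 * l / total)

# Hypothetical counts that reproduce the stated average composition Q28F13L59.
print(qfl_percent(84, 39, 177))   # -> (28.0, 13.0, 59.0)
```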
## 4. Results

### 4.1. Thin Section Analysis

Thin section images demonstrate that the dominant cements are carbonates, consisting of abundant ferroan calcite and ankerite; in addition, a small amount of dolomite developed in some wells. Thin section images show that both ferroan and nonferroan calcites commonly occur as extensive, single-crystal poikilotopes that fill the intergranular pores (Figure 5(a)). In contrast, ankerite cements are regular to rhombic, replacing calcite and showing undulatory extinction (Figures 5(b) and 5(c)). Dolomite cements were also observed replacing calcite (Figure 5(d)). Some calcite crystals fill intergranular pores and postdate quartz overgrowths (Figures 5(e) and 5(f)). Some thin section images show that carbonate cements replaced detrital quartz, feldspar, or rock fragments. The abundance of carbonate can be up to 20% and is generally in the range of 1–10% in the six studied wells (Figure 6). At the same time, particle contact modes differ because of the uneven distribution of the cements: some particles are in point contact where carbonate cements develop, while others are in straight or concavo-convex contact. Moreover, carbonates of different mineral composition are distributed in different wells: ankerite cements generally occur in wells Y2, Y6, and Y8 with an average content of 4.5%, and calcite cements generally occur in wells Y1, Y3, and Y7 with an average content of 4.1%.

Figure 5 Photomicrographs showing petrographic features of the Xishanyao Formation sandstones, illustrating the most common types of mineralogical variation in the sandstone cements: (a) secondary pores mainly filled by crystalline calcite, with major pores rimmed by asphalt; stained red-epoxy-impregnated thin section of a conventional core sample, plane-polarized light, magnification 40 (well Y6 at 6048.59 m); (b) single-crystal rhombic ankerite; stained blue-epoxy-impregnated thin section, plane-polarized light, magnification 200 (well Y6 at 6028.52 m); (c) a few ankerites replacing calcite; red-epoxy-impregnated thin section, cross-polarized light, magnification 100 (well Y2 at 5966.02 m); (d) a few dolomites replacing calcite; stained red-epoxy-impregnated thin section, cross-polarized light, magnification 200 (well Y2 at 5967.02 m); (e) crystalline calcite filling pores and postdating quartz overgrowths; stained blue-epoxy-impregnated thin section, cross-polarized light, magnification 40 (well Y1 at 5876 m); (f) dolomite development; stained blue-epoxy-impregnated thin section, plane-polarized light, magnification 100 (well Y2 at 6000.25 m); (g) calcite cements showing bright yellow luminescence in a cathodoluminescence photomicrograph, magnification 40 (well Y7 at 6095 m); (h) calcite cements showing saffron luminescence and dolomite cements appearing disphotic in a cathodoluminescence photomicrograph, magnification 40 (well Y8 at 6099.46 m). (a) (b) (c) (d) (e) (f) (g) (h)

Figure 6 Regional abundance of carbonate cements versus depth in the Jurassic Formation. Data from 125 core samples show that carbonate is abundant below 5800 m and increases with depth down to about 6100 m burial depth.
The calcite content changes little with depth. Ankerite, however, shows a sharp boundary at 5850 m burial depth, below which its content increases with depth.

### 4.2. Cathodoluminescence

In cathodoluminescence (CL) images, the relative contents of manganese (Mn) and iron (Fe) in carbonate cements can provide insights into the redox conditions under which the pore fluid formed. Mn in calcite is an activator in CL, while Fe acts as a quencher: carbonate cements with Mn > Fe show bright luminescence, whereas those with Fe > Mn exhibit dull luminescence. In Block 3, some of the carbonate cements show bright luminescence (Figure 5(g)) and others show duller shades (Figure 5(h)). The cathodoluminescence of these calcite cements can thus be interpreted in terms of their origin in the sandstones.

### 4.3. EDX Analysis

The trace element data of 14 core samples, determined by EDX analysis, are presented in Table 1. The carbonate cements are generally rich in Fe with low contents of Mn and Mg, and the concentration of Ca increases with increasing burial depth.

Table 1 Composition from EDX analysis of carbonate cements in sandstones of Block 3 (contents in %; columns Na2O, MgO, Al2O3, SiO2, CaO, MnO, Fe2O3, K2O; not all oxides were detected in every sample). Y1 6117.38: 2.22 5.54 82.59 4.31 5.35; Y2 5970.53: 17.48 5.55 47.38 4.43 25.16; Y2 6001.23: 19.00 6.79 63.28 6.41 4.52; Y3 5614.22: 1.37 13.92 3.33 8.70 43.33 3.47 24.45; Y3 5868.00: 1.59 3.04 4.78 79.22 5.95 5.04 0.37; Y6 6028.60: 21.22 52.30 3.63 22.85; Y6 6084.00: 1.36 5.26 91.04 2.34; Y7 6095.00: 1.27 2.45 92.39 1.76 2.13; Y7 6101.55: 3.84 4.56 15.13 45.61 7.50 22.51 0.84; Y8 6088.55: 21.43 2.44 1.99 51.05 23.08; Y8 6096.20: 0.95 9.21 85.44 4.02 0.38.

### 4.4. Stable Isotopes

δ13C and δ18O of the carbonate cements, together with the burial and thermal histories, can be used to reveal the origin of the cements. Stable isotope data for the Yongjin area are presented in Table 2. Carbon isotope values range from −19.07 to −5.87‰ (PDB), with an average of −8.95‰ (PDB). Oxygen isotope values range from −21.08 to −13.96‰ (PDB), with an average of −17.5‰ (PDB). The δ13C and δ18O values increase with increasing burial depth, and there is a positive correlation between the δ13C and δ18O values.

Table 2 Carbon and oxygen isotope values in Xishanyao Formation sandstones of Block 3.

| Well | Depth (m) | δ13C PDB (‰) | δ18O PDB (‰) | Isotopic temperature (°C) |
|------|-----------|--------------|--------------|---------------------------|
| Y1 | 5880.00 | -8.21 | -19.05 | 128.92 |
| Y7 | 6095.00 | -7.55 | -18.53 | 124.17 |
| Y7 | 6095.50 | -7.37 | -16.44 | 106.21 |
| Y7 | 6096.80 | -8.18 | -19.95 | 137.06 |
| Y7 | 6098.60 | -7.52 | -17.87 | 118.36 |
| Y7 | 6099.59 | -7.60 | -17.53 | 115.42 |
| Y7 | 6101.80 | -7.49 | -16.15 | 103.82 |
| Y7 | 6103.60 | -7.65 | -18.17 | 120.99 |
| Y8 | 6088.50 | -11.63 | -16.68 | 108.21 |
| Y8 | 6088.80 | -12.38 | -16.88 | 109.89 |
| Y8 | 6092.00 | -10.82 | -17.65 | 116.46 |
| Y8 | 6093.30 | -11.08 | -16.04 | 102.92 |
| Y8 | 6093.50 | -10.72 | -17.86 | 118.28 |
| Y8 | 6094.30 | -10.80 | -16.78 | 109.05 |
| Y8 | 6099.50 | -7.34 | -17.40 | 114.31 |
| Y3 | 5614.90 | -7.05 | -17.36 | 113.97 |
| Y3 | 5620.80 | -7.10 | -18.17 | 120.99 |
| Y3 | 5621.10 | -7.61 | -18.43 | 123.28 |
| Y3 | 5865.60 | -5.87 | -21.08 | 147.73 |
| Y3 | 5866.35 | -19.07 | -14.68 | 92.04 |
| Y3 | 5867.90 | -5.92 | -21.28 | 149.65 |
| Y6 | 5977.00 | -8.31 | -13.96 | 86.5 |
| Y6 | 6034.90 | -6.52 | -17.72 | 117.06 |
| Y6 | 6044.80 | -6.95 | -19.03 | 128.64 |
| Y6 | 6076.50 | -7.08 | -16.29 | 104.97 |
| Y6 | 6084.60 | -7.04 | -19.20 | 130.18 |
| Y6 | 6098.50 | -9.70 | -18.35 | 122.57 |
| Y2 | 5961.50 | -6.36 | -15.06 | 95.03 |
| Y2 | 6004.30 | -9.97 | -17.87 | 118.36 |

Isotopic temperature (°C) $=16.45-4.31(\delta_c-\delta_w)+0.14(\delta_c-\delta_w)^2$, according to Epstein et al. [27], where $\delta_c$ and $\delta_w$ denote the carbonate and water values, respectively.
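For readers who want to recompute Table 2, the temperature equation quoted above is straightforward to code. One caveat: this section does not state the water value $\delta_w$; back-substituting the tabulated pairs suggests the authors used roughly $\delta_w \approx -2.2$‰, which the sketch below adopts as an explicit assumption rather than a documented parameter.

```python
# Helper (ours, not the authors' code) for the Epstein et al. [27] equation under Table 2.

def isotopic_temperature(delta_c: float, delta_w: float = -2.2) -> float:
    """Precipitation temperature (degC) from carbonate delta18O (PDB, permil).

    delta_w = -2.2 permil is our inference from back-calculating Table 2,
    not a value stated in this section of the paper.
    """
    d = delta_c - delta_w
    return 16.45 - 4.31 * d + 0.14 * d ** 2

# Spot check against Table 2: well Y6 at 5977.00 m, delta18O = -13.96 -> 86.5 degC.
print(round(isotopic_temperature(-13.96), 2))   # 86.5, matching the table
```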
## 5. Discussion

### 5.1. Origins of Carbonate Cements and Source of Fluid

Petrographic observations revealed at least two generations of carbonate cements. The first-generation carbonate occurred as blocky crystalline calcite, filling intergranular pores that had already been considerably reduced by mechanical and chemical compaction.
These cements coated quartz grains, preventing authigenic quartz overgrowths (Alaa et al., 2007). They dominate in the deeply buried sandstones and are iron-bearing, belonging to the late diagenetic cements. The second-generation carbonate cement was ankerite, which replaced calcite (Figure 5(c)).

δ13C and δ18O in the carbonate cements can be used to unravel the origins of the cements. The main mechanisms generating large amounts of δ13C-depleted CO2 during burial are discussed by Irwin et al. [39]: diagenetic carbonate in area I, carbonate related to biogas in area II, and carbonate related to organic acids in area III (Jansa et al., 1990; Wang Darui, 2000; Wilkinson et al., 2006) (Figure 7). For the carbon and oxygen stable isotopic analysis, 29 samples from the Xishanyao Formation (J2x) and 11 samples from the Toutunhe Formation (J2t) were collected between 5500 m and 6200 m. The results, plotted in Figure 7 as δ18O (PDB) versus δ13C (PDB), show that more negative δ13C values are generally accompanied by more negative δ18O values, indicating that the carbonate cements were significantly influenced by organic matter alteration during burial and fall in area III (Figure 7). The input of 12C from the thermal alteration of organic matter with increasing burial and temperature is indicated by the strong correlation between δ13C and δ18O [39–41].

Figure 7 Stable isotopic data of carbon and oxygen for carbonate cements of Block 3.

The limited fluid-inclusion data for the carbonate cements and quartz overgrowths indicate precipitation at about 100°C and at 80–130°C, respectively (Table 3). In Figures 5(e) and 5(f), carbonate crystals postdate quartz overgrowths, revealing that at least part of the carbonate cement was deposited at about 120°C (the inclusion homogenization temperatures of the overgrowths concentrate around 120°C). From Figure 3, the carbonate formation mainly resulted from the later stage of hydrocarbon charging.

Table 3 Fluid-inclusion data from quartz overgrowths and carbonate cements (well; depth, m; number of inclusions; carbonate homogenization temperature, °C; quartz overgrowth homogenization temperature, °C). Y1 5828.2, 1: 100; Y1 5876.38, 4: 115, 120, 125, 127; Y1 6114.7, 1: 98; Y1 6116.87, 2: 92, 98; Y2 5953.66, 6: 96 (carbonate); 85, 88, 116, 120, 132 (quartz overgrowth); Y2 5970.53, 3: 102, 117, 127; Y2 6002.15, 2: 98, 102; Y6 6027.44, 2: 96, 102; Y6 6028.6, 2: 80, 134.

### 5.2. Effect on Reservoir Properties

Reservoir physical property data indicate that porosity near the overpressure surface is relatively high, concentrating mainly in the depth range from +50 m to −250 m relative to the top overpressure surface, and the carbonate cements are concentrated in this high-porosity zone (Figure 8). Thin section images and microscope analysis were used to investigate why the high porosity and the carbonate cements coincide in depth: when formation water flowed and broke through the overpressure surface, the unstable temperature and pressure caused the precipitation and concentration of calcite near the overpressure surface [42, 43] (Yang Zhi, 2011).

Figure 8 Relations between carbonate content, porosity, and distance to the top overpressure surface of Block 3.

Observations from thin section images indicate that secondary intergranular pores are the dominant pore type (Figure 9). By comparing the characteristics of the minerals under plane-polarized and cross-polarized light, remnants of calcite can be found after dissolution in the surrounding pores.
Data from electron microprobe analysis indicate that the secondary intergranular pores resulted from the dissolution of intergranular carbonate cements and feldspar (Figure 10, Table 4) and were mainly generated by calcite dissolution. Megapores were mostly formed by the dissolution of albite and, to a lesser extent, of K-feldspar, with kaolinite deposited in situ, which shows that the pore network is well connected. This evidence demonstrates that the large-scale dissolution of intergranular carbonate cements can generate more intergranular pores and improve pore connectivity.

Table 4 Electron microprobe analysis of residual mineral contents (in %) in secondary pores of wells in the studied area.

| Well | Position | SiO2 | TiO2 | Al2O3 | FeO | MnO | MgO | CaO | Na2O | K2O | Cr2O3 | Total |
|------|----------|------|------|-------|-----|-----|-----|-----|------|-----|-------|-------|
| SH1 | a | 63.55 | 0.00 | 17.89 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 16.10 | 0.00 | 98.54 |
| SH1 | b | 0.16 | 0.00 | 0.03 | 0.03 | 0.00 | 0.40 | 53.50 | 0.00 | 0.00 | 0.00 | 54.12 |
| ZH1 | c | 68.13 | 0.00 | 19.57 | 0.05 | 0.00 | 0.00 | 0.00 | 12.03 | 0.02 | 0.00 | 99.80 |
| ZH1 | d | 64.13 | 0.03 | 18.02 | 0.05 | 0.00 | 0.00 | 0.00 | 0.31 | 16.65 | 0.00 | 99.19 |
| SH2 | e | 51.02 | 0.00 | 18.75 | 0.05 | 0.00 | 0.00 | 0.31 | 8.17 | 0.00 | 0.00 | 78.30 |
| SH2 | f | 0.00 | 0.00 | 0.00 | 0.59 | 0.62 | 0.44 | 60.92 | 0.00 | 0.00 | 0.00 | 62.57 |
| SH2 | g | 0.08 | 0.00 | 0.00 | 58.11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 58.19 |
| Y1 | h | 0.81 | 0.00 | 0.13 | 1.49 | 1.34 | 0.17 | 52.45 | 0.00 | 0.00 | 0.02 | 56.41 |
| Y1 | i | 64.73 | 0.00 | 18.32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.30 | 16.69 | 0.00 | 100.04 |
| Y1 | j | 70.41 | 0.00 | 20.46 | 0.10 | 0.00 | 0.00 | 0.16 | 10.48 | 0.00 | 0.00 | 101.61 |

Figure 9 Photomicrographs of petrographic features of the Xishanyao Formation sandstones: full views of pores and developing intergranular secondary pores in blue-epoxy-impregnated thin sections of conventional core samples. Remnants of calcite and feldspar can be found after dissolution. (a), (b) Well Y2 at 5961.5 m, magnification 40; (c), (d) well Y6 at 5978 m, magnification 40; (e), (f) well Y1 at 5876.3 m, magnification 40; (g), (h) well Y7 at 6102.95 m, magnification 40. (a) (b) (c) (d) (e) (f) (g) (h)

Figure 10 Photomicrographs of pore types by electron microprobe analysis of sandstones in the central Junggar Basin. (a) and (a′) Intergranular secondary pore development; measuring point A shows that the remnant is K-feldspar and measuring point B is calcite. The photomicrograph is a red-epoxy-impregnated thin section of a conventional core sample, magnification 10 (Sha 1 well at 3656.6 m, Jurassic); photo (a) is under plane-polarized light and photo (a′) under cross-polarized light. (b) and (b′) Intergranular secondary pore development; measuring point C shows that the remnant is albite and measuring point D is K-feldspar. The photomicrograph is a red-epoxy-impregnated thin section, magnification 10 (Zhuang 1 well at 4375.23 m, Jurassic); photo (b) is under plane-polarized light and photo (b′) under cross-polarized light. (c) and (c′) Intergranular secondary pore development; measuring point E shows that the remnant is albite, measuring point F is calcite, and measuring point G is siderite. The photomicrograph is a red-epoxy-impregnated thin section, magnification 10 (Sha 2 well at 3439 m, Jurassic); photo (c) is under plane-polarized light and photo (c′) under cross-polarized light. (d) and (d′) Intergranular secondary pore development; measuring point H shows that the remnant is calcite, measuring point I is K-feldspar, and measuring point J is albite.
The photomicrograph is a red-epoxy-impregnated thin section of a conventional core sample, magnification 10 (well Y1 at 5877 m, Jurassic); photo (d) is under plane-polarized light and photo (d′) under cross-polarized light.

Although the sandstones experienced extensive mechanical and chemical compaction, the point or straight grain contacts and the pervasive development of intergranular pores suggest that the formation of carbonate cements predated intensive physical compaction. Firstly, the early carbonate cements occupied the intergranular pore space, which increased the mechanical strength of the rock and its resistance to compaction; the sandstone reservoir can therefore retain high primary porosity even under deep burial. Secondly, the early carbonate cements provided material for dissolution, which can form a large amount of secondary intergranular porosity, and feldspar dissolution further improved the porosity. In these two ways, high porosity and a high content of carbonate cements developed at the same burial depth [44].

Carbonate cements commonly occur as irregularly distributed concretions even at a single depth, so it is challenging to predict subsurface porosity and permeability from widely spaced wells. Thin sections can provide a continuous image of the heterogeneity produced by concretionary calcite cements. Our thin section images indicate a negative correlation between carbonate cement content and porosity development. This suggests that the intergranular pores developing in deeply buried sandstones result from the dissolution and migration of early carbonate cements; accordingly, where early carbonate cements developed widely, secondary porosity is higher in the later diagenetic phase.

The late generation of poikilotopic calcite is interpreted as a result of plagioclase and early calcite dissolution, which releases cations into the pore water and may also be responsible for the precipitation of clay minerals and silica cements [45]. From Figure 3, Ro is in the range 0.7–1.0%. Thin section observations show that calcite formation is strongly associated with the alteration of plagioclase. Among the common rock-forming minerals, plagioclase (especially calcium-rich plagioclase) dissolves more rapidly than the other silicate phases [46], indicating that porosity generation may primarily result from plagioclase dissolution at deep burial. With increasing depth, temperature, and thermal maturity, calcite dissolution gradually weakens [5]. According to the thin sections, primary porosity developed above a depth of 3500 m, secondary porosity mainly developed between 3500 and 6200 m, and fractures begin to develop below 6200 m [47, 48]. Therefore, during deep-burial water-rock interaction, a second, lower secondary-porosity zone resulting from plagioclase dissolution would develop at depths greater than 6500 m (Figure 11).

Figure 11 Diagenetic reactions, porosity, and hydrocarbon evolution of the different diagenetic phases in Block 3.
## 6. Conclusions

In Block 3 of the central Junggar Basin, carbonate cements are the predominant cements. Conventional core samples, epoxy-impregnated thin section analysis, and cathodoluminescence analysis indicate that the carbonate cements grew in two stages and formed mostly during the late diagenetic stage, generating ferroan calcite and ankerite cements.

Data from the six wells demonstrate that the carbonate cement content of most samples is less than 20% and generally in the range of 1–10%.
## 6. Conclusions

In Block 3 of the central Junggar Basin, carbonate cements are the predominant cements. Conventional core samples, epoxy-impregnated thin section analysis, and cathodoluminescence analysis indicate that the carbonate cements grew in two stages and mostly formed at the late diagenetic stage, generating ferroan calcite and ankerite cements.

Data from the six wells demonstrate that the carbonate cement content of most samples is less than 20% and generally in the range of 1–10%. The concentration of carbonate cements increases with increasing burial depth, and the cements mainly concentrate in the depth range +50 m to −200 m relative to the top overpressure surface.

Stable isotopic data show that δ13C (PDB) ranges from −19.07 to −5.87‰ and δ18O (PDB) ranges from −21.08 to −13.96‰. This suggests that the carbonate cements in these sandstones were significantly influenced by organic matter during burial history.

Electron microprobe analysis documents that the secondary intergranular pores primarily resulted from the dissolution of intergranular carbonate cements and feldspar. Chemical compaction and large-scale cementation restricted the capacity of organic acids to dissolve the late carbonate cements.

Textural data suggest that the late poikilotopic calcite near the top overpressure surface is rich in Fe and that high porosity developed in the same depth interval, which can be interpreted as a result of plagioclase dissolution. Therefore, another secondary porosity zone, resulting from plagioclase dissolution, is expected to develop at depths greater than 6500 m. However, because of chemical compaction and quartz overgrowth, its porosity will be smaller than that developed at a depth of 5500 m.

--- *Source: 1020648-2017-09-14.xml*
# Bidens pilosa Extract Administered after Symptom Onset Attenuates Glial Activation, Improves Motor Performance, and Prolongs Survival in a Mouse Model of Amyotrophic Lateral Sclerosis

**Authors:** Yasuhiro Kosuge; Erina Kaneko; Hiroshi Nango; Hiroko Miyagishi; Kumiko Ishige; Yoshihisa Ito

**Journal:** Oxidative Medicine and Cellular Longevity (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1020673

---

## Abstract

Amyotrophic lateral sclerosis (ALS) is a late-onset neurodegenerative disorder characterized by progressive paralysis resulting from the death of upper and lower motor neurons. There is currently no effective pharmacological treatment for ALS, and the two approved drugs, riluzole and edaravone, have limited effects on the symptoms and only slightly prolong the life of patients. Therefore, the development of effective therapeutic strategies is of paramount importance. In this study, we investigated whether Miyako Island Bidens pilosa (MBP) can alleviate the neurological deterioration observed in a superoxide dismutase-1 G93A mutant transgenic mouse (G93A mouse) model of ALS. We orally administered 2 g/kg/day of MBP to G93A mice from the onset of symptoms of neurodegeneration (15 weeks old) until death. Treatment with MBP markedly prolonged the life of ALS model mice, by approximately 20 days compared with vehicle-treated ALS model mice, and significantly improved motor performance. MBP treatment prevented the reduction in expression of the neuronal marker protein SMI32 and attenuated astrocyte (detected by GFAP) and microglia (detected by Iba-1) activation in the spinal cord of G93A mice at the end stage of the disease (18 weeks old). Our results indicate that MBP administered after the onset of ALS symptoms suppressed the inflammatory activation of microglia and astrocytes in the spinal cord of the G93A ALS model mice, thus improving their quality of life. MBP may be a potential therapeutic agent for ALS.

---

## Body

## 1. Introduction

Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a fatal neurodegenerative disease characterized by progressive paralysis due to motor neuron degeneration. Most ALS cases are sporadic, and the cause of sporadic ALS remains largely unknown. Familial ALS (fALS) accounts for the remaining 5 to 10 percent of all ALS cases, and only 20% of fALS cases are linked to a mutation in the gene encoding copper-zinc superoxide dismutase (SOD1) [1]. Several transgenic mouse models that carry the mutations found in fALS patients have been generated. Among these, the most widely used model is a transgenic mouse that overexpresses a human SOD1 transgene with a pathogenic glycine-to-alanine substitution at the 93rd codon (SOD1G93A). Overexpression of the mutant SOD1G93A gene in transgenic mice (G93A mice) results in a progressive paralytic disease whose clinical features resemble those of ALS in humans [2]. Recently, many new ALS-causing gene defects have been identified, including mutations in the genes encoding fused in sarcoma (FUS), TAR DNA-binding protein (TARDBP), optineurin (OPTN), and C9ORF72 [3]. However, G93A mice have been regarded as the standard model for the evaluation of therapeutic effects during preclinical studies.

Although the pathogenesis of ALS is extremely intricate and remains largely unknown, inflammation and oxidative stress play a pivotal role in ALS pathogenesis and contribute to the vicious cycle of neurodegeneration in the lumbar spinal cord.
There is growing evidence that activated microglia and reactive astrocytes increase in the spinal cord of ALS patients [4] and model mice [5]. Activation of glial cells in ALS is marked by the elevated production of neurotoxic mediators such as reactive oxygen species (ROS), proinflammatory cytokines, and inflammatory mediators [6]. Moreover, astrocytes and microglia are associated with non-cell-autonomous motor neuron damage and cell death in ALS [7]. Therefore, to identify effective neuroprotective therapeutic agents for the treatment of ALS, not only motor neurons but also the neighbouring non-motor-neuron cells, including microglia, astrocytes, and blood capillaries, require analysis. Several therapeutic agents have been found to delay the onset of disease and prolong the disease course in ALS patients and model mice. Riluzole and edaravone were successfully transferred into clinical practice. Unfortunately, riluzole prolongs life by only a few months [8], and edaravone improves patient functionality scores only slightly, in a subset of patients [9]. Therefore, the development of more promising disease-modifying therapies for ALS remains urgent.

Bidens pilosa L. var. radiata SCHERFF (BP) is a species of flowering plant from the Asteraceae family and an annual weed widely distributed in tropical and subtropical regions of the world, including Africa, America, China, and Japan. BP is a rich source of phytochemicals, including flavonoids and polyynes, and has therefore been used in traditional medicine for the treatment of various diseases owing to its antioxidant, anti-inflammatory, anticancer, antidiabetic, and antihyperglycemic properties [10]. A variety of BP that is cultivated without agricultural chemicals on the Miyako Islands of Okinawa Prefecture, Japan, is referred to as Miyako Island Bidens pilosa L. var. radiata SCHERFF (MBP). Caffeic acid, six kinds of chlorogenic acids (neochlorogenic acid, chlorogenic acid, 4-O-caffeoylquinic acid, 3,4-di-O-caffeoylquinic acid, 3,5-di-O-caffeoylquinic acid, and 4,5-di-O-caffeoylquinic acid), and seven kinds of flavonoids (rutin, quercetin, quercetin derivatives, hyperin, isoquercitrin, centaurein, and jacein) have been isolated and characterized from MBP by high-performance liquid chromatography (HPLC) analysis [11, 12]. Importantly, MBP has been reported to possess antioxidant, anti-inflammatory, antiallergy, antivirus, and antileukaemia properties [12–16]. Although the diverse phytochemicals and bioactivity of MBP may be useful for the treatment of certain neurodegenerative diseases, including ALS, the therapeutic potential of MBP in the treatment of neurodegenerative disorders is still unclear. Therefore, in this study, we evaluated the therapeutic potential of MBP and examined whether MBP could effectively protect neurons and suppress glial activation in the spinal cord of G93A mice.

## 2. Materials and Methods

### 2.1. Animals

G93A mice were used as a model of ALS (Jackson Laboratory, Bar Harbor, ME, USA). The hemizygous G93A mice were maintained by mating transgenic males with wild-type (WT) females. G93A and WT mice were housed under standard conditions (temperature 22°C, relative humidity 60%, 12 h light/dark cycles, and free access to food and water) in the animal facility at the School of Pharmacy, Nihon University. Genotyping was performed using genomic DNA extracted from tails and analysed by polymerase chain reaction (PCR) as reported previously [17].
We used a total of 60 mice, either G93A or WT, allocated to two groups: 19 of the 60 were used for survival analyses, and the remaining 41 were used for biochemical and histological studies (Figure 1). All efforts were made to minimise the number of animals used and their distress. All experiments with animals complied with the Guidelines for Animal Experiments at Nihon University.

Figure 1 Flowchart of the experimental design. MBP: Miyako Island Bidens pilosa var. radiata SCHERFF.

### 2.2. MBP Treatment Trial

MBP, supplied under the brand name Musashino Miyako BP® (MMBP®), was obtained as a generous gift from the Musashino Research Institute for Immunity (Miyako Island, Okinawa, Japan) [11]. MBP was dissolved in injection water, and a fresh solution was prepared daily. Male G93A mice were randomly divided into MBP-treated and vehicle control groups; each animal in the treatment group had a littermate in the vehicle group (Figure 1). Beginning at 105 days old (15 weeks old), G93A mice were treated with either the vehicle (injection water from the Japanese Pharmacopoeia) or MBP at a dose of 2 g/kg/day, administered by oral gavage using a disposable oral gavage syringe (Fuchigami, Kurume, Japan) on weekday (5 days a week) mornings.

### 2.3. Motor Performance and End Point (Clinical Assessment)

Mouse motor performance was evaluated using a rotarod apparatus (Muromachi Kikai, Tokyo, Japan), as described previously [17]. After a training period of 14 days, mice were able to stay on the rotarod rotating at a speed of 24 revolutions per minute (rpm). The maximum allowable score was 300 s, and the average time of three trials for each mouse was recorded twice a week. The observers were blinded to MBP treatment but performed their assessments concurrently. The end point was defined as the inability of the mouse to right itself within 30 s after being placed on its side [2]. At that point, mice were euthanised with CO2.
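As a minimal illustration of the scoring rule just described (three trials per mouse, each capped at 300 s, averaged per session), the following sketch shows the computation; the data values and function name are hypothetical, not taken from the study.

```python
# Minimal sketch of the rotarod scoring in Section 2.3: each session records
# three trials per mouse, each capped at 300 s, and the per-mouse score is
# the mean of the three capped trials. Layout and names are illustrative.

MAX_LATENCY_S = 300.0  # maximum allowable time on the rod

def session_score(trial_latencies: list) -> float:
    """Average of the capped trial latencies for one mouse in one session."""
    capped = [min(t, MAX_LATENCY_S) for t in trial_latencies]
    return sum(capped) / len(capped)

# Example: one mouse, three trials in a single session (hypothetical values).
print(session_score([212.0, 300.0, 187.5]))  # -> about 233.2 s
```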
### 2.4. Western Blotting Western blots were performed as reported previously [17, 18]. Spinal cord tissue obtained from 18-week-old G93A and WT mice was homogenised in radio-immunoprecipitation assay (RIPA) buffer containing 150 mM NaCl, 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate (SDS), 50 mM Tris-HCl (pH 8.0), 1% Triton X-100, and 5 mM EDTA. The homogenate was centrifuged, and the supernatant was collected and used for downstream analyses. Protein concentrations were determined using the method of Bradford. Protein extracts were separated by SDS-polyacrylamide gel electrophoresis and transferred onto polyvinylidene difluoride (PVDF) membranes (Millipore, Billerica, MA, USA). The membranes were blocked in blocking buffer containing 20 mM Tris-HCl (pH 7.6), 137 mM NaCl, 0.05% Tween-20, and 5% skimmed milk for 1 h at room temperature (25°C) and incubated with anti-nonphosphorylated neurofilament (SMI32) antibody (SMI-32R, Millipore, Billerica, MA, USA; diluted at 1 : 2000), anti-Iba-1 antibody (016-20001, Wako, Osaka, Japan; diluted at 1 : 500), anti-glial fibrillary acidic protein (GFAP) antibody (MAB360, Millipore, Billerica, MA, USA; diluted at 1 : 1000), or rabbit polyclonal anti-SOD1 antibody (sc-11407, Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA; diluted at 1 : 2000) overnight at 4°C. The membranes were washed repeatedly in Tris-buffered saline (20 mM Tris-HCl pH 7.6, 137 mM NaCl) containing 0.05% Tween-20 and incubated with horseradish peroxidase- (HRP-) conjugated secondary antibody (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA; diluted at 1 : 20000) for 1 h. Immunoreactive bands were detected using an enhanced chemiluminescence (ECL) detection system (GE Healthcare Biosciences, UK). The optical density of the bands detected on the blots was measured using Scion imaging software (Scion, Frederick, MD, USA). Quantitative results were expressed as the ratio of the band intensity of the protein of interest to the band intensity of β-actin (A5441, Sigma-Aldrich, St. Louis, MO, USA; diluted at 1 : 2000). ### 2.5. Immunohistochemistry Immunohistochemistry was performed as described elsewhere [19, 20]. Briefly, anaesthetised animals were perfused with 4% paraformaldehyde in phosphate-buffered saline (PBS). Postfixed lumbar spinal cords were horizontally sectioned on a cryostat at a thickness of 20 μm. After blocking nonspecific binding by incubating with 1.5% normal goat serum in 0.1% Triton X-100/PBS, the sections were incubated with anti-Iba-1 antibody (019-19741, Wako, Osaka, Japan; diluted at 1 : 500) or anti-GFAP Alexa Fluor 488-conjugated antibody (53-9892, eBioscience, San Diego, CA, USA; diluted at 1 : 2000) for 48 h at 4°C. After washing with PBS, the sections labelled with the anti-Iba-1 antibody were incubated with Alexa Fluor 488-conjugated anti-rabbit IgG secondary antibody (A21206, Thermo Fisher Scientific, San Diego, CA, USA; diluted at 1 : 1000) for 2 h. After rinsing with PBS, the sections were analysed using a confocal laser microscope (LSM-710, Zeiss, Oberkochen, Germany). Semiquantitative analysis of changes in GFAP and Iba-1 immunoreactivity was performed as reported previously [16]. ### 2.6. Histological Analysis Cresyl violet staining was performed as described elsewhere [21]. Postfixed lumbar spinal cords were horizontally sectioned on a cryostat at a thickness of 20 μm. The spinal cord sections were stained with cresyl violet (Sigma-Aldrich, St. Louis, MO, USA). Images were collected with an inverted microscope (IX71; Olympus Co., Tokyo, Japan). A blinded observer counted the number of motor neurons in the anterior grey matter (left or right) with the aid of image processing software (ImageJ, National Institutes of Health, Bethesda, MD, USA). Motor neurons were defined according to the following three criteria: (i) Nissl-stained cell, (ii) localisation in the ventral horn, and (iii) diameter > 25 μm. ### 2.7. Statistics All data were expressed as the mean ± standard error of the mean (SEM) or standard deviation (SD). Serial changes in motor performance were analysed with two-way repeated measures analysis of variance (ANOVA) (with “drug treatment” and “weeks of age” as factors) followed by Bonferroni’s post hoc test. The survival data were analysed using the Kaplan-Meier method with the Mantel-Cox log-rank test. Expression levels of protein and quantification of motor neuron number were analysed using one-way ANOVA followed by Tukey’s post hoc test. Expression levels of SOD1 protein were compared using Student’s t-test. p values of <0.05 indicated statistical significance.
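As an illustration of the mixed-design analysis described in Section 2.7, the sketch below uses the third-party pingouin package (our assumption; the authors do not state which software they used) to run a two-way ANOVA with "drug treatment" as the between-subjects factor and "weeks of age" as the repeated factor, followed by Bonferroni-corrected post hoc comparisons. All latency values are placeholders, not the study data.

```python
import pandas as pd
import pingouin as pg  # assumed stats package; not specified by the authors

# Long-format table: one row per mouse per time point (illustrative values)
df = pd.DataFrame({
    "mouse":     [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "treatment": ["vehicle"] * 6 + ["MBP"] * 6,
    "week":      [15.5, 16.5] * 6,
    "latency":   [290, 60, 300, 45, 275, 80,     # vehicle mice
                  300, 240, 295, 260, 300, 220], # MBP mice
})

# Two-way mixed ANOVA: "treatment" between subjects, "week" within subjects
aov = pg.mixed_anova(data=df, dv="latency", within="week",
                     between="treatment", subject="mouse")
print(aov.round(3))

# Bonferroni-corrected post hoc comparisons at each time point
post = pg.pairwise_tests(data=df, dv="latency", within="week",
                         between="treatment", subject="mouse",
                         padjust="bonf")
print(post.round(3))
```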
## 3. Results ### 3.1. MBP Extended the Survival and Improved the Motor Performance in G93A Mice Starting at 105 days old (15 weeks old), male G93A mice were treated orally with 2 g/kg/day MBP or injection water (vehicle) on weekdays (5 days a week). Mice received continuous treatment until the end stages of the disease. Treatment with MBP significantly prolonged the survival of G93A mice. The median survival of vehicle-treated G93A mice was 123.5 days (n=10), whereas treatment with MBP increased the lifespan of G93A mice to 137.0 days (n=9), an increase of 13.5 days (Figure 2(a)). The survival curve of MBP-treated mice was compared to that of vehicle-treated G93A mice using the Mantel-Cox log-rank test, and a significant difference was found between the two curves (p=0.004). Moreover, oral administration of MBP significantly increased the mean survival of G93A mice by approximately 20 days, from 122 days to 142 days (Figure 2(b)). Figure 2 Effect of MBP on the survival of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). (a) Survival curves of G93A mice treated with vehicle or MBP, analysed by Kaplan-Meier analysis with the Mantel-Cox log-rank test (n=9-10, p=0.004). The arrow indicates the start of vehicle or MBP administration. (b) The graph shows the maximum lifespan of G93A mice treated with vehicle or MBP. Values represent the mean ± SD. Statistical significance was determined by unpaired Student’s t-test (n=9-10). When untreated 15-week-old mice were first assessed on the rotarod, all mice displayed the maximum allowable score (300 s) in latency to fall from the rod. In agreement with our previous results [17], vehicle-treated G93A mice developed hind limb weakness, including reduced running time on the rotarod apparatus, at 15.5 weeks and beyond (Figure 3). At 17 weeks, all vehicle-treated G93A mice showed paralysis and no motor performance was possible. In contrast, MBP-treated G93A mice showed a significantly longer duration of motor performance than vehicle-treated mice (Figure 3). Importantly, between 15.5 and 16.5 weeks of age, there was a significant improvement in motor performance in MBP-treated G93A mice compared to that of vehicle-treated G93A mice (Figure 3). Figure 3 Effect of MBP on the motor performance of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Motor performance of the mice was evaluated using a rotarod apparatus. The graph depicts latency to fall from the rotarod apparatus in G93A mice treated with vehicle or MBP. Values represent the mean ± SEM. Serial changes in motor performance were analysed with two-way ANOVA (with “drug treatment” and “weeks of age” as factors) followed by Bonferroni’s post hoc test (n=9-10). ∗∗∗p<0.001 and ∗∗p<0.01 vs. age-matched mice treated with vehicle.
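The survival comparison in Section 3.1 can be illustrated with the lifelines package (our assumption; the authors do not name their software). The survival times below are placeholders chosen only to mimic the reported group sizes and medians (123.5 vs. 137.0 days); they are not the study data.

```python
# Minimal Kaplan-Meier + Mantel-Cox log-rank sketch, assuming lifelines.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

vehicle_days = [118, 120, 122, 123, 123, 124, 125, 126, 128, 130]  # n = 10
mbp_days     = [130, 133, 135, 136, 137, 138, 140, 142, 145]       # n = 9
events_v = [1] * len(vehicle_days)  # 1 = death observed (no censoring)
events_m = [1] * len(mbp_days)

kmf = KaplanMeierFitter()
kmf.fit(vehicle_days, event_observed=events_v, label="vehicle")
print("vehicle median:", kmf.median_survival_time_)
kmf.fit(mbp_days, event_observed=events_m, label="MBP")
print("MBP median:", kmf.median_survival_time_)

# Mantel-Cox log-rank test between the two survival curves
res = logrank_test(vehicle_days, mbp_days,
                   event_observed_A=events_v, event_observed_B=events_m)
print("log-rank p =", res.p_value)
```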
### 3.2. MBP Decreased Motor Neuron Loss in the Spinal Cord of G93A Mice To determine whether the therapeutic potential of MBP was attributable to the suppression of spinal motor neuron degeneration, we evaluated the number of motor neurons in the spinal cord. Three weeks after the start of the treatment with vehicle or MBP, lumbar spinal cord lysates were prepared and analysed by western blot. Although the protein levels of SMI32, a marker of motor neurons [17], significantly decreased in the spinal cord of G93A mice compared to those in WT mice, this reduction in SMI32 levels was alleviated upon MBP treatment (Figure 4(a)). In contrast, MBP had no effect on the expression level of SMI32 in the spinal cord of WT mice (Figure 4(a)). Figure 4 MBP ameliorates motor neuron loss in the spinal cord of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and the histopathology was analysed. (a) Photographs show representative western blots of SMI32, a marker of motor neurons, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal control. The graph shows the relative densities of each band on the blots estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of SMI32 to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative cresyl violet-stained sections of the lumbar spinal cord in the indicated groups of mice at 18 weeks old. Arrows indicate motor neurons. Scale bar indicates 100 μm. The graph shows the number of surviving motor neurons in lumbar spinal cord sections from the indicated groups of mice. Values represent the mean ± SEM. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=4). Next, we assessed the number of motor neurons remaining in the lumbar spinal cord of G93A mice by Nissl staining. Micrographs of cresyl violet-stained lumbar spinal cord sections from vehicle-treated G93A mice showed that a large number of motor neurons in the ventral horn were lost at 18 weeks of age and that vacuolisation was apparent in the ventral horn of the lumbar segment (Figure 4(b)). Consistent with the preservation of SMI32 expression, the loss of motor neurons in the spinal cord was also significantly reduced after treatment with MBP for 3 weeks. In WT mice, neurodegeneration was not observed in the spinal cord at any age (Figure 4(b)).
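For readers implementing the motor neuron counting criteria of Section 2.6, the following hypothetical sketch shows how criterion (iii) (diameter > 25 μm) could be applied to object areas measured in ImageJ or similar; the pixel size and the area values are invented for illustration only.

```python
# Hypothetical size filter for Nissl-stained ventral horn objects.
import math

PIXEL_SIZE_UM = 0.65    # assumed microns per pixel (hypothetical value)
MIN_DIAMETER_UM = 25.0  # criterion (iii) from Section 2.6

def equivalent_diameter_um(area_px):
    """Diameter of a circle with the same area as the measured object."""
    area_um2 = area_px * PIXEL_SIZE_UM ** 2
    return 2.0 * math.sqrt(area_um2 / math.pi)

# Illustrative object areas (in pixels) from one ventral-horn section
areas_px = [2600, 900, 4100, 1500, 3300]
motor_neurons = [a for a in areas_px
                 if equivalent_diameter_um(a) > MIN_DIAMETER_UM]
print("counted motor neurons:", len(motor_neurons))  # -> 4
```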
### 3.3. MBP Alleviated Astrocytosis and Microglial Activation in the Spinal Cords of G93A Mice It is generally accepted that motor neuron degeneration is accompanied by the activation of glial cells in the ALS mouse model [22]. To further characterise the effects of MBP on the activation of glial cells, both WT mice and G93A mice were sacrificed after 3 weeks of treatment to evaluate GFAP (Figure 5) and Iba-1 (Figure 6) immunoreactivity in the spinal cord as indicators of astrogliosis and microglial activation, respectively. Western blotting revealed that the protein expression levels of GFAP and Iba-1 in G93A mice were significantly higher than those of WT mice at the end stage of the disease. Oral MBP treatment significantly suppressed the elevated protein levels of GFAP and Iba-1 observed in G93A mice. Likewise, GFAP and Iba-1 immunoreactivity was hardly detected in the anterior horn of WT mice, whereas G93A mice were positive for both GFAP and Iba-1. Immunofluorescence staining indicated that activated astrocytes and microglia were abundant in vehicle-treated G93A mice. MBP treatment markedly and significantly ameliorated the activation of astrocytes and microglia in lumbar spinal cord sections from G93A mice. Figure 5 MBP attenuates morphological changes in astrocytes in G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and the histopathology was imaged. (a) Photographs depict a representative western blot of GFAP, an astrocyte marker, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker. The graph shows the relative density of each band on the blots estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of GFAP to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative confocal images of immunofluorescence staining for GFAP in lumbar spinal cord sections from the indicated groups of mice at 18 weeks old. Representative data from four separate experiments are presented. Scale bar indicates 20 μm. The graph shows semiquantitative analysis of changes in GFAP immunoreactivity. The fluorescence intensity of GFAP immunoreactivity was analysed quantitatively using Scion imaging software. Values represent the mean ± SEM. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=4). Figure 6 MBP attenuates morphological changes in microglia in G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and the histopathology was analysed. (a) Photographs show representative western blots of Iba-1, a microglia marker, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker. The graph shows the relative density of bands on the blots estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of Iba-1 relative to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative confocal images of immunofluorescence staining for Iba-1 in lumbar spinal cord sections from the indicated groups of mice at 18 weeks old. Representative data from four separate experiments are presented. Scale bar indicates 20 μm. The graph shows semiquantitative analysis of changes in Iba-1 immunoreactivity. The fluorescence intensity of Iba-1 immunoreactivity was analysed quantitatively using Scion imaging software. Values represent the mean ± SEM. Statistical significance was determined by using one-way ANOVA followed by Tukey’s post hoc test (n=4).
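The semiquantitative immunoreactivity analysis in Figures 5(b) and 6(b) can be sketched as follows; normalising mean fluorescence intensity to the WT-vehicle group is our assumption based on the figure legends, and the intensity values are invented placeholders.

```python
# Hypothetical fold-change calculation for GFAP/Iba-1 immunofluorescence.
import numpy as np

groups = {
    "WT-vehicle":   np.array([10.2, 9.8, 10.5, 9.5]),   # n = 4 sections
    "G93A-vehicle": np.array([31.0, 28.4, 33.2, 30.1]),
    "G93A-MBP":     np.array([16.3, 18.1, 15.2, 17.0]),
}

baseline = groups["WT-vehicle"].mean()
for name, vals in groups.items():
    rel = vals / baseline  # fold change vs. the WT-vehicle mean
    sem = rel.std(ddof=1) / np.sqrt(rel.size)
    print(f"{name}: {rel.mean():.2f} +/- {sem:.2f} (mean +/- SEM)")
```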
### 3.4. MBP Did Not Affect SOD1 Protein Expression in the Spinal Cords of G93A Mice Using protein homogenates of spinal cords from G93A mice and age-matched WT mice, we analysed the expression of endogenous mouse SOD1 (mSOD1) and mutant human SOD1 (hSOD1) protein. As shown in Figure 7, the examination of protein expression by western blot revealed the presence of endogenous mSOD1 in the lumbar spinal cord of both WT and G93A mice at the end stage of the disease. While no bands corresponding to mutant hSOD1 were noted in spinal cord extracts of WT mice, hSOD1 was observed in the lysates of G93A mice (Figure 7). At the end stage of the disease, the protein expression of mutant hSOD1 did not change in G93A mice with MBP treatment and remained at a level similar to that detected in vehicle-treated mice. Figure 7 Effect of MBP on SOD1 protein expression in spinal cord tissue of G93A and WT mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot. Photographs show a representative western blot of SOD1 in the lumbar spinal cord of male G93A mice and WT mice. The upper band represents human SOD1 (hSOD1; 21 kDa) and the lower band represents mouse SOD1 (mSOD1; 16 kDa). Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker. The graph shows the relative density of bands on the blots estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of hSOD1 relative to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by unpaired Student’s t-test (n=6-7). ND: not detected. ## 4. Discussion ALS is a progressive and lethal degenerative disease of motor neurons. At present, there are only two approved drugs, both of which are only modestly effective for the treatment of ALS.
Therefore, there is a pressing need to develop new therapies to cure and/or ameliorate the severe course of the disease. In this study, we determined that oral administration of MBP, initiated immediately after the onset of ALS-like symptoms, delayed the deterioration of motor function and extended survival in G93A mice. We also showed that these improvements were associated with a reduction in reactive astrocytes and activated microglia and with delayed motor neuron loss in the spinal cord. Overall, our results clearly show that oral administration of MBP after ALS symptom onset can slow disease progression and that MBP is a potential therapeutic agent for the treatment of ALS. MBP, like BP, is widely used as an ethnomedicine and functional food worldwide. In humans, BP extract administered orally at a dose of 400 mg/kg of body weight for 3 months had no noticeable toxicity [23]. Moreover, a BP dose of 27 g/kg body weight has been shown to confer antiobesity and antidiabetic effects without any obvious signs of toxicity in leptin-deficient (ob/ob) mice [24]. In this study, we observed that neither WT nor G93A mice showed any toxicity relevant to the spinal cord tissue (Figures 4–6) or any abnormal behavior (data not shown) following a daily dose of MBP at 2 g/kg body weight. These results suggest that the oral use of MBP or BP may not be toxic and is potentially safe. In addition, a number of polyphenols isolated from MBP have been described as active components involved in suppressing oxidative stress, inflammation, and allergy in vitro and in vivo [11–13, 15, 16]. Among the components of MBP, caffeic acid, chlorogenic acid, and quercetin have been reported to reduce oxidative stress and increase the viability of NSC34 cells, a motor neuron-like cell line, expressing mutant SOD1G93A linked to human ALS [25]. Furthermore, the anti-inflammatory activity of chlorogenic acid [26], quercetin [27], and rutin [28] has been shown to ameliorate spinal cord injury. Therefore, it is possible that the various active components present in MBP interact with each other to produce a synergistic neuroprotective effect in the spinal cord of G93A mice. To date, many of the therapeutic approaches proposed for G93A mice have administered treatment before the onset of ALS symptoms. As most cases of ALS are sporadic, presymptomatic intervention is impossible in these patients; thus, presymptomatic treatment paradigms do not model the clinical situation of the majority of ALS patients. Although edaravone and riluzole are currently available for ALS treatment, postsymptomatic administration provides only limited effects on survival [29, 30]. In fact, riluzole treatment significantly prolonged the survival of G93A mice by 7.5% compared to that of untreated G93A mice (untreated G93A mice: 126.1 days vs. riluzole-treated G93A mice: 135.5 days); however, treatment was initiated in 30-day-old mice [31]. In contrast, when riluzole was administered to 100-day-old (14-week-old) G93A mice, no beneficial effects were observed and treatment did not meaningfully extend survival relative to untreated G93A mice (an extension of only 3.0 days) [32]. Similarly, in patients with ALS, riluzole has poor efficacy during the later stages of the disease [33]. In addition, it has been reported that treatment with edaravone, initiated after the onset of ALS symptoms, does not improve survival of G93A mice (an extension of only 2.2 days) [34].
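The percentage survival extensions quoted in this paragraph and the next can be reproduced directly from the mean survival times given in the text:

```python
# Percentage lifespan extension from control vs. treated mean survival days
# (values are those quoted in the text for riluzole [31] and MBP).
def extension_pct(control_days, treated_days):
    return 100.0 * (treated_days - control_days) / control_days

print(f"riluzole: {extension_pct(126.1, 135.5):.1f}%")  # ~7.5%
print(f"MBP:      {extension_pct(122.0, 142.0):.1f}%")  # ~16.4%
```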
In this study, we demonstrated that oral MBP treatment, beginning after the onset of ALS symptoms, significantly slowed the deterioration of motor performance and prolonged the survival of G93A mice by 16.4% (untreated G93A mice: 122 days vs. MBP-treated G93A mice: 142 days). Thus, the survival benefit of MBP was approximately twice that of riluzole, and unlike edaravone and riluzole, MBP markedly improved survival in G93A mice even when administered after symptom onset. The present findings provide new insight into MBP activity that may be applicable when considering therapeutic options for not only fALS but also the sporadic type. Further studies will be necessary to evaluate the efficacy of MBP in combination with riluzole or edaravone for the treatment of ALS. It has been reported that traditional Chinese medicines might be beneficial in prolonging the survival of ALS model mice. Hirsutella sinensis significantly extended the lifespan of G93A mice by approximately 17 days, from 127 days to 144 days [35]. Moreover, the Huolingshengji formula, which consists of six herbs (Epimedium herb, Radix Astragali, Fructus Corni, Radix Rehmanniae, Poria cocos, and Atractylodes macrocephala Koidz), prolonged the lifespan of G93A mice by approximately 11 days, from 130 days to 141 days [36]. However, the administration of these traditional medicines was initiated at a presymptomatic stage in G93A mice. Interestingly, treatment with Huolingshengji from the day of disease onset prolonged the lifespan of G93A mice by approximately 8 days, from 130 days to 138 days [36]. Our results showed that MBP significantly increased the mean survival of G93A mice by approximately 20 days. These results suggest that MBP compares favourably with these traditional medicines as a treatment for ALS. Neuroinflammation is an immune response in the central nervous system mediated by activated astrocytes and microglia. Activation of astrocytes and microglia is prominently observed in regions of degenerating motor neurons in ALS patients as well as in model mice [4, 5, 37, 38]. A previous study conducted in our laboratory showed that the levels of spinal GFAP and Iba-1 expression were elevated in an age-dependent manner in G93A mice [17]. The increase in the spinal expression of GFAP in G93A mice progressed more slowly than that of Iba-1, but both levels were significantly higher at the end stages of the disease (17 and 19 weeks old) than those in age-matched WT mice [17]. In this study, western blotting analysis also showed that GFAP and Iba-1 immunoreactivity in the spinal cord of G93A mice dramatically increased at 18 weeks relative to age-matched WT mice. Activated astrocytes and microglia can induce neuronal death by exerting inflammatory effector functions. For example, astrocyte activation in ALS is associated with a decrease in the expression of glutamate transporters [39], increased levels of ROS and inducible nitric oxide synthase [40], and elevated production of proinflammatory cytokines, such as interferon-γ [41] and transforming growth factor-β [42]. Moreover, activated microglia secrete proinflammatory cytokines and oxidative stress mediators, including hydrogen peroxide and nitric oxide, and play pivotal roles in the pathogenesis of ALS [43, 44].
Therefore, pharmacological approaches targeting the neuroinflammation induced by activated astrocytes and microglia are promising for the development of therapeutic strategies for ALS. We demonstrated for the first time that MBP treatment markedly suppresses the activation of microglia and astrocytes in the spinal cords of G93A mice. Anti-inflammatory effects have been reported not only for MBP but also for BP in a variety of animal models [15, 16, 45]. Further studies are required to clarify the anti-inflammatory roles of MBP in the spinal cords of G93A mice. A previous study identified antioxidant compounds, including caffeic acid, six kinds of chlorogenic acids, and seven kinds of flavonoids, as constituents of MBP [12]. However, our results provide strong evidence that attenuation of the activation of astrocytes and microglia is closely linked to the efficacy of MBP in G93A mice. Reactive microglia and astrocytes have been identified in spinal cords isolated from patients with sporadic ALS [37, 38]; therefore, MBP may be useful not only for familial but also for sporadic ALS. In addition, since glial activation plays a pivotal role in the progression of various neurodegenerative diseases caused by neuroinflammation, MBP may be effective for the treatment of other neurodegenerative disorders. ## 5. Conclusions In conclusion, this study has demonstrated for the first time that oral treatment with MBP at the onset of symptoms of neurodegeneration prolongs the lifespan, improves motor performance, and attenuates motor neuron loss and glial activation in G93A fALS model mice. Further studies will be required to identify the molecular targets and mechanisms of these effects and to clarify the therapeutic potential of MBP in ALS patients and other neurodegenerative diseases. These significant preclinical findings, together with the clinical safety profile of MBP, support its potential application as a promising candidate drug for the therapy of fALS caused by mutant SOD1 and possibly sporadic ALS. --- *Source: 1020673-2020-01-29.xml*
1020673-2020-01-29_1020673-2020-01-29.md
54,120
Bidens pilosa Extract Administered after Symptom Onset Attenuates Glial Activation, Improves Motor Performance, and Prolongs Survival in a Mouse Model of Amyotrophic Lateral Sclerosis
Yasuhiro Kosuge; Erina Kaneko; Hiroshi Nango; Hiroko Miyagishi; Kumiko Ishige; Yoshihisa Ito
Oxidative Medicine and Cellular Longevity (2020)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1020673
1020673-2020-01-29.xml
--- ## Abstract Amyotrophic lateral sclerosis (ALS) is a late-onset neurodegenerative disorder characterized by progressive paralysis resulting from the death of upper and lower motor neurons. There is currently no effective pharmacological treatment for ALS, and the two approved drugs riluzole and edaravone have limited effects on the symptoms and only slightly prolong the life of patients. Therefore, the development of effective therapeutic strategies is of paramount importance. In this study, we investigated whether Miyako IslandBidens pilosa (MBP) can alleviate the neurological deterioration observed in a superoxide dismutase-1 G93A mutant transgenic mouse (G93A mouse) model of ALS. We orally administered 2 g/kg/day of MBP to G93A mice at the onset of symptoms of neurodegeneration (15 weeks old) until death. Treatment with MBP markedly prolonged the life of ALS model mice by approximately 20 days compared to that of vehicle-treated ALS model mice and significantly improved motor performance. MBP treatment prevented the reduction in SMI32 expression, a neuronal marker protein, and attenuated astrocyte (detected by GFAP) and microglia (detected by Iba-1) activation in the spinal cord of G93A mice at the end stage of the disease (18 weeks old). Our results indicate that MBP administered after the onset of ALS symptoms suppressed the inflammatory activation of microglia and astrocytes in the spinal cord of the G93A ALS model mice, thus improving their quality of life. MBP may be a potential therapeutic agent for ALS. --- ## Body ## 1. Introduction Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, is a fatal neurodegenerative disease characterized by progressive paralysis due to motor neuron degeneration. Most ALS cases are sporadic, and the cause of sporadic ALS remains largely unknown. Familial ALS (fALS) accounts for the remaining 5 to 10 percent of all ALS cases, and only 20% of fALS cases are linked to a mutation in the gene encoding copper-zinc superoxide dismutase (SOD1) [1]. Several transgenic mouse models that carry the mutations found in fALS patients have been generated. Among these, the most widely used model is a transgenic mouse that overexpresses a human SOD1 transgene with a pathogenic glycine to alanine substitution at the 93rd codon (SOD1G93A). Overexpression of the mutant SOD1G93A gene in transgenic mice (G93A mice) results in a progressive paralytic disease in which the clinical features resemble that of ALS in humans [2]. Recently, many new ALS-causing gene defects have been identified, including mutations in the gene encoding fused in sarcoma (FUS), TAR DNA-binding protein (TARDBP), optineurin (OPTN), and C9ORF72 [3]. However, G93A mice have been regarded as the standard model for the evaluation of therapeutic effects during preclinical studies.Although the pathogenesis of ALS is extremely intricate and remains largely unknown, inflammation and oxidative stress play a pivotal role in ALS pathogenesis and contribute to the vicious cycle of neurodegeneration in the lumbar spinal cord. There is growing evidence that activated microglia and reactive astrocytes increase in the spinal cord of ALS patients [4] and model mice [5]. Activation of glial cells in ALS is marked by the elevated production of neurotoxic mediators such as reactive oxygen species (ROS), proinflammatory cytokines, and inflammatory mediators [6]. Moreover, astrocytes and microglia are associated with noncell autonomous motor neuronal damage and cell death in ALS [7]. 
Therefore, to identify effective neuroprotective therapeutic agents for the treatment of ALS, not only the motor neuron but also the neighbouring nonmotor neuron cells including microglia, astrocytes, and blood capillaries require analysis. Several therapeutic agents have been found to delay the onset of disease and prolong the disease course in the ALS patients and model mice. Riluzole and edaravone were successfully transferred into clinical practice. Unfortunately, riluzole prolongs life by only a few months [8] and edaravone only slightly improves patient functionality scores only slightly in a subset of patients [9]. Therefore, the development for more promising disease-modifying therapy for ALS remains urgent.Bidens pilosa L. var. radiata SCHERFF (BP) is a species of flowering plant from the Asteraceae family and is an annual weed widely distributed in the tropical and subtropical regions of the world such as Africa, America, China, and Japan. BP is a rich source of phytochemicals including flavonoids and polyynes and has therefore been used in traditional medicine for the treatment of various diseases due to its antioxidant, anti-inflammatory, anticancer, antidiabetic, and antihyperglycemic properties [10]. A variety of BP that is cultivated without agricultural chemicals on the Miyako Islands of Okinawa Prefecture, Japan, is referred to as Miyako Island Bidens pilosa L. var. radiata SCHERFF (MBP). Caffeic acid, six kinds of chlorogenic acids (neochlorogenic acid, chlorogenic acid, 4-O-caffeoylquinic acid, 3,4-di-O-caffeoylquinic acid, 3,5-di-O-caffeoylquinic acid, and 4,5-di-O-caffeoylquinic acid), and seven kinds of flavonoids (rutin, quercetin, quercetin derivatives, hyperin, isoquercitrin, centaurein, and jacein) have been isolated and characterized from MBP using high-performance liquid chromatography (HPLC) analysis [11, 12]. Importantly, MBP has been reported to possess antioxidant, anti-inflammatory, antiallergy, antivirus, and antileukaemia properties [12–16]. Although the diverse phytochemicals and bioactivity of MBP may be useful for the treatment of certain neurodegenerative diseases including ALS, the therapeutic potential of MBP in the treatment of neurodegenerative disorders is still unclear. Therefore, in this study, we evaluated the therapeutic potential of MBP and examined whether MBP could effectively protect neurons and suppress glial activation in the spinal cords in G93A mice. ## 2. Materials and Methods ### 2.1. Animals G93A mice were used as a model of ALS (Jackson Laboratory, Bar Harbor, ME, USA). The hemizygous G93A mice were maintained by mating transgenic males with wild-type (WT) females. G93A and WT mice were housed under standard conditions (temperature 22°C, relative humidity 60%, 12 h light/dark cycles, and free access to food and water) in the animal facility at the School of Pharmacy, Nihon University. Genotyping was performed using genomic DNA extracted from tails and analysed by polymerase chain reaction (PCR) as reported previously [17]. We used a total of 60 mice, either G93A or WT mice, allocated to the following two groups: 19/60 were used for survival analyses, and the remaining 41/60 mice were used for biochemical and histological studies (Figure 1). All efforts were made to minimise the number of animals used and their distress. All experiments with animals complied with the Guidelines for Animal Experiments at Nihon University.Figure 1 Flowchart of the experiment design. MBP: Miyako IslandBidens pilosa var. radiata SCHERFF. ### 2.2. 
MBP Treatment Trial MBP, the brand name Musashino Miyako BP® (MMBP®), was obtained as a generous gift from Musashino Research Institute for Immunity (Miyako Island, Okinawa, Japan) [11]. MBP was dissolved in injection water, and a fresh solution was prepared daily. Male G93A mice were randomly divided into MBP-treated and vehicle control groups; each animal in the treatment group had a littermate in the vehicle group (Figure 1). Beginning at 105 days old (15 weeks old), G93A mice were treated with either the vehicle (injection water from the Japanese Pharmacopoeia) or MBP at a dose of 2 g/kg/day oral gavage administration using a disposable oral gavage syringe (Fuchigami, Kurume, Japan) on weekday (5 days a week) mornings. ### 2.3. Motor Performance and End Point (Clinical Assessment) Mouse motor performance was evaluated weekly using a rotarod apparatus (Muromachi Kikai, Tokyo, Japan), as described previously [17]. After the training period of 14 days, mice were able to stay on the rotarod rotating at a speed of 24 revolutions per minute (rpm). The maximum allowable score was 300 s, and the average time of three trials for each mouse was recorded twice a week. The observers were blinded with regard to treatment by MBP but performed their assessment concurrently. The end-point was defined as the inability of the mouse to right itself within 30 s after being placed on its side [2]. At that point, mice were euthanatised with CO2. ### 2.4. Western Blotting Western blots were performed as reported previously [17, 18]. Spinal cord tissue obtained from 18-week-old G93A and WT mice were homogenised in radio-immunoprecipitation assay (RIPA) buffer containing 150 mM NaCl, 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate (SDS), 50 mM Tris-HCl (pH 8.0), 1% Triton X-100, and 5 mM EDTA. The homogenate was centrifuged, and the supernatant was collected and used for downstream analyses. Protein concentrations were determined using the method of Bradford. Protein extracts were separated by SDS-polyacrylamide gel electrophoresis and transferred onto polyvinylidene difluoride (PVDF) membranes (Millipore, Billerica, MA, USA). The membranes were blocked in blocking buffer containing 20 mM Tris-HCl (pH 7.6), 137 mM NaCl, 0.05% Tween-20, and 5% skimmed milk for 1 h at room temperature (25°C) and incubated with anti-nonphosphorylated neurofilament (SMI32) antibody (SMI-32R, Millipore, Billerica, MA, USA; diluted at 1 : 2000), anti-Iba-1 antibody (016-20001, Wako, Osaka, Japan; diluted at 1 : 500), anti-glial fibrillary acidic protein (GFAP) antibody (MAB360, Millipore, Billerica, MA, USA; diluted at 1 : 1000), or rabbit polyclonal anti-SOD1 antibody (sc-11407, Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA; diluted at 1 : 2000) overnight at 4°C. The membranes were washed repeatedly in Tris-buffered saline (20 mM Tris-HCl pH 7.6, 137 mM NaCl) containing 0.05% Tween-20 and incubated with horseradish peroxidase- (HRP-) conjugated secondary antibody (Santa Cruz Biotechnology Inc., Santa Cruz, USA; diluted at 1 : 20000) for 1 h. Immunoreactive bands were detected using an enhanced chemiluminescence (ECL) detection system (GE Healthcare Biosciences, UK). The optical density of the bands detected on the blots was measured using Scion imaging software (Scion, Frederick, MD, USA). Quantitative results were expressed as the ratio of the band intensity of the protein of interest to the band intensity of β-actin (A5441, Sigma-Aldrich, St. Louis, MO, USA; diluted at 1 : 2000). ### 2.5. 
Immunohistochemistry Immunohistochemistry was performed as described elsewhere [19, 20]. Briefly, anaesthetised animals were perfused with 4% paraformaldehyde in phosphate-buffered saline (PBS). Postfixed lumbar spinal cords were horizontally sectioned on a cryostat at a thickness of 20 μm. After blocking nonspecific binding by incubating with 1.5% normal goat serum in 0.1% Triton X-100/PBS, the sections were incubated anti-Iba-1-antibody (019-19741, Wako, Osaka, Japan, diluted at 1 : 500) or anti-GFAP Alexa Fluor 488-conjugated antibody (53-9892, eBioscience, San Diego, CA, USA; diluted at 1 : 2000) for 48 h at 4°C. After washing with PBS, the sections labelled with the anti-Iba-1 antibody were incubated with Alexa Fluor 488-conjugated rabbit IgG secondary antibody (A21206, Thermo Fischer Scientific, San Diego, CA, USA; diluted at 1 : 1000) for 2 h. After rinsing with PBS, the sections were analysed using a confocal laser microscope (LSM-710, Zeiss, Oberkochen, Germany). Semiquantitative analysis of change in GFAP and Iba-1 immunoreactivity was performed as reported previously [16]. ### 2.6. Histological Analysis Cresyl violet stain was performed as described elsewhere [21]. Postfixed lumbar spinal cords were horizontally sectioned on a cryostat at a thickness of 20 μm. The paraffin-embedded spinal cord sections were stained with cresyl violet (Sigma-Aldrich, St. Louis, MO, USA). Images were collected with an inverted microscope (IX71; Olympus Co., Tokyo, Japan). A blinded observer counted the number of motor neurons in the anterior grey matter (left or right) with the aid of image processing software (ImageJ, National Institutes of Health, Bethesda, MD, USA). Motor neurons were defined according to the following three criteria: (i) Nissl-stained cell, (ii) localisation in ventral horns, and (iii) diameter>25μm. ### 2.7. Statistics All data were expressed as themean±standarderror of the mean (SEM) or standard deviation (SD). Serial changes in motor performance were analysed with two-way repeated measure analysis of variance (ANOVA) (with “drug treatment” and “weeks of age” as between-subjects’ factors) followed by Bonferroni’s post hoc test. The survival data were analysed using the Kaplan-Meier with the Mantel-Cox log-rank test. Expression levels of protein and quantification of motor neuron number were analysed using one-way repeated measure ANOVA followed by Tukey’s post hoc test. Expression levels of SOD1 protein were compared using Student’s t-test. Semiquantitative p values of <0.05 indicated statistical significance. ## 2.1. Animals G93A mice were used as a model of ALS (Jackson Laboratory, Bar Harbor, ME, USA). The hemizygous G93A mice were maintained by mating transgenic males with wild-type (WT) females. G93A and WT mice were housed under standard conditions (temperature 22°C, relative humidity 60%, 12 h light/dark cycles, and free access to food and water) in the animal facility at the School of Pharmacy, Nihon University. Genotyping was performed using genomic DNA extracted from tails and analysed by polymerase chain reaction (PCR) as reported previously [17]. We used a total of 60 mice, either G93A or WT mice, allocated to the following two groups: 19/60 were used for survival analyses, and the remaining 41/60 mice were used for biochemical and histological studies (Figure 1). All efforts were made to minimise the number of animals used and their distress. 
## 3. Results

### 3.1. MBP Extended the Survival and Improved the Motor Performance in G93A Mice

Starting at 105 days old (15 weeks old), male G93A mice were treated orally with 2 g/kg/day MBP or injection water (vehicle) on weekdays (5 days a week). Mice received continuous treatment until the end stages of the disease. Treatment with MBP significantly prolonged the survival of G93A mice. The median survival of vehicle-treated G93A mice was 123.5 days (n=10), whereas treatment with MBP increased the lifespan of G93A mice to 137.0 days (n=9), an increase of 13.5 days (Figure 2(a)).
The survival curve of MBP-treated mice was compared to that of vehicle-treated G93A mice using the Mantel-Cox log-rank test, and a significant difference was found between the two curves (p=0.004). Moreover, oral administration of MBP significantly increased the mean survival of G93A mice by approximately 20 days, from 122 days to 142 days (Figure 2(b)).

Figure 2 Effect of MBP on the survival of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). (a) Survival curve of G93A mice treated with vehicle or MBP, analysed by Kaplan-Meier analysis with the Mantel-Cox log-rank test (n=9-10, p=0.004). The arrow indicates the start of vehicle or MBP administration. (b) The graph shows the maximum lifespan of G93A mice treated with vehicle or MBP. Values represent the mean ± SD. Statistical significance was determined by unpaired Student’s t-test (n=9-10).

When untreated 15-week-old mice were first assessed on the rotarod, all mice displayed the maximum allowable score (300 s) in latency to fall from the rod. In agreement with our previous results [17], vehicle-treated G93A mice developed hind limb weakness, including reduced running time on the rotarod apparatus, at 15.5 weeks and beyond (Figure 3). At 17 weeks, all vehicle-treated G93A mice showed paralysis, and no motor performance was possible. In contrast, MBP-treated G93A mice showed a significantly longer duration of motor performance than vehicle-treated mice (Figure 3). Importantly, between 15.5 and 16.5 weeks of age, there was a significant improvement in motor performance in MBP-treated G93A mice compared to that of vehicle-treated G93A mice (Figure 3).

Figure 3 Effect of MBP on the motor performance of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Motor performance of the mice was evaluated using a rotarod apparatus. The graph depicts latency to fall from the rotarod apparatus in G93A mice treated with vehicle or MBP. Values represent the mean ± SEM. Serial changes in motor performance were analysed with two-way ANOVA (with “drug treatment” and “weeks of age” as between-subjects factors) followed by Bonferroni’s post hoc test (n=9-10). ∗∗∗p<0.001 and ∗∗p<0.01 vs. age-matched mice treated with vehicle.

### 3.2. MBP Decreased Motor Neuron Loss in the Spinal Cord of G93A Mice

To determine whether the therapeutic potential of MBP was attributable to the suppression of spinal motor neuron degeneration, we evaluated the number of motor neurons in the spinal cord. Three weeks after the start of the treatment with vehicle or MBP, lumbar spinal cord lysates were prepared and analysed by western blot. Although the protein levels of SMI32, a marker of motor neurons [17], significantly decreased in the spinal cord of G93A mice compared to those in WT mice, this reduction in SMI32 levels was alleviated upon MBP treatment (Figure 4(a)). In contrast, MBP had no effect on the expression level of SMI32 in the spinal cord of WT mice (Figure 4(a)).

Figure 4 MBP ameliorates motor neuron loss in the spinal cord of G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and histopathology.
(a) Photographs show representative western blots of SMI32, a marker of motor neurons, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal control. The graph shows the relative densities of the bands on the blots, estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of SMI32 to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative cresyl violet-stained sections of the lumbar spinal cord in the indicated groups of mice at 18 weeks old. Arrows indicate motor neurons. Scale bar indicates 100 μm. The graph shows the number of surviving motor neurons in lumbar spinal cord sections from the indicated groups of mice. Values represent the mean ± SEM. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=4).

Next, we assessed the number of motor neurons remaining in the lumbar spinal cord of G93A mice by Nissl staining. Micrographs of cresyl violet-stained lumbar spinal cord sections from vehicle-treated G93A mice showed that a large number of motor neurons in the ventral horn were lost at 18 weeks of age and that vacuolisation was apparent in the ventral horn of the lumbar segment (Figure 4(b)). Consistent with the preservation of SMI32 expression, the loss of motor neurons in the spinal cord was also significantly reduced after treatment with MBP for 3 weeks. In WT mice, neurodegeneration was not observed in the spinal cord at any age (Figure 4(b)).

### 3.3. MBP Alleviated Astrocytosis and Microglial Activation in the Spinal Cords of G93A Mice

It is generally accepted that motor neuron degeneration is accompanied by the activation of glial cells in the ALS mouse model [22]. To further characterise the effects of MBP on the activation of glial cells, both WT mice and G93A mice were sacrificed after 3 weeks of treatment to evaluate GFAP (Figure 5) and Iba-1 (Figure 6) immunoreactivity in the spinal cord as indicators of astrogliosis and microglial activation, respectively. Western blotting revealed that the protein expression levels of GFAP and Iba-1 in G93A mice were significantly higher than those of WT mice at the end stage of the disease. Oral MBP treatment significantly suppressed the elevated protein levels of GFAP and Iba-1 observed in G93A mice. Likewise, GFAP and Iba-1 immunoreactivity was hardly detected in the anterior horn of WT mice, whereas G93A mice were positive for both GFAP and Iba-1. Immunofluorescence staining indicated that activated astrocytes and microglia were abundant in vehicle-treated G93A mice. MBP treatment dramatically and significantly ameliorated the activation of astrocytes and microglia in lumbar spinal cord sections from G93A mice.

Figure 5 MBP attenuates morphological changes in astrocytes in G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and histopathology. (a) Photographs depict a representative western blot of GFAP, an astrocyte marker, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker.
The graph shows the relative density of each band on the blots, estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of GFAP to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative confocal images of immunofluorescence staining for GFAP in lumbar spinal cord sections from the indicated groups of mice at 18 weeks old. Representative data from four separate experiments are presented. Scale bar indicates 20 μm. The graph shows semiquantitative analysis of changes in GFAP immunoreactivity in motor neurons. The fluorescence intensity of GFAP immunoreactivity was analysed quantitatively using Scion imaging software. Values represent the mean ± SEM. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=4).

Figure 6 MBP attenuates morphological changes in microglia in G93A mice. Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot and histopathology. (a) Photographs show representative western blots of Iba-1, a microglia marker, in the lumbar spinal cord of male G93A mice and WT mice. Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker. The graph shows the relative density of bands on the blots, estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of Iba-1 relative to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=6-7). (b) Photographs show representative confocal images of immunofluorescence staining for Iba-1 in lumbar spinal cord sections from the indicated groups of mice at 18 weeks old. Representative data from four separate experiments are presented. Scale bar indicates 20 μm. The graph shows semiquantitative analysis of changes in Iba-1 immunoreactivity in motor neurons. The fluorescence intensity of Iba-1 immunoreactivity was analysed quantitatively using Scion imaging software. Values represent the mean ± SEM. Statistical significance was determined by one-way ANOVA followed by Tukey’s post hoc test (n=4).

### 3.4. MBP Did Not Affect SOD1 Protein Expression in the Spinal Cords of G93A Mice

Using protein homogenates of spinal cords from G93A mice and age-matched WT mice, we analysed the expression of endogenous mouse SOD1 (mSOD1) and mutant human SOD1 (hSOD1) protein. As shown in Figure 7, the examination of protein expression by western blot revealed the presence of endogenous mSOD1 in the lumbar spinal cord of both WT and G93A mice at the end stage of the disease. While no bands corresponding to mutant hSOD1 were noted in spinal cord extracts of WT mice, hSOD1 was observed in the lysates of G93A mice (Figure 7). At the end stage of the disease, the protein expression of mutant hSOD1 did not change in G93A mice with MBP treatment and remained at a level similar to that detected in vehicle-treated mice.

Figure 7 Effect of MBP on SOD1 protein expression in spinal cord tissue of G93A and WT mice.
Mice were orally administered injection water (vehicle) or MBP, starting at a late symptomatic stage (15 weeks old). Three weeks after the start of the treatment, the lumbar spinal cords were analysed by western blot. Photographs show a representative western blot of SOD1 in the lumbar spinal cord of male G93A mice and WT mice. The upper band represents human SOD1 (hSOD1; 21 kDa) and the lower band represents mouse SOD1 (mSOD1; 16 kDa). Equal amounts of cell lysates (10 μg) were analysed, with β-actin as an internal marker. The graph shows the relative density of bands on the blots, estimated quantitatively using Scion imaging software. Quantitative data are expressed as the ratio of the band intensity of hSOD1 relative to the band intensity of β-actin. Each value represents the mean ± SD. Statistical significance was determined by unpaired Student’s t-test (n=6-7). ND: not detected.
## 4. Discussion

ALS is a progressive and lethal degenerative disease of motor neurons. At present, there are only two approved drugs, both of which are only modestly effective for the treatment of ALS. Therefore, there is a pressing need to develop new therapies to cure and/or ameliorate the severe course of the disease. In this study, we determined that oral administration of MBP, immediately after the onset of ALS-like symptoms, delayed the deterioration of motor function and extended survival duration in G93A mice. We also showed that these improvements were associated with a reduction in reactive astrocytes and activated microglial cells and delayed motor neuron loss in the spinal cord. Overall, our results clearly show that the oral administration of MBP after ALS symptom onset can slow disease progression and that MBP is a potential therapeutic agent for the treatment of ALS.

MBP, as well as BP, is widely used as an ethnomedicine and functional food. In humans, BP extract administered orally at a dose of 400 mg/kg of body weight for 3 months had no noticeable toxicity [23].
Moreover, a BP dose of 27 g/kg body weight has been shown to confer antiobesity and antidiabetic effects without any obvious signs of toxicity in leptin-deficient (ob/ob) mice [24]. In this study, we observed that neither WT nor G93A mice showed any toxicity in spinal cord tissue (Figures 4–6) or any abnormal behaviour (data not shown) following a daily dose of MBP at 2 g/kg body weight. These results suggest that the oral use of MBP or BP is unlikely to be toxic and is potentially safe. Moreover, a number of polyphenols isolated from MBP have been described as active components that suppress oxidative stress, inflammation, and allergy in vitro and in vivo [11–13, 15, 16]. Among the components of MBP, caffeic acid, chlorogenic acid, and quercetin have been reported to reduce oxidative stress and increase the viability of NSC34 cells, a motor neuron-like cell line, expressing mutant SOD1G93A linked to human ALS [25]. Furthermore, the anti-inflammatory activity of chlorogenic acid [26], quercetin [27], and rutin [28] has been shown to ameliorate spinal cord injury. Therefore, it is possible that the various active components present in MBP interact with each other to produce a synergistic neuroprotective effect in the spinal cord of G93A mice.

To date, many of the proposed therapeutic approaches tested in G93A mice administer treatment before the onset of ALS symptoms. As most cases of ALS are sporadic, presymptomatic intervention is impossible in these patients. Thus, an effective treatment paradigm reflecting the situation of the majority of ALS patients has not been devised. Although edaravone and riluzole are currently available for ALS treatment, postsymptomatic administration provides only limited effects on survival [29, 30]. In fact, riluzole treatment significantly prolonged the survival of G93A mice by 7.5% compared to that of untreated G93A mice (untreated G93A mice: 126.1 days vs. riluzole-treated G93A mice: 135.5 days); however, treatment was initiated in 30-day-old mice [31]. In contrast, when riluzole was administered to 100-day-old (14-week-old) G93A mice, no beneficial effects were observed and treatment did not extend survival relative to untreated G93A mice (an extension of only 3.0 days) [32]. Similarly, in patients with ALS, riluzole has poor efficacy during the later stages of the disease [33]. In addition, it has been reported that treatment with edaravone, initiated after the onset of ALS symptoms, does not improve the survival of G93A mice (an extension of only 2.2 days) [34]. In this study, we demonstrated that oral MBP treatment, beginning after the onset of ALS symptoms, significantly slowed the deterioration of motor performance and prolonged the survival of G93A mice by 16.4% (untreated G93A mice: 122 days vs. MBP-treated G93A mice: 142 days). Therefore, the efficacy of MBP was approximately twice that of riluzole and, unlike edaravone and riluzole, it markedly improved survival in G93A mice. The present findings provide new insight into MBP activity that may be applicable when considering therapeutic options for not only fALS but also the sporadic type. Further studies will be necessary to evaluate the efficacy of MBP in combination with riluzole or edaravone for the treatment of ALS.

It has been reported that traditional Chinese medicines might be beneficial in prolonging the survival of ALS model mice. Hirsutella sinensis significantly extended the lifespan of G93A mice by approximately 17 days, from 127 days to 144 days [35].
Moreover, the Huolingshengji Formula, which consists of six herbs (Epimedium Herb, Radix Astragali, Fructus Corni, Radix Rehmanniae, Poria cocos, and Atractylodes macrocephala Koidz.), prolonged the lifespan of G93A mice by approximately 11 days, from 130 days to 141 days [36]. However, the administration of those traditional medicines was initiated at a presymptomatic stage in G93A mice. Interestingly, treatment with Huolingshengji from the day of disease onset prolonged the lifespan of G93A mice by approximately 8 days, from 130 days to 138 days [36]. Our results showed that MBP significantly increased the mean survival of G93A mice by approximately 20 days. These results suggest that MBP compares favourably with other traditional medicines for ALS.

Neuroinflammation is the activation of an immune response in the central nervous system, driven by activated astrocytes and microglial cells. Activation of astrocytes and microglia is prominently observed in regions of degenerating motor neurons in ALS patients as well as in model mice [4, 5, 37, 38]. A previous study conducted in our laboratory showed that the levels of spinal GFAP and Iba-1 expression were elevated in an age-dependent manner in G93A mice [17]. The increase in the spinal expression of GFAP in G93A mice progressed more slowly than that of Iba-1, but both levels were significantly higher at the end stages of the disease (17 and 19 weeks old) than those in age-matched WT mice [17]. In this study, western blotting analysis also showed that GFAP and Iba-1 immunoreactivity in the spinal cord of G93A mice dramatically increased at 18 weeks relative to age-matched WT mice. Activated astrocytes and microglia can induce neuronal death by exerting inflammatory effector functions. For example, astrocyte activation in ALS is associated with a decrease in the expression of glutamate transporters [39], increased levels of ROS and inducible nitric oxide synthase [40], and elevated production of proinflammatory cytokines, such as interferon-γ [41] and transforming growth factor-β [42]. Moreover, activated microglia secrete proinflammatory cytokines and oxidative stress mediators, including hydrogen peroxide and nitric oxide, and play pivotal roles in the pathogenesis of ALS [43, 44]. Therefore, pharmacological approaches targeting the neuroinflammation induced by activated astrocytes and microglia are promising for the development of therapeutic strategies for ALS.

We demonstrated for the first time that MBP treatment markedly suppresses the activation of microglia and astrocytes in the spinal cords of G93A mice. Anti-inflammatory effects have been found not only for MBP but also for BP in a variety of animal models [15, 16, 45]. Further studies are required to clarify the anti-inflammatory roles of MBP in the spinal cords of G93A mice. A previous study identified antioxidant compounds, including caffeic acid, six kinds of chlorogenic acids, and seven kinds of flavonoids, as constituents of MBP [12]. However, our results provide strong evidence that attenuation of the activation of astrocytes and microglia is closely linked to the efficacy of MBP in G93A mice. Reactive microglia and astrocytes have been identified in spinal cords isolated from patients with sporadic ALS [37, 38]; therefore, MBP may be useful not only for familial but also for sporadic ALS.
In addition, since glial activation plays a pivotal role in the progression of various neurodegenerative diseases involving neuroinflammation, MBP may also be effective for the treatment of other neurodegenerative disorders.

## 5. Conclusions

In conclusion, this study has demonstrated for the first time that oral treatment with MBP at the onset of symptoms of neurodegeneration prolongs the lifespan, improves motor performance, and attenuates motor neuron loss and glial activation in G93A fALS model mice. Further studies will be required to identify the molecular targets and mechanisms of these effects and to clarify the therapeutic potential of MBP in ALS patients and other neurodegenerative diseases. These significant preclinical findings, together with the clinical safety profile of MBP, support its potential application as a promising candidate drug for the therapy of fALS caused by mutant SOD1 and possibly sporadic ALS. --- *Source: 1020673-2020-01-29.xml*
# Root Locus Practical Sketching Rules for Fractional-Order Systems **Authors:** António M. Lopes; J. A. Tenreiro Machado **Journal:** Abstract and Applied Analysis (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/102068 --- ## Abstract For integer-order systems, there are well-known practical rules for RL sketching. Nevertheless, these rules cannot be directly applied to fractional-order (FO) systems. Besides, the existing literature on this topic is scarce and exclusively focused on commensurate systems, usually expressed as the ratio of two noninteger polynomials. The practical rules derived for those do not apply to other symbolic expressions, namely, to transfer functions expressed as the ratio of FO zeros and poles. However, this is an important case, as it is an extension of the classical integer-order problem usually addressed by control engineers. Extending the RL practical sketching rules to such FO systems will help to decrease the lack of intuition about the corresponding system dynamics. This paper generalises several RL practical sketching rules to transfer functions specified as the ratio of FO zeros and poles. The subject is presented in a didactic perspective, with the rules applied to several examples. --- ## Body ## 1. Introduction Root locus (RL) analysis is a graphical method that shows how the poles of a closed-loop transfer function change with respect to a given system parameter [1, 2]. Usually, the chosen parameter is a proportional gain, K≥0, included in a unity feedback closed-loop controlled system (Figure 1).

Figure 1 Unity feedback closed-loop controlled system.

The open- and closed-loop transfer functions are given by GOL(s) = K·G(s) and GCL(s) = K·G(s)/[1 + K·G(s)], respectively. The denominator of GCL(s) is the characteristic equation, and its roots are the system closed-loop poles. Every point of the RL simultaneously satisfies the well-known argument (angle) and magnitude conditions given by

(1) arg{K·G(s)} = (2h+1)·180°, h = 0, ±1, ±2, …, abs{K·G(s)} = 1.

The RL is a classical and powerful tool for the dynamical analysis and design of integer-order linear time-invariant (LTI) systems [1–6]. Nowadays, there are efficient numerical algorithms, implemented in several software packages (e.g., MATLAB, Octave, Scilab, and FreeMat) [7–10], that take advantage of the powerful digital processors of modern computers to perform RL analysis. For fractional-order (FO) systems, while several studies addressing the RL are available [11–17], the problem is more difficult, and researchers have mainly preferred to adopt frequency-based methods.

On the other hand, the ability to quickly sketch the RL by hand is invaluable in making fundamental decisions early in the design process. For integer-order systems, there are well-known practical rules for RL sketching, but those cannot be directly applied to FO systems. Moreover, the existing literature on this topic exclusively focuses on the particular case of commensurate FO systems that occur when truncating real-valued integro-differential orders to a finite precision [15, 16]. This allows the generalisation of some rules to FO systems, but limits the precision and the type of symbolic expressions [17, 18]. The rules for commensurate FO systems do not apply to transfer functions expressed as the ratio of FO zeros and poles.
However, this is an important case, as it is an extension of the classical integer-order problem usually addressed by control engineers when dealing with RL analysis. In this paper, we extend several practical rules, available to sketch the RL of integer-order systems, to the FO domain. The main contribution is that the practical sketching rules apply to open-loop transfer functions expressed as the ratio of FO zeros and poles, helping to fill the gap in the existing literature on this topic. The subject is presented in a didactic perspective, with the rules applied to several examples that help to build intuition about the corresponding system dynamics.

Bearing these ideas in mind, the paper is organized as follows. Section 2 introduces fundamental concepts related to fractional calculus. Section 3 analyses several FO systems and generalises the RL rules to a class of FO systems. Finally, Section 4 draws the main conclusions.

## 2. Fractional Calculus

Fractional calculus (FC) denotes the branch of calculus that extends the concepts of integrals and derivatives to noninteger and complex orders [19–23]. During the last years, FC was found to play a fundamental role in the modelling of a considerable number of phenomena [24–29] and emerged as an important tool for the study of dynamical systems where classical methods reveal strong limitations. Nowadays, the application of FC concepts includes a wide spectrum of studies [30–33], going from the dynamics of financial markets [34, 35], biological systems [36, 37], earth sciences [38], and DNA sequencing [39] up to mechanical [40–43], electrical [44–46], and control systems [21, 24].

The generalisation of the concepts of derivative and integral to noninteger orders, α, has been addressed by several mathematicians. The Riemann-Liouville, Grünwald-Letnikov, and Caputo definitions of the fractional derivative are the most used and are given, respectively, by [47]

(2) ${}_{a}^{RL}D_{t}^{\alpha}f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}\int_{a}^{t}\frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau, \quad n-1<\alpha<n$,

(3) ${}_{a}^{GL}D_{t}^{\alpha}f(t) = \lim_{h\to 0}\frac{1}{h^{\alpha}}\sum_{k=0}^{[(t-a)/h]}(-1)^{k}\binom{\alpha}{k}f(t-kh)$,

(4) ${}_{a}^{C}D_{t}^{\alpha}f(t) = \frac{1}{\Gamma(n-\alpha)}\int_{a}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau, \quad n-1<\alpha<n$,

where Γ(·) represents Euler’s gamma function, [x] is the integer part of x, and h is a time step.
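Definition (3) is the most direct route to numerical evaluation: fixing a small step h and truncating the limit gives the standard Grünwald-Letnikov approximation. The following minimal Python sketch (illustrative only, not code from the paper) implements it, generating the binomial terms with the usual recurrence:

```python
import numpy as np

def gl_derivative(f, t, alpha, h=1e-3, a=0.0):
    """Grunwald-Letnikov approximation of D^alpha f at time t, lower
    terminal a, following definition (3) with a finite step h."""
    N = int((t - a) / h)          # number of history terms, [(t - a)/h]
    coeff = 1.0                   # (-1)^0 * binom(alpha, 0)
    total = f(t)
    for k in range(1, N + 1):
        # (-1)^k binom(alpha, k) = (-1)^(k-1) binom(alpha, k-1) * (k-1-alpha)/k
        coeff *= (k - 1 - alpha) / k
        total += coeff * f(t - k * h)
    return total / h**alpha

# sanity check: D^0.5 of f(t) = t equals 2*sqrt(t/pi), i.e. ~1.1284 at t = 1
print(gl_derivative(lambda t: t, 1.0, 0.5))
```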
The Laplace transform applied to (2) yields

(5) $\mathcal{L}\{{}_{a}^{RL}D_{t}^{\alpha}f(t)\} = s^{\alpha}\mathcal{L}\{f(t)\} - \sum_{k=0}^{n-1}s^{k}\,{}_{0}^{RL}D_{t}^{\alpha-k-1}f(0^{+})$,

where $\mathcal{L}$ and s denote the Laplace operator and variable, respectively, and t represents time. The general LTI, single-input-single-output (SISO), FO incommensurate system can be represented by [48]

(6) $a_{n}D^{\alpha_{n}}y(t) + a_{n-1}D^{\alpha_{n-1}}y(t) + \cdots + a_{0}D^{\alpha_{0}}y(t) = b_{m}D^{\beta_{m}}x(t) + b_{m-1}D^{\beta_{m-1}}x(t) + \cdots + b_{0}D^{\beta_{0}}x(t)$,

where x(t) and y(t) represent the system input and output, respectively, $D^{(\cdot)}$ is the derivative operator, $\{a_{p}, b_{q}\}\in\mathbb{R}$, $\{\alpha_{p}, \beta_{q}\}\in\mathbb{R}_{0}^{+}$, p = 0, …, n, and q = 0, …, m. Besides, it is considered that $\alpha_{n}>\alpha_{n-1}>\cdots>\alpha_{0}$, $\beta_{m}>\beta_{m-1}>\cdots>\beta_{0}$, $\alpha_{n}>\beta_{m}$, and $a_{n}\neq 0$. In the Laplace domain, (6) results in a transfer function given by the ratio of two noninteger polynomials:

(7) $G(s) = \frac{b_{m}s^{\beta_{m}} + b_{m-1}s^{\beta_{m-1}} + \cdots + b_{0}s^{\beta_{0}}}{a_{n}s^{\alpha_{n}} + a_{n-1}s^{\alpha_{n-1}} + \cdots + a_{0}s^{\alpha_{0}}}$.

If $\alpha_{p} = k_{p}/v$ and $\beta_{q} = k_{q}/v$, with $v\in\mathbb{R}^{+}$ and $k_{p}, k_{q}\in\mathbb{N}_{0}$, then (7) is a commensurate FO system and can be written as

(8) $G(s) = \frac{\sum_{q=0}^{m}b_{q}(s^{1/v})^{k_{q}}}{\sum_{p=0}^{n}a_{p}(s^{1/v})^{k_{p}}}$.

The FO system is said to be rational if $v\in\mathbb{N}$. In general, a polynomial $P(s^{\alpha})$ is a multivalued function, the domain of which is a Riemann surface with an infinite number of sheets [48]. Only in the particular case of α being rational will the number of sheets be finite. Such a function becomes single-valued when an appropriate cut of the complex plane is assumed. This branch cut is not unique, but the negative real axis is usually chosen. In this case, the origin of the complex plane is a branch point, and the first Riemann sheet, ℘, is defined as

(9) $\wp = \{re^{j\phi} : r\in\mathbb{R}^{+},\ -\pi<\phi<\pi\}$.

For example, Figure 2 depicts two Riemann surfaces corresponding to the function $P(s^{\alpha}) = s^{\alpha} + b$ (α > 0, b > 0), the roots of which are

(10) $s = b^{1/\alpha}\cdot e^{j(\pi+2h\pi)/\alpha}, \quad h = 0, \pm 1, \pm 2, \ldots;\ j = \sqrt{-1}$.

Riemann surfaces: (a) $P(s^{\alpha}) = s^{1/2}+1$ has two sheets; (b) $P(s^{\alpha}) = s^{4/3}+1$ has three sheets.

For α = 1/2 and b = 1, the Riemann surface has two sheets (Figure 2(a)), and for α = 4/3 and b = 1, the Riemann surface presents three different sheets (Figure 2(b)). In the former case, there are no roots on the first sheet, and in the latter case, two roots appear on the first sheet. Riemann surfaces are important when dealing with the RL of FO systems, as will be seen in Section 3.

## 3. Root Locus

In this section, we assume that the system open-loop transfer function is given by

(11) $G(s) = K\,\frac{\prod_{q=1}^{m}(s+b_{q})^{\beta_{q}}}{\prod_{p=1}^{n}(s+a_{p})^{\alpha_{p}}}$,

where $a_{p}, b_{q}\in\mathbb{C}$ and $\alpha_{p}, \beta_{q}\in\mathbb{R}^{+}$. Equation (11) represents a direct extension to the FO domain of the classical integer-order problem usually addressed by control engineers when dealing with RL analysis. Rules for RL sketching applicable to this case are summarised in Table 1. Only the first Riemann sheet will be considered.

Table 1: Practical rules for RL sketching of FO systems as defined in (11).

- Rule 1: The RL is symmetrical about the real axis.
- Rule 2: If $l < \sum_{p=1}^{n}\alpha_{p} < l+2$, with l = 1, 3, 5, …, then the number of branches is l + 1.
- Rule 3: Mark all open-loop poles and zeros on the s-plane.
- Rule 4: If necessary, use the angle condition (1) to determine the open-loop poles that have RL branches departing from them.
- Rule 5: Compute the asymptotes centroid, σ, and angles, φ, according to $\sigma = \dfrac{-\sum_{p=1}^{n}a_{p}\alpha_{p} + \sum_{q=1}^{m}b_{q}\beta_{q}}{\sum_{p=1}^{n}\alpha_{p} - \sum_{q=1}^{m}\beta_{q}}$ and $\varphi = \dfrac{(2h+1)\cdot 180°}{\sum_{p=1}^{n}\alpha_{p} - \sum_{q=1}^{m}\beta_{q}}$, h = 0, ±1, ±2, ….
- Rule 6: Points on the real axis belong to the RL if δ (the sum $\sum_{p=1}^{n}\alpha_{p} - \sum_{q=1}^{m}\beta_{q}$ taken over the poles and zeros seen to the right of the point) is an odd integer.
- Rule 7: Find the intersection of the RL with the imaginary axis by making s = jω in the characteristic equation and solving it to determine K and ω.
- Rule 8: Compute the breakaway and break-in points using the characteristic equation and determining dK/ds = 0.
- Rule 9: Determine the departure and arrival angles using the angle condition (1).

In the sequel, several examples are presented, namely, (i) one FO real pole; (ii) two FO real poles; (iii) one FO pole and one FO zero; (iv) a pair of FO complex conjugate poles. The RL plots are generated using the numeric algorithm presented in [17]. The application of the practical sketching rules is detailed for a few examples, and for all cases, the RL plots serve the purpose of elucidating system dynamics. This will help readers to gain intuition about system behaviour as a function of the fractional orders of the poles and zeros.

### 3.1. One Fractional-Order Real Pole

In this case, the open-loop transfer function is given by

(12) $G_{1}(s) = \frac{K}{(s+a_{1})^{\alpha_{1}}}$,

where the RL corresponds to the roots of the characteristic equation

(13) $(s+a_{1})^{\alpha_{1}} + K = 0$,

that is,

(14) $s = -a_{1} + K^{1/\alpha_{1}}\cdot e^{j(2h+1)\pi/\alpha_{1}}, \quad h = 0, \pm 1, \pm 2, \ldots$.

In general, the RL spreads along several Riemann sheets, meaning that RL branches can begin in one sheet, cross the branch cut, and enter another sheet.
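The first-sheet bookkeeping in (9), (10), and (14) is easy to automate. The short numpy sketch below (an illustration, not the authors' algorithm) enumerates the candidate angles in (10) and keeps only those satisfying the first-sheet condition (9); it reproduces the root counts noted for Figure 2:

```python
import numpy as np

def first_sheet_roots(alpha, b):
    """Roots of P(s^alpha) = s^alpha + b lying on the first Riemann sheet,
    via (10): s = b**(1/alpha) * exp(j*(pi + 2*h*pi)/alpha), |arg s| < pi."""
    roots = []
    for h in range(-10, 11):                  # a modest range of h suffices
        phi = (np.pi + 2 * h * np.pi) / alpha
        if -np.pi < phi < np.pi:              # first-sheet condition (9)
            roots.append(b**(1.0 / alpha) * np.exp(1j * phi))
    return roots

print(len(first_sheet_roots(0.5, 1)))   # 0: no first-sheet roots, Figure 2(a)
print(len(first_sheet_roots(4/3, 1)))   # 2: two first-sheet roots, Figure 2(b)
```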
For example, considering $G_{1}(s)$ with $a_{1}=2$ and $\alpha_{1}=1.5$, we verify that the characteristic equation (13) has roots in two Riemann sheets (Figure 3(a)). However, choosing $\alpha_{1}=1.6$ results in roots in five different sheets (Figure 3(b)).

Root locus of G1(s): (a) a1 = 2 and α1 = 1.5 result in RL branches in two sheets; (b) a1 = 2 and α1 = 1.6 result in RL branches in five sheets.

It is well known that just the first Riemann sheet has physical significance [49]. As such, in the sequel, we consider only the RL branches corresponding to the first sheet. Observing the RL of G1(s), we verify that for 0 < α1 < 1, there are no closed-loop poles. However, for 1 ≤ α1 < 4, several graphs are obtained, as shown in Figure 4. Starting from the integer case (α1 = 1) represented in Figure 4(a), as the FO pole order increases, two branches emerge from the open-loop pole s = -2 and flow towards infinity (Figure 4(b)). For α1 = 2, we get the classical plot with two vertical branches (Figure 4(c)). Increasing α1 (2 < α1 < 3), two RL branches are still observed (Figure 4(d)). When α1 = 3, the well-known three-branch RL occurs (Figure 4(e)), and finally, when the FO pole order is in the interval 3 < α1 < 4, four branches emerge. Larger values of the FO pole order (i.e., α1 ≥ 4) were also investigated. We concluded that the RL sketching rules also apply. The results are of the same type, and therefore, we decided not to include them.

Root locus of G1(s) for 1 ≤ α1 < 4 and a1 = 2.

The practical rules apply to all FO cases. For example, for the RL shown in Figure 4(f), as α1 = 3.5, the RL has four branches. The asymptotes centroid and angles are σ = -2 and φ = -154.3°, -51.4°, 51.4°, and 154.3°, respectively. Solving the characteristic equation for s = jω, the RL branches intersect the imaginary axis at ω = ±2.51, for K = 59.2.
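Rule 5 is equally easy to script. The helper below (a hypothetical utility, not code from the paper) computes the centroid and the distinct asymptote angles in (-180°, 180°] for a transfer function in the form (11), and reproduces the values just quoted for Figure 4(f):

```python
def asymptotes(poles, zeros):
    """Rule 5 for G(s) = K * prod (s+b_q)^beta_q / prod (s+a_p)^alpha_p.
    poles = [(a_p, alpha_p), ...]; zeros = [(b_q, beta_q), ...]."""
    d = sum(a for _, a in poles) - sum(b for _, b in zeros)
    sigma = (-sum(p * a for p, a in poles)
             + sum(z * b for z, b in zeros)) / d
    phis = sorted({round((2 * h + 1) * 180.0 / d, 1)
                   for h in range(-10, 11)
                   if -180.0 < (2 * h + 1) * 180.0 / d <= 180.0})
    return sigma, phis

# G1(s) = K/(s+2)^3.5, Figure 4(f):
print(asymptotes([(2.0, 3.5)], []))   # (-2.0, [-154.3, -51.4, 51.4, 154.3])
```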
### 3.2. Two Fractional-Order Real Poles

In this subsection, we consider the open-loop transfer function given by

(15) $G_{2}(s) = \frac{K}{(s+a_{1})^{\alpha_{1}}(s+a_{2})^{\alpha_{2}}}$.

The RL was computed for various values of {α1, α2} (a1 = 2, a2 = 1) and the graphs analysed. It was observed that no RL branches exist when α12 = α1 + α2 < 1. Several RL examples are depicted in Figures 5 to 7 for 1 ≤ α12 < 4. The results are presented in three groups: (i) 1 ≤ α12 < 2; (ii) 2 ≤ α12 < 3; (iii) 3 ≤ α12 < 4. Similar results were observed for α12 ≥ 4 and for a1 < a2. In both cases, the practical sketching rules still apply.

Root locus of G2(s). Cases from group (i), 1 ≤ α12 < 2 (a1 = 2, a2 = 1).

Figure 5 shows the plots from group (i). When α12 = 1, the RL has a single branch on the real axis (Figures 5(a) and 5(b)). As α12 increases (1 < α12 < 2), two branches emerge from the poles s = -2 or s = -1, depending on the values of α1 and α2, and tend to infinity (Figures 5(c) to 5(f)). As stated in Section 3.1, all practical rules are valid for G2(s), (15). Using the case shown in Figure 5(f), for example, we have α1 + α2 = 1.9, meaning that the RL has two branches. As we have two open-loop poles, rule 4 must be used to determine the pole from which the branches depart. Thus, applying the angle condition to the test points p1 and p2, we obtain ϕ1 = -90° and ϕ2 = 90°, respectively, indicating that no branches can depart from s = -2. Rule 4 can be used in all cases; nevertheless, an easier-to-use specific rule about RL starting and ending points still requires more research before a definitive statement can be made. The angle condition is also used to determine the departing angles from the pole s = -1, resulting in ϕ = ±138.46°. The asymptotes centroid and angles are σ = -1.3 and φ = ±94.7°, respectively.

Figure 6 depicts results from group (ii). When α12 = 2, we get the plots represented in Figures 6(a) and 6(b). We observe two RL branches that, as before, depending on the values of α1 and α2, can depart from one or the other open-loop pole. In both cases, the branches tend to infinity with angles φ = ±90°. Increasing the value of α12 (2 < α12 < 3), two RL branches are still observed (Figures 6(c) to 6(h)).

Root locus of G2(s). Cases from group (ii), 2 ≤ α12 < 3 (a1 = 2, a2 = 1). Root locus of G2(s). Cases from group (iii), 3 ≤ α12 < 4 (a1 = 2, a2 = 1).

The results from group (iii) are illustrated in Figure 7. For α12 = 3, the RL of Figures 7(a) to 7(c) shows three branches that depart from the same or different open-loop poles and flow to infinity with angles φ = 180° and ±60°. Increasing α12 (3 < α12 < 4), four RL branches arise (Figures 7(d) to 7(g)). The results obtained for two FO real poles are similar to those of a single real pole. This means similar behaviour, both in terms of the number of branches and the type of RL charts, whenever α1 and α12 are close. It should be noted that the RL depends not only on the equivalent order α12 (by means of rules 2, 5, or 6) but also on the FO of each pole. In other words, the same value of α12 may lead to different RL.

### 3.3. One Fractional-Order Pole and One Fractional-Order Zero

In this case, the open-loop transfer function is given by

(16) $G_{3}(s) = K\,\frac{(s+b_{1})^{\beta_{1}}}{(s+a_{1})^{\alpha_{1}}}$.

The RL was obtained for various values of {α1, β1} (a1 = 1, b1 = 2) and the graphs analysed as previously. It was observed that no RL branches exist when α1 < 1. Figures 8 to 10 depict several RL for 1 ≤ α1 < 4. As before, to ease the comparison, the results are presented in three groups: (i) 1 ≤ α1 < 2; (ii) 2 ≤ α1 < 3; (iii) 3 ≤ α1 < 4. Additional experiments were carried out, both for different values of the FO pole and FO zero and for a1 > b1. We concluded that the sketching rules are valid for all cases and the results are similar to those presented.

Root locus of G3(s). Cases from group (i), 1 ≤ α1 < 2 (a1 = 1, b1 = 2).

Figure 8 shows plots from group (i). We see that each RL has two branches, the paths of which depend on the difference between the orders of the denominator and numerator, δ = α1 - β1: when δ < 1, both branches converge to the open-loop zero (Figure 8(a)); if δ = 1, one branch converges to the open-loop zero and the other tends to infinity (on the real axis) (Figure 8(b)); for 1 < δ < 2, the two branches flow to infinity (Figure 8(c)). Applying rule 6 to the case depicted in Figure 8(b), for all real-axis points in the interval ]-∞, -2], we have δ = 1, meaning that this interval belongs to the RL. The break-in point is computed using rule 8, resulting in s = -2.3.
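Rule 8 can be reproduced symbolically. The exact orders used for Figure 8(b) are not stated in the text, so the sketch below assumes α1 = 1.3 and β1 = 0.3 (a pair with δ = 1 that is consistent with the reported break-in point) and solves dK/ds = 0 with sympy:

```python
import sympy as sp

s = sp.symbols('s', real=True)
a1, b1 = 1, 2
alpha1, beta1 = sp.Rational(13, 10), sp.Rational(3, 10)  # assumed orders

# On the real-axis locus, (16) gives K = -(s + a1)**alpha1 / (s + b1)**beta1;
# for K != 0, dK/ds = 0 is equivalent to d(log|K|)/ds = 0:
stationary = sp.Eq(alpha1 / (s + a1) - beta1 / (s + b1), 0)
print(sp.solve(stationary, s))   # [-23/10], i.e. the break-in point s = -2.3
```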
Figure 9 depicts several plots from group (ii), that is, 2 ≤ α1 < 3. All RL still have two branches, the paths of which depend on the difference between the FO of the open-loop pole and zero (Figures 9(a) to 9(e)).

Root locus of G3(s). Cases from group (ii), 2 ≤ α1 < 3 (a1 = 1, b1 = 2). Root locus of G3(s). Cases from group (iii), 3 ≤ α1 < 4 (a1 = 1, b1 = 2).

Several RL for group (iii), 3 ≤ α1 < 4, are shown in Figure 10. It can be observed that all RL have four branches, and as before, the paths depend on the difference between the orders of the open-loop pole and zero.

### 3.4. One Pair of Fractional-Order Complex Conjugate Poles

The open-loop transfer function is given by

(17) $G_{4}(s) = \frac{K}{(s^{2} + 2\xi\omega_{n}s + \omega_{n}^{2})^{\alpha_{1}/2}} = \frac{K}{(s+a_{1})^{\alpha_{1}/2}\cdot(s+a_{1}^{*})^{\alpha_{1}/2}}$,

where $a_{1}\in\mathbb{C}$ and $a_{1}^{*}$ denotes the conjugate of $a_{1}$. Plotting the RL, it can be seen that there are no branches unless α1 ≥ 1. In Figure 11, several RL graphs are shown for 1 < α1 < 8. Figure 11(a) depicts the RL for α1 = 1.2, where we can see that there are gaps between the open-loop poles and the points where the branches initiate. Recalling that the RL can spread along several Riemann sheets, meaning that RL branches can begin in one sheet, cross the branch cut, and enter another sheet, the gaps correspond to points not belonging to the first Riemann sheet. As in the previous examples, when 1 < α1 < 3, the RL has two branches (Figures 11(a) to 11(c)). When 3 < α1 < 5, the number of branches is four. Even so, for 3 < α1 < 4, there are gaps in two branches (Figure 11(d)), and for 4 < α1 < 5, two extra small branches depart from the open-loop poles and end close to those points, entering another Riemann sheet (Figure 11(e)). The same qualitative behaviour is observed for 5 < α1 < 7 (Figures 11(f) to 11(g)). Figure 11(h) depicts the RL for 7 < α1 < 8, revealing eight branches departing from the open-loop poles.

Root locus of G4(s) for 1 < α1 < 8 (a1 = 1 - j2).

To conclude the analysis, we use the case shown in Figure 11(g) to underline that all RL practical rules are applicable, namely, the asymptotes centroid and angles, which are σ = -1 and φ = -158.8°, -52.9°, 52.9°, and 158.8°, respectively. The angle condition is used to determine the departing angles from the pole s = -1 + j2, resulting in the values ϕ = -142.9°, -37.1°, 68.8°, and 174.7°.
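Finally, rule 7 can also be checked numerically. The sketch below (again illustrative, assuming only numpy and scipy) solves the angle condition on the imaginary axis for the Section 3.1 example G1(s) = K/(s+2)^3.5 and then applies the magnitude condition, recovering the crossing reported for Figure 4(f):

```python
import numpy as np
from scipy.optimize import brentq

a1, alpha1 = 2.0, 3.5
# On s = j*w, the angle condition (1) for G1 reads alpha1*arg(j*w + a1) = 180 deg
angle_error = lambda w: alpha1 * np.angle(1j * w + a1) - np.pi

w = brentq(angle_error, 0.1, 10.0)    # crossing frequency
K = abs(1j * w + a1) ** alpha1        # magnitude condition gives the gain
print(w, K)                           # ~2.51 and ~59.2, as in Section 3.1
```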
## 4. Conclusion

The Root-Locus (RL) is a classical method for the analysis and synthesis of linear time-invariant (LTI) integer-order systems, consisting of the plot of the paths of all possible closed-loop poles as a design parameter varies in a given range.
Nowadays, there are efficient numerical algorithms devoted to RL analysis, implemented in several packages. For integer-order systems, there are well-known practical rules for RL sketching, but those cannot be directly applied to FO systems, and the existing literature on this topic almost exclusively focuses on particular cases, namely, commensurate FO systems.

This paper generalises the RL practical rules to a class of FO systems defined by an open-loop transfer function expressed as a ratio of FO zeros and poles. As is usual with practical rules, the resulting RL sketch may be somewhat incomplete; even so, the ability to quickly sketch the RL by hand is invaluable to the control designer for making fundamental decisions early in the design process.

--- *Source: 102068-2013-06-20.xml*
# Capacity-Equivocation Regions of the DMBCs with Noiseless Feedback

**Authors:** Xinxing Yin; Zhi Xue; Bin Dai
**Journal:** Mathematical Problems in Engineering (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102069

---

## Abstract

The discrete memoryless broadcast channels (DMBCs) with noiseless feedback are studied. The entire capacity-equivocation regions of two models of the DMBCs with noiseless feedback are obtained. One is the degraded DMBCs with rate-limited feedback; the other is the less and reversely less noisy DMBCs with causal feedback. In both models, two kinds of messages are transmitted. The common message is to be decoded by both the legitimate receiver and the eavesdropper, while the confidential message is only for the legitimate receiver. Our results generalize the secrecy capacity of the degraded wiretap channel with rate-limited feedback (Ardestanizadeh et al., 2009) and the restricted wiretap channel with noiseless feedback (Dai et al., 2012). Furthermore, we use a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument, which is complex and not intuitive.

---

## Body

## 1. Introduction

Secure data transmission is an important requirement in wireless communication. Wyner first studied the degraded wiretap channel in [1] (the wiretap channel is said to be (physically) degraded if X→Y→Z form a Markov chain, where X is the channel input and Y and Z are the channel outputs of the legitimate receiver and wiretapper, resp.), where the output ZN of the channel to the wiretapper is degraded with respect to the output YN of the channel to the legitimate receiver. In Wyner's model, the transmitter aimed to send a confidential message S to the legitimate receiver and keep the wiretapper as ignorant of the message as possible. Wyner obtained the secrecy capacity (the secrecy capacity is the best data transmission rate under perfect secrecy, i.e., with the equivocation at the wiretapper attaining its maximum, H(S∣ZN)=H(S); the formal definition of the secrecy capacity is given in Remark 3) and demonstrated that provably secure communication could be implemented by using information-theoretic methods. This model was extended to a more general case by Csiszár and Körner [2], where the broadcast channel with confidential messages was studied; see Figure 1. They considered transmitting not only the confidential message S to the legitimate receiver, but also a common message W to both the legitimate receiver and the eavesdropper. The capacity-equivocation region for the extended model was determined in [2]. This region contains all the achievable rate triples (R0,R1,Re), where R0 and R1 are the rates of the common and confidential messages and Re is the rate of the confidential message's equivocation. Nevertheless, neither Wyner's model nor Csiszár's model considered feedback.

Figure 1: Broadcast channel with confidential messages.

To explore more ways of achieving secure data transmission, [3–5] studied the effects of feedback on the capacities of several channel models. They all showed that feedback could help enhance the secrecy in wireless transmission. In [3], Ahlswede and Cai presented both inner and outer bounds on the secrecy capacity of the wiretap channel with secure causal feedback from the decoder and showed that the outer bound was tight for the degraded case.
It was proved that, by using feedback, the secrecy capacity of the (degraded) wiretap channel was increased. After Ahlswede's exploration, Ardestanizadeh et al. studied the wiretap channel with secure rate-limited feedback [4]. The main difference between Ardestanizadeh's model and Ahlswede's model is that the feedback in [4] is independent of the channel outputs, while the feedback in [3] originates causally from the outputs of the channel to the legitimate receiver. In [4], the authors obtained an outer bound on the wiretap channel with rate-limited feedback through a recursive argument, which was effective but not intuitive. They also showed that the outer bound was tight for the degraded case. In addition, Dai et al. investigated the secrecy capacity of the restricted wiretap channel with noiseless causal feedback under the assumption that the main channel is independent of the wiretap channel [5].

However, all of these explorations [3–5] focused on sending only the confidential messages. They did not consider sending both the common and confidential messages. In fact, transmitting the two kinds of messages can be seen in many systems with feedback. For example, in the satellite television service, some channels are available to all users for free, but some other channels are only for those who have paid for them. Recently, [6] studied the problem of transmitting both the common and confidential messages in the degraded broadcast channels with feedback. Note that, like [3], the feedback in [6] originated causally from the legitimate receiver's channel outputs and was not rate-limited. Besides, [7–9] studied the broadcast channel with feedback where no secrecy constraints were imposed.

To further investigate secure data transmission with both common and confidential messages and noiseless feedback, this paper determines the capacity-equivocation regions of the following two DMBCs with both common and confidential messages, which were unsolved in previous explorations.

(i) Degraded DMBCs with rate-limited feedback, where the feedback rate is limited by Rf and the feedback is independent of the channel outputs; see Figure 2.

(ii) Less and reversely less noisy DMBCs with noiseless causal feedback, where the feedback originates causally from the legitimate receiver's channel outputs; see Figure 3. (Let X be the input of the DMBC, Y the legitimate receiver's channel output, and Z the eavesdropper's channel output. A DMBC p(y,z∣x) is said to be less noisy if I(U;Y)≥I(U;Z) for all p(u,x); a DMBC p(y,z∣x) is said to be reversely less noisy if I(U;Y)≤I(U;Z) for all p(u,x), where u is the value of the auxiliary random variable U.)

Figure 2: Degraded DMBCs with rate-limited feedback.

Figure 3: Less and reversely less noisy DMBCs with noiseless causal feedback.

The two channel models are characterized in Section 2. The main results presented in Section 2 subsume some important previous findings about secure data transmission with feedback. (1) By setting the auxiliary random variable U to be constant in the secrecy capacity of the first model (see (9) in Remark 3), the secrecy capacity of the degraded wiretap channel with rate-limited feedback [4] is obtained. (2) By eliminating the common message in the second model, the capacity-equivocation region of the restricted wiretap channel with noiseless feedback [5] is obtained.
(3) We utilize a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument (see [4]), which is complex and not intuitive. (4) We find that even if the eavesdropper is in a better position than the legitimate receiver, provably secure communication can still be implemented in the DMBCs with both common and confidential messages.

The remainder of the paper is organized as follows. Section 2 gives the notations and main results, that is, the capacity-equivocation regions of the two channel models. Section 3 proves Theorem 2. Section 4 proves Theorems 4 and 5. Section 5 concludes the whole work.

## 2. Channel Models and Main Results

### 2.1. Notations

Throughout this paper, we use calligraphic letters, for example, 𝒳, 𝒴, to denote the finite sets and ∥𝒳∥ to denote the cardinality of the set 𝒳. Uppercase letters, for example, X, Y, are used to denote random variables taking values from finite sets, for example, 𝒳, 𝒴. The value of the random variable X is denoted by the lowercase letter x. We use Zij to denote the (j-i+1)-vectors (Zi,Zi+1,…,Zj) of random variables for 1≤i≤j and will always drop the subscript when i=1. Moreover, we use X~p(x) to denote the probability mass function of the random variable X. For X~p(x) and 0≤ϵ≤1, the set of the typical N-sequences xN is defined as 𝒯XN(ϵ)={xN:|π(x∣xN)-p(x)|≤ϵp(x) for all x∈𝒳}, where π(x∣xN) denotes the frequency of occurrences of letter x in the sequence xN (for more details about typical sequences, please refer to [10, Chapter 2]). The set of the conditional typical sequences, for example, 𝒯Y∣XN(ϵ), is defined similarly.

### 2.2. Channel Models and Main Results

This paper studies the secure data transmission for two subclasses of DMBCs with noiseless feedback. One is the case where the feedback is rate-limited and independent of the channel outputs (see Figure 2); the other is the case where the feedback originates causally from the channel outputs (see Figure 3). Both models consist of a transmitter and two receivers, named receiver 1 (legitimate receiver) and receiver 2 (eavesdropper). The transmitter aims to convey a common message W to both receivers in addition to a confidential message S intended only for receiver 1. The confidential message S should be kept secret from receiver 2 as much as possible. We use the equivocation at receiver 2 to characterize the secrecy of the confidential message. W and S are mutually independent and uniformly distributed over 𝒲 and 𝒮.

#### 2.2.1. Degraded DMBCs with Rate-Limited Feedback

The degraded DMBCs with rate-limited feedback (see Figure 2) are under the condition that the channel to receiver 2 is physically degraded from the channel to receiver 1; that is, p(y,z∣x)=p(y∣x)p(z∣y), or X→Y→Z form a Markov chain, where X is the channel input and Y, Z are the observations of receivers 1 and 2. In this model, the encoder encodes the messages (W,S) and the feedback into codewords XN, where N is the length of the codeword. They are transmitted over a discrete memoryless channel (DMC) with transition probability ∏i=1N p(yi,zi∣xi). Receiver 1 obtains YN and decodes the common and confidential messages (W^,S^). Receiver 2 obtains ZN and decodes the common message W^. More precisely, we define the encoder-decoder (N,Δ,Pe1,Pe2) in Definition 1.
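To make the degradedness condition concrete, the following is a minimal simulation sketch, assuming binary alphabets and illustrative crossover probabilities: the eavesdropper's output Z is obtained by passing the legitimate receiver's output Y through a second binary symmetric channel, so that p(y,z∣x)=p(y∣x)p(z∣y) and X→Y→Z hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p1, p2 = 100_000, 0.1, 0.2           # illustrative blocklength and crossovers

# Physically degraded DMBC built as a cascade of two BSCs, so that
# p(y, z | x) = p(y | x) * p(z | y) and X -> Y -> Z by construction.
x = rng.integers(0, 2, N)                        # channel input
y = x ^ (rng.random(N) < p1).astype(int)         # receiver 1 sees BSC(p1)
z = y ^ (rng.random(N) < p2).astype(int)         # receiver 2 sees Y through BSC(p2)

# End to end, receiver 2's channel behaves like a single BSC whose crossover
# is p1*(1-p2) + p2*(1-p1), exactly as the Markov chain X -> Y -> Z predicts.
print(np.mean(x != y), np.mean(x != z), p1 * (1 - p2) + p2 * (1 - p1))
```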
Definition 1. The encoder-decoder (N,Δ,Pe1,Pe2) for the degraded DMBCs with rate-limited feedback (with rate limited by Rf) is defined as follows.

(i) The feedback alphabet 𝒦 satisfies limN→∞(log∥𝒦∥/N)≤Rf. The feedback is generated independently of the channel output symbols.

(ii) The stochastic channel encoder φ is specified by a matrix of conditional probability distributions φ(xN∣s,w,k), which denotes the probability that the messages s, w and the feedback k are encoded as the channel input xN, where xN∈𝒳N, s∈𝒮, w∈𝒲, k∈𝒦, and ∑xN φ(xN∣s,w,k)=1. Note that 𝒮 and 𝒲 are the confidential and common message sets.

(iii) Decoder 1 is a mapping h1:𝒴N→𝒮×𝒲. The input of decoder 1 is YN, and the output is (S^,W^). The decoding error probability of receiver 1 is defined as Pe1=Pr{h1(YN)≠(S,W)}. Similarly, decoder 2 is defined as a mapping h2:𝒵N→𝒲. The input of decoder 2 is ZN, and the output is W^^. The decoding error probability of receiver 2 is defined as Pe2=Pr{h2(ZN)≠W}.

(iv) The equivocation at receiver 2 is defined as

(1) Δ = (1/N)·H(S∣ZN).

A rate triple (R0,R1,Re) is said to be achievable for the model in Figure 2 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) defined in Definition 1, such that

(2) limN→∞(log∥𝒲∥/N) = R0,
(3) limN→∞(log∥𝒮∥/N) = R1,
(4) limN→∞(log∥𝒦∥/N) = Rf′ ≤ Rf,
(5) limN→∞ Δ ≥ Re,
(6) Pe1 ≤ ϵ, Pe2 ≤ ϵ,

where ϵ is an arbitrarily small positive real number, R0, R1, Rf′ are the rates of the common messages, confidential messages, and feedback, and Re is the equivocation rate of the confidential messages. Note that the feedback rate is limited by Rf. The capacity-equivocation region is defined as the convex closure of all achievable rate triples (R0,R1,Re). The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is shown in the following theorem.

Theorem 2. For the degraded DMBCs with limited feedback rate Rf, the capacity-equivocation region is the set

(7) ℛd = {(R0,R1,Re) : 0≤Re≤R1, R0≤I(U;Z), R1≤I(X;Y∣U), Re≤I(X;Y∣U)-I(X;Z∣U)+Rf},

where U is an auxiliary random variable and U→X→Y→Z form a Markov chain.

The proof of Theorem 2 is given in Section 3. The remark of Theorem 2 is shown below.

Remark 3. (i) The secrecy capacity of the model in Figure 2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy; that is,

(8) Cs = max{R1 : (R0=0, R1, Re=R1) ∈ ℛ},

where ℛ is the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with limited feedback rate Rf is

(9) Csd = max min{I(X;Y∣U), I(X;Y∣U)-I(X;Z∣U)+Rf}.

This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]) by setting the auxiliary random variable U to be constant in (9). (ii) The capacity-equivocation region in (7) is larger than that in [2] without feedback. This implies that feedback can be used to enhance the secrecy in the DMBCs. Note that this finding had already been verified in [3–6].
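The effect of the feedback rate in (9) can be illustrated numerically. The following is a minimal sketch, not taken from the paper, assuming the degraded BSC cascade simulated above, a uniform input (the natural choice for this symmetric instance), and U constant, which is the reduction to the setting of [4] noted in Remark 3(i): then I(X;Y)=1-h2(p1), I(X;Z)=1-h2(q) with q the composed crossover, and (9) becomes min{I(X;Y), I(X;Y)-I(X;Z)+Rf}.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def cs_degraded_bsc(p1, p2, Rf):
    """Evaluate (9) with U constant for the BSC cascade X -> Y -> Z:
    min{ I(X;Y), I(X;Y) - I(X;Z) + Rf } under a uniform input."""
    q = p1 * (1 - p2) + p2 * (1 - p1)   # effective crossover of the Z-channel
    Ixy = 1 - h2(p1)                    # I(X;Y) for BSC(p1), uniform input
    Ixz = 1 - h2(q)                     # I(X;Z) for the composed BSC(q)
    return min(Ixy, Ixy - Ixz + Rf)

# Feedback buys secrecy rate until the main-channel mutual information caps it.
for Rf in (0.0, 0.1, 0.3, 1.0):
    print(Rf, round(cs_degraded_bsc(0.1, 0.2, Rf), 3))
```

For this instance the secrecy rate grows linearly with Rf from I(X;Y)-I(X;Z) until it saturates at I(X;Y), mirroring the two terms of the minimum in (9).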
#### 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback

The model in Figure 3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Different from the model in Figure 2, the feedback in Figure 3 originates causally from the channel outputs of receiver 1 to the transmitter. The stochastic encoder for this model at time i, 1≤i≤N, is defined as fi(xi∣wi,si,yi-1), where wi∈𝒲, si∈𝒮, yi-1∈𝒴i-1 (the channel outputs of receiver 1 before time i), and ∑xi∈𝒳 fi(xi∣wi,si,yi-1)=1.

A rate triple (R0,R1,Re) is said to be achievable for the model in Figure 3 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) such that (2), (3), (5), and (6) hold. Note that the definition of "achievable" here does not include (4), since the feedback in the model of Figure 3 is not rate-limited. The definition of secrecy capacity is the same as that in Remark 3. We now present the capacity-equivocation regions of the less and reversely less noisy DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.

Theorem 4. For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set

(10) ℛl = {(R0,R1,Re) : 0≤Re≤R1, R0≤I(U;Z), R1≤I(X;Y∣U), Re≤H(Y∣Z)},

where U→X→(Y,Z) form a Markov chain.

Theorem 5. For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set

(11) ℛrl = {(R0,R1,Re) : 0≤Re≤R1, R0≤I(U;Y), R1≤I(X;Y∣U), Re≤H(Y∣X)},

where U→X→(Y,Z) form a Markov chain.

The proof of Theorems 4 and 5 is given in Section 4. The remark of Theorems 4 and 5 is given below.

Remark 6. (i) By the definition in (8), the secrecy capacity of the less noisy DMBCs with noiseless causal feedback is

(12) Csl = max min{I(X;Y∣U), H(Y∣Z)}.

The secrecy capacity of the reversely less noisy DMBCs with noiseless causal feedback is

(13) Csrl = max min{I(X;Y∣U), H(Y∣X)}.

Setting the auxiliary random variable U to be constant in (12) and (13), the capacity-equivocation region of the model in [5] is obtained. (ii) In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). This implies Y→X→Z. Therefore, it is easy to see that H(Y∣X)=H(Y∣X,Z)≤H(Y∣Z); that is, the upper bound on the equivocation rate Re in (11) for the reversely less noisy case is smaller than that in (10) for the less noisy case. This shows that when the eavesdropper is in a better position than the legitimate receiver (the reversely less noisy case), the uncertainty about the confidential messages at the eavesdropper is decreased. Besides, from (13), we see that even if the eavesdropper is in a better position, the secrecy capacity is a positive value, which means that provably secure communication can still be implemented in such an unfavourable condition.
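As a companion to Remark 6, the following minimal sketch, not from the paper, evaluates (12) and (13) with U constant (the reduction to [5] noted above) for a toy instance of the independent-channels model: p(y∣x)=BSC(p1), p(z∣x)=BSC(p2), with a uniform input. For this instance H(Y∣X)=h2(p1), and since Y→X→Z with a uniform input, H(Y∣Z)=h2(p1(1-p2)+p2(1-p1)).

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def star(p, q):
    """Crossover probability of two cascaded BSCs."""
    return p * (1 - q) + q * (1 - p)

def cs_feedback_bsc(p1, p2):
    """Evaluate (12) and (13) with U constant for independent BSCs
    p(y|x) = BSC(p1), p(z|x) = BSC(p2) and a uniform input:
    I(X;Y) = 1 - h2(p1), H(Y|Z) = h2(star(p1, p2)), H(Y|X) = h2(p1)."""
    Ixy = 1 - h2(p1)
    less_noisy = min(Ixy, h2(star(p1, p2)))   # (12), meaningful when p1 <= p2
    rev_less_noisy = min(Ixy, h2(p1))         # (13), meaningful when p2 <= p1
    return less_noisy, rev_less_noisy

print(cs_feedback_bsc(0.1, 0.3))  # legitimate receiver has the better channel
print(cs_feedback_bsc(0.3, 0.1))  # eavesdropper has the better channel: (13) > 0
```

The second call illustrates the closing observation of Remark 6: even when the eavesdropper's channel is strictly better (p1>p2), the feedback-assisted secrecy capacity (13) remains strictly positive.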
One is the case where the feedback is rate-limited and independent of the channel outputs (see Figure2); the other is the case where the feedback is originated causally from the channel outputs (see Figure 3). Both models consist of a transmitter and two receivers, named receiver 1 (legitimate receiver) and receiver 2 (eavesdropper). The transmitter aims to convey a common message W to both receivers in addition to a confidential message S intended only for receiver 1. The confidential message S should be kept secret from receiver 2 as much as possible. We use equivocation at receiver 2 to characterize the secrecy of the confidential message. W and S are mutually independent and uniformly distributed over 𝒲 and 𝒮. ### 2.2.1. Degraded DMBCs with Rate-Limited Feedback The degraded DMBCs with rate-limited feedback (see Figure2) are under the condition that the channel to receiver 2 is physically degraded from the channel to receiver 1; that is, p(y,z∣x)=p(y∣x)p(z∣y) or X→Y→Z form a Markov chain, where X is the channel input and Y,Z are observations of receiver 1 and 2. In this model, the encoder encodes the messages (W,S) and feedback into codewords XN, where N is the length of the codeword. They are transmitted over a discrete memoryless channel (DMC) with transition probability ∏i=1Np(yi,zi∣xi). Receiver 1 obtains YN and decodes the common and confidential messages (W^,S^). Receiver 2 obtains ZN and decodes the common message W^. More precisely, we define the encoder-decoder (N,Δ,Pe1,Pe2) in Definition 1.Definition 1. The encoder-decoder(N,Δ,Pe1,Pe2) for the degraded DMBCs with rate-limited feedback (with rate limited by Rf) is defined as follows.(i) The feedback alphabet𝒦 satisfies limN→∞(log∥𝒦∥/N)≤Rf. The feedback is generated independent of the channel output symbols.(ii) The stochastic channel encoderφ is specified by a matrix of conditional probability distributions φ(xN∣s,w,k) which denotes the probability that the message s,w and the feedback k are encoded as the channel input xN, where xN∈𝒳N,s∈𝒮,w∈𝒲,k∈𝒦, and ∑xNφ(xN∣s,w,k)=1. Note that 𝒮 and 𝒲 are the confidential and common message sets.(iii) Decoder 1 is a mappingh1:𝒴N→𝒮×𝒲. The input of decoder 1 is YN, and the output is S^,W^. The decoding error probability of receiver 1 is defined as Pe1=Pr{h1(YN)≠(S,W)}. Similarly, Decoder 2 is defined as a mapping h2:𝒵N→𝒲. The input of decoder 2 is ZN, and the output is W^^. The decoding error probability of receiver 2 is defined as Pe2=Pr{h2(ZN)≠W}.(iv) The equivocation at receiver 2 is defined as(1)Δ=1NH(S∣ZN).A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 2 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) defined in Definition 1, such that (2)limN→∞log∥𝒲∥N=R0,(3)limN→∞log∥𝒮∥N=R1,(4)limN→∞log∥𝒦∥N=Rf′≤Rf,(5)limN→∞Δ≥Re,(6)Pe1≤ϵ,Pe2≤ϵ, where ϵ is an arbitrary small positive real number, R0,R1,Rf′ are the rates of the common messages, confidential messages, and feedback, and Re is the equivocation rate of the confidential messages. Note that the feedback rate is limited by Rf. The capacity-equivocation region is defined as the convex closure of all achievable rate triples (R0,R1,Re). The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is shown in the following theorem.Theorem 2. 
Theorem 2. For the degraded DMBCs with feedback rate limited by Rf, the capacity-equivocation region is the set (7) ℛd={(R0,R1,Re): 0≤Re≤R1, R0≤I(U;Z), R1≤I(X;Y∣U), Re≤I(X;Y∣U)-I(X;Z∣U)+Rf}, where U is an auxiliary random variable and U→X→Y→Z form a Markov chain.

The proof of Theorem 2 is given in Section 3. A remark on Theorem 2 follows.

Remark 3. (i) The secrecy capacity of the model in Figure 2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy; that is, (8) Cs=max{R1:(R0=0,R1,Re=R1)∈ℛ}, where ℛ is the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with feedback rate limited by Rf is (9) Csd=max min{I(X;Y∣U), I(X;Y∣U)-I(X;Z∣U)+Rf}. This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]), obtained by setting the auxiliary random variable U to a constant in (9). (ii) The capacity-equivocation region in (7) is larger than the region in [2], which was obtained without feedback. This implies that feedback can be used to enhance secrecy in DMBCs; this finding had already been verified in [3–6].
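As a quick numerical sanity check on (9), consider a degraded binary symmetric setup with U taken to be constant (as in Remark 3(i)): X uniform on {0,1}, Y=X⊕N1 through a BSC with crossover p1, and Z=Y⊕N2 through a further BSC with crossover p2, which is physically degraded by construction. The sketch below is our own illustration, not part of the paper; all numbers are arbitrary.

```python
from math import log2

def h2(p):  # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p1, p2, Rf = 0.1, 0.2, 0.05           # BSC(p1) to receiver 1, then BSC(p2) on top
q = p1 * (1 - p2) + (1 - p1) * p2     # effective crossover of the X -> Z channel
I_XY = 1 - h2(p1)                     # I(X;Y) for uniform X
I_XZ = 1 - h2(q)                      # I(X;Z) for uniform X
C_sd = min(I_XY, I_XY - I_XZ + Rf)    # the expression in (9) with U constant
print(round(I_XY, 4), round(I_XZ, 4), round(C_sd, 4))
```

With these numbers the secrecy rate is limited by the second term; raising Rf increases it until it saturates at I(X;Y), matching the observation in Section 3.2 that the constraint becomes inactive once Rf≥I(X;Z∣U).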
### 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback

The model in Figure 3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Unlike the model in Figure 2, the feedback in Figure 3 originates causally from the channel outputs of receiver 1 and is sent to the transmitter. The stochastic encoder for this model at time i, 1≤i≤N, is defined as fi(xi∣wi,si,yi-1), where wi∈𝒲, si∈𝒮, yi-1∈𝒴i-1 (the channel outputs of receiver 1 before time i), and ∑xi∈𝒳 fi(xi∣wi,si,yi-1)=1.

A rate triple (R0,R1,Re) is said to be achievable for the model in Figure 3 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) such that (2), (3), (5), and (6) hold. Note that the definition of "achievable" here does not include (4), since the feedback in the model of Figure 3 is not rate-limited. The definition of secrecy capacity is the same as in Remark 3. We now present the capacity-equivocation regions of the less and reversely less noisy DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.

Theorem 4. For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set (10) ℛl={(R0,R1,Re): 0≤Re≤R1, R0≤I(U;Z), R1≤I(X;Y∣U), Re≤H(Y∣Z)}, where U→X→(Y,Z) form a Markov chain.

Theorem 5. For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set (11) ℛrl={(R0,R1,Re): 0≤Re≤R1, R0≤I(U;Y), R1≤I(X;Y∣U), Re≤H(Y∣X)}, where U→X→(Y,Z) form a Markov chain.

The proof of Theorems 4 and 5 is given in Section 4. A remark on Theorems 4 and 5 follows.

Remark 6. (i) By the definition in (8), the secrecy capacity of the less noisy DMBCs with noiseless causal feedback is (12) Csl=max min{I(X;Y∣U), H(Y∣Z)}. The secrecy capacity of the reversely less noisy DMBCs with noiseless causal feedback is (13) Csrl=max min{I(X;Y∣U), H(Y∣X)}. Setting the auxiliary random variable U to a constant in (12) and (13), the capacity-equivocation region of the model in [5] is obtained. (ii) In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). This implies Y→X→Z. Therefore, H(Y∣X)=H(Y∣X,Z)≤H(Y∣Z); that is, the upper bound on the equivocation rate Re in (11) for the reversely less noisy case is smaller than that in (10) for the less noisy case. This shows that when the eavesdropper is in a better position than the legitimate receiver (the reversely less noisy case), the uncertainty about the confidential messages at the eavesdropper decreases. Moreover, from (13) we see that even if the eavesdropper is in a better position, the secrecy capacity is positive, which means provably secure communication can still be implemented in such an unfavorable condition.
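The inequality H(Y∣X)≤H(Y∣Z) in Remark 6(ii) is easy to verify numerically under the independence assumption p(y,z∣x)=p(y∣x)p(z∣x). The sketch below (our own illustration) takes X uniform on {0,1} and two independent BSCs, Y=X⊕N1 and Z=X⊕N2; then Y=Z⊕(N1⊕N2) with N1⊕N2 independent of Z, so H(Y∣Z) is a binary entropy as well:

```python
from math import log2

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p1, p2 = 0.1, 0.05                    # BSC crossovers to receivers 1 and 2
H_Y_given_X = h2(p1)                  # Y = X xor N1, so H(Y|X) = h2(p1)
q = p1 * (1 - p2) + (1 - p1) * p2     # N1 xor N2 is Bernoulli(q), independent of Z
H_Y_given_Z = h2(q)                   # hence H(Y|Z) = h2(q)
assert H_Y_given_X <= H_Y_given_Z
print(round(H_Y_given_X, 4), round(H_Y_given_Z, 4))
```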
## 3. Proof of Theorem 2

In this section, Theorem 2 is proved. The converse part of Theorem 2 gives the outer bound on the capacity-equivocation region of the degraded DMBCs with rate-limited feedback; its proof is shown in Section 3.1. The key tools used in the proof are the identification of the auxiliary random variables and Csiszár's sum equality [2]. In Section 3.2, to prove the direct part of Theorem 2, a coding scheme is provided that achieves the rate triples in ℛd. The key ideas in the coding scheme are inspired by [4]; however, [4] considers only the transmission of confidential messages, whereas our coding scheme handles both the confidential and the common messages.

### 3.1. The Converse Part of Theorem 2

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region ℛd, we prove the converse part for the equivalent region (the fact that the two regions are equivalent follows similarly from [10, Chapter 5, Problem 5.8]) containing all the rate triples (R0,R1,Re) such that (14) 0≤Re≤R1, (15) R0≤I(U;Z), (16) R0+R1≤I(X;Y∣U)+I(U;Z), (17) Re≤I(X;Y∣U)-I(X;Z∣U)+Rf.

We now show that all achievable triples (R0,R1,Re) satisfy (14), (15), (16), and (17).

Condition (14) is proved as follows: (18) Re≤limN→∞Δ=limN→∞H(S∣ZN)/N≤limN→∞H(S)/N=R1.

To prove condition (15), we calculate (19) H(W)=I(W;ZN)+H(W∣ZN)≤(a3.1) I(W;ZN)+ϵ1=∑i=1N I(W;Zi∣Zi-1)+ϵ1=∑i=1N I(W;Zi∣Zi+1N)+ϵ1=∑i=1N [I(WYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NW)]+ϵ1≤∑i=1N [I(WYi-1Zi+1N;Zi)-I(Yi-1;Zi∣Zi+1NW)]+ϵ1≤∑i=1N I(WKNYi-1Zi+1N;Zi)+ϵ1, where (a3.1) follows from Fano's inequality and ϵ1 is a small positive number. Note that KN=(K1,K2,…,KN), where Ki is the feedback symbol at time i, 1≤i≤N.

To prove condition (16), we consider (20) H(S)+H(W)=H(S∣WKN)+H(W)=I(S;YN∣WKN)+H(S∣YNWKN)+I(W;ZN)+H(W∣ZN)≤(a3.2) I(S;YN∣WKN)+ϵ2+I(WKN;ZN)+ϵ1=∑i=1N I(S;Yi∣Yi-1WKN)+∑i=1N I(WKN;Zi∣Zi+1N)+ϵ1+ϵ2=∑i=1N [I(SZi+1N;Yi∣Yi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N [I(WKNYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2=∑i=1N [I(Zi+1N;Yi∣Yi-1WKN)+I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N [I(WKNYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2≤∑i=1N [I(Zi+1N;Yi∣Yi-1WKN)+I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N [I(WKNYi-1Zi+1N;Zi)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2=(a3.3) ∑i=1N [I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2≤∑i=1N I(S;Yi∣Zi+1NYi-1WKN)+∑i=1N I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2, where ϵ2 is a small positive number, and (a3.2) and (a3.3) follow from Fano's inequality and Csiszár's sum equality [2]; that is, ∑i=1N I(Zi+1N;Yi∣Yi-1WKN)=∑i=1N I(Yi-1;Zi∣Zi+1NWKN).

To prove condition (17), we calculate (21) H(S∣ZN)=H(S∣ZN,W)+I(S;W∣ZN)≤H(S∣ZN,W)+H(W∣ZN)=I(S;KN,YN∣ZN,W)+H(S∣ZN,KN,YN,W)+H(W∣ZN)≤I(S;KN,YN∣ZN,W)+H(S∣KN,YN)+H(W∣ZN)=I(S;KN∣ZN,W)+I(S;YN∣KN,ZN,W)+H(S∣KN,YN)+H(W∣ZN)≤H(KN)+I(S;YN∣KN,ZN,W)+H(S∣KN,YN)+H(W∣ZN)≤NRf+I(S;YN∣KN,ZN,W)+ϵ1+ϵ2. The last inequality in (21) follows from Fano's inequality and the fact that the feedback rate is limited by Rf.
Then, I(S;YN∣KN,ZN,W) is calculated as follows: (22) I(S;YN∣KN,ZN,W)=∑i=1N I(S;Yi∣Yi-1ZNWKN)=∑i=1N I(S;Yi∣Yi-1,Zi-1,Zi,Zi+1N,W,KN)=(a3.4) ∑i=1N [I(S;Yi∣Yi-1,Zi-1,Zi,Zi+1N,W,KN)+I(Zi-1;Yi∣Yi-1,ZiN,W,KN)-I(Zi-1;Yi∣Yi-1,ZiN,W,S,KN)]=∑i=1N [I(S,Zi-1;Yi∣Yi-1,Zi,Zi+1N,W,KN)-I(Zi-1;Yi∣Yi-1,ZiN,W,S,KN)]=∑i=1N I(S;Yi∣Yi-1,Zi,Zi+1N,W,KN), where (a3.4) follows from the Markov chains Yi→Yi-1ZiNWKN→Zi-1 and Yi→Yi-1ZiNWSKN→Zi-1.

Then, we introduce a random variable Q which is independent of S, W, KN, XN, YN, ZN and uniformly distributed over {1,2,…,N}. Set U=(ZQ+1N,YQ-1,W,KN,Q), V=(U,S), Y=YQ, X=XQ, Z=ZQ. It is straightforward to see that U→V→X→Y→Z form a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (19), (20), and (22) simplify to (23) H(W)≤∑i=1N I(WKNYi-1Zi+1N;Zi)+ϵ1=NI(U;Z)+ϵ1, (24) H(S)+H(W)≤∑i=1N I(S;Yi∣Zi+1NYi-1WKN)+∑i=1N I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2=NI(S;Y∣U)+NI(U;Z)+ϵ1+ϵ2=NI(V;Y∣U)+NI(U;Z)+ϵ1+ϵ2, (25) I(S;YN∣KN,ZN,W)=∑i=1N I(S;Yi∣Yi-1,Zi,Zi+1N,W,KN)=NI(S;Y∣Z,U)=NI(US;Y∣Z,U)=NI(V;Y∣Z,U).

Substituting (25) into (21) and utilizing (5), we get (26) Re≤limN→∞Δ=limN→∞H(S∣ZN)/N≤limN→∞(NRf+I(S;YN∣KN,ZN,W)+ϵ1+ϵ2)/N=I(V;Y∣Z,U)+Rf=I(V;Y∣U)-I(V;Z∣U)+Rf. The last equality in (26) follows from the Markov chain U→V→Y→Z.

To finish the proof of (16) and (17), we need to show that I(V;Y∣U)≤I(X;Y∣U) and I(V;Y∣U)-I(V;Z∣U)≤I(X;Y∣U)-I(X;Z∣U). We first prove I(V;Y∣U,X)=0 and I(V;Z∣U,X)=0: (27) I(V;Y∣U,X)=H(Y∣U,X)-H(Y∣U,V,X)=(a3.5) H(Y∣X)-H(Y∣X)=0, I(V;Z∣U,X)=H(Z∣U,X)-H(Z∣U,V,X)=(a3.6) H(Z∣X)-H(Z∣X)=0, where (a3.5) follows from the Markov chains U→X→Y and (U,V)→X→Y, and (a3.6) follows from the Markov chains U→X→Z and (U,V)→X→Z. Utilizing (27), we obtain (28) I(V;Y∣U)=I(V,X;Y∣U)-I(X;Y∣U,V)=I(X;Y∣U)+I(V;Y∣U,X)-I(X;Y∣U,V)=I(X;Y∣U)-I(X;Y∣U,V), (29) I(V;Z∣U)=I(V,X;Z∣U)-I(X;Z∣U,V)=I(X;Z∣U)+I(V;Z∣U,X)-I(X;Z∣U,V)=I(X;Z∣U)-I(X;Z∣U,V).

From (28), it is straightforward to see that I(V;Y∣U)≤I(X;Y∣U). This proves condition (16).

Next, we prove I(V;Y∣U)-I(V;Z∣U)≤I(X;Y∣U)-I(X;Z∣U). Since the channel model in Figure 1 is (physically) degraded, I(X;Y∣U=u,V=v)-I(X;Z∣U=u,V=v)≥0 holds for every (u,v), which implies (30) I(X;Y∣U,V)-I(X;Z∣U,V)≥0. Therefore, utilizing (28), (29), and (30), we get (31) I(V;Y∣U)-I(V;Z∣U)=I(X;Y∣U)-I(X;Z∣U)-[I(X;Y∣U,V)-I(X;Z∣U,V)]≤I(X;Y∣U)-I(X;Z∣U). This proves condition (17).

The converse part of Theorem 2 is proved.

### 3.2. A Coding Scheme Achieving ℛd

A coding scheme is provided that achieves the triples (R0,R1,Re)∈ℛd. The key methods used in the scheme are superposition coding, rate splitting, and random binning. The confidential message is split into two parts: one part is reliably transmitted using superposition coding and random binning; the other part is securely transmitted with the help of the feedback. Note that Section 3.1 has already given the outer bound on the capacity-equivocation region. When Rf≥I(X;Z∣U), it can be seen from (9) that the secrecy capacity for the degraded DMBCs with rate-limited feedback always equals I(X;Y∣U). Therefore, in order to investigate the effect of the feedback, only feedback rates Rf<I(X;Z∣U) are considered in this subsection.

We need to prove that all the triples (R0,R1,Re)∈ℛd for the model of Figure 2 with any feedback rate Rf′ limited by Rf are achievable (see Definition 1). This subsection is organized as follows. The codebook generation and encoding scheme are given in Section 3.2.1, the decoding scheme in Section 3.2.2, and the analyses of the error probability and the equivocation in Sections 3.2.3 and 3.2.4, respectively.
#### 3.2.1. Codebook Generation and Encoding

Split the confidential message into two parts; that is, 𝒮=(ℳ1,ℳ2). The corresponding variables M1, M2 are uniformly distributed over {1,2,3,…,2NR′} and {1,2,3,…,2NRf′}, where (32) 0≤Rf′≤Rf, R′=R1-Rf′>0. (When Rf≥R1, the confidential message S can be completely protected by using part of the feedback as a shared key between the transmitter and receiver 1, and the remaining feedback is redundant; therefore, in order to study the effect of the feedback on the capacity region, only Rf<R1 comes into consideration.)

It is important to notice that R1 is the rate of the confidential message 𝒮, which consists of ℳ1 and ℳ2. This means that (33) R1=limN→∞ log(∥ℳ1∥·∥ℳ2∥)/N=limN→∞(log∥ℳ1∥/N+log∥ℳ2∥/N).

Define the index sets 𝒥N, ℒN, ℱN, and ℳN satisfying (34) limN→∞(1/N)log∥𝒥N∥=I(X;Z∣U)-Rf′, limN→∞(1/N)log∥ℒN∥=I(X;Y∣U)-I(X;Z∣U), limN→∞(1/N)log∥ℱN∥=Rf′, limN→∞(1/N)log∥ℳN∥=I(U;Z).

We use j∈𝒥N, l∈ℒN, f∈ℱN, m∈ℳN to index the codeword xN. Take 𝒲⊂ℳN such that (2) holds. Since R1≤I(X;Y∣U), it is easy to see that ∥𝒥N×ℒN×ℱN∥≥2NR1. Therefore, let ℳ1=𝒟N×ℒN and ℳ2=ℱN, where 𝒟N is an arbitrary set such that (3) holds. Let gj be a mapping of 𝒥N into 𝒟N partitioning 𝒥N into subsets of size ∥𝒥N∥/∥𝒟N∥; that is, (35) gj:𝒥N⟶𝒟N, where gj(j)=d, j∈𝒥N, d∈𝒟N.

For each w∈𝒲, we generate a codeword uN(w) according to ∏i=1N p(ui). Then, for each uN(w), a codebook 𝒞ℬw (see Figure 4) containing ∥𝒥N∥·∥ℒN∥·∥ℱN∥ codewords xjlfmN is constructed according to ∏i=1N p(xi∣ui), where j∈𝒥N, l∈ℒN, f∈ℱN, m=w∈𝒲. These xN are put into ∥ℒN∥·∥ℱN∥ bins so that each bin contains ∥𝒥N∥ codewords. Each bin is indexed by (l,f), where l∈ℒN, f∈ℱN. Then, we divide each bin into ∥𝒟N∥ subbins such that each subbin contains ∥𝒥N∥/∥𝒟N∥ codewords. The codebook structure is presented in Figure 4.

Figure 4: The codebook 𝒞ℬw for each uN(w).

Let 𝒦={1,2,3,…,2NRf′}, where k∈𝒦 is the key sent to the transmitter from receiver 1 through the secure feedback link. It is kept secret from receiver 2. The corresponding variable K is uniformly distributed over 𝒦 and independent of S and W.

In order to send s=(d,l,m2)∈𝒟N×ℒN×ℱN and w∈𝒲, a codeword xjlfmN is chosen as follows. According to the common message w, we first find the sequence uN(w). For the determined uN(w), there is a corresponding codebook 𝒞ℬw; see Figure 4. Then, the corresponding codeword xjlfmN is sent into the channel, where j is chosen randomly from the set gj-1(d), f=k⊕m2, and m=w (here ⊕ is modulo addition over ℱN). Figure 4 shows the selection of xjlfmN in detail: according to uN(w), we find the corresponding codebook 𝒞ℬw; in 𝒞ℬw, we choose the bin indexed by (l,f); in that bin, the subbin is found according to d; finally, a codeword xN (denoted by xjlfmN) is randomly chosen from that subbin.
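The index bookkeeping in Section 3.2.1 — bins indexed by (l,f), subbins by d via gj, and the one-time pad f=k⊕m2 — can be exercised with small toy sizes. The following is a minimal sketch (our own illustration; all sizes and names are hypothetical stand-ins, and the superposition layer uN(w) is abstracted away):

```python
import random

# Hypothetical sizes standing in for ||D_N||, ||L_N||, ||F_N||, ||J_N||;
# in the proof these grow as 2^{N * rate} with the rates in (34).
D, L, F, J = 4, 8, 16, 32          # J % D == 0 so g_j partitions J_N evenly

def g_j(j):
    # The mapping g_j of (35): partitions J_N into ||D_N|| subsets of size J/D.
    return j // (J // D)

def encode(d, l, m2, k):
    """Pick the codeword index (j, l, f) for s = (d, l, m2) given the feedback
    key k: f is the one-time pad f = k (+) m2 over F_N, and j is drawn
    uniformly from the subbin g_j^{-1}(d)."""
    f = (k + m2) % F                            # modulo addition over F_N
    j = random.choice([jj for jj in range(J) if g_j(jj) == d])
    return j, l, f

def decode_indices(j_hat, l_hat, f_hat, k):
    # Receiver 1 recovers m2 = f (-) k and d = g_j(j); it knows the key k.
    return g_j(j_hat), l_hat, (f_hat - k) % F

k = random.randrange(F)                         # key fed back by receiver 1
s = (2, 5, 9)                                   # (d, l, m2)
assert decode_indices(*encode(*s, k), k) == s
```

The sketch only exercises the index structure of Figure 4; choosing the actual codewords and the common-message layer are omitted.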
#### 3.2.2. Decoding

Receiver 2 tries to find a unique sequence uN(w^^) such that (uN(w^^),zN)∈TUZN(ϵ1). If such a unique sequence exists, decoder 2 outputs w^^; otherwise, an error is declared. Since the size of 𝒲 is smaller than 2NI(U;Z), the decoding error probability for receiver 2 approaches zero.

Receiver 1 can also decode the common message w^, since the output of channel 2 is a degraded version of the output of channel 1. Then, receiver 1 tries to find a unique codeword xj^l^f^m^N, indexed by j^, l^, f^, m^, such that (xj^l^f^m^N,yN)∈TXY∣UN(ϵ2). If such a unique codeword xj^l^f^m^N exists, receiver 1 computes m^2=f^⊖k (here ⊖ is modulo subtraction over ℱN, and m^=w^) and finds d^=gj(j^). Note that receiver 1 knows the secret key k. Decoder 1 outputs s^=(d^,l^,m^2) and w^. If no such xj^l^f^m^N or more than one such xj^l^f^m^N exists, an error is declared.

#### 3.2.3. Analysis of Error Probability

Since the number of uN(w) is upper bounded by 2NI(U;Z) and the DMBCs under discussion are degraded, both receivers can decode the common message w with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, given the codeword uN(w), the number of xN is (36) ∥ℱN∥·∥𝒥N∥·∥ℒN∥=2NI(X;Y∣U). So, after determining the codeword uN(w), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. This proves (6).

#### 3.2.4. Analysis of Equivocation

The proof of (5) is given below: (37) H(S∣ZN)=H(M1,M2∣ZN)=H(M1∣ZN)+H(M2∣ZN,M1)≥H(M1∣ZN)+H(M2∣ZN,M1,K⊕M2)=(b3.1) H(M1∣ZN)+H(M2∣K⊕M2)=(b3.2) H(M1∣ZN)+H(M2)=(b3.3) H(M1∣ZN)+NRf′, where (b3.1) follows from the Markov chain M2→M2⊕K→(ZN,M1), (b3.2) follows from the fact that M2 is independent of M2⊕K, and (b3.3) follows from the fact that M2 is uniformly distributed over {1,2,3,…,2NRf′}. The fact that M2 is independent of M2⊕K is shown as follows (the proof can also be found in [6]): (38) p(M2⊕K=a)=∑k p(M2⊕K=a∣K=k)p(K=k)=(b3.4) ∑k p(M2⊕K=a∣K=k)(1/∥ℱN∥)=(1/∥ℱN∥)∑k p(M2⊕K=a∣K=k)=(1/∥ℱN∥)∑k p(M2=a⊖k∣K=k)=(b3.5) (1/∥ℱN∥)∑k p(M2=a⊖k)=1/∥ℱN∥, and p(M2⊕K=a,M2=m2)=p(K=a⊖m2,M2=m2)=(b3.6) p(K=a⊖m2)p(M2=m2)=(b3.7) (1/∥ℱN∥)·(1/∥ℱN∥), where (b3.5) and (b3.6) follow from the independence of M2 and K, and (b3.4) and (b3.7) follow from the fact that M2 and K are both uniformly distributed over ℱN. According to (38), (39) p(M2⊕K=a,M2=m2)=p(M2⊕K=a)p(M2=m2). Therefore, M2 is independent of M2⊕K.

Next, we focus on the first term in (37). The method of the equivocation analysis in [2] is used: (40) H(M1∣ZN)≥H(M1∣ZN,W)=H(M1,ZN∣W)-H(ZN∣W)=H(M1,ZN,XN∣W)-H(XN∣M1,ZN,W)-H(ZN∣W)=H(M1,XN∣W)+H(ZN∣M1,XN,W)-H(XN∣M1,ZN,W)-H(ZN∣W)≥H(XN∣W)+H(ZN∣M1,XN,W)-H(XN∣M1,ZN,W)-H(ZN∣W). Note that W in inequality (40) is the random variable representing the common message. The four terms H(XN∣W), H(ZN∣M1,XN,W), H(XN∣M1,ZN,W), and H(ZN∣W) are bounded as follows.

Given w∈𝒲, the number of xN is ∥𝒥N∥·∥ℒN∥·∥ℱN∥. By applying [12, Lemma 2.5], we obtain (41) H(XN∣W)≥log(∥𝒥N∥·∥ℒN∥·∥ℱN∥)-1=NI(X;Y∣U)-1.

Since (M1,W)→XN→ZN and the channel to receiver 2 is discrete memoryless, it is easy to get (42) H(ZN∣M1,XN,W)=H(ZN∣XN)=NH(Z∣X).

With the knowledge of (d,l)∈ℳ1 and w∈𝒲, the number of xN is (43) 2NRf′·∥𝒥N∥/∥𝒟N∥<2NRf′·∥𝒥N∥=2NRf′·2N(I(X;Z∣U)-Rf′)=2NI(X;Z∣U). So, receiver 2 can decode the codeword xN with error probability approaching zero by the standard channel coding theorem [11, Theorem 7.7.1]. Therefore, using Fano's inequality, we get (44) H(XN∣M1,ZN,W)⟶0.

Moreover, using a deduction similar to [2, Section 4], we get (45) H(ZN∣W)≤log∥TZ∣UN(ϵ1)∥≤NH(Z∣U).

Substituting (41), (42), (44), and (45) into (40), we get (46) H(M1∣ZN)≥NI(X;Y∣U)+NH(Z∣X)-NH(Z∣U)=NI(X;Y∣U)-NI(X;Z∣U), where the equality in (46) follows from the Markov chain U→X→Z.

Finally, (5) is verified by substituting (46) into (37): (47) limN→∞Δ=limN→∞H(S∣ZN)/N≥limN→∞(H(M1∣ZN)/N+Rf′)≥limN→∞((NI(X;Y∣U)-NI(X;Z∣U))/N+Rf′)=I(X;Y∣U)-I(X;Z∣U)+Rf′≥Re. This completes the proof of Theorem 2.
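The independence claim in (38) and (39) is the one-time-pad property of modulo addition with a uniform key. A brute-force check over a small stand-in alphabet (our own illustration, not part of the proof) confirms that the joint distribution factorizes:

```python
from itertools import product

F = 8                                   # stand-in for ||F_N||; any modulus works
# Joint pmf of (M2, M2 (+) K) when M2 and K are independent and uniform over Z_F.
joint = {}
for m2, k in product(range(F), repeat=2):
    a = (m2 + k) % F
    joint[(m2, a)] = joint.get((m2, a), 0) + 1 / F**2

# p(M2 = m2, M2 (+) K = a) equals (1/F)*(1/F) for every pair, as in (39).
assert len(joint) == F * F
assert all(abs(p - 1 / F**2) < 1e-12 for p in joint.values())
print("M2 and M2 (+) K are independent and uniform")
```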
## 4. Proof of Theorems 4 and 5

In this section, Theorems 4 and 5 are proved. In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). To prove Theorem 4, we first give the outer bound on the capacity-equivocation region of the less noisy DMBCs with noiseless causal feedback in Section 4.1; a coding scheme is then provided to achieve this outer bound. Similarly, to prove Theorem 5, the outer bound on the capacity-equivocation region of the reversely less noisy DMBCs with noiseless causal feedback is given in Section 4.2, together with a coding scheme that achieves it. The methods used to prove the converse parts of the two theorems are from [5]. The coding schemes are inspired by [3, 5].
### 4.1. Less Noisy DMBCs with Noiseless Causal Feedback

We first show the converse part of Theorem 4 and then prove the direct part of Theorem 4 by providing a coding scheme.

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region ℛl, we prove the converse part for the equivalent region containing all the rate triples (R0,R1,Re) such that (48) 0≤Re≤R1, (49) R0≤I(U;Z), (50) R0+R1≤I(X;Y∣U)+I(U;Z), (51) Re≤H(Y∣Z).

The proof of (48), (49), and (50) follows exactly the same lines as the proofs of (14), (15), and (16) in Section 3, except for the identification of the auxiliary random variables U, V (given subsequently), and is therefore omitted. We focus on proving (51): (52) H(S∣ZN)≤H(S∣ZN)+I(S;ZN∣YN)=H(S∣ZN)+H(S∣YN)-H(S∣YN,ZN)=I(S;YN∣ZN)+H(S∣YN)≤H(YN∣ZN)+H(S∣YN)=∑i=1N H(Yi∣Yi-1,ZN)+H(S∣YN)≤∑i=1N H(Yi∣Zi)+ϵ3, where ϵ3 is a small positive number. The last inequality in (52) follows from the fact that conditioning does not increase entropy, together with Fano's inequality. To complete the proof of (51), define a time-sharing random variable Q which is uniformly distributed over {1,2,…,N} and independent of S, W, XN, YN, ZN. Set U=(ZQ+1N,YQ-1,W,Q), V=(U,S), X=XQ, Y=YQ, Z=ZQ. It is easy to see that U→V→X→(Y,Z) form a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (52) simplifies to (53) H(S∣ZN)≤NH(Y∣Z)+ϵ3.

Finally, utilizing limN→∞Δ≥Re in the definition of "achievable" and (53), we obtain (51). This completes the proof of the converse part of Theorem 4.

Next, a coding scheme is presented to achieve the rate triples (R0,R1,Re)∈ℛl; we must prove that all triples (R0,R1,Re)∈ℛl are achievable. Note that the noiseless feedback for the less noisy DMBCs is causally transmitted from receiver 1 to the transmitter. The scheme comprises the codebook generation and encoding scheme in Section 4.1.1, the decoding scheme in Section 4.1.2, the analysis of the error probability in Section 4.1.3, and the equivocation analysis in Section 4.1.4. Techniques such as block Markov coding, superposition coding, and random binning are used.

To serve the block Markov coding, let the random vectors UN, XN, YN, and ZN consist of n blocks of length N. Let Wn≜(W1,…,Wn) stand for the common messages of the n blocks, where W1,…,Wn are independent and identically distributed random variables over 𝒲. Let Sn≜(S2,…,Sn) stand for the confidential messages of the n blocks, where S2,…,Sn are independent and identically distributed random variables over 𝒮. Note that in the first block there is no S1. Let Z~n=(Z~1,Z~2,…,Z~n) and Z~b-=(Z~1,Z~2,…,Z~b-1,Z~b+1,…,Z~n), where Z~b is the output vector at receiver 2 at the end of the bth block, 1≤b≤n. Similarly, Y~b denotes the output vector at receiver 1 at the end of the bth block, and X~b denotes the input vector of the channel in the bth block. These notations coincide with [6].

#### 4.1.1. Codebook Generation and Encoding

Let the common message set 𝒲 and the confidential message set 𝒮 satisfy (54) limN→∞(log∥𝒲∥/N)=R0, limN→∞(log∥𝒮∥/N)=R1, where R0 and R1 satisfy (10).

Fix p(u) and p(x∣u). In the bth block, 1≤b≤n, we generate 2NR0 independent and identically distributed (i.i.d.) sequences uN(wb) according to ∏i=1N p(ui), where wb∈𝒲 is the common message to be sent in the bth block. For each uN(wb), generate 2NI(X;Y∣U) codewords xN(uN(wb)) according to ∏i=1N p(xi∣ui). Put the 2NI(X;Y∣U) codewords into 2NR1 bins, so that each bin contains 2N(I(X;Y∣U)-R1) codewords. The 2NR1 bins are denoted by Q1,Q2,…,Q∥𝒮∥, where ∥𝒮∥=2NR1. The codebook structure is shown in Figure 5.
Reveal all the codebooks to the transmitter, receiver 1, and receiver 2.

Figure 5: The codebook structure.

Let g be a mapping from 𝒴N into 𝒮. Reveal the mapping g to the transmitter, receiver 1, and receiver 2. Define a random variable S′=g(YN), uniformly distributed over 𝒮 and independent of the confidential message S. It can be proved, similarly to (39), that S⊕S′ is independent of S. In the first block, that is, b=1, to send the common message w1 (note that there is no confidential message to be sent in the first block), the transmitter finds uN(w1) and randomly chooses a codeword xN(uN(w1)) from the corresponding 2NI(X;Y∣U) codewords. In the bth block (b=2,3,…,n), to send the common message wb and the confidential message sb, the transmitter computes sb′=g(y~b-1) and randomly chooses a codeword xN(uN(wb),sb) from the bin Qsb⊕sb′. Here, y~b-1 is the output vector of the (b-1)th block at receiver 1, and ⊕ is modulo addition over 𝒮.

#### 4.1.2. Decoding

In the first block, as there is no confidential message, only the common message needs to be decoded by both receivers. Receiver 2 tries to find a unique sequence uN(w^^1) such that (uN(w^^1),z~1)∈TUZN(ϵ1′), where ϵ1′ is a small positive number. If such a unique sequence exists, decoder 2 outputs w^^1; otherwise, an error is declared. Receiver 1 tries to find a unique sequence uN(w^1) such that (uN(w^1),y~1)∈TUYN(ϵ1′′), where ϵ1′′ is a small positive number. If such a unique sequence exists, the output is w^1; otherwise, an error is declared.

In the bth block, 2≤b≤n, receiver 2 aims to decode the common message, and receiver 1 aims to decode both the confidential and common messages. The common message wb is decoded by both receivers in the same way as in the first block. Then, receiver 1 tries to find a unique sequence xN(uN(w^b),s^b) such that (xN(uN(w^b),s^b),y~b)∈TXY∣UN(ϵ2′), where ϵ2′ is a small positive number. If such a unique sequence exists in one bin, denote the index of that bin by sb′′; receiver 1 computes s^b=sb′′⊖sb′ (here ⊖ is modulo subtraction over 𝒮, and receiver 1 knows sb′=g(y~b-1)); otherwise, an error is declared.

#### 4.1.3. Analysis of Error Probability

Since the number of uN(wb) is upper bounded by 2NI(U;Z), receiver 2 can decode the common message wb with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, since the DMBCs under discussion in Section 4.1 are less noisy, receiver 1 can also decode the common message with error probability approaching zero. Given the codeword uN(wb), the number of xN is 2NI(X;Y∣U). So, after determining the codeword uN(wb), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1] and can obtain the confidential message with the help of the feedback.

#### 4.1.4. Analysis of Equivocation

In this part, limN→∞Δ≥Re is proved by utilizing the methods in [5, 6]: (55) limN→∞Δ=limN,n→∞H(Sn∣Z~n)/(nN)=limN,n→∞∑i=2n H(Si∣Si-1,Z~n)/(nN)=(a4.1) limN,n→∞∑i=2n H(Si∣Z~i)/(nN)≥limN,n→∞∑i=2n H(Si∣Z~i,Z~i-1,Si⊕Si′)/(nN)=(a4.2) limN,n→∞∑i=2n H(Si∣Z~i-1,Si⊕Si′)/(nN)=(a4.3) limN,n→∞∑i=2n min{NH(Y∣Z),log∥𝒮∥}/(nN)=limn→∞∑i=2n min{H(Y∣Z),R1}/n=min{H(Y∣Z),R1}≥(a4.4) Re.

In the above deduction, (a4.1) follows from Si→Z~i→(Si-1,Z~i-). (a4.2) follows from Si→(Si⊕Si′,Z~i-1)→Z~i.
(a4.3) follows from the fact that receiver 2 chooses the better of its available ways of intercepting the secret key, and that Si⊕Si′ is independent of Si and uniformly distributed over 𝒮. (a4.4) follows from (10).

This completes the proof of Theorem 4.

### 4.2. Reversely Less Noisy DMBCs with Noiseless Causal Feedback

In this subsection, Theorem 5 is proved. The converse part is shown first, and then a coding scheme is given to prove the direct part.

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region ℛrl, we prove the converse part for the equivalent region containing all the rate triples (R0,R1,Re) such that (56) 0≤Re≤R1, (57) R0≤I(U;Y), (58) R0+R1≤I(X;Y∣U)+I(U;Y), (59) Re≤H(Y∣X).

The inequalities (56), (57), and (58) can be proved by a deduction similar to the converse part of Theorem 2 in Section 3, except for the identification of the auxiliary random variables. We focus on (59): (60) H(S∣ZN)=H(S∣XN,ZN)+I(S;XN∣ZN)=(b4.1) H(S∣XN)+I(XN;S∣ZN)=H(S,YN∣XN)-H(YN∣XN,S)+I(XN;S∣ZN)=H(YN∣XN)+H(S∣YN,XN)-H(YN∣XN,S)+I(XN;S∣ZN)≤H(YN∣XN)+H(S∣YN,XN)+I(XN;S∣ZN)≤H(YN∣XN)+H(S∣YN,XN)+H(XN∣ZN)=(b4.2) H(YN∣XN)+H(S∣YN,XN)+H(XN∣YN)=H(YN∣XN)+H(S,XN∣YN)=(b4.3) H(YN∣XN)+H(S∣YN)=∑i=1N H(Yi∣Yi-1,XN)+H(S∣YN)≤(b4.4) ∑i=1N H(Yi∣Xi)+ϵ3, where (b4.1) follows from the Markov chain S→XN→ZN, (b4.2) from the assumption that the channel is reversely less noisy (by setting U=X), (b4.3) from the fact that XN is a function of (S,YN), and (b4.4) from the fact that conditioning does not increase entropy, together with Fano's inequality. To complete the proof of (59), define a time-sharing random variable Q which is uniformly distributed over {1,2,…,N} and independent of S, W, XN, YN, ZN. Set U=(ZQ+1N,YQ-1,W,Q), V=(U,S), X=XQ, Y=YQ, Z=ZQ. It is easy to see that U→V→X→(Y,Z) form a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (60) simplifies to (61) H(S∣ZN)≤NH(Y∣X)+ϵ3.

Finally, utilizing limN→∞Δ≥Re in the definition of "achievable" and (61), we obtain (59). This completes the proof of the converse part of Theorem 5.

Next, a coding scheme is provided to achieve the triples (R0,R1,Re)∈ℛrl; we must prove that all triples (R0,R1,Re)∈ℛrl are achievable. The codebook generation, encoding, and decoding follow exactly the lines of the coding scheme for the less noisy case in Section 4.1. We present the analyses of the error probability and the equivocation as follows.

#### 4.2.1. Analysis of Error Probability

Since the number of uN(wb) is upper bounded by 2NI(U;Y), receiver 1 can decode the common message wb with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, since the DMBCs under discussion in Section 4.2 are reversely less noisy, receiver 2 can also decode the common message with error probability approaching zero. Given the codeword uN(wb), the number of xN is 2NI(X;Y∣U). So, after determining the codeword uN(wb), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1] and can obtain the confidential message with the help of the feedback.

#### 4.2.2. Analysis of Equivocation

In this part, limN→∞Δ≥Re is proved. Special attention should be paid to receiver 2, since the DMBCs are reversely less noisy; that is, I(U;Z)≥I(U;Y) for all p(u,x), which implies 2NI(X;Z∣U)≥2NI(X;Y∣U). Therefore, receiver 2 can also decode the codeword xN.
With the knowledge of xN and zN, receiver 2 can guess receiver 1's channel output yN from the conditional typical set 𝒯Y∣XZN(ϵ3). Note that receiver 2 can intercept the confidential messages in two ways: one is guessing the secret key sb′ from 𝒮 directly; the other is guessing the channel output y~b-1 and finding sb′ through g(y~b-1) indirectly. Intuitively, receiver 2 will always choose the better way to eavesdrop. More formally, (62) limN→∞Δ=limN,n→∞H(Sn∣Z~n)/(nN)=limN,n→∞∑i=2n H(Si∣Si-1,Z~n)/(nN)=(b4.5) limN,n→∞∑i=2n H(Si∣Z~i,Z~i-1)/(nN)≥limN,n→∞∑i=2n H(Si∣Z~i,Z~i-1,X~i-1,Si⊕Si′)/(nN)=(b4.6) limN,n→∞∑i=2n H(Si∣Z~i-1,X~i-1,Si⊕Si′)/(nN)=(b4.7) limN,n→∞∑i=2n min{NH(Y∣XZ),log∥𝒮∥}/(nN)=(b4.8) limN,n→∞∑i=2n min{NH(Y∣X),log∥𝒮∥}/(nN)=limn→∞∑i=2n min{H(Y∣X),R1}/n=min{H(Y∣X),R1}≥(b4.9) Re.

In the above deduction, (b4.5) follows from Si→(Z~i,Z~i-1)→(Si-1,Z~i-2,Z~i+1n). (b4.6) follows from Si→(Si⊕Si′,Z~i-1,X~i-1)→Z~i. (b4.7) follows from the fact that receiver 2 chooses the better of its two ways of intercepting the secret key, and that Si⊕Si′ is independent of Si and uniformly distributed over 𝒮; note that the number of yN∈TY∣XZN(ϵ3) is about 2NH(Y∣XZ) by the properties of strongly typical sequences [10]. (b4.8) follows from the fact that Y is independent of Z conditioned on X, which is obtained from the assumption p(y,z∣x)=p(y∣x)p(z∣x). (b4.9) follows from (11).

This completes the proof of Theorem 5.
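To make the block-Markov key mechanism of Sections 4.1.1 and 4.1.2 concrete, the following minimal sketch (our own illustration; the mapping g, the channel, and all sizes are toy stand-ins, and codeword selection within bins is abstracted away) runs n blocks, deriving the key for block b from receiver 1's output of block b-1 via sb′=g(y~b-1):

```python
import random

S_SIZE, N, n = 16, 8, 5                  # toy ||S||, block length, number of blocks

def g(y_block):
    # Stand-in for the public mapping g: Y^N -> S of Section 4.1.1.
    return sum(y_block) % S_SIZE

def channel(x_block):
    # Toy noiseless channel to receiver 1; in the model Y^N is a noisy output.
    return list(x_block)

confidential = [random.randrange(S_SIZE) for _ in range(n - 1)]   # s_2, ..., s_n
y_prev = channel([random.randrange(2) for _ in range(N)])          # block 1: no s_1
decoded = []
for s_b in confidential:
    key = g(y_prev)                      # s_b' = g(y~_{b-1}), known to both ends
    bin_index = (s_b + key) % S_SIZE     # transmitter uses bin Q_{s_b (+) s_b'}
    y_block = channel([random.randrange(2) for _ in range(N)])     # codeword abstracted
    # Receiver 1 recovers the bin index from y_block (abstracted) and removes the key:
    decoded.append((bin_index - g(y_prev)) % S_SIZE)
    y_prev = y_block
assert decoded == confidential
```

Security rests on receiver 2's view of y~b-1 being noisy, so sb′ acts as a partially secret one-time pad; the equivocation bounds H(Y∣Z) and H(Y∣X) in (10) and (11) quantify exactly how much of this pad remains secret.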
## 5. Conclusion

This paper studies two models of the DMBCs with noiseless feedback. One is the degraded DMBCs with rate-limited feedback; the other is the less and reversely less noisy DMBCs with noiseless causal feedback. The difference between them is that the feedback in the first model is independent of the channel outputs and rate limited, while the feedback in the second model originates causally from the channel outputs. The capacity-equivocation regions of both models are obtained in this paper.
We point out that the second model studied in this paper assumes that the channel to receiver 1 (the legitimate receiver) is independent of the channel to receiver 2 (the eavesdropper); that is, the channel output YN is independent of ZN given the channel input XN. Without this assumption, the capacity-equivocation region of the general DMBCs with noiseless feedback remains unknown.

---

*Source: 102069-2013-08-22.xml*
--- ## Abstract The discrete memoryless broadcast channels (DMBCs) with noiseless feedback are studied. The entire capacity-equivocation regions of two models of the DMBCs with noiseless feedback are obtained. One is the degraded DMBCs with rate-limited feedback; the other is theless and reversely less noisy DMBCs with causal feedback. In both models, two kinds of messages are transmitted. The common message is to be decoded by both the legitimate receiver and the eavesdropper, while the confidential message is only for the legitimate receiver. Our results generalize the secrecy capacity of the degraded wiretap channel with rate-limited feedback (Ardestanizadeh et al., 2009) and the restricted wiretap channel with noiseless feedback (Dai et al., 2012). Furthermore, we use a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument which is complex and not intuitive. --- ## Body ## 1. Introduction Secure data transmission is an important requirement in wireless communication. Wyner first studied the degraded (the wiretap channel is said to be (physically) degraded ifX→Y→Z form a Markov chain, where X is the channel input and Y and Z are the channel outputs of the legitimate receiver and wiretapper, resp.) wiretap channel in [1], where the output ZN of the channel to the wiretapper is degraded to the output YN of the channel to the legitimate receiver. In Wyner’s model, the transmitter aimed to send a confidential message S to the legitimate receiver and keep the wiretapper as ignorant of the message as possible. Wyner obtained the secrecy capacity (the secrecy capacity is the best data transmission rate under perfect secrecy; i.e., the equivocation at the wiretapper H(S∣ZN)=0. The formal definition of the secrecy capacity is given in Remark 3) and demonstrated that provable secure communication could be implemented by using information theoretic methods. This model was extended to a more general case by Csiszár and Körner [2], where broadcast channel with confidential messages was studied; see Figure 1. They considered transmitting not only the confidential messages S to the legitimate receiver, but also the common messages W to both the legitimate receiver and the eavesdropper. The capacity-equivocation region for the extended model was determined in [2]. This region contains all the achievable rate triples (R0,R1,Re), where R0 and R1 are the rates of the common and confidential messages and Re is the rate of the confidential message’s equivocation. Nevertheless, neither Wyner’s model nor Csiszár’s model considered feedback.Figure 1 Broadcast channel with confidential messages.To explore more ways in achieving secure data transmission, [3–5] studied the effects of the feedback on the capacities of several channel models. They all showed that feedback could help enhance the secrecy in wireless transmission. In [3], Ahlswede and Cai presented both the inner and outer bounds on the secrecy capacity of the wiretap channel with secure causal feedback from the decoder and showed that the outer bound was tight for the degraded case. It was proved that, by using feedback, the secrecy capacity of the (degraded) wiretap channel was increased. After Ahlswede’s exploration, Ardestanizadeh et al. studied the wiretap channel with secure rate-limited feedback [4]. 
The main difference between Ardestanizadeh’s model and Ahlswede’s model is that the feedback in [4] is independent of the channel outputs, while the feedback in [3] is originated causally from the outputs of the channel to the legitimate receiver. In [4], the authors got an outer bound on the wiretap channel with rate-limited feedback through a recursive argument which was effective but not intuitive. They also showed the outer bound was tight for the degraded case. In addition, Dai et al. investigated the secrecy capacity of the restricted wiretap channel with noiseless causal feedback under the assumption that the main channel is independent of the wiretap channel [5].However, all of these explorations [3–5] focused on sending only the confidential messages. They did not consider sending both the common and confidential messages. In fact, transmitting the two kinds of messages can be seen in many systems with feedback. For example, in the satellite television service, some channels are available to all users for free, but some other channels are only for those who have paid for them. Recently, [6] studied the problem of transmitting both the common and confidential messages in the degraded broadcast channels with feedback. Note that, like [3], the feedback in [6] was originated causally from the legitimate receiver’s channel outputs and not rate-limited. Besides, [7–9] studied the broadcast channel with feedback where no secure constraints were imposed.To further investigate the secure data transmission with both common and confidential messages and noiseless feedback, this paper determines the capacity-equivocation regions of the following two DMBCs with both common and confidential messages. They are unsolved in the previous exploration.(i) Degraded DMBCs with rate-limited feedback, where the feedback rate is limited byRf and the feedback is independent of the channel outputs; see Figure 2.(ii) Less and reversely less noisy (let X be the input of the DMBC, Y the legitimate receiver’s channel output, and Z the eavesdropper’s channel output. A DMBC p(y,z∣x) is said to be less noisy if I(U;Y)≥I(U;Z) for all p(u,x); a DMBC p(y,z∣x) is said to be reversely less noisy if I(U;Y)≤I(U;Z) for all p(u,x), where u is the value of the auxiliary random variable U) DMBCs with noiseless causal feedback, where the feedback is originated causally from the legitimate receiver’s channel outputs; see Figure 3.Figure 2 Degraded DMBCs with rate-limited feedback.Figure 3 Less and reversely less noisy DMBCs with noiseless causal feedback.The two channel models are characterized in Section2. The main results presented in Section 2 subsume some important previous findings about the secure data transmission with feedback. (1) By setting the auxiliary random variable U to be constant in the secrecy capacity of the first model (see (9) in Remark 3), the secrecy capacity of the degraded wiretap with rate-limited feedback [4] is obtained. (2) By eliminating the common message in the second model, the capacity-equivocation region of restricted wiretap channel with noiseless feedback [5] is obtained. (3) We utilize a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument (see [4]) which is complex and not intuitive. 
(4) We find that even if the eavesdropper is in a better position than the legitimate receiver, provable secure communication could also be implemented in the DMBCs with both common and confidential messages.The remainder of the paper is organized as follows. Section2 gives the notations and main results, that is, the capacity-equivocation regions of the two channel models. Section 3 proves Theorem 2. Section 4 proves Theorems 4 and 5. Section 5 concludes the whole work. ## 2. Channel Models and Main Results ### 2.1. Notations Throughout this paper, we use calligraphic letters, for example,𝒳, 𝒴, to denote the finite sets and ∥𝒳∥ to denote the cardinality of the set 𝒳. Uppercase letters, for example, X, Y, are used to denote random variables taking values from finite sets, for example, 𝒳, 𝒴. The value of the random variable X is denoted by the lowercase letter x. We use Zij to denote the (j-i+1)-vectors (Zi,Zi+1,…,Zj) of random variables for 1≤i≤j and will always drop the subscript when i=1. Moreover, we use X~p(x) to denote the probability mass function of the random variable X. For X~p(x) and 0≤ϵ≤1, the set of the typical N-sequences xN is defined as 𝒯XN(ϵ)={xN:|π(x∣xN)-p(x)|≤ϵp(x) for all x∈𝒳}, where π(x∣xN) denotes the frequency of occurrences of letter x in the sequence xN (for more details about typical sequences, please refer to [10, Chapter 2]). The set of the conditional typical sequences, for example, 𝒯Y∣XN(ϵ), follows similarly. ### 2.2. Channel Models and Main Results This paper studies the secure data transmission for two subclasses of DMBCs with noiseless feedback. One is the case where the feedback is rate-limited and independent of the channel outputs (see Figure2); the other is the case where the feedback is originated causally from the channel outputs (see Figure 3). Both models consist of a transmitter and two receivers, named receiver 1 (legitimate receiver) and receiver 2 (eavesdropper). The transmitter aims to convey a common message W to both receivers in addition to a confidential message S intended only for receiver 1. The confidential message S should be kept secret from receiver 2 as much as possible. We use equivocation at receiver 2 to characterize the secrecy of the confidential message. W and S are mutually independent and uniformly distributed over 𝒲 and 𝒮. #### 2.2.1. Degraded DMBCs with Rate-Limited Feedback The degraded DMBCs with rate-limited feedback (see Figure2) are under the condition that the channel to receiver 2 is physically degraded from the channel to receiver 1; that is, p(y,z∣x)=p(y∣x)p(z∣y) or X→Y→Z form a Markov chain, where X is the channel input and Y,Z are observations of receiver 1 and 2. In this model, the encoder encodes the messages (W,S) and feedback into codewords XN, where N is the length of the codeword. They are transmitted over a discrete memoryless channel (DMC) with transition probability ∏i=1Np(yi,zi∣xi). Receiver 1 obtains YN and decodes the common and confidential messages (W^,S^). Receiver 2 obtains ZN and decodes the common message W^. More precisely, we define the encoder-decoder (N,Δ,Pe1,Pe2) in Definition 1.Definition 1. The encoder-decoder(N,Δ,Pe1,Pe2) for the degraded DMBCs with rate-limited feedback (with rate limited by Rf) is defined as follows.(i) The feedback alphabet𝒦 satisfies limN→∞(log∥𝒦∥/N)≤Rf. 
The feedback is generated independent of the channel output symbols.(ii) The stochastic channel encoderφ is specified by a matrix of conditional probability distributions φ(xN∣s,w,k) which denotes the probability that the message s,w and the feedback k are encoded as the channel input xN, where xN∈𝒳N,s∈𝒮,w∈𝒲,k∈𝒦, and ∑xNφ(xN∣s,w,k)=1. Note that 𝒮 and 𝒲 are the confidential and common message sets.(iii) Decoder 1 is a mappingh1:𝒴N→𝒮×𝒲. The input of decoder 1 is YN, and the output is S^,W^. The decoding error probability of receiver 1 is defined as Pe1=Pr{h1(YN)≠(S,W)}. Similarly, Decoder 2 is defined as a mapping h2:𝒵N→𝒲. The input of decoder 2 is ZN, and the output is W^^. The decoding error probability of receiver 2 is defined as Pe2=Pr{h2(ZN)≠W}.(iv) The equivocation at receiver 2 is defined as(1)Δ=1NH(S∣ZN).A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 2 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) defined in Definition 1, such that (2)limN→∞log∥𝒲∥N=R0,(3)limN→∞log∥𝒮∥N=R1,(4)limN→∞log∥𝒦∥N=Rf′≤Rf,(5)limN→∞Δ≥Re,(6)Pe1≤ϵ,Pe2≤ϵ, where ϵ is an arbitrary small positive real number, R0,R1,Rf′ are the rates of the common messages, confidential messages, and feedback, and Re is the equivocation rate of the confidential messages. Note that the feedback rate is limited by Rf. The capacity-equivocation region is defined as the convex closure of all achievable rate triples (R0,R1,Re). The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is shown in the following theorem.Theorem 2. For the degraded DMBCs with limited feedback rateRf, the capacity-equivocation region is the set (7)ℛd={Re≤I(X;Y∣U)-I(X;Z∣U)+Rf(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤I(X;Y∣U)-I(X;Z∣U)+Rf}, where U is an auxiliary random variable and U→X→Y→Z form a Markov chain.The proof of Theorem2 is given in Section 3. The remark of Theorem 2 is shown below.Remark 3. (i) The secrecy capacity of the model in Figure2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy; that is, (8)Cs=max(R0=0,R1,Re=R1)∈ℛR1, where ℛ is the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with limited feedback rate Rf is (9)Csd=maxmin{I(X;Y∣U),I(X;Y∣U)-I(X;Z∣U)+Rf}. This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]) by setting the auxiliary random variable U to be constant in (9). (ii) The capacity-equivocation region in (7) is bigger than that in [2] without feedback. This implies that feedback can be used to enhance the secrecy in the DMBCs. Note that this finding had already been verified in [3–6]. #### 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback The model in Figure3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Different from the model in Figure 2, the feedback in Figure 3 is originated causally from the channel outputs of receiver 1 to the transmitter. 
The stochastic encoder for this model at time i, 1≤i≤N, is defined as fi(xi∣wi,si,yi-1), where wi∈𝒲, si∈𝒮, yi-1∈𝒴i-1 (the channel outputs of receiver 1 before time i) and ∑xi∈𝒳fi(xi∣wi,si,yi-1)=1.A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 3 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) such that (2), (3), (5), and (6) hold. Note that the definition of “achievable” here does not include (4) since the feedback in the model of Figure 3 is not rate limited. The definition of secrecy capacity is the same as that in Remark 3. Then, we present the capacity-equivocation regions of the less and reversely less noisy DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.Theorem 4. For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(10)ℛl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤H(Y∣Z)}, where U→X→(Y,Z) form a Markov chain.Theorem 5. For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(11)ℛrl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Y),R1≤I(X;Y∣U),Re≤H(Y∣X)}, where U→X→(Y,Z) form a Markov chain.The proof of Theorems4 and 5 is given in Section 4. The remark of Theorems 4 and 5 is given below.Remark 6. (i) By the definition in (8), the secrecy capacity of the less noisy DMBCs with noiseless causal feedback is (12)Csl=maxmin{I(X;Y∣U),H(Y∣Z)}. The secrecy capacity of the reversely less noisy DMBCs with noiseless causal feedback is (13)Csrl=maxmin{I(X;Y∣U),H(Y∣X)}. Setting the auxiliary random variable U to be constant in (12) and (13), the capacity-equivocation region of the model in [5] is obtained. (ii) In the model of Figure3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). This implies Y→X→Z. Therefore, it is easy to see H(Y∣X)=H(Y∣XZ)≤H(Y∣Z); that is, the upper bound on the equivocation rate Re in (11) for the reversely less noisy case is smaller than that in (10) for the less noisy case. This tells that when the eavesdropper is in a better position than the legitimate receiver (see the reversely less noisy case), the uncertainty about the confidential messages at the eavesdropper is decreased. Besides, from (13), we see that even if the eavesdropper is in a better position, the secrecy capacity is a positive value, which means provable secure communication could also be implemented in such a bad condition. ## 2.1. Notations Throughout this paper, we use calligraphic letters, for example,𝒳, 𝒴, to denote the finite sets and ∥𝒳∥ to denote the cardinality of the set 𝒳. Uppercase letters, for example, X, Y, are used to denote random variables taking values from finite sets, for example, 𝒳, 𝒴. The value of the random variable X is denoted by the lowercase letter x. We use Zij to denote the (j-i+1)-vectors (Zi,Zi+1,…,Zj) of random variables for 1≤i≤j and will always drop the subscript when i=1. Moreover, we use X~p(x) to denote the probability mass function of the random variable X. For X~p(x) and 0≤ϵ≤1, the set of the typical N-sequences xN is defined as 𝒯XN(ϵ)={xN:|π(x∣xN)-p(x)|≤ϵp(x) for all x∈𝒳}, where π(x∣xN) denotes the frequency of occurrences of letter x in the sequence xN (for more details about typical sequences, please refer to [10, Chapter 2]). The set of the conditional typical sequences, for example, 𝒯Y∣XN(ϵ), follows similarly. ## 2.2. Channel Models and Main Results This paper studies the secure data transmission for two subclasses of DMBCs with noiseless feedback. 
One is the case where the feedback is rate-limited and independent of the channel outputs (see Figure2); the other is the case where the feedback is originated causally from the channel outputs (see Figure 3). Both models consist of a transmitter and two receivers, named receiver 1 (legitimate receiver) and receiver 2 (eavesdropper). The transmitter aims to convey a common message W to both receivers in addition to a confidential message S intended only for receiver 1. The confidential message S should be kept secret from receiver 2 as much as possible. We use equivocation at receiver 2 to characterize the secrecy of the confidential message. W and S are mutually independent and uniformly distributed over 𝒲 and 𝒮. ### 2.2.1. Degraded DMBCs with Rate-Limited Feedback The degraded DMBCs with rate-limited feedback (see Figure2) are under the condition that the channel to receiver 2 is physically degraded from the channel to receiver 1; that is, p(y,z∣x)=p(y∣x)p(z∣y) or X→Y→Z form a Markov chain, where X is the channel input and Y,Z are observations of receiver 1 and 2. In this model, the encoder encodes the messages (W,S) and feedback into codewords XN, where N is the length of the codeword. They are transmitted over a discrete memoryless channel (DMC) with transition probability ∏i=1Np(yi,zi∣xi). Receiver 1 obtains YN and decodes the common and confidential messages (W^,S^). Receiver 2 obtains ZN and decodes the common message W^. More precisely, we define the encoder-decoder (N,Δ,Pe1,Pe2) in Definition 1.Definition 1. The encoder-decoder(N,Δ,Pe1,Pe2) for the degraded DMBCs with rate-limited feedback (with rate limited by Rf) is defined as follows.(i) The feedback alphabet𝒦 satisfies limN→∞(log∥𝒦∥/N)≤Rf. The feedback is generated independent of the channel output symbols.(ii) The stochastic channel encoderφ is specified by a matrix of conditional probability distributions φ(xN∣s,w,k) which denotes the probability that the message s,w and the feedback k are encoded as the channel input xN, where xN∈𝒳N,s∈𝒮,w∈𝒲,k∈𝒦, and ∑xNφ(xN∣s,w,k)=1. Note that 𝒮 and 𝒲 are the confidential and common message sets.(iii) Decoder 1 is a mappingh1:𝒴N→𝒮×𝒲. The input of decoder 1 is YN, and the output is S^,W^. The decoding error probability of receiver 1 is defined as Pe1=Pr{h1(YN)≠(S,W)}. Similarly, Decoder 2 is defined as a mapping h2:𝒵N→𝒲. The input of decoder 2 is ZN, and the output is W^^. The decoding error probability of receiver 2 is defined as Pe2=Pr{h2(ZN)≠W}.(iv) The equivocation at receiver 2 is defined as(1)Δ=1NH(S∣ZN).A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 2 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) defined in Definition 1, such that (2)limN→∞log∥𝒲∥N=R0,(3)limN→∞log∥𝒮∥N=R1,(4)limN→∞log∥𝒦∥N=Rf′≤Rf,(5)limN→∞Δ≥Re,(6)Pe1≤ϵ,Pe2≤ϵ, where ϵ is an arbitrary small positive real number, R0,R1,Rf′ are the rates of the common messages, confidential messages, and feedback, and Re is the equivocation rate of the confidential messages. Note that the feedback rate is limited by Rf. The capacity-equivocation region is defined as the convex closure of all achievable rate triples (R0,R1,Re). The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is shown in the following theorem.Theorem 2. 
For the degraded DMBCs with limited feedback rateRf, the capacity-equivocation region is the set (7)ℛd={Re≤I(X;Y∣U)-I(X;Z∣U)+Rf(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤I(X;Y∣U)-I(X;Z∣U)+Rf}, where U is an auxiliary random variable and U→X→Y→Z form a Markov chain.The proof of Theorem2 is given in Section 3. The remark of Theorem 2 is shown below.Remark 3. (i) The secrecy capacity of the model in Figure2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy; that is, (8)Cs=max(R0=0,R1,Re=R1)∈ℛR1, where ℛ is the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with limited feedback rate Rf is (9)Csd=maxmin{I(X;Y∣U),I(X;Y∣U)-I(X;Z∣U)+Rf}. This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]) by setting the auxiliary random variable U to be constant in (9). (ii) The capacity-equivocation region in (7) is bigger than that in [2] without feedback. This implies that feedback can be used to enhance the secrecy in the DMBCs. Note that this finding had already been verified in [3–6]. ### 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback The model in Figure3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Different from the model in Figure 2, the feedback in Figure 3 is originated causally from the channel outputs of receiver 1 to the transmitter. The stochastic encoder for this model at time i, 1≤i≤N, is defined as fi(xi∣wi,si,yi-1), where wi∈𝒲, si∈𝒮, yi-1∈𝒴i-1 (the channel outputs of receiver 1 before time i) and ∑xi∈𝒳fi(xi∣wi,si,yi-1)=1.A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 3 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) such that (2), (3), (5), and (6) hold. Note that the definition of “achievable” here does not include (4) since the feedback in the model of Figure 3 is not rate limited. The definition of secrecy capacity is the same as that in Remark 3. Then, we present the capacity-equivocation regions of the less and reversely less noisy DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.Theorem 4. For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(10)ℛl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤H(Y∣Z)}, where U→X→(Y,Z) form a Markov chain.Theorem 5. For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(11)ℛrl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Y),R1≤I(X;Y∣U),Re≤H(Y∣X)}, where U→X→(Y,Z) form a Markov chain.The proof of Theorems4 and 5 is given in Section 4. The remark of Theorems 4 and 5 is given below.Remark 6. (i) By the definition in (8), the secrecy capacity of the less noisy DMBCs with noiseless causal feedback is (12)Csl=maxmin{I(X;Y∣U),H(Y∣Z)}. The secrecy capacity of the reversely less noisy DMBCs with noiseless causal feedback is (13)Csrl=maxmin{I(X;Y∣U),H(Y∣X)}. Setting the auxiliary random variable U to be constant in (12) and (13), the capacity-equivocation region of the model in [5] is obtained. (ii) In the model of Figure3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). This implies Y→X→Z. 
Therefore, it is easy to see H(Y∣X)=H(Y∣XZ)≤H(Y∣Z); that is, the upper bound on the equivocation rate Re in (11) for the reversely less noisy case is smaller than that in (10) for the less noisy case. This tells that when the eavesdropper is in a better position than the legitimate receiver (see the reversely less noisy case), the uncertainty about the confidential messages at the eavesdropper is decreased. Besides, from (13), we see that even if the eavesdropper is in a better position, the secrecy capacity is a positive value, which means provable secure communication could also be implemented in such a bad condition. ## 2.2.1. Degraded DMBCs with Rate-Limited Feedback The degraded DMBCs with rate-limited feedback (see Figure2) are under the condition that the channel to receiver 2 is physically degraded from the channel to receiver 1; that is, p(y,z∣x)=p(y∣x)p(z∣y) or X→Y→Z form a Markov chain, where X is the channel input and Y,Z are observations of receiver 1 and 2. In this model, the encoder encodes the messages (W,S) and feedback into codewords XN, where N is the length of the codeword. They are transmitted over a discrete memoryless channel (DMC) with transition probability ∏i=1Np(yi,zi∣xi). Receiver 1 obtains YN and decodes the common and confidential messages (W^,S^). Receiver 2 obtains ZN and decodes the common message W^. More precisely, we define the encoder-decoder (N,Δ,Pe1,Pe2) in Definition 1.Definition 1. The encoder-decoder(N,Δ,Pe1,Pe2) for the degraded DMBCs with rate-limited feedback (with rate limited by Rf) is defined as follows.(i) The feedback alphabet𝒦 satisfies limN→∞(log∥𝒦∥/N)≤Rf. The feedback is generated independent of the channel output symbols.(ii) The stochastic channel encoderφ is specified by a matrix of conditional probability distributions φ(xN∣s,w,k) which denotes the probability that the message s,w and the feedback k are encoded as the channel input xN, where xN∈𝒳N,s∈𝒮,w∈𝒲,k∈𝒦, and ∑xNφ(xN∣s,w,k)=1. Note that 𝒮 and 𝒲 are the confidential and common message sets.(iii) Decoder 1 is a mappingh1:𝒴N→𝒮×𝒲. The input of decoder 1 is YN, and the output is S^,W^. The decoding error probability of receiver 1 is defined as Pe1=Pr{h1(YN)≠(S,W)}. Similarly, Decoder 2 is defined as a mapping h2:𝒵N→𝒲. The input of decoder 2 is ZN, and the output is W^^. The decoding error probability of receiver 2 is defined as Pe2=Pr{h2(ZN)≠W}.(iv) The equivocation at receiver 2 is defined as(1)Δ=1NH(S∣ZN).A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 2 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) defined in Definition 1, such that (2)limN→∞log∥𝒲∥N=R0,(3)limN→∞log∥𝒮∥N=R1,(4)limN→∞log∥𝒦∥N=Rf′≤Rf,(5)limN→∞Δ≥Re,(6)Pe1≤ϵ,Pe2≤ϵ, where ϵ is an arbitrary small positive real number, R0,R1,Rf′ are the rates of the common messages, confidential messages, and feedback, and Re is the equivocation rate of the confidential messages. Note that the feedback rate is limited by Rf. The capacity-equivocation region is defined as the convex closure of all achievable rate triples (R0,R1,Re). The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is shown in the following theorem.Theorem 2. For the degraded DMBCs with limited feedback rateRf, the capacity-equivocation region is the set (7)ℛd={Re≤I(X;Y∣U)-I(X;Z∣U)+Rf(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤I(X;Y∣U)-I(X;Z∣U)+Rf}, where U is an auxiliary random variable and U→X→Y→Z form a Markov chain.The proof of Theorem2 is given in Section 3. 
The remark of Theorem 2 is shown below.Remark 3. (i) The secrecy capacity of the model in Figure2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy; that is, (8)Cs=max(R0=0,R1,Re=R1)∈ℛR1, where ℛ is the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with limited feedback rate Rf is (9)Csd=maxmin{I(X;Y∣U),I(X;Y∣U)-I(X;Z∣U)+Rf}. This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]) by setting the auxiliary random variable U to be constant in (9). (ii) The capacity-equivocation region in (7) is bigger than that in [2] without feedback. This implies that feedback can be used to enhance the secrecy in the DMBCs. Note that this finding had already been verified in [3–6]. ## 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback The model in Figure3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Different from the model in Figure 2, the feedback in Figure 3 is originated causally from the channel outputs of receiver 1 to the transmitter. The stochastic encoder for this model at time i, 1≤i≤N, is defined as fi(xi∣wi,si,yi-1), where wi∈𝒲, si∈𝒮, yi-1∈𝒴i-1 (the channel outputs of receiver 1 before time i) and ∑xi∈𝒳fi(xi∣wi,si,yi-1)=1.A rate triple(R0,R1,Re) is said to be achievable for the model in Figure 3 if there exists a channel encoder-decoder (N,Δ,Pe1,Pe2) such that (2), (3), (5), and (6) hold. Note that the definition of “achievable” here does not include (4) since the feedback in the model of Figure 3 is not rate limited. The definition of secrecy capacity is the same as that in Remark 3. Then, we present the capacity-equivocation regions of the less and reversely less noisy DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.Theorem 4. For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(10)ℛl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Z),R1≤I(X;Y∣U),Re≤H(Y∣Z)}, where U→X→(Y,Z) form a Markov chain.Theorem 5. For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set(11)ℛrl={(R0,R1,Re):0≤Re≤R1,R0≤I(U;Y),R1≤I(X;Y∣U),Re≤H(Y∣X)}, where U→X→(Y,Z) form a Markov chain.The proof of Theorems4 and 5 is given in Section 4. The remark of Theorems 4 and 5 is given below.Remark 6. (i) By the definition in (8), the secrecy capacity of the less noisy DMBCs with noiseless causal feedback is (12)Csl=maxmin{I(X;Y∣U),H(Y∣Z)}. The secrecy capacity of the reversely less noisy DMBCs with noiseless causal feedback is (13)Csrl=maxmin{I(X;Y∣U),H(Y∣X)}. Setting the auxiliary random variable U to be constant in (12) and (13), the capacity-equivocation region of the model in [5] is obtained. (ii) In the model of Figure3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). This implies Y→X→Z. Therefore, it is easy to see H(Y∣X)=H(Y∣XZ)≤H(Y∣Z); that is, the upper bound on the equivocation rate Re in (11) for the reversely less noisy case is smaller than that in (10) for the less noisy case. 
This tells that when the eavesdropper is in a better position than the legitimate receiver (see the reversely less noisy case), the uncertainty about the confidential messages at the eavesdropper is decreased. Besides, from (13), we see that even if the eavesdropper is in a better position, the secrecy capacity is a positive value, which means provable secure communication could also be implemented in such a bad condition. ## 3. Proof of Theorem2 In this section, Theorem2 is proved. The converse part of Theorem 2 gives the outer bound on the capacity-equivocation region of the degraded DMBCs with rate-limited feedback. The proof of the converse part is shown in Section 3.1. The key tools used in the proof include the identification of the random variables and Csiszár’s sum equality [2]. In Section 3.2, to prove the direct part of Theorem 2, a coding scheme is provided to achieve the achievable rate triples in ℛd. The key ideas in the coding scheme are inspired by [4]. However, [4] only considers the transmission of the confidential messages. Our coding scheme considers both the confidential and common messages. ### 3.1. The Converse Part of Theorem2 In order to find the identification of the auxiliary random variables that satisfy the capacity-equivocation region characterized byℛd, we prove the converse part for the equivalent region (the fact that the two regions are equivalent follows similarly from [10, Chapter 5, problem 5.8]) containing all the rate triples (R0,R1,Re) such that (14)0≤Re≤R1,(15)R0≤I(U;Z),(16)R0+R1≤I(X;Y∣U)+I(U;Z),(17)Re≤I(X;Y∣U)-I(X;Z∣U)+Rf.Now we show that allachievable triples (R0,R1,Re) satisfy (14), (15), (16), and (17).Condition (14) is proved as follows: (18)Re≤limN→∞Δ=limN→∞H(S∣ZN)N≤limN→∞H(S)N=R1.To prove condition (15), we calculate (19)H(W)=I(W;ZN)+H(W∣ZN)≤(a3.1)I(W;ZN)+ϵ1=∑i=1N‍I(W;Zi∣Zi-1)+ϵ1=∑i=1N‍I(W;Zi∣Zi+1N)+ϵ1=∑i=1N‍[I(WYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NW)]+ϵ1≤∑i=1N‍[I(WYi-1Zi+1N;Zi)-I(Yi-1;Zi∣Zi+1NW)]+ϵ1≤∑i=1N‍I(WKNYi-1Zi+1N;Zi)+ϵ1, where (a3.1) follows from Fano’s inequality and ϵ1 is a small positive number. Note that KN=(K1,K2,…,KN), where Ki is the feedback symbol at time i, 1≤i≤N.To prove condition (16), we consider (20)H(S)+H(W)=H(S∣WKN)+H(W)=I(S;YN∣WKN)+H(S∣YNWKN)+I(W;ZN)+H(W∣ZN)≤(a3.2)I(S;YN∣WKN)+ϵ2+I(WKN;ZN)+ϵ1=∑i=1N‍I(S;Yi∣Yi-1WKN)+∑i=1N‍I(WKN;Zi∣Zi+1N)+ϵ1+ϵ2=∑i=1N‍[I(SZi+1N;Yi∣Yi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N‍[I(WKNYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2=∑i=1N‍[I(Zi+1N;Yi∣Yi-1WKN)+I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N‍[I(WKNYi-1;Zi∣Zi+1N)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2≤∑i=1N‍[I(Zi+1N;Yi∣Yi-1WKN)+I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N‍[I(WKNYi-1Zi+1N;Zi)-I(Yi-1;Zi∣Zi+1NWKN)]+ϵ1+ϵ2=(a3.3)∑i=1N‍[I(S;Yi∣Zi+1NYi-1WKN)-I(Zi+1N;Yi∣Yi-1WKNS)]+∑i=1N‍I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2≤∑i=1N‍I(S;Yi∣Zi+1NYi-1WKN)+∑i=1N‍I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2, where ϵ2 is a small positive number and (a3.2) and (a3.3) follow from Fano’s inequality and Csiszár’s sum equality [2]; that is, ∑i=1NI(Zi+1N;Yi∣Yi-1WKN)=∑i=1NI(Yi-1;Zi∣Zi+1NWKN).To prove condition (17), we calculate (21)H(S∣ZN)=H(S∣ZN,W)+I(S;W∣ZN)≤H(S∣ZN,W)+H(W∣ZN)=I(S;KN,YN∣ZN,W)+H(S∣ZN,KN,YN,W)+H(W∣ZN)≤I(S;KN,YN∣ZN,W)+H(S∣KN,YN)+H(W∣ZN)=I(S;KN∣ZN,W)+I(S;YN∣KN,ZN,W)+H(S∣KN,YN)+H(W∣ZN)≤H(KN)+I(S;YN∣KN,ZN,W)+H(S∣KN,YN)+H(W∣ZN)≤NRf+I(S;YN∣KN,ZN,W)+ϵ1+ϵ2.The last inequality in (21) follows from the Fano’s inequality and the fact that the feedback rate is limited by Rf. 
Then, I(S;YN∣KN,ZN,W) will be calculated as follows: (22)I(S;YN∣KN,ZN,W)=∑i=1N‍I(S;Yi∣Yi-1ZNWKN)=∑i=1N‍I(S;Yi∣Yi-1,Zi-1,Zi,Zi+1N,W,KN)=(a3.4)∑i=1N‍[I(S;Yi∣Yi-1,Zi-1,Zi,Zi+1N,W,KN)+I(Zi-1;Yi∣Yi-1,ZiN,W,KN)-I(Zi-1;Yi∣Yi-1,ZiN,W,S,KN)]=∑i=1N‍[I(S,Zi-1;Yi∣Yi-1,Zi,Zi+1N,W,KN)-I(Zi-1;Yi∣Yi-1,ZiN,W,S,KN)]=∑i=1N‍I(S;Yi∣Yi-1,Zi,Zi+1N,W,KN), where (a3.4) follows from the Markov chain Yi→Yi-1ZiNWKN→Zi-1 and Yi→Yi-1ZiNWSKN→Zi-1. Then, we introduce a random variable Q which is independent of SWKNXNYNZN and uniformly distributed over {1,2,…,N}. Set U=ZQ+1NYQ-1WKNQ,V=US,Y=YQ,X=XQ,Z=ZQ. It is straightforward to see that U→V→X→Y→Z form a Markov chain. After using the standard time sharing argument [10, Section 5.4], (19), (20), and (22) are simplified into (23)H(W)≤∑i=1N‍I(WKNYi-1Zi+1N;Zi)+ϵ1=NI(U;Z)+ϵ1,(24)H(S)+H(W)≤∑i=1N‍I(S;Yi∣Zi+1NYi-1WKN)+∑i=1N‍I(WKNYi-1Zi+1N;Zi)+ϵ1+ϵ2=NI(S;Y∣U)+NI(U;Z)+ϵ1+ϵ2=NI(V;Y∣U)+NI(U;Z)+ϵ1+ϵ2,(25)I(S;YN∣KN,ZN,W)=∑i=1N‍I(S;Yi∣Yi-1,Zi,Zi+1N,W,KN)=NI(S;Y∣Z,U)=NI(US;Y∣Z,U)=NI(V;Y∣Z,U). Substituting (25) into (21) and utilizing (5), we get (26)Re≤limN→∞Δ=limN→∞H(S∣ZN)N≤limN→∞NRf+I(S;YN∣KN,ZN,W)+ϵ1+ϵ2N=I(V;Y∣Z,U)+Rf=I(V;Y∣U)-I(V;Z∣U)+Rf. The last equality in (26) follows from the Markov chain U→V→Y→Z.To finish the proof of (16) and (17), we need to show that I(V;Y∣U)≤I(X;Y∣U) and I(V;Y∣U)-I(V;Z∣U)≤I(X;Y∣U)-I(X;Z∣U). We first prove I(V;Y∣U,X)=0 and I(V;Z∣U,X)=0: (27)I(V;Y∣U,X)=H(Y∣U,X)-H(Y∣U,V,X)=(a3.5)H(Y∣X)-H(Y∣X)=0,I(V;Z∣U,X)=H(Z∣U,X)-H(Z∣U,V,X)=(a3.6)H(Z∣X)-H(Z∣X)=0, where (a3.5) follows from the Markov chains U→X→Y and (UV)→X→Y and (a3.6) follows from the Markov chains U→X→Z and (UV)→X→Z. Utilizing (27), we obtain (28)I(V;Y∣U)=I(V,X;Y∣U)-I(X;Y∣U,V)=I(X;Y∣U)+I(V;Y∣U,X)-I(X;Y∣U,V)=I(X;Y∣U)-I(X;Y∣U,V),(29)I(V;Z∣U)=I(V,X;Z∣U)-I(X;Z∣U,V)=I(X;Z∣U)+I(V;Z∣U,X)-I(X;Z∣U,V)=I(X;Z∣U)-I(X;Z∣U,V).From (28), it is straightforward to see that I(V;Y∣U)≤I(X;Y∣U). This proves condition (16).Then, we proveI(V;Y∣U)-I(V;Z∣U)≤I(X;Y∣U)-I(X;Z∣U). Since the channel model in Figure 1 is (physically) degraded, I(X;Y∣U=u,V=v)-I(X;Z∣U=u,V=v)≥0 holds for every (u,v), which implies (30)I(X;Y∣U,V)-I(X;Z∣U,V)≥0.Therefore, utilizing (28), (29), and (30), we get (31)I(V;Y∣U)-I(V;Z∣U)=I(X;Y∣U)-I(X;Z∣U)-[I(X;Y∣U,V)-I(X;Z∣U,V)]≤I(X;Y∣U)-I(X;Z∣U). This proves condition (17).The converse part of Theorem2 is proved. ### 3.2. A Coding Scheme Achievingℛd A coding scheme is provided to achieve the achievable triples(R0,R1,Re)∈ℛd. The key methods used in the scheme include the superposition coding, rate splitting, and random binning. The confidential message is split into two parts. One part is reliably transmitted using superposition coding and random binning; the other part is securely transmitted with the help of the feedback. Note that Section 3.1 has already given the outer bound on the capacity-equivocation region. When Rf≥I(X;Z∣U), it can be seen from (9) that the secrecy capacity for the degraded DMBCs with rate-limited feedback always equals to I(X;Y∣U). Therefore, in order to investigate the effects of the feedback, the feedback rate Rf<I(X;Z∣U) will only be considered in this subsection.We need to prove that all the triples(R0,R1,Re)∈ℛd for the model of Figure 2 with any feedback rate Rf′ limited by Rf are achievable (see Definition 1). This subsection is organized as follows. The codebook generation and encoding scheme is given in Section 3.2.1. The decoding scheme is given in Section 3.2.2. The analysis of error probability and equivocation are shown in Sections 3.2.3 and 3.2.4, respectively. #### 3.2.1. 
Codebook Generation and Encoding Split the confidential message into two parts; that is,𝒮=(ℳ1,ℳ2). The corresponding variables M1, M2 are uniformly distributed over {1,2,3,…,2NR′} and {1,2,3,…,2NRf′}, where (when Rf≥R1, the confidential message S can be totally protected by using part of the feedback (as the shared key between the transmitter and receiver 1). The remaining part of the feedback is redundant. Therefore, in order to study the effects of the feedback on the capacity region, only Rf<R1 comes into our consideration) (32)0≤Rf′≤Rf,R′=R1-Rf′>0.It is important to notice thatR1 is the rate of the private message 𝒮, which consists of ℳ1 and ℳ2. This means that (33)R1=limN→∞log(∥ℳ1∥∥ℳ2∥)N=limN→∞(log∥ℳ1∥N+log∥ℳ2∥N).Define the index sets𝒥N, ℒN, ℱN, and ℳN satisfying (34)limN→∞1Nlog∥𝒥N∥=I(X;Z∣U)-Rf′,limN→∞1Nlog∥ℒN∥=I(X;Y∣U)-I(X;Z∣U),limN→∞1Nlog∥ℱN∥=Rf′,limN→∞1Nlog∥ℳN∥=I(U;Z).We usej∈𝒥N, l∈ℒN, f∈ℱN, m∈ℳN to index the codeword xN. Take 𝒲⊂ℳN such that (2) holds. Since R1≤I(X;Y∣U), it is easy to see ∥𝒥N×ℒN×ℱN∥≥2NR1. Therefore, let ℳ1=𝒟N×ℒN, ℳ2=ℱN, where 𝒟N is an arbitrary set such that (3) holds. Let gj be a mapping of 𝒥N into 𝒟N partitioning 𝒥N into subsets of size ∥𝒥N∥/∥𝒟N∥; that is, (35)gj:𝒥N⟶𝒟N, where gj(j)=d,j∈𝒥N,d∈𝒟N.For eachw∈𝒲, we generate a codeword uN(w) according to ∏i=1Np(ui). Then, for each uN(w), a codebook 𝒞ℬw (see Figure 4) containing ∥𝒥N∥·∥ℒN∥·∥ℱN∥ codewords xjlfmN is constructed according to ∏i=1Np(xi∣ui), where j∈𝒥N,l∈ℒN,f∈ℱN,m=w∈ℳ. Those xN are put into ∥ℒN∥·∥ℱN∥ bins so that each bin contains ∥𝒥N∥ codewords. Each bin is indexed by (l,f), where l∈ℒN, f∈ℱN. Then, we divide each bin into ∥𝒟N∥ subbins such that each subbin contains ∥𝒥N∥/∥𝒟N∥ codewords. The codebook structure is presented in Figure 4.Figure 4 The codebook𝒞ℬw for each uN(w).Let𝒦={1,2,3,…,2NRf′}, where k∈𝒦 is the key sent to the transmitter from receiver 1 through the secure feedback link. It is kept secret from receiver 2. The corresponding variable K is uniformly distributed over 𝒦 and independent of S and W.In order to sends=(d,l,m2)∈𝒟N×ℒN×ℱN and w∈𝒲, a codeword xjlfmN is chosen as follows. According to the common message w, we first find the sequence uN(w). For the determined uN(w), there is a corresponding codebook 𝒞ℬw; see Figure 4. Then, the corresponding codeword xjlfmN is sent into the channel, where j is chosen randomly from the set gj-1(d), f=k⊕m2, and m=w (here ⊕ is modulo addition over ℱN). Figure 4 shows how to select xjlfmN in detail. According to uN(w), we can find the corresponding codebook 𝒞ℬw. In the codebook 𝒞ℬw, we choose the corresponding bin according to f and l. Then, in that bin, the subbin is found according to d. Finally, a codeword xN (which is denoted by xjlfmN) is randomly chosen from that subbin. #### 3.2.2. Decoding Receiver 2 tries to find a unique sequenceuN(w^^) such that (uN(w^^),zN)∈TUZN(ϵ1). If there exists such a unique sequence, decoder 2 outputs w^^; otherwise, an error is declared. Since the size of 𝒲 is smaller than 2NI(U;Z), the decoding error probability for receiver 2 approaches zero.For receiver 1, he can also decode the common messagew^ since the output of channel 2 is a degraded version of the output of channel 1. Then, receiver 1 tries to find a unique codeword xj^l^f^m^N indexed by j^, l^, f^, m^, such that (xj^l^f^m^N,yN)∈TXY∣UN(ϵ2). If there exists such a unique codeword xj^l^f^m^N, receiver 1 calculates f^⊖k as m^2 (here ⊖ is modulo subtraction over ℱN, and m^=w^) and finds d^ according to gj(j^). Note that receiver 1 knows the secret key k. 
Decoder 1 outputs s^=(d^,l^,m^2) and w^. If no such xj^l^f^m^N or more than one such xj^l^f^m^N exist, an error is declared.

#### 3.2.3. Analysis of Error Probability

Since the number of uN(w) is upper bounded by 2NI(U;Z) and the DMBCs under discussion are degraded, both receivers can decode the common message w with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, it can be calculated that given the codeword uN(w), the number of xN is (36)∥ℱN∥·∥𝒥N∥·∥ℒN∥=2NI(X;Y∣U). So, after determining the codeword uN(w), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. This proves (6).

#### 3.2.4. Analysis of Equivocation

The proof of (5) is given below: (37)H(S∣ZN)=H(M1,M2∣ZN)=H(M1∣ZN)+H(M2∣ZN,M1)≥H(M1∣ZN)+H(M2∣ZN,M1,K⊕M2)=(b3.1)H(M1∣ZN)+H(M2∣K⊕M2)=(b3.2)H(M1∣ZN)+H(M2)=(b3.3)H(M1∣ZN)+NRf′, where (b3.1) follows from the Markov chain M2→M2⊕K→(ZN,M1), (b3.2) follows from the fact that M2 is independent of M2⊕K, and (b3.3) follows from the fact that M2 is uniformly distributed over {1,2,3,…,2NRf′}. The proof of the fact that M2 is independent of M2⊕K is shown as follows (the proof can also be seen in [6]): (38)p(M2⊕K=a)=∑kp(M2⊕K=a∣K=k)p(K=k)=(b3.4)∑kp(M2⊕K=a∣K=k)1∥ℱN∥=1∥ℱN∥∑kp(M2⊕K=a∣K=k)=1∥ℱN∥∑kp(M2=a⊖k∣K=k)=(b3.5)1∥ℱN∥∑kp(M2=a⊖k)=1∥ℱN∥, p(M2⊕K=a,M2=m2)=p(K=a⊖m2,M2=m2)=(b3.6)p(K=a⊖m2)p(M2=m2)=(b3.7)1∥ℱN∥·1∥ℱN∥, where (b3.5) and (b3.6) follow from the fact that M2 is independent of K, and (b3.4) and (b3.7) follow from the fact that M2 and K are both uniformly distributed over ℱN. According to (38), (39)p(M2⊕K=a,M2=m2)=p(M2⊕K=a)p(M2=m2). Therefore, M2 is independent of M2⊕K. Next, we focus on the first term in (37). The method of the equivocation analysis in [2] will be used: (40)H(M1∣ZN)≥H(M1∣ZN,W)=H(M1,ZN∣W)-H(ZN∣W)=H(M1,ZN,XN∣W)-H(XN∣M1,ZN,W)-H(ZN∣W)=H(M1,XN∣W)+H(ZN∣M1,XN,W)-H(XN∣M1,ZN,W)-H(ZN∣W)≥H(XN∣W)+H(ZN∣M1,XN,W)-H(XN∣M1,ZN,W)-H(ZN∣W). Note that W in inequality (40) is the random variable of the common message 𝒲. The four terms H(XN∣W), H(ZN∣M1,XN,W), H(XN∣M1,ZN,W), and H(ZN∣W) will be bounded as follows. Given w∈𝒲, the number of xN is ∥𝒥N∥·∥ℒN∥·∥ℱN∥. By applying [12, Lemma 2.5], we obtain (41)H(XN∣W)≥log(∥𝒥N∥·∥ℒN∥·∥ℱN∥)-1=NI(X;Y∣U)-1. Since (M1,W)→XN→ZN and the channel to receiver 2 is discrete memoryless, it is easy to get (42)H(ZN∣M1,XN,W)=H(ZN∣XN)=NH(Z∣X). With the knowledge of (d,l)∈ℳ1 and w∈𝒲, the number of xN is (43)2NRf′·(∥𝒥N∥/∥𝒟N∥)<2NRf′·∥𝒥N∥=2NRf′·2N(I(X;Z∣U)-Rf′)=2NI(X;Z∣U). So, receiver 2 can decode the codeword xN with error probability approaching zero by using the standard channel coding theorem [11, Theorem 7.7.1]. Therefore, using Fano’s inequality, we get (44)H(XN∣M1,ZN,W)⟶0. Moreover, using a deduction similar to that in [2, Section 4], we get (45)H(ZN∣W)≤log∥TZ∣UN(ϵ1)∥≤NH(Z∣U). Substituting (41), (42), (44), and (45) into (40), we get (46)H(M1∣ZN)≥NI(X;Y∣U)+NH(Z∣X)-NH(Z∣U)=NI(X;Y∣U)-NI(X;Z∣U), where the equality in (46) follows from the Markov chain U→X→Z. Finally, (5) is verified by substituting (46) into (37): (47)limN→∞Δ=limN→∞H(S∣ZN)N≥limN→∞(H(M1∣ZN)N+Rf′)≥limN→∞(NI(X;Y∣U)-NI(X;Z∣U)N+Rf′)=I(X;Y∣U)-I(X;Z∣U)+Rf′≥Re. This completes the proof of Theorem 2.
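Before moving on, the independence fact proved in (38) and (39) — masking M2 with a uniform key K that is independent of M2 makes M2⊕K independent of M2 — can be checked numerically. The following is a minimal sketch (not part of the paper); the alphabet size m stands in for ∥ℱN∥ and is an illustrative choice:

```python
from fractions import Fraction
from itertools import product

# Numerical check of (38)-(39): with K uniform over F_N and independent of M2,
# the masked value M2 (+) K is itself uniform and independent of M2.
m = 8                                           # illustrative stand-in for ||F_N||
p_m2 = {v: Fraction(1, m) for v in range(m)}    # M2 uniform, as in the scheme
p_k = {v: Fraction(1, m) for v in range(m)}     # K uniform

# Joint law of (M2 (+) K, M2) under independence of M2 and K.
joint = {}
for m2, k in product(range(m), repeat=2):
    a = (m2 + k) % m                            # modulo addition over F_N
    joint[(a, m2)] = joint.get((a, m2), Fraction(0)) + p_m2[m2] * p_k[k]

# Marginal of M2 (+) K: uniform, as claimed in (b3.4)-(b3.7).
p_a = {a: sum(joint[(a, m2)] for m2 in range(m)) for a in range(m)}
assert all(v == Fraction(1, m) for v in p_a.values())

# (39): the joint law factorizes, so M2 (+) K is independent of M2.
assert all(joint[(a, m2)] == p_a[a] * p_m2[m2] for (a, m2) in joint)
print("M2 (+) K is uniform over F_N and independent of M2")
```

The assertions hold for any alphabet size m, which is exactly why the feedback key perfectly protects the ℳ2 part of the confidential message in (37).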
## 4. Proof of Theorems 4 and 5

In this section, Theorems 4 and 5 are proved. In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, p(y,z∣x)=p(y∣x)p(z∣x). To prove Theorem 4, we first give the outer bound on the capacity-equivocation region of the less noisy DMBCs with noiseless causal feedback in Section 4.1. Then, a coding scheme is provided to achieve the outer bound. Similarly, to prove Theorem 5, the outer bound on the capacity-equivocation region of the reversely less noisy DMBCs with noiseless causal feedback is given in Section 4.2. Moreover, we also provide a coding scheme to achieve the outer bound. The methods used to prove the converse parts of the two theorems are from [5]. The coding schemes are inspired by [3, 5].

### 4.1. Less Noisy DMBCs with Noiseless Causal Feedback
We first show the converse part of Theorem 4, and then we prove the direct part of Theorem 4 by providing a coding scheme. In order to find the identification of the auxiliary random variables that satisfy the capacity-equivocation region characterized by ℛl, we prove the converse part for the equivalent region containing all the rate triples (R0,R1,Re) such that (48)0≤Re≤R1,(49)R0≤I(U;Z),(50)R0+R1≤I(X;Y∣U)+I(U;Z),(51)Re≤H(Y∣Z). The proof of (48), (49), and (50) follows exactly the same lines as the proofs of (14), (15), and (16) in Section 3 except for the identification of the auxiliary random variables U,V (which will be given subsequently) and is therefore omitted. We focus on proving (51): (52)H(S∣ZN)≤H(S∣ZN)+I(S;ZN∣YN)=H(S∣ZN)+H(S∣YN)-H(S∣YN,ZN)=I(S;YN∣ZN)+H(S∣YN)≤H(YN∣ZN)+H(S∣YN)=∑i=1NH(Yi∣Yi-1,ZN)+H(S∣YN)≤∑i=1NH(Yi∣Zi)+ϵ3, where ϵ3 is a small positive number. The last inequality in (52) follows from the fact that conditioning does not increase entropy and from Fano’s inequality. To complete the proof of (51), define a time-sharing random variable Q which is uniformly distributed over {1,2,…,N} and independent of SWXNYNZN. Set U=ZQ+1NYQ-1WQ, V=US, X=XQ, Y=YQ, Z=ZQ. It is easy to see that U→V→X→(Y,Z) forms a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (52) simplifies to (53)H(S∣ZN)≤NH(Y∣Z)+ϵ3. Finally, utilizing limN→∞Δ≥Re in the definition of “achievable” and (53), we obtain (51). This completes the proof of the converse part of Theorem 4. Next, a coding scheme is presented to achieve the rate triples (R0,R1,Re)∈ℛl. We should prove that all triples (R0,R1,Re)∈ℛl are achievable. Note that the noiseless feedback for the less noisy DMBCs is causally transmitted from receiver 1 to the transmitter. The scheme includes the codebook generation and encoding scheme in Section 4.1.1, the decoding scheme in Section 4.1.2, the analysis of error probability in Section 4.1.3, and the equivocation analysis in Section 4.1.4. Techniques like block Markov coding, superposition coding, and random binning are used. To serve the block Markov coding, let the random vectors UN, XN, YN, and ZN consist of n blocks of length N. Let Wn≜(W1,…,Wn) stand for the common messages of n blocks, where W1,…,Wn are independent and identically distributed random variables over 𝒲. Let Sn≜(S2,…,Sn) stand for the confidential messages of n blocks, where S2,…,Sn are independent and identically distributed random variables over 𝒮. Note that in the first block, there is no S1. Let Z~n=(Z~1,Z~2,…,Z~n), Z~b-=(Z~1,Z~2,…,Z~b-1,Z~b+1,…,Z~n), where Z~b is the output vector at receiver 2 at the end of the bth block, 1≤b≤n. Similarly, Y~b denotes the output vector at receiver 1 at the end of the bth block, and X~b denotes the input vector of the channel in the bth block. These notations coincide with [6].

#### 4.1.1. Codebook Generation and Encoding

Let the common message set 𝒲 and the confidential message set 𝒮 satisfy (54)limN→∞log∥𝒲∥N=R0,limN→∞log∥𝒮∥N=R1, where R0 and R1 satisfy (10). Fix p(u) and p(x∣u). In the bth block, 1≤b≤n, we generate 2NR0 independent and identically distributed (i.i.d.) sequences uN(wb) according to ∏i=1Np(ui), where wb∈𝒲 is the common message to be sent in the bth block. For each uN(wb), generate 2NI(X;Y∣U) codewords xN(uN(wb)) according to ∏i=1Np(xi∣ui). Put the 2NI(X;Y∣U) codewords into 2NR1 bins, so each bin contains 2N(I(X;Y∣U)-R1) codewords. The 2NR1 bins are denoted by Q1,Q2,…,Q∥𝒮∥, where ∥𝒮∥=2NR1. The codebook structure is shown in Figure 5.
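As a concrete reading of the binning in Figure 5, the sketch below (toy sizes, not the paper's exponential parameters; codewords are represented by bare indices and the typicality decoder is omitted) shows the bookkeeping only, anticipating the encoding rule stated below in which the transmitter draws from bin Qsb⊕sb′:

```python
import random

# Toy bookkeeping for the codebook of Figure 5. n_codewords stands in for
# 2^{N I(X;Y|U)} and n_bins for 2^{N R1} = ||S||, so each bin Q_s holds
# 2^{N (I(X;Y|U) - R1)} codeword indices. Sizes are illustrative.
n_codewords = 64
n_bins = 16
bin_size = n_codewords // n_bins

# Any fixed partition works; here bin Q_s takes consecutive indices.
bins = {s: range(s * bin_size, (s + 1) * bin_size) for s in range(n_bins)}

def pick_codeword(s: int, s_prime: int) -> int:
    """Encoder: draw a random codeword index from bin Q_{s (+) s'}
    (modulo addition over S)."""
    return random.choice(bins[(s + s_prime) % n_bins])

def bin_of(codeword_index: int) -> int:
    """Receiver 1 recovers the bin index s'' = s (+) s' from the codeword."""
    return codeword_index // bin_size

s, s_prime = 5, 11
x_index = pick_codeword(s, s_prime)
assert (bin_of(x_index) - s_prime) % n_bins == s   # s'' (-) s' gives back s
```

Because every bin still holds 2N(I(X;Y∣U)-R1) codewords, receiver 1's typicality decoder can resolve the exact codeword, while only the masked bin index carries the confidential message.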
Reveal all the codebooks to the transmitter, receiver 1, and receiver 2.Figure 5 The codebook structure.Letg be a mapping from 𝒴N into 𝒮. Reveal the mapping g to the transmitter, receiver 1, and receiver 2. Define a random variable S′=g(YN) uniformly distributed over 𝒮 and independent of the confidential message S. It can be similarly proved from (39) that S⊕S′ is independent of S. In the first block, that is, b=1, to send the common message w1 (note that there is no confidential message to be sent in the first block), the transmitter tries to find uN(w1) and randomly choose a codeword xN(uN(w1)) from the corresponding 2NI(X;Y|U) codewords. In the bth block (b=2,3,…,n), to send the common message wb and confidential message sb, the transmitter calculates sb′=g(y~b-1) and randomly chooses a codeword xN(uN(wb),sb) from the bin Qsb⊕sb′. Here, y~b-1 is the output vector of the (b-1)th block at receiver 1, and ⊕ is the modulo addition over 𝒮. #### 4.1.2. Decoding In the first block, as there is no confidential message, only the common message needs to be decoded for both receivers. For receiver 2, he tries to find a unique sequenceuN(w^^1) such that (uN(w^^1),z~1)∈TUZN(ϵ1′), where ϵ1′ is a small positive number. If there exists such a unique sequence, decoder 2 outputs w^^1; otherwise, an error is declared. For receiver 1, he tries to find a unique sequence uN(w^1) such that (uN(w^1),y~1)∈TUYN(ϵ1′′), where ϵ1′′ is a small positive number. If there exists such a unique sequence, output is w^1; otherwise, declare an error.In thebth block, 2≤b≤n, receiver 2 aims to decode the common message, and receiver 1 aims to decode both confidential and common messages. The method of decoding the common message wb for both receivers follows the same as that in the first block. Then, receiver 1 tries to find a unique sequence xN(uN(w^b),s^b) such that (xN(uN(w^b),s^b),y~b)∈TXY∣UN(ϵ2′), where ϵ2′ is a small positive number. If there exists such a unique sequence in one bin, denoting the corresponding index of that bin by sb′′, receiver 1 calculates sb′′⊖sb′ as s^b (here ⊖ is modulo subtraction over 𝒮, and receiver 1 knows sb′=g(y~b-1)); otherwise, declare an error. #### 4.1.3. Analysis of Error Probability Since the number ofuN(wb) is upper bounded by 2NI(U;Z), receiver 2 can decode the common message wb with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, since the DMBCs under discussion in Section 4.1 are less noisy, receiver 1 can also decode the common message with error probability approaching zero. It can be calculated that given the codeword uN(wb), the number of xN is 2NI(X;Y∣U). So, after determining the codeword uN(wb), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1] and obtain the confidential message with the help of the feedback. #### 4.1.4. Analysis of Equivocation In this part,limN→∞Δ≥Re is proved by utilizing the methods in [5, 6]: (55)limN→∞Δ=limN,n→∞H(Sn∣Z~n)nN=limN,n→∞∑i=2n‍H(Si∣Si-1,Z~n)nN=(a4.1)limN,n→∞∑i=2nH(Si∣Z~i)nN≥limN,n→∞∑i=2nH(Si∣Z~i,Z~i-1,Si⊕Si′)nN=(a4.2)limN,n→∞∑i=2nH(Si∣Z~i-1,Si⊕Si′)nN=(a4.3)limN,n→∞∑i=2nmin{NH(Y∣Z),log∥𝒮∥}nN=limn→∞∑i=2nmin{H(Y∣Z),R1}n=min{H(Y∣Z),R1}≥(a4.4)Re.In the above deduction,(a4.1) follows from Si→Z~i→(Si-1,Z~i-). (a4.2) follows from Si→(Si⊕Si′,Z~i-1)→Z~i. 
(a4.3) follows from the fact that receiver 2 can choose the better way to intercept the secret key and that Si⊕Si′ is independent of Si and uniformly distributed over 𝒮. (a4.4) follows from (10). This completes the proof of Theorem 4.

### 4.2. Reversely Less Noisy DMBCs with Noiseless Causal Feedback

In this subsection, Theorem 5 will be proved. The converse part will be shown first, and then a coding scheme is given for proving the direct part. In order to find the identification of the auxiliary random variables that satisfy the capacity-equivocation region characterized by ℛrl, we prove the converse part for the equivalent region containing all the rate triples (R0,R1,Re) such that (56)0≤Re≤R1,(57)R0≤I(U;Y),(58)R0+R1≤I(X;Y∣U)+I(U;Y),(59)Re≤H(Y∣X). The inequalities (56), (57), and (58) can be proved using a deduction similar to that in the converse part of Theorem 2 in Section 3 except for the identification of the auxiliary random variables. We focus on (59): (60)H(S∣ZN)=H(S∣XN,ZN)+I(S;XN∣ZN)=(b4.1)H(S∣XN)+I(XN;S∣ZN)=H(S,YN∣XN)-H(YN∣XN,S)+I(XN;S∣ZN)=H(YN∣XN)+H(S∣YN,XN)-H(YN∣XN,S)+I(XN;S∣ZN)≤H(YN∣XN)+H(S∣YN,XN)+I(XN;S∣ZN)≤H(YN∣XN)+H(S∣YN,XN)+H(XN∣ZN)=(b4.2)H(YN∣XN)+H(S∣YN,XN)+H(XN∣YN)=H(YN∣XN)+H(S,XN∣YN)=(b4.3)H(YN∣XN)+H(S∣YN)=∑i=1NH(Yi∣Yi-1,XN)+H(S∣YN)≤(b4.4)∑i=1NH(Yi∣Xi)+ϵ3, where (b4.1) follows from the Markov chain S→XN→ZN, (b4.2) follows from the assumption that the channel is reversely less noisy (by setting U=X), (b4.3) follows from the fact that XN is a function of (S,YN), and (b4.4) follows from the fact that conditioning does not increase entropy and from Fano’s inequality. To complete the proof of (59), define a time-sharing random variable Q which is uniformly distributed over {1,2,…,N} and independent of SWXNYNZN. Set U=ZQ+1NYQ-1WQ, V=US, X=XQ, Y=YQ, Z=ZQ. It is easy to see that U→V→X→(Y,Z) forms a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (60) simplifies to (61)H(S∣ZN)≤NH(Y∣X)+ϵ3. Finally, utilizing limN→∞Δ≥Re in the definition of “achievable” and (61), we obtain (59). This completes the proof of the converse part of Theorem 5. Next, a coding scheme will be provided for achieving the triples (R0,R1,Re)∈ℛrl. We should prove that all triples (R0,R1,Re)∈ℛrl are achievable. The codebook generation, encoding, and decoding follow exactly the lines of the coding scheme for the less noisy case in Section 4.1. We present the analysis of error probability and equivocation as follows.

#### 4.2.1. Analysis of Error Probability

Since the number of uN(wb) is upper bounded by 2NI(U;Y), receiver 1 can decode the common message wb with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1]. Moreover, since the DMBCs under discussion in Section 4.2 are reversely less noisy, receiver 2 can also decode the common message with error probability approaching zero. It can be calculated that given the codeword uN(wb), the number of xN is 2NI(X;Y∣U). So, after determining the codeword uN(wb), receiver 1 can decode the codeword xN with error probability approaching zero by applying the standard channel coding theorem [11, Theorem 7.7.1] and obtain the confidential message with the help of the feedback.

#### 4.2.2. Analysis of Equivocation

In this part, limN→∞Δ≥Re will be proved. Special attention should be paid to receiver 2 since the DMBCs are reversely less noisy; that is, I(U;Z)≥I(U;Y) for all p(u,x), which implies 2NI(X;Z∣U)≥2NI(X;Y∣U). Therefore, receiver 2 can also decode the codeword xN.
With the knowledge of xN and zN, receiver 2 can guess receiver 1’s channel output yN from the conditional typical set 𝒯Y∣XZN(ϵ3). Note that receiver 2 can intercept the confidential messages in two ways. One is guessing the secret key sb′ from 𝒮 directly; the other is guessing the channel output y~b-1 and finding sb′ through g(y~b-1) indirectly. Intuitively, receiver 2 will always choose the better way to implement eavesdropping. More formally, (62)limN→∞Δ=limN,n→∞H(Sn∣Z~n)nN=limN,n→∞∑i=2nH(Si∣Si-1,Z~n)nN=(b4.5)limN,n→∞∑i=2nH(Si∣Z~i,Z~i-1)nN≥limN,n→∞∑i=2nH(Si∣Z~i,Z~i-1,X~i-1,Si⊕Si′)nN=(b4.6)limN,n→∞∑i=2nH(Si∣Z~i-1,X~i-1,Si⊕Si′)nN=(b4.7)limN,n→∞∑i=2nmin{NH(Y∣XZ),log∥𝒮∥}nN=(b4.8)limN,n→∞∑i=2nmin{NH(Y∣X),log∥𝒮∥}nN=limn→∞∑i=2nmin{H(Y∣X),R1}n=min{H(Y∣X),R1}≥(b4.9)Re. In the above deduction, (b4.5) follows from Si→(Z~i,Z~i-1)→(Si-1,Z~i-2,Z~i+1n). (b4.6) follows from Si→(Si⊕Si′,Z~i-1,X~i-1)→Z~i. (b4.7) follows from the fact that receiver 2 can choose the better way to intercept the secret key, and Si⊕Si′ is independent of Si and uniformly distributed over 𝒮. Note that the number of yN∈TY|XZN(ϵ3) is about 2NH(Y∣XZ) based on the properties of strongly typical sequences [10]. (b4.8) follows from the fact that Y is independent of Z conditioned on X, which is obtained from the assumption p(y,z∣x)=p(y∣x)p(z∣x). (b4.9) follows from (11). This completes the proof of Theorem 5.
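The essence of the schemes in Sections 4.1 and 4.2 is that each block's channel output at receiver 1 seeds the key for the next block. The toy simulation below is an illustration only, with a hypothetical key map `g`, a noiseless stand-in for the main channel, and small illustrative sizes:

```python
import random

# Toy trace of the block Markov key chain: in block b the key is
# s'_b = g(y_{b-1}), computable by both the transmitter (via the noiseless
# causal feedback) and receiver 1 (from its own previous channel output).
n_messages = 16        # plays the role of ||S|| = 2^{N R1}
n_blocks = 4
block_len = 8

def g(y_block):
    """Hypothetical key map from receiver 1's output vector into S."""
    return hash(tuple(y_block)) % n_messages

def main_channel(x_block):
    """Noiseless stand-in for the main channel (the real Y is a DMC output)."""
    return list(x_block)

random.seed(0)
# Block 1 carries no confidential message; it only seeds the first key.
y_prev = main_channel([random.randrange(4) for _ in range(block_len)])
for b in range(2, n_blocks + 1):
    s_b = random.randrange(n_messages)       # confidential message of block b
    key = g(y_prev)                          # s'_b, unknown to receiver 2
    bin_index = (s_b + key) % n_messages     # transmit from bin Q_{s_b (+) s'_b}
    x_block = [bin_index] * block_len        # stand-in for a codeword of that bin
    y_block = main_channel(x_block)
    s_hat = (y_block[0] - g(y_prev)) % n_messages  # receiver 1: s'' (-) s'_b
    assert s_hat == s_b
    y_prev = y_block
print("confidential messages of all blocks recovered by receiver 1")
```

Receiver 2 never observes y~b-1 directly, so it must either guess the key over 𝒮 or guess y~b-1 from its own observations; the min{·,·} terms in (55) and (62) quantify exactly the cost of the better of those two attacks.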
## 5. Conclusion

This paper studies two models of the DMBCs with noiseless feedback. One is the degraded DMBCs with rate-limited feedback; the other is the less noisy and reversely less noisy DMBCs with feedback. The difference between them is that the feedback in the first model is independent of the channel outputs and rate limited, while the feedback in the second model originates causally from the channel outputs. The capacity-equivocation regions of the two models are obtained in this paper.
We should point out that the second model studied in this paper is under the assumption that the channel to receiver 1 (the legitimate receiver) is independent of the channel to receiver 2 (the eavesdropper); that is, the channel output YN is independent of ZN given the channel input XN. However, without this assumption, the capacity-equivocation region remains unknown for the general DMBCs with noiseless feedback. --- *Source: 102069-2013-08-22.xml*
2013
# Study on Internal Force of Tunnel Segment by Considering the Influence of Joints

**Authors:** Linwei Dong; Zhiyong Yang; Zhenyong Wang; Yaowen Ding; Weiqiang Qi
**Journal:** Advances in Materials Science and Engineering (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1020732

---

## Abstract

The mechanical performance of segments is an important aspect of the safety of tunnel structures. Studying the internal force of the tunnel segment while considering the influence of joints is beneficial for obtaining a better understanding of the influence of various factors on the internal force of the segments. Based on the mechanical characteristics of shield segment joints, in which the displacements and stiffness are discontinuous, a mechanical model of the segment component under the constraints of elastic support was established. The elastic centre method and the principle of superposition were used to quantify the influence of joint displacements on the internal force of the segment component. Combined with a practical engineering application, the internal force of the segment component with joint rotation and dislocation was analysed. The displacements of the segment joints cause an unloading effect on the corresponding internal force at the joints, leading to a redistribution of the internal force over each segment cross section. According to the spline interpolation results of the load test data of the segment joints, the internal force of the segment component under an external load is solved by the iterative method.

---

## Body

## 1. Introduction

The influences of the joints on the internal forces should be taken into consideration in the design of the segment structure [1–3]. Depending on how the joints are simplified, several methods have been proposed to calculate the segment, mainly including the uniform rigid ring [4–6], the multihinge ring [7, 8], and the beam-spring model [3, 9–11], as shown in Figure 1. Among the above three models, the beam-spring model is the most widely used one in the calculation of the segment ring. In the beam-spring model, since the joint stiffness has a significant influence on the internal force of the segment [12–14], many scholars have carried out model experiments on the stiffness of segment joints [15–21].

Figure 1 Calculation models of the segment ring: (a) uniform rigid ring, (b) multihinge ring, and (c) beam-spring model.

However, with the beam-spring model, it is difficult to obtain an analytical solution for the segment ring under an external load, and it is not convenient to quantify the influence of the joint stiffness on the internal force. The mechanical study of the segment should follow the philosophy of analysing the component prior to the structure. Although the mechanical properties of the segment, as a concrete component, can be improved by adding admixtures and by other means [22–24], more attention should be paid to the analysis of the segment component model based on discontinuous joints. The mechanical feature of the segment component is that a certain amount of deformation is allowed at the joint. An in-depth analysis of this feature will help to better serve the segment design. Although there have been extensive studies on the buckling stability of arched components [25–29], few studies have explored the analytical solution of the internal force of the segment with the influence of joints.
To be close to reality, a spring is used in the model studied in this paper to simulate the effect of the joint on the segment component. Under an external load, displacements occur at the joints, causing the internal force of the joints to change, which in turn affects the internal force distribution of the segment component. Therefore, it is crucial to study the influence of the displacements or stiffness of the segment joints on the internal force of the segment. The objective of this paper is to investigate the mechanical properties of a segment component with discontinuous joints, including joint rotation and translation. Based on these mechanical properties, we propose a progressive model to analyse the internal force of the segment component under joint rotation and translation. Using the elastic centre method and the superposition principle, the internal force calculation formula for a segment component with discontinuous joints is deduced. The theoretical analysis is used to study the internal forces of a fabricated subway subsurface excavation section from the starting point to Jin’anqiao on Beijing Metro Line 6. Based on the load test data of the segment joints, spline interpolation and an iterative method are used to solve the internal force of the segment component under the action of earth pressure. The theoretical analysis and the calculation results in this study provide a reference for future segment design.

## 2. Mechanical Model of the Segment Component

### 2.1. Basic Assumptions and the Model Establishment

The stiffness of the segment is greater than that of the joints, and thus, under an external load, the joints undergo a larger rotation or translation than the segment. The segment component is a statically indeterminate structure with a redundant constraint, and therefore, the displacements of the joints inevitably affect the distribution of the internal force of the segment.

To simplify the calculation, the internal force distribution of a single segment component under uniform pressure in the direction of the vertical span is studied. To evaluate the effect of the discontinuous rotation and translation between the joints and the segment, the segment component is considered to consist of a single segment and the joints on both sides that constrain the displacement of the segment. The constraint at the segment joints is simplified to a rotational spring and two orthogonal linear springs. The rotational spring constrains the rotation of the segment component and allows rotational displacement to occur under an external load. The linear springs constrain the movement of the segment component and allow linear displacement under an external load. In order to make the analytical model consistent with the boundary conditions of the segment joint load test, and to simplify the calculation, the model does not consider the interaction between the segment and the soil.
The mechanical model of the segment component is shown in Figure 2.

Figure 2: Force diagram of the segment component. α is the semiarc angle of the segment component, l is the span length of the segment, R is the radius of the segment component, q is the uniform pressure perpendicular to the span, θ is the angular displacement of the segment joints, Δ1 is the horizontal displacement of the segment joints, and Δ2 is the vertical displacement of the segment joints.

In addition, it should be noted that in the calculation of internal force, a positive sign is assigned to the bending moment when the inside of the segment component is subjected to tension, a positive sign is assigned to the shear force when the moment of the adjacent section caused by the shear force is clockwise, and a positive sign is assigned to the axial force when the section is compressed.

### 2.2. Model Simplification

According to the principle of superposition, the stress state of the segment component under an external load is decomposed into two parts when solving the internal force distribution. One is the internal force of the segment component under the external load when the segment joints are treated as fixed ends. The other is the internal force of the segment component caused by the displacements of the segment joints.

The segment component is a statically indeterminate structure that can be solved by the force method equations. To simplify the calculation, the elastic centre method for arched structures can be used to obtain the redundant forces at the elastic centre, from which the internal force distribution of the segment component follows. The calculation diagrams for the two states are shown in Figures 3 and 4.

Figure 3: Force diagram of the segment component under an external load without considering the joint displacements.

Figure 4: Force diagram of the segment component under the load produced by the joint displacements. Mc, Vc, and Nc are the redundant bending moment, redundant shear force, and redundant axial force at the elastic centre of the segment component under an external load or the load produced by the joint displacements, d is the vertical distance between the elastic centre point and the segment joints, and l is the span length of the segment component.
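As a small illustration of the Figure 2 notation, the sketch below (Python, assuming a circular arc and using the example geometry of Section 4) derives the span l, the rise f, and the rise-span ratio ρ from α and R, and checks the identity sin α = 4ρ/(1 + 4ρ²) that Section 3 relies on. Only the numerical values of α and R come from the paper; the rest is standard arc geometry.

```python
import numpy as np

# Circular-arc geometry of the segment component of Figure 2 (a sketch).
alpha = 0.4 * np.pi                 # semiarc angle of the segment component (rad)
R = 2.84                            # radius of the segment component (m)

l = 2.0 * R * np.sin(alpha)         # span length of the segment
f = R * (1.0 - np.cos(alpha))       # rise ("vector height") of the segment
rho = f / l                         # rise-span ratio

# The identity sin(alpha) = 4*rho / (1 + 4*rho**2) quoted in Section 3 holds
# exactly for this geometry, since rho = tan(alpha/2) / 2 for a circular arc.
assert np.isclose(np.sin(alpha), 4.0 * rho / (1.0 + 4.0 * rho**2))
print(f"l = {l:.3f} m, f = {f:.3f} m, rho = {rho:.4f}")
```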
## 3. Internal Force Calculation of the Segment Component

### 3.1. Redundant Force Analysis of the Segment Component

The vertical distance between the elastic centre point O′ and the segment joints can be obtained as follows:

(1) $d = \dfrac{\int (y'/EI)\,ds}{\int (1/EI)\,ds} = \left( \dfrac{1}{2\alpha} - \dfrac{1-4\rho^{2}}{8\rho} \right) l$,

where α is the semiarc angle of the segment component and ρ is the rise-span ratio of the segment.

Under an external load, without considering the joint displacements, the redundant forces at the elastic centre of the segment component can be deduced using the elastic centre method as follows:

(2) $M_{C} = B_{1} q l^{2}$, $H_{C} = \dfrac{C_{1} q l^{2}}{f}$, $V_{C} = 0$,

where f is the vector height (rise) of the segment, and B1 and C1 depend on the rise-span ratio and the semiarc angle of the segment:

(3) $B_{1} = \dfrac{1}{256\rho^{2}} \left[ (1+4\rho^{2})^{2} - \dfrac{4\rho}{\alpha}(1-4\rho^{2}) \right]$, $C_{1} = \dfrac{\rho}{12\Phi} \left[ \alpha \left( 3 - 8\rho^{2} + 48\rho^{4} \right) - 12\rho \left( 1-4\rho^{2} \right) \right]$,

where Φ is an auxiliary coefficient:

(4) $\Phi = (1+4\rho^{2})^{2}\alpha^{2} + 4\rho(1-4\rho^{2})\alpha - 32\rho^{2}$.

The redundant forces at the elastic centre affect the internal force distribution of the segment component.
Equation (2) illustrates that, under uniform pressure perpendicular to the span, the internal force of the segment component is related to the semiarc angle and the span length.

Under the load produced by the joint displacements, the redundant forces at the elastic centre of the segment component can likewise be deduced by the elastic centre method:

(5) $M_{C} = -\dfrac{8\rho}{1-16\rho^{4}} \cdot \dfrac{EI\theta}{l}$, $H_{C} = -\dfrac{32\rho^{2}\left[16\rho^{2}-4\alpha\rho(1-4\rho^{2})\right]}{(1+4\rho^{2})\Phi} \cdot \dfrac{EI\theta}{fl} - \dfrac{1024\rho^{5}\alpha}{(1+4\rho^{2})\Phi} \cdot \dfrac{EI\Delta_{1}}{f^{2}l}$, $V_{C} = 0$,

where EI is the bending stiffness of the segment cross section.

Two conclusions follow from equation (5): (1) if the vertical displacements on the two sides of the segment component are equal, the internal force of the segment is not affected; (2) the internal force of the segment component produced by the joint displacements depends not only on the radius and semiarc angle of the component but also on the joint rotation, the joint horizontal dislocation, and the bending stiffness of the segment.

### 3.2. Internal Force Calculation of the Segment Component

From the redundant forces at the elastic centre, the internal force of the segment component under an external load, with the joint displacements taken into account, is

(6) $M_{\varphi} = -\dfrac{1}{2} q R^{2} \sin^{2}\varphi + B_{1} q l^{2} - \dfrac{8\rho}{1-16\rho^{4}}\dfrac{EI\theta}{l} + \left[ \dfrac{C_{1} q l^{2}}{f} - \dfrac{128\rho^{3}\left[4\rho-\alpha(1-4\rho^{2})\right]}{(1+4\rho^{2})\Phi}\dfrac{EI\theta}{fl} - \dfrac{1024\rho^{5}\alpha}{(1+4\rho^{2})\Phi}\dfrac{EI\Delta_{1}}{f^{2}l} \right] \left( d + R\cos\alpha - R\cos\varphi \right)$,

$V_{\varphi} = -q R \sin\varphi \cos\varphi + \left[ \dfrac{C_{1} q l^{2}}{f} - \dfrac{128\rho^{3}\left[4\rho-\alpha(1-4\rho^{2})\right]}{(1+4\rho^{2})\Phi}\dfrac{EI\theta}{fl} - \dfrac{1024\rho^{5}\alpha}{(1+4\rho^{2})\Phi}\dfrac{EI\Delta_{1}}{f^{2}l} \right] \sin\varphi$,

$N_{\varphi} = q R \sin^{2}\varphi + \left[ \dfrac{C_{1} q l^{2}}{f} - \dfrac{128\rho^{3}\left[4\rho-\alpha(1-4\rho^{2})\right]}{(1+4\rho^{2})\Phi}\dfrac{EI\theta}{fl} - \dfrac{1024\rho^{5}\alpha}{(1+4\rho^{2})\Phi}\dfrac{EI\Delta_{1}}{f^{2}l} \right] \cos\varphi$,

where φ is the arc angle of the segment section.

Equation (6) reveals that the internal force of the segment component is affected by the joint rotation and horizontal dislocation when the segment section and the block mode are fixed.

Before the internal force calculation of the segment component, the final joint deformation cannot be predicted. However, the rotation and horizontal displacement of the segment joints can be calculated from the joint bending stiffness and shear stiffness, respectively. Let the bending stiffness of the segment joints be kθ. Then

(7) $\theta = \dfrac{M_{\alpha}}{k_{\theta}}$,

where Mα is the bending moment of the segment section at the semiarc angle α. Substituting equation (6) into equation (7) gives

(8) $\theta = \dfrac{B_{1} q l^{2} + \left[ C_{1} q l^{2}/f - \dfrac{1024\rho^{5}\alpha}{(1+4\rho^{2})\Phi}\dfrac{EI\Delta_{1}}{f^{2}l} \right] d - \dfrac{1}{2} q R^{2} \sin^{2}\alpha}{k_{\theta} + \dfrac{8\rho}{1-16\rho^{4}}\dfrac{EI}{l} + \dfrac{128\rho^{2}\left[4\rho^{2}-\alpha\rho(1-4\rho^{2})\right]}{(1+4\rho^{2})\Phi}\dfrac{EI}{fl}\,d}$.

After substituting equation (8) and $\sin\alpha = 4\rho/(1+4\rho^{2})$ into equation (6), we obtain the internal force of each segment section under the influence of the joints. The resulting formula is lengthy and is not shown here.

The weakening of the joint stiffness causes a redistribution of the internal force of the segment component. The influence of the joint bending stiffness on the segment component should therefore be considered when the segment section and the block mode are fixed. The calculation of the influence of the joint shear stiffness on the internal force of the segment component is similar to that of the bending stiffness and is therefore not repeated.
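To make the formulas concrete, here is a minimal numerical sketch of equations (1)–(6), assuming the LaTeX reconstructions above are the intended readings of the original expressions. The function and variable names mirror the paper's symbols; this is an independent Python sketch, not the authors' program.

```python
import numpy as np

def coefficients(alpha, rho):
    """d/l, B1, C1 and Phi from equations (1), (3), and (4)."""
    u = 1.0 + 4.0 * rho**2
    w = 1.0 - 4.0 * rho**2
    Phi = u**2 * alpha**2 + 4.0 * rho * w * alpha - 32.0 * rho**2          # eq. (4)
    B1 = (u**2 - (4.0 * rho / alpha) * w) / (256.0 * rho**2)               # eq. (3)
    C1 = rho / (12.0 * Phi) * (alpha * (3.0 - 8.0 * rho**2 + 48.0 * rho**4)
                               - 12.0 * rho * w)                           # eq. (3)
    d_over_l = 1.0 / (2.0 * alpha) - w / (8.0 * rho)                       # eq. (1)
    return d_over_l, B1, C1, Phi

def internal_forces(phi, alpha, R, q, EI, theta, delta1):
    """M, V, N at arc angle(s) phi from equation (6); phi may be an array."""
    l = 2.0 * R * np.sin(alpha)          # span length
    f = R * (1.0 - np.cos(alpha))        # rise (vector height)
    rho = f / l                          # rise-span ratio
    u = 1.0 + 4.0 * rho**2
    w = 1.0 - 4.0 * rho**2
    d_over_l, B1, C1, Phi = coefficients(alpha, rho)
    d = d_over_l * l
    # Moment redundant at the elastic centre: external load + joint rotation,
    # from equations (2) and (5).
    Mc = B1 * q * l**2 - 8.0 * rho / (1.0 - 16.0 * rho**4) * EI * theta / l
    # Horizontal-thrust redundant: external load + rotation + dislocation.
    H = (C1 * q * l**2 / f
         - 128.0 * rho**3 * (4.0 * rho - alpha * w) / (u * Phi) * EI * theta / (f * l)
         - 1024.0 * rho**5 * alpha / (u * Phi) * EI * delta1 / (f**2 * l))
    arm = d + R * np.cos(alpha) - R * np.cos(phi)   # lever arm of H about the section
    M = -0.5 * q * R**2 * np.sin(phi)**2 + Mc + H * arm
    V = -q * R * np.sin(phi) * np.cos(phi) + H * np.sin(phi)
    N = q * R * np.sin(phi)**2 + H * np.cos(phi)
    return M, V, N
```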
## 4. Practical Engineering Application

### 4.1. Engineering Background

Our study takes a segment of the subsurface excavation section from the starting point to Jin’anqiao station of Beijing Metro Line 6 as the research object. The influence of discontinuous joints on the mechanical properties of the segment component is studied quantitatively. The assembly form of the subsurface excavation tunnel lining is used as a reference for that of the shield, but there are obvious differences between this construction method and the shield method. After the tunnel section is excavated by the mining method and the initial support measures are taken, the assembly of the segment depends on a special assembly machine. The segment section is shown in Figure 5. This method can overcome the disadvantages of shield tunnelling in some special composite strata, such as quaternary strata with upper soft and lower hard characteristics, rock strata, composite strata of rock and soil, and strata containing spherical weathering bodies. The construction method combines the advantages of the mining method and the shield method, is adaptable, and has a high degree of automation. It can also increase construction speed and enhance safety. In addition, the horseshoe-shaped tunnel increases the utilization of the excavation section.

Figure 5: Segment section form.

### 4.2. Geological Condition

The lithological log mostly consists of plain fill, clayey silt, silty clay, gravel, and mixed gravel and silty clay from top to bottom around the tunnel. The geological profile of the fabricated section is shown in Figure 6.

Figure 6: Geological sectional drawing of the fabricated section.

### 4.3. Theoretical Calculation and Analysis

The load can be calculated according to the calculation method for tunnels built by the mining method. To simplify the calculation, the vertical earth pressure is regarded as the load in the direction of the vertical span. The calculated vertical earth pressure is q = 256.88 kPa. Taking the geometrical parameters of the D-shaped segment, with a radius of 2.84 m, a thickness of 0.3 m, and a semiarc angle of 0.4π, as an example, the influence of joint deformation and stiffness on the internal force of the segment component is studied under the vertical load. Because the segment component and the external load are symmetrical about the vertical axis, only the internal force of the right half of the segment component is analysed.

By compiling a program in the M language in MATLAB, we obtain the segment internal force distribution for joint angular displacements of 0.0000, 0.0002, or 0.0004 rad, as shown in Figure 7. The segment internal force distribution for joint horizontal displacements of 0, 2, or 4 mm is shown in Figure 8.

Figure 7: Internal force distribution of the segment component with joint rotation.

Figure 8: Internal force distribution of the segment component with joint horizontal dislocation.
Figures 7 and 8 show that, under alignment, the maximum bending moment of the segment is 0.11 kN·m and the minimum bending moment is −0.05 kN·m; the maximum shear force is 270.63 kN and the minimum shear force is −63.64 kN; the maximum axial force is 817.62 kN and the minimum axial force is 510.00 kN. When the joint rotation angle is 0.0004 rad, the maximum bending moment of the segment is 0.05 kN·m and the minimum bending moment is −0.08 kN·m; the maximum shear force is 241.08 kN and the minimum shear force is −77.26 kN; the maximum axial force is 807.87 kN and the minimum axial force is 478.93 kN. When the joint horizontal displacement is 4 mm, the maximum bending moment of the segment is 0.21 kN·m and the minimum bending moment is −0.20 kN·m; the maximum shear force is 38.27 kN and the minimum shear force is −190.11 kN; the maximum axial force is 741.97 kN and the minimum axial force is 266.68 kN.

Figures 7 and 8 indicate that joint displacements can affect the internal force distribution of the segment component. The following observations can be made from the two figures:

(1) The joint rotation decreases the joint shear force while reducing the bending moment of the joints. The horizontal dislocation of the joints has a great effect on the bending moment of the segment component while reducing the shear force of the segment joints, so the bending moment distribution becomes even more uneven. The axial force of the segment component is decreased by both the joint rotation and the horizontal dislocation.

(2) The joint displacements cause a similar unloading effect on the internal force at the segment joints. The bending moment and shear force at the other cross sections of the segment component increase to different degrees, and the axial force of the segment component can be reduced, which should be considered in the structural calculation.

(3) The horizontal dislocation of the segment joints has a great influence on the internal force of the segment component.

In addition, relevant research data [30] and specification data [31] show that the control value of the horizontal dislocation is 10 mm. Under a maximum joint misalignment of 10 mm, the maximum bending moment of the segment is 0.47 kN·m and the minimum bending moment is −0.67 kN·m; the maximum shear force is 0 kN and the minimum shear force is −437.69 kN; the maximum axial force is 628.72 kN and the minimum axial force is −100.81 kN. Comparison with the internal force of the aligned segment component shows that the displacement of the segment has a great influence on the internal force distribution of the segment component; the bending moment varies the most, followed by the shear force.

The segment internal force distributions with joint bending rigidities of EI, 0.1EI, and 0.01EI are shown in Figure 9.

Figure 9: Internal force distribution of the segment component with joint bending stiffness changes.

Analysis of Figure 9 reveals the following: (1) as the joint bending stiffness decreases, the bending moment of the joints decreases, and the internal force of the segment component is redistributed; (2) when the bending stiffness of the joints is 0.01 times that of the segment, the bending moment of the segment joints is small enough that the joint can be regarded as a hinge; (3) the effect of the bending stiffness of the discontinuous joint on the internal force of the segment actually reflects the influence of the joint rotation on this force.
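As a rough counterpart to the Figure 7 study (the authors used a MATLAB program; this is an independent Python sketch reusing the `internal_forces` function defined after Section 3), one can scan the three joint rotations of this subsection. The bending stiffness EI below is an assumed value, since the paper does not state the concrete modulus.

```python
import numpy as np

# Parameter scan in the spirit of Figure 7, reusing internal_forces from above.
alpha, R, q = 0.4 * np.pi, 2.84, 256.88   # semiarc angle (rad), radius (m), pressure (kN/m^2)
EI = 3.0e7 * (1.0 * 0.3**3 / 12.0)        # assumed: E = 30 GPa, 0.3 m x 1 m section (kN*m^2)

phis = np.linspace(0.0, alpha, 400)       # right half of the segment component
for theta in (0.0, 2.0e-4, 4.0e-4):       # joint rotations of Section 4.3 (rad)
    M, V, N = internal_forces(phis, alpha, R, q, EI, theta, delta1=0.0)
    print(f"theta = {theta:.4f} rad: M in [{M.min():8.2f}, {M.max():8.2f}] kN*m, "
          f"N in [{N.min():8.2f}, {N.max():8.2f}] kN")
```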
### 4.4. Test and Iterative Calculation

The bending stiffness of the segment joint is the bending moment required for the segment joint to produce a unit rotation angle. At present, there is no mature formula or chart available for the value of this bending stiffness in practice; it can be determined by the segment joint load test [32].

In the test, the horizontal axial force is applied by the loading system on the reaction wall, and the vertical load is applied by a jack through the distribution beam. The test diagram is shown in Figure 10. According to the results of the segment joint load test in Figure 11 [33], the joint stiffness of the segment is not constant: the larger the eccentricity at the segment joints, the smaller the bending stiffness of the joints, and, at constant eccentricity, the greater the axial force at the segment joints, the smaller the bending stiffness of the joints.

Figure 10: Diagram of the joint load test of the segment.

Figure 11: Joint load test data.

From the above analysis, the eccentricity and axial force at the joint influence the bending stiffness of the segment joints. A change in the bending stiffness at the joint inevitably changes the joint displacement, which in turn changes the internal force at the segment joints: the bending stiffness at the joint and the internal force at the joint interact. When analysing the mechanical model, the final internal force at the joint cannot be predicted, so the bending stiffness of the joint cannot be read directly from the joint load test data. Therefore, an iterative method is used: the internal force of the segment and the joint stiffness are calculated repeatedly, successively approximating the true values, to obtain the final internal force.

In order to ensure good convergence and continuity between the interpolation points when selecting the bending stiffness of the joints, the data obtained from the segment joint load test are processed by spline interpolation. The spline interpolant can be expressed as follows:

(9) $S(x) = \sum_{j=0}^{n} \left[ y_{j}\,\alpha_{j}(x) + m_{j}\,\beta_{j}(x) \right]$,

where S(x) is the interpolation function, yj is the function value at node xj, mj is the derivative value of the interpolation function, and αj(x) and βj(x) are the interpolation basis functions.

The internal force of the segment and the stiffness of the segment joint are calculated by the iterative method. Running the program gives the internal force distribution of the segment component shown in Figure 12.

Figure 12: Internal force of the segment component calculated by iteration.

After the iterative calculation, the final bending stiffness of the segment joint is 1.2 × 10⁴ kN·m·rad⁻¹.
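The spline-plus-iteration loop of this subsection can be sketched as follows, assuming the quantities `internal_forces`, `alpha`, `R`, `q`, and `EI` from the sketches above. `scipy`'s `CubicSpline` stands in for the interpolant of equation (9), and the stiffness-versus-eccentricity points are placeholders rather than the Figure 11 data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical (eccentricity -> bending stiffness) test points standing in for
# the Figure 11 data; CubicSpline plays the role of S(x) in equation (9).
ecc_pts = np.array([0.00, 0.05, 0.10, 0.15, 0.20])        # eccentricity M/N (m)
k_pts = np.array([5.0e4, 3.5e4, 2.2e4, 1.5e4, 1.0e4])     # stiffness (kN*m/rad)
k_of_ecc = CubicSpline(ecc_pts, k_pts)

k_theta, theta = float(k_pts[0]), 0.0     # initial guesses
for it in range(100):
    # Joint internal forces at phi = alpha for the current rotation, via eq. (6).
    M, V, N = internal_forces(np.array([alpha]), alpha, R, q, EI, theta, delta1=0.0)
    theta_new = M[0] / k_theta            # eq. (7): rotation from the joint moment
    k_new = float(k_of_ecc(abs(M[0] / N[0])))  # stiffness from the test-data spline
    if abs(theta_new - theta) < 1e-12 and abs(k_new - k_theta) < 1e-3 * k_theta:
        break
    theta, k_theta = theta_new, k_new

print(f"converged after {it} iterations: k_theta = {k_theta:.3e} kN*m/rad, "
      f"theta = {theta:.3e} rad")
```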
## 5. Conclusions

In view of the lack of analytical research on the internal force of the segment component, and based on its defining mechanical characteristic, namely that a certain amount of deformation is allowed at the segment joints, the internal force of the segment component is analysed in depth. A mechanical model of the segment component under elastic constraints is established, and the analytical solution for the internal force of the segment component with discontinuous joints is obtained using the elastic centre method and the superposition principle. Combined with the engineering application, the following conclusions can be drawn:

(1) The internal force of the segment component with discontinuous joints is affected by the radius, the semiarc angle, the joint rotation, the horizontal dislocation, the segment bending stiffness, the joint bending stiffness, and other factors.

(2) The joint rotation reduces the bending moment, the shear force, and the axial force at the segment joints, and the joint horizontal dislocation reduces the shear force and the axial force at the segment joints. The joint displacements increase the negative bending moment and the negative shear force of the segment component to different degrees. These adverse effects should be considered in segment design by taking measures such as increasing the reinforcement ratio of the segment section and reducing the joint displacements.

(3) The joint horizontal dislocation has a greater effect on the internal force of the segment component than does the joint rotation.
In the control of joint displacements, attention should therefore be paid to restraining the dislocation displacements.

(4) As the joint bending stiffness decreases, the negative bending moment of the segment component increases, and the nonuniformity of the segment bending moment increases.

(5) To maintain a uniform internal force in the segment component, joints such as the tenon joint should be strengthened at the design stage to reduce the internal force of the segment section, in addition to applying other relevant methods.

(6) Since the bending stiffness of the joints is related to the internal force at the joints, the internal force of the segment component and the joint stiffness should be calculated iteratively in combination with the results of the joint load test.

(7) Studies of the analytical solution for the internal force of the segment component with discontinuous joints aid the identification of the factors that affect the internal force of the segment and can serve as the basis for the analytical calculation of the whole segment ring with discontinuous joints.

---

*Source: 1020732-2020-10-31.xml*
--- ## Abstract The mechanical performance of segments is an important aspect of the safety of tunnel structures. Study on the internal force of tunnel segment by considering the influence of joints is beneficial for obtaining a better understanding of the influence of various factors on the internal force of the segments. Based on the mechanical characteristics of shield segment joints, in which the displacements and stiffness are discontinuous, a mechanical model of the segment component under the constraints of elastic support was established. The elastic centre method and the principle of superposition were used to quantify the influence of joint displacements on the internal force of the segment component. Combined with a practical engineering application, the internal force of the segment component with joint rotation and dislocation was analysed. The displacements of the segment joints cause an unloading effect of the corresponding internal force of the joints, leading to internal force redistribution of each segment cross section. According to the spline interpolation results of the load test data of the segment joints, the internal force of the segment component under an external load is solved by the iterative method. --- ## Body ## 1. Introduction The influences of the joints on the internal forces should be taken into consideration in the design of the segment structure [1–3]. According to joint simplification, some methods have been proposed to calculate the segment, including mainly the uniform rigid ring [4–6], multihinge ring [7, 8], and beam-spring model [3, 9–11], as shown in Figure 1. Among the above three models, the beam-spring model is the most widely used one in the calculation of the segment ring. In the beam-spring model, since the joint stiffness has a significant influence on the internal force of the segment [12–14], many scholars have carried out model experiments on the stiffness of segment joints [15–21].Figure 1 Calculation models of the segment ring: (a) uniform rigid ring, (b) multihinge ring, and (c) beam-spring model. (a)(b)(c)However, according to the beam-spring model, it is difficult to obtain the analytical solution of the segment ring under the external load, and it is not convenient to quantify the influence of the joint stiffness on the internal force. The mechanical study of the segment should follow the philosophy of the component prior to the structure. As a concrete component, although the mechanical properties of the segment can be improved by adding admixture and other means [22–24], the analysis of the model of segment component based on discontinuous joints should be paid more attention. The mechanical feature of the segment component is that a certain amount of deformation is allowed at the joint. An in-depth analysis of this feature will help to better serve the segment design.Although there have been extensive studies on the buckling stability of arched components [25–29], few studies have explored the analytical solution of the internal force of the segment with the influence of joints. To be close to reality, a spring was used in the model studied in this paper to simulate the effect of the joint on the segment component. Under an external load, displacements occur at the joints, causing the internal force of the joints to change, which in turn affects the internal force distribution of the segment component. 
Therefore, it is crucial to study the influence of the displacements or stiffness of the segment joints on the internal force of the segment. The objective of this paper is to investigate the mechanical properties of a segment component with a discontinuous joint, including joint rotation and translation. Based on the mechanical properties of the segment component, we propose a progressive model to analyse the internal force of the segment component under joint rotation and translation. Using the elastic centre method and the superposition principle, the internal force calculation formula for a segment component with discontinuous joints is deduced. The theoretical analysis is used to study the internal forces of a fabricated subway subsurface excavation section from the starting point to Jin’anqiao on Beijing Metro Line 6. Based on the load test data of the segment joints, spline interpolation and iterative method are used to solve the internal force of the segment component under the action of earth pressure. The theoretical analysis and the calculation results in this study provide a reference for future segment design. ## 2. Mechanical Model of the Segment Component ### 2.1. Basic Assumptions and the Model Establishment Compared with the joint stiffness, the stiffness of the segment is greater, and thus, the joints will have a larger rotation or translation under the action of the external load compared to the segment. The segment component is a statically indeterminate structure with a redundant constraint, and therefore, the displacements of the joints inevitably have an effect on the distribution of the internal force of the segment.To simplify the calculation, the internal force distribution of a single segment component under uniform pressure in the direction of the vertical span is studied. To evaluate the effect of the discontinuous rotation and translation between the joints and the segment, the segment component is considered to consist of a single segment and the joints on both sides that constrain the displacement of the segment. The constraint at the segment joints is simplified to consist of a rotational spring and two orthogonal line springs. The rotational spring constrains the rotation of the segment component and allows rotational displacement to occur under an external load. The linear spring constrains the movement of the segment component and allows for linear displacement under an external load. In order to make the analytical model consistent with the boundary conditions of the segment joint load test and to simplify the calculation, this analysis model does not consider the interaction between the segment and the soil. 
The mechanical model of the segment component is shown in Figure2.Figure 2 Force diagram of the segment component.α is the semiarc angle of the segment component, l is the span length of the segment, R is the radius of the segment component, q is the uniform pressure perpendicular to the span, θ is the angular displacements of the segment joints, Δ1 is the horizontal displacement of the segment joints, and Δ2 is the vertical displacement of the segment joints.In addition, it should be noted that in the calculation of internal force, a positive sign is assigned to the bending moment when the inside of the segment component is subjected to tension, a positive sign is assigned to the shear force when the moment of the adjacent section caused by the shear force is clockwise, and a positive sign is assigned to the axial force when the section is compressed. ### 2.2. Model Simplification According to the principle of superposition, the stress state of the segment component under an external load is decomposed into two parts when solving the internal force distribution. One is the internal force of the segment component under the external load when the segment joints are treated as the fixed ends. The other is the internal force of the segment component caused by the segment joints displacement.The segment component is a statically indeterminate structure that can be solved by the force method equation. To simplify the calculation, the elastic centre method for the arched structure can be used to obtain the redundant force at the elastic centre. Then, we can obtain the internal force distribution of the segment component. The calculation diagrams in the two states are shown in Figures3 and 4.Figure 3 Force diagram of the segment component under an external load without considering the joint displacements.Figure 4 Force diagram of the segment component under the load produced by the joint displacements.Mc, Vc, and Nc are the redundant bending moment, redundant shear force, and redundant axial force at the elastic centre of the segment component under an external load or the load produced by the joint displacements, d is the vertical distance between the elastic centre point and the segment joints, and l is the span length of the segment component. ## 2.1. Basic Assumptions and the Model Establishment Compared with the joint stiffness, the stiffness of the segment is greater, and thus, the joints will have a larger rotation or translation under the action of the external load compared to the segment. The segment component is a statically indeterminate structure with a redundant constraint, and therefore, the displacements of the joints inevitably have an effect on the distribution of the internal force of the segment.To simplify the calculation, the internal force distribution of a single segment component under uniform pressure in the direction of the vertical span is studied. To evaluate the effect of the discontinuous rotation and translation between the joints and the segment, the segment component is considered to consist of a single segment and the joints on both sides that constrain the displacement of the segment. The constraint at the segment joints is simplified to consist of a rotational spring and two orthogonal line springs. The rotational spring constrains the rotation of the segment component and allows rotational displacement to occur under an external load. The linear spring constrains the movement of the segment component and allows for linear displacement under an external load. 
In order to make the analytical model consistent with the boundary conditions of the segment joint load test and to simplify the calculation, this analysis model does not consider the interaction between the segment and the soil. The mechanical model of the segment component is shown in Figure2.Figure 2 Force diagram of the segment component.α is the semiarc angle of the segment component, l is the span length of the segment, R is the radius of the segment component, q is the uniform pressure perpendicular to the span, θ is the angular displacements of the segment joints, Δ1 is the horizontal displacement of the segment joints, and Δ2 is the vertical displacement of the segment joints.In addition, it should be noted that in the calculation of internal force, a positive sign is assigned to the bending moment when the inside of the segment component is subjected to tension, a positive sign is assigned to the shear force when the moment of the adjacent section caused by the shear force is clockwise, and a positive sign is assigned to the axial force when the section is compressed. ## 2.2. Model Simplification According to the principle of superposition, the stress state of the segment component under an external load is decomposed into two parts when solving the internal force distribution. One is the internal force of the segment component under the external load when the segment joints are treated as the fixed ends. The other is the internal force of the segment component caused by the segment joints displacement.The segment component is a statically indeterminate structure that can be solved by the force method equation. To simplify the calculation, the elastic centre method for the arched structure can be used to obtain the redundant force at the elastic centre. Then, we can obtain the internal force distribution of the segment component. The calculation diagrams in the two states are shown in Figures3 and 4.Figure 3 Force diagram of the segment component under an external load without considering the joint displacements.Figure 4 Force diagram of the segment component under the load produced by the joint displacements.Mc, Vc, and Nc are the redundant bending moment, redundant shear force, and redundant axial force at the elastic centre of the segment component under an external load or the load produced by the joint displacements, d is the vertical distance between the elastic centre point and the segment joints, and l is the span length of the segment component. ## 3. Internal Force Calculation of the Segment Component ### 3.1. Redundant Force Analysis of the Segment Component The vertical distance between the elastic centre pointO′ and the segment joints can be obtained as follows:(1)d=∫y′/EIds∫1/EIds=12α−1−4ρ28ρl,where α is the semiarc angle of the segment component and ρ is the rise-span ratio of the segment.Under an external load without considering the joint displacements, using the elastic centre method, the redundant force at the elastic centre of the segment component can be deduced as follows:(2)MC=B1ql2,HC=C1ql2f,VC=0,where f is the vector height of the segment and B1 and C1 are related to the rise-span ratio and the semiarc angle of the segment, which can be defined by the following expression:(3)B1=1256ρ21+4ρ22−4ρα1−4ρ2,C1=ρ12Φα3−8ρ2+48ρ4−12ρ1−4ρ2,where Φ is the affiliated coefficient, which can be defined by the following expression:(4)Φ=1+4ρ22α2+4ρ1−4ρ2α−32ρ2.The redundant force at the elastic centre can affect the internal force distribution of the segment component. 
Equation (2) illustrates that under uniform pressure perpendicular to the span, the internal force of the segment component is related to the semiarc angle and the span length.Under the load produced by the joint displacements, using the elastic centre method, the redundant force at the elastic centre of the segment component can be deduced as follows:(5)MC=−8ρ1−16ρ4EIθl,HC=−32ρ216ρ2−4αρ1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2l,VC=0,where EI is the bending stiffness of the segment cross section.Equation (5) can be analysed to obtain the following conclusions:(1) If the vertical displacements on both sides of the segment component are consistent, the internal force of the segment will not be affected(2) The internal force of the segment component produced by the joint displacements is affected not only by the radius and semiarc angle of the component but also by the joint rotation, the joint horizontal dislocation, and the bending stiffness of the segment ### 3.2. Internal Force Calculation of the Segment Component From the redundant force at the elastic centre, we can obtain the internal force of the segment component under an external load with considering the joint displacements:(6)Mφ=−12qR2sin2φ+B1ql2−8ρ1−16ρ4EIθl+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2ld+Rcosα−Rcosφ,Vφ=−qRsinφcosφ+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2lsinφ,Nφ=qRsin2φ+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2lcosφ,where φ is arc angles of the sectional segment.Equation (6) reveals that the internal force of the segment component is affected by the joint rotation and horizontal dislocation when the segment section and the block mode are fixed.Before the internal force calculation of the segment components, the final joint deformation cannot be predicted. However, the rotation and horizontal displacement of the segment joints can be calculated by the joint bending stiffness and shear stiffness, respectively.It is assumed that the bending stiffness of the segment joints iskθ. We can determine that(7)θ=Mαkθ,where Mα is the section bending moment of the segment component, when the semiarc angle is α. After substituting equation (6) into equation (7), the following equation is obtained:(8)θ=B1ql2+C1ql2/f−1024ρ5α/1+4ρ2ΦEIΔ1/f2ld−1/2qR2sin2αkθ+8ρ/1−16ρ4EI/l+128ρ24ρ2−αρ1−4ρ2/1+4ρ2ΦEI/fld,where kθ is the bending stiffness of the segment joints.After substituting equations (8) and sinα=4ρ/1+4ρ2 into equation (6), we can obtain the internal force of each segment section under the influence of the joints. The formula is more complex and is not shown here.The stiffness of the joints is weakened, which causes the redistribution of the internal force of the segment component. We should consider the influence of the joint bending stiffness on the segment component when the segment section and the block mode are fixed.The calculation method for the influence of the shear stiffness and bending stiffness of the discontinuous joint on the internal force of the segment component is similar and therefore not repeated. ## 3.1. 
Redundant Force Analysis of the Segment Component The vertical distance between the elastic centre pointO′ and the segment joints can be obtained as follows:(1)d=∫y′/EIds∫1/EIds=12α−1−4ρ28ρl,where α is the semiarc angle of the segment component and ρ is the rise-span ratio of the segment.Under an external load without considering the joint displacements, using the elastic centre method, the redundant force at the elastic centre of the segment component can be deduced as follows:(2)MC=B1ql2,HC=C1ql2f,VC=0,where f is the vector height of the segment and B1 and C1 are related to the rise-span ratio and the semiarc angle of the segment, which can be defined by the following expression:(3)B1=1256ρ21+4ρ22−4ρα1−4ρ2,C1=ρ12Φα3−8ρ2+48ρ4−12ρ1−4ρ2,where Φ is the affiliated coefficient, which can be defined by the following expression:(4)Φ=1+4ρ22α2+4ρ1−4ρ2α−32ρ2.The redundant force at the elastic centre can affect the internal force distribution of the segment component. Equation (2) illustrates that under uniform pressure perpendicular to the span, the internal force of the segment component is related to the semiarc angle and the span length.Under the load produced by the joint displacements, using the elastic centre method, the redundant force at the elastic centre of the segment component can be deduced as follows:(5)MC=−8ρ1−16ρ4EIθl,HC=−32ρ216ρ2−4αρ1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2l,VC=0,where EI is the bending stiffness of the segment cross section.Equation (5) can be analysed to obtain the following conclusions:(1) If the vertical displacements on both sides of the segment component are consistent, the internal force of the segment will not be affected(2) The internal force of the segment component produced by the joint displacements is affected not only by the radius and semiarc angle of the component but also by the joint rotation, the joint horizontal dislocation, and the bending stiffness of the segment ## 3.2. Internal Force Calculation of the Segment Component From the redundant force at the elastic centre, we can obtain the internal force of the segment component under an external load with considering the joint displacements:(6)Mφ=−12qR2sin2φ+B1ql2−8ρ1−16ρ4EIθl+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2ld+Rcosα−Rcosφ,Vφ=−qRsinφcosφ+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2lsinφ,Nφ=qRsin2φ+C1ql2f−128ρ34ρ−α1−4ρ21+4ρ2ΦEIθfl−1024ρ5α1+4ρ2ΦEIΔ1f2lcosφ,where φ is arc angles of the sectional segment.Equation (6) reveals that the internal force of the segment component is affected by the joint rotation and horizontal dislocation when the segment section and the block mode are fixed.Before the internal force calculation of the segment components, the final joint deformation cannot be predicted. However, the rotation and horizontal displacement of the segment joints can be calculated by the joint bending stiffness and shear stiffness, respectively.It is assumed that the bending stiffness of the segment joints iskθ. We can determine that(7)θ=Mαkθ,where Mα is the section bending moment of the segment component, when the semiarc angle is α. After substituting equation (6) into equation (7), the following equation is obtained:(8)θ=B1ql2+C1ql2/f−1024ρ5α/1+4ρ2ΦEIΔ1/f2ld−1/2qR2sin2αkθ+8ρ/1−16ρ4EI/l+128ρ24ρ2−αρ1−4ρ2/1+4ρ2ΦEI/fld,where kθ is the bending stiffness of the segment joints.After substituting equations (8) and sinα=4ρ/1+4ρ2 into equation (6), we can obtain the internal force of each segment section under the influence of the joints. 
## 4. Practical Engineering Application

### 4.1. Engineering Background

Our study takes a segment of the subsurface excavation section from the starting point to Jin’anqiao station of Beijing Metro Line 6 as the research object, and the influence of discontinuous joints on the mechanical properties of the segment component is studied quantitatively. The assembly form of the subsurface excavation tunnel lining takes that of the shield lining as a reference, but there are obvious differences between the two construction methods: after the tunnel section is excavated by the mining method and the initial support is installed, the segments are assembled by a special assembly machine. The segment section is shown in Figure 5. This method can overcome the disadvantages of shield tunnelling in some special composite strata, such as quaternary strata with upper-soft and lower-hard characteristics, rock strata, composite strata of rock and soil, and strata containing spherical weathering bodies. The construction method combines the advantages of the mining method and the shield method, is adaptable, and has a high degree of automation; it can also increase construction speed and enhance safety. In addition, the horseshoe-shaped tunnel increases the utilization of the excavation section.

Figure 5: Segment section form.

### 4.2. Geological Condition

Around the tunnel, the lithological log consists mostly of plain fill, clayey silt, silty clay, gravel, and mixed gravel and silty clay, from top to bottom. The geological profile of the fabricated section is shown in Figure 6.

Figure 6: Geological sectional drawing of the fabricated section.

### 4.3. Theoretical Calculation and Analysis

The load can be calculated according to the load calculation method for mined tunnels. To simplify the calculation, the vertical earth pressure is taken as the load acting perpendicular to the span; the calculated vertical earth pressure is q = 256.88 kPa. Taking the geometrical parameters of the D-shaped segment, with a radius of 2.84 m, a thickness of 0.3 m, and a semiarc angle of 0.4π, as an example, the influence of joint deformation and stiffness on the internal force of the segment component was studied under the vertical load. Because the segment component and the external load are symmetric about the vertical axis, only the internal force of the right half of the segment component is analysed. Using a program written in MATLAB's M language, we obtain the segment internal force distribution for joint angular displacements of 0.0000, 0.0002, and 0.0004 rad, as shown in Figure 7; the distribution for joint horizontal displacements of 0, 2, and 4 mm is shown in Figure 8. A sketch of this sweep is given after the figure captions below.

Figure 7: Internal force distribution of the segment component with joint rotation.

Figure 8: Internal force distribution of the segment component with joint horizontal dislocation.
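The sweep behind Figures 7 and 8 reduces to calling such a routine over a grid of imposed joint displacements. The following is a minimal sketch, assuming the hypothetical `segment_forces` helper from the Section 3.2 sketch is on the MATLAB path; the bending stiffness EI is illustrative, since the paper does not state its value.

```matlab
% Internal forces for the joint rotations of Figure 7 and the horizontal
% dislocations of Figure 8 (displacement values from Section 4.3).
q = 256.88; R = 2.84; alpha = 0.4*pi; EI = 1.0e5;   % EI assumed, not from the paper
phi = linspace(0, alpha, 200);                      % right half of the segment

thetas = [0 2e-4 4e-4];                             % joint rotations, rad
figure; hold on
for th = thetas
    M = segment_forces(q, R, alpha, EI, th, 0, phi);   % no dislocation
    plot(phi, M)
end
xlabel('\phi (rad)'); ylabel('M_\phi'); title('Effect of joint rotation')

deltas = [0 2e-3 4e-3];                             % horizontal dislocations, m
figure; hold on
for dl = deltas
    M = segment_forces(q, R, alpha, EI, 0, dl, phi);   % no rotation
    plot(phi, M)
end
xlabel('\phi (rad)'); ylabel('M_\phi'); title('Effect of horizontal dislocation')
```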
Figures 7 and 8 show that, with the joints aligned (no joint displacement), the maximum bending moment of the segment is 0.11 kN·m and the minimum is −0.05 kN·m; the maximum shear force is 270.63 kN and the minimum is −63.64 kN; the maximum axial force is 817.62 kN and the minimum is 510.00 kN. When the joint rotation angle is 0.0004 rad, the maximum bending moment is 0.05 kN·m and the minimum is −0.08 kN·m; the maximum shear force is 241.08 kN and the minimum is −77.26 kN; the maximum axial force is 807.87 kN and the minimum is 478.93 kN. When the joint horizontal displacement is 4 mm, the maximum bending moment is 0.21 kN·m and the minimum is −0.20 kN·m; the maximum shear force is 38.27 kN and the minimum is −190.11 kN; the maximum axial force is 741.97 kN and the minimum is 266.68 kN.

Figures 7 and 8 indicate that joint displacements affect the internal force distribution of the segment component. The following observations can be made from the two figures: (1) the joint rotation decreases the joint shear force while reducing the bending moment of the joints; the horizontal dislocation of the joints has a great effect on the bending moment of the segment component while reducing the shear force of the segment joints, so the bending moment distribution becomes even more uneven; the axial force of the segment component is decreased by both the joint rotation and the horizontal dislocation. (2) The joint displacements produce an unloading-like effect on the internal force of the segment joints; the bending moment and shear force at the other cross sections of the segment component increase to different degrees, while the axial force of the segment component is reduced, and this should be considered in the structural calculation. (3) The horizontal dislocation of the segment joints has a great influence on the internal force of the segment component.

In addition, relevant research data [30] and specification data [31] show that the control value of the horizontal dislocation is 10 mm. Under this maximum joint misalignment of 10 mm, the maximum bending moment of the segment is 0.47 kN·m and the minimum is −0.67 kN·m; the maximum shear force is 0 kN and the minimum is −437.69 kN; the maximum axial force is 628.72 kN and the minimum is −100.81 kN. Comparison with the internal force of the aligned segment shows that the joint displacement has a great influence on the internal force distribution of the segment component; the bending moment varies the most, followed by the shear force.

The segment internal force distributions with joint bending rigidities of EI, 0.1EI, and 0.01EI are shown in Figure 9.

Figure 9: Internal force distribution of the segment component with changes in joint bending stiffness.

Analysis of Figure 9 reveals the following: (1) as the joint bending stiffness decreases, the bending moment of the joints decreases and the internal force of the segment component is redistributed; (2) when the bending stiffness of the joints is 0.01 times that of the segment, the bending moment of the segment joints is small enough that the joint can be regarded as a hinge; (3) the effect of the bending stiffness of the discontinuous joint on the internal force of the segment in fact reflects the influence of the joint rotation on this force.

### 4.4. Test and Iterative Calculation
The bending stiffness of a segment joint is the bending moment required to produce a unit rotation at the joint. At present there is no mature formula or chart for the value of this bending stiffness in practice; it can be determined by the segment joint load test [32]. In the test, the horizontal axial force is applied by the loading system on the reaction wall, and the vertical load is applied by a jack through a distribution beam. The test arrangement is shown in Figure 10. According to the results of the segment joint load test in Figure 11 [33], the joint stiffness of the segment is not constant: the larger the eccentricity at the segment joints, the smaller the bending stiffness of the joints, and, for a fixed eccentricity, the greater the axial force at the segment joints, the smaller the bending stiffness of the joints.

Figure 10: Diagram of the joint load test of the segment.

Figure 11: Joint load test data.

From the above analysis, the eccentricity and axial force at the joint influence the bending stiffness of the segment joints. A change in the bending stiffness at the joint inevitably changes the joint displacement, which in turn changes the internal force of the segment joints; that is, the bending stiffness at the joint interacts with the internal force at the joint. When setting up the mechanical model, the final internal force at the joint cannot be predicted, so the bending stiffness of the joint cannot be read directly from the joint load-test data. Therefore, an iterative method is used: the internal force of the segment and the joint stiffness are calculated repeatedly, successively approximating the true values until the final internal force is obtained. To ensure good convergence and continuity between the interpolation points when selecting the bending stiffness of the joints, the data obtained from the segment joint load test are interpolated by splines. The spline interpolant can be expressed as

$$S(x)=\sum_{j=0}^{n}\left[y_{j}\alpha_{j}(x)+m_{j}\beta_{j}(x)\right],\tag{9}$$

where S(x) is the interpolation function, $y_j$ is the function value at node $x_j$, $m_j$ is the derivative value of the interpolation function at $x_j$, and $\alpha_j(x)$ and $\beta_j(x)$ are the interpolation basis functions. The internal force of the segment and the stiffness of the segment joint are then calculated iteratively. Running the program gives the internal force distribution of the segment component shown in Figure 12.

Figure 12: Internal force of the segment component calculated by iteration.

After the iterative calculation, the final bending stiffness of the segment joint is 1.2 × 10⁴ kN·m·rad⁻¹.
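The iteration just described, namely assume a rotation, evaluate the joint-section forces, read the stiffness off the interpolated test curve, and update the rotation via equation (7), can be sketched as follows. This again assumes the hypothetical `segment_forces` helper from Section 3.2; the (eccentricity, stiffness) pairs are placeholders rather than the Figure 11 test data, the lookup of stiffness by eccentricity alone is a simplifying assumption, and MATLAB's built-in cubic spline stands in for the basis-function form of equation (9).

```matlab
% Successive approximation of the joint rotation and joint stiffness.
q = 256.88; R = 2.84; alpha = 0.4*pi; EI = 1.0e5; Delta1 = 0;   % EI assumed

e_test = [0.00 0.02 0.05 0.10 0.20];      % joint eccentricity e = M/N, m (placeholder)
k_test = [5.0  3.5  2.5  1.8  1.2]*1e4;   % joint bending stiffness, kN*m/rad (placeholder)

theta = 0;                                % initial guess: no joint rotation
for it = 1:50
    [M, ~, N] = segment_forces(q, R, alpha, EI, theta, Delta1, alpha); % joint section
    e         = abs(M/N);                 % eccentricity at the joint
    ktheta    = spline(e_test, k_test, e);  % cubic spline in place of Eq. (9)
    theta_new = M/ktheta;                 % Eq. (7): theta = M_alpha / k_theta
    if abs(theta_new - theta) < 1e-10, break; end
    theta = theta_new;                    % successive approximation
end
fprintf('theta = %.3e rad, ktheta = %.3e kN*m/rad after %d iterations\n', ...
        theta, ktheta, it)
```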
## 5. Conclusions

In view of the lack of analytical research on the internal force of segment components, and starting from their defining mechanical characteristic, namely that a certain amount of deformation is allowed at the segment joints, the internal force of the segment component has been analysed in depth. The mechanical model of the segment component under elastic constraint is established, and the analytical solution for the internal force of a segment component with discontinuous joints is obtained using the elastic centre method and the superposition principle. Combined with the engineering application, the following conclusions can be drawn: (1) the internal force of a segment component with discontinuous joints is affected by the radius, the semiarc angle, the joint rotation, the horizontal dislocation, the segment bending stiffness, the joint bending stiffness, and other factors. (2) The joint rotation reduces the bending moment, shear force, and axial force of the segment joints, and the joint horizontal dislocation reduces the shear force and axial force of the segment joints. The joint displacements increase the negative bending moment and the negative shear force of the segment component to different degrees; this adverse effect should be considered in the segment design by measures such as increasing the reinforcement ratio of the segment section and reducing the joint displacements. (3) The joint horizontal dislocation has a greater effect on the internal force of the segment component than the joint rotation.
When controlling joint displacements, attention should therefore be paid to restraining dislocation displacements. (4) As the joint bending stiffness decreases, the negative bending moment of the segment component increases, and the nonuniformity of the segment bending moment increases. (5) To maintain a uniform internal force in the segment component, joints such as the tenon joint should be strengthened at the design stage to reduce the internal force of the segment section, in addition to other relevant measures. (6) Since the bending stiffness of the joints is related to the internal force at the joints, the internal force of the segment component and the joint stiffness should be calculated iteratively in combination with the results of the joint load test. (7) Study of the analytical solution for the internal force of a segment component with discontinuous joints helps identify the factors that affect the internal force of the segment, and it can serve as the basis for the analytical calculation of a complete segment ring with discontinuous joints.

---

*Source: 1020732-2020-10-31.xml*
2020
# State of the Art of the Ignalina RBMK-1500 Safety

**Authors:** E. Ušpuras

**Journal:** Science and Technology of Nuclear Installations (2010)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2010/102078

---

## Abstract

The Ignalina NPP is the only nuclear power plant in Lithuania, consisting of two units commissioned in 1983 and 1987. Unit 1 was shut down for decommissioning at the end of 2004, and Unit 2 is to be operated until the end of 2009. Both units are equipped with channel-type, graphite-moderated boiling water reactors of the RBMK-1500 type. This paper summarizes the results of the deterministic and probabilistic analyses developed in 1991–2007 by specialists from the Lithuanian Energy Institute. The main operational safety aspects are discussed, including the analyses performed according to the Ignalina Safety Improvement Programs, the development and installation of the Second Shutdown System, and the Guidelines on Severe Accident Management. The phenomena related to the closure of the gap between the fuel channel and the graphite bricks, multiple fuel channel tube rupture, and containment issues, as well as the implications of external events for Ignalina NPP safety, are also discussed.

---

## Body

## 1. Introduction: Historical Context

Preparatory construction work on the Ignalina NPP started in 1974, and the first unit was commissioned on December 31, 1983. At the same time the second unit was under construction, and construction of a third unit had begun. The second unit was planned to start operating in 1986, but because of the Chernobyl accident the preparations for its operation were rescheduled, and it was commissioned on August 31, 1987. By that time 60% of the third unit had already been built, but its construction was later suspended and soon terminated. For political reasons the first unit of the Ignalina NPP has now been shut down, and the second unit is planned to be shut down at the end of 2009.

The Ignalina NPP, with its RBMK-1500 reactors, belongs to the second generation of RBMK-type reactors; that is, it is the most advanced version of the RBMK design series in comparison with the other RBMK-type nuclear power plants. Compared with the infamous Chernobyl NPP, the Ignalina NPP reactors are one-third more powerful and were equipped from the very beginning of operation with substantially improved emergency protection systems (e.g., emergency core cooling and accident localization systems) [1].

After Lithuania declared its independence in 1990, the Ignalina NPP, with the two largest RBMK reactors in the world, came under the authority of the Republic of Lithuania; at that time, however, the real safety level of these reactors was not known anywhere in the world. The first Safety Justification of the Ignalina NPP had been prepared after the Chernobyl accident by Russian experts of the Research and Design Institute for Power Engineering (RDIPE), the organization that designed and developed the RBMK reactors. In this document the analysis of all design basis accidents (except partial pipe breaks) is presented in sufficient detail. The analysis was performed with the tool existing at that time, a quasistationary derivative approximation method, based on conservative assumptions and the available experimental data. From the present-day viewpoint this safety justification [2] has shortcomings:

(i) It was limited to system descriptions and the analysis of design basis accidents.
(ii) The computer codes used for the simulations were developed in Russia and had not been extensively verified and validated.

(iii) No independent expert review of the safety analysis had been performed.

Therefore, at the beginning of the 1990s there were reasonable doubts as to how well the safety justification presented in this first document corresponded to the real situation. In 1992, at the G7 Munich Summit, the decision was taken to close the Soviet-designed nuclear power plants, first of all those with RBMK and VVER-440/230 reactors. In 1994 Lithuania signed an agreement with the European Bank for Reconstruction and Development (EBRD) Nuclear Safety Account, by which it undertook to perform an in-depth safety analysis of the Ignalina NPP and not to change the fuel channels in the reactor.

Right from the start, when Lithuania assumed control of the Ignalina NPP, the plant, its design, and its operational data have been completely open and accessible to Western experts. A large number of international and local studies have been conducted to verify the operational characteristics of the Ignalina NPP and to analyze its level of risk. The Ignalina NPP is unique among RBMK-type plants in that its information has been collected, checked, systematized, and made accessible. The collected and verified database has made it possible

(i) to assess the present safety level of the NPP,
(ii) to compare this level with the safety level of other RBMK-type NPPs,
(iii) to plan improvements of the plant equipment and operating procedures that increase the safety of the NPP.

Below, the results of the state-of-the-art deterministic and probabilistic safety analyses of the Ignalina NPP, developed in 1991–2007 by specialists from the Lithuanian Energy Institute, are discussed.

## 2. Deterministic and Probabilistic Ignalina NPP Safety Analyses

In this section the main Ignalina NPP safety analyses performed since 1991 are discussed:

(i) the Ignalina NPP Unit 1 and Unit 2 safety analysis reports and their reviews,
(ii) the modifications of the activation algorithms of the reactor shutdown and emergency core cooling systems,
(iii) the development, safety justification, and implementation of the second, diverse reactor shutdown system,
(iv) the level 1 and level 2 Probabilistic Safety Assessment (PSA) studies of the Ignalina NPP,
(v) the analysis of external events at the Ignalina NPP.

### 2.1. Deterministic Ignalina NPP Safety Justification

In 1995-1996 the in-depth Ignalina NPP Unit 1 Safety Analysis Report was prepared, using USA and Western European methodologies and computer codes [3]. It was a comprehensive international study sponsored by the EBRD, whose purpose was to provide a comprehensive overview of the plant status with special emphasis on its safety aspects. Specialists from the Ignalina NPP, Russia (RDIPE), Canada, and Sweden contributed. During the project more than 50 systems of normal operation, safety-important systems, and auxiliary systems were described. These systems were also analysed for compliance with the Lithuanian standards and rules as well as with Western safety practice. In the system analysis, attention was concentrated on conformance to the single-failure criterion and on auxiliary safety aspects: maintenance, inspections, and the impact of external factors (fire, flooding by water).
This system analysis identified the main deficiencies of the systems and established the conditions for their elimination. The review of operation and safety made it possible to identify all potential malfunctions that could cause an emergency situation.

The safety analysis report of Ignalina NPP Unit 1 provided a comprehensive accident analysis and equipment assessment; it discussed questions of equipment ageing, investigated topics related to operator actions and plant control, drew conclusions about the safety of the Ignalina NPP (the plant's safety level was assessed realistically), identified the main deficiencies, and foresaw measures for their elimination. It was the first Western-type safety report for a nuclear power plant with RBMK reactors.

One of the basic conclusions of this safety analysis report was that no problem existed that would demand the immediate shutdown of the Ignalina NPP. The detailed accident analysis (accidents caused by ruptures of different pipelines, reactivity-initiated accidents, equipment failures, transients with an additional failure of the reactor shutdown system, and fuel channel ruptures in the reactor cavity) showed that accidents caused by equipment failures do not lead to plant conditions that violate the acceptance criteria; the safety systems ensure a safe plant state even under the assumption that the operator takes no action to mitigate the emergency for 10 minutes from the beginning of the accident. In reactivity-initiated accidents (exactly this type of initiating event caused the Chernobyl accident), the acceptance criteria are likewise not violated, even when single failures are additionally postulated. It was shown that the Ignalina NPP is reliably protected against loss-of-coolant accidents provided that the pipeline ruptures do not cause local flow stagnation. In the case of a single steam line rupture the acceptance criteria are not exceeded; however, two steam lines are located in the same shaft at the Ignalina NPP, so the rupture of one steam line can cause the rupture of the other, in which case the radiological doses could be exceeded. Based on these accident analysis results, recommendations were prepared for modifications of the activation algorithms of the reactor shutdown and emergency core cooling systems.

It should be noted that, in parallel with the Ignalina NPP Unit 1 safety analysis report, an independent Review of the Ignalina Nuclear Power Plant Safety Analysis Report was performed in 1995–1997 [4] by experts from the USA, Great Britain, France, Germany, Italy, Russia, and Lithuania. The independent review confirmed the main conclusions of the safety analysis report.

The recommendations of the Ignalina NPP Unit 1 safety analysis report showed that the plant would be reliably protected from any pipeline or steam line rupture after improvement of the activation algorithms of the reactor shutdown and emergency core cooling systems. Under these algorithms the systems activate automatically on a coolant flow rate decrease in a single Group Distribution Header (GDH) and on a sharp pressure decrease in the drum separators. These modifications have been implemented in both Ignalina NPP units, and their safety justification was performed at the Lithuanian Energy Institute (LEI).
Consider now the situation when a GDH rupture creates the conditions for local flow stagnation in the fuel channels connected to the affected GDH [5]. Flow stagnation occurs for a break of a certain size in the GDH: owing to the discharge of part of the coolant through the break, a zero pressure gradient develops across the fuel channels (7; see Figure 1); that is, the pressure at the bottom of the channel is close to the pressure in the drum separators (1). The coolant flow stagnation in the fuel channels can be broken only by early activation of the Emergency Core Cooling System (ECCS) (see Figure 2(a)). If the ECCS operated according to the original design algorithm (cooling water supplied only about 400 seconds after the beginning of the accident), the acceptance criteria for both the fuel rod cladding and the fuel channel wall temperatures in the high-power channel would be exceeded (see Figures 2(b) and 2(c)). After implementation of the ECCS activation algorithm based on the coolant flow rate decrease in individual group distribution headers, ECCS water is supplied within 5–10 seconds of the onset of flow stagnation; the stagnation is thereby broken, and the fuel channels connected to the affected GDH are reliably cooled (see Figure 2). These modifications of the activation algorithms of the reactor shutdown and emergency core cooling systems were installed in Unit 1 in 1999 and in Unit 2 in 2000.

Figure 1: Ignalina NPP reactor cooling circuit (one loop) and coolant flow diagram in case of a partial GDH rupture: (1) drum separators, (2) suction header, (3) main circulation pumps, (4) pressure header, (5) group distribution headers, (6) water supply from the emergency core cooling system, and (7) affected fuel channels.

Figure 2: Analysis of a partial GDH rupture considering the modified ECCS algorithm: (a) coolant flow rate through the fuel channels, (b) fuel rod cladding temperature in the high-power channel connected to the ruptured GDH, and (c) behaviour of the fuel channel wall temperature.

The Ignalina NPP Unit 1 safety analysis report investigated not only the basic design accidents (discussed above) but also Anticipated Transients Without reactor Shutdown (ATWS). Such investigations are a standard part of the licensing process for USA and Western European nuclear power plants, but for RBMK-type NPPs this analysis was performed for the first time. The consequences of an accident in which the RBMK-1500 reactor loses its preferred electrical power supply and the automatic reactor shutdown fails [6] are presented in Figure 3. Owing to the loss of the preferred electrical power supply, all pumps are switched off (see Figure 3(a)), and the coolant circulation through the fuel channels is terminated. Because of the lost circulation the fuel channels are not cooled sufficiently, and the temperature of the fuel channel walls starts to increase sharply. As seen in Figure 3(b), already 40 seconds after the beginning of the accident the peak fuel channel wall temperature in the high-power channels reaches the acceptance criterion of 650°C. Beyond this temperature plastic deformation begins: under the internal pressure the channels can balloon and rupture. In the first seconds of the accident the main electrical generators and turbines are switched off as well.
The steam generated in the core is discharged through the steam discharge valves, but their capacity is not sufficient; the pressure in the reactor cooling circuit therefore increases and, approximately 80 seconds after the beginning of the accident, reaches the acceptance criterion of 10.4 MPa (see Figure 3(c)). A further pressure increase could lead to the rupture of pipelines.

Figure 3: Analysis of the loss of the preferred electrical power supply with simultaneous failure of the design reactor shutdown system, with the DAZ system installed: (a) coolant flow rate through one main circulation pump, (b) peak fuel channel wall temperature in the high-power channel, and (c) pressure behaviour in the drum separators; (1) acceptance criterion, (2) set points of DAZ system activation (reactor shutdown).

Thus the analysis of anticipated transients without shutdown showed that in some cases the consequences can be quite dramatic. A priority recommendation was therefore formulated: to implement a second, diverse shutdown system based on different operating principles. However, the development, design, and implementation of such a system required several years (in Ignalina NPP Unit 2 it was installed in 2004), so a compensating measure was implemented for the transition period while the second, diverse shutdown system was being developed. This temporary system was named by the Russian abbreviation "DAZ" ("Dopolnitelnaja avarijnaja začita", "additional emergency protection"). It used the same control rods as the design reactor shutdown system, but its control signals were generated independently of the design system. For the DAZ system, the Lithuanian Energy Institute both selected the activation set points and performed the safety justification. The analysis showed that, with the DAZ system implemented, the reactor is shut down in time and reliably cooled, and the acceptance criteria are not violated even in transients during which the design reactor shutdown system does not function. Figure 3 shows the behaviour of the main reactor cooling circuit parameters in the case of a loss of the preferred electrical power supply with simultaneous failure of the design reactor shutdown system; in this case two signals for DAZ activation (reactor shutdown) are generated: on the pressure increase in the drum separators and on the decrease of the coolant flow rate through the main circulation pumps. The DAZ system was installed in Unit 1 in 1999 and in Unit 2 in 2000.

The second Diverse Shutdown System (DSS) was designed and installed in Ignalina NPP Unit 2 in 2004; it was not installed in Unit 1 because that reactor was shut down in 2004. The Ignalina NPP reactor emergency protection (emergency shutdown) arrangement now consists of two independent shutdown systems. The first, BSM, controls the manual control rods and the shortened absorber rods, which are inserted into the core from the bottom; it performs the normal reactor shutdown function and can maintain the reactor in a subcritical state. The second, AZ, controls 24 fast-acting reactor shutdown rods as well as 49 additional rods that belong to both the BSM and AZ systems; it performs the emergency protection function. An Additional Hold-down System is also installed, which allows a mixture of water and the neutron absorber gadolinium to be prepared and injected into the control rod cooling circuit.
Thus the reactor remains subcritical even in the case of a failure of the BSM system.

The DSS justification was one of the main projects increasing the safety level of the NPP. Specialists from LEI, together with experts from Western European countries, checked and assessed the design documentation and carried out independent calculations, thereby helping the Lithuanian regulatory body (VATESI) to make the appropriate decisions concerning the implementation of this system at the Ignalina NPP [7]. The review concluded that the second, diverse reactor shutdown system protects the reactor in case of failure of the design reactor shutdown system; its implementation ensured that no initiating event can cause an accident with damage to the reactor core and decreased the core damage probability from 4 · 10⁻⁴ to 5 · 10⁻⁶ per year.

In 2002 the safety analysis report for Ignalina NPP Unit 2 was developed. It contains the description of the systems, the list of postulated accidents, the engineering assessment of the reactor cooling system, the accident analysis, the assessment of the structural integrity of the fuel channels, the assessment of reactor safety acceptability, and other chapters. The accident analysis was performed using a best estimate approach with uncertainty and sensitivity analysis. In international practice the best estimate approach is used mainly for the analysis of loss-of-coolant accidents in the reactor cooling system; in Lithuania it was successfully applied not only to loss-of-coolant accidents but also to reactor transients and to accident confinement system response analyses. The uncertainty and sensitivity analysis makes it possible to avoid unnecessary conservatisms and to assess the existing safety margins. The safety analysis report and its review were the main documents required for the license of Ignalina NPP Unit 2; both demonstrated the increased safety level achieved by the modifications described above and compliance with the requirements of the regulatory documents.

### 2.2. Ignalina NPP Probabilistic Safety Assessment

The Ignalina NPP level 1 PSA project "BARSELINA" (1991–1996) was initiated in 1991 [8]. It was the first PSA for nuclear power plants with RBMK-type reactors. Initially the project was carried out by nuclear energy experts from Lithuanian, Russian, and Swedish institutions; from 1995 onwards it was carried out by experts from Lithuania (Ignalina NPP, LEI) and Sweden. Whereas the main objective of the deterministic analysis was to show that the plant reliably copes with accidents, the basic purpose of a level 1 PSA is to assess the probability of reactor core damage and thereby create a basis for severe accident risk assessment and management.
According to the international requirements, this parameter for the operating nuclear power plants should not exceed 10 - 4 per year and for new NPPs, which are in process of construction, - 10 - 5. Therefore Ignalina NPP fulfils this requirement. Analysis has shown that, in Ignalina NPP, risk topography dominates transients, instead of loss of the coolant accidents. The risk of core damage most of all increases transients with loss of long-term core cooling. It is the positive fact meaning that up to consequences of severe accidents there is enough time. Thus operators supervising reactor operation can undertake corrective measures, and it means that Ignalina NPP has great potential opportunities for implementation of the program on management of severe accidents. It is necessary to note that procedures and means on severe accident management are already implemented at Ignalina NPP Unit 2 [9, 10].According to the international requirements, probability of the large reactivity release outside nuclear power plant should not exceed10 - 7 per year for new NPPs, which are in process of construction and for NPPs in operation - 10 - 6. Scenarios and probabilities of the large reactivity release outside nuclear power plant are objects of investigations for PSA level 2. Ignalina NPP PSA level 2 project was performed in 1999–2001 [11] and it was the first project of such type for nuclear power plants with RBMK reactors. This project was carried out by efforts of experts from Lithuania (LEI) and Sweden. Performing PSA level 2 as initial data used results of level 1. According to PSA level 1 investigated accident scenarios consequences and its similarity criteria on radioactive contamination, the conditions of damage of the reactor have been developed and possibilities of accident management were assessed. Results of PSA level 2, it have shown that barrier of the large reactivity release after core damage is 1.5. This barrier is smaller in comparison with modern nuclear power plants having function of containment, which reaches 10 and more. Being based conservative assumptions and estimation of parameters, in PSA level 2 was calculated that general estimation of large discharge frequency is 3.8 · 10 - 6 per year. Therefore, Ignalina NPP according to the probability of large reactivity release outside nuclear power plant is not the worst in comparison with the plants of the USA and Western Europe, constructed in the same years.Carrying out the complex analysis about influence on Ignalina NPP units safety [12] by LEI, the following external events have been investigated:(i) aircraft crash, (ii) extreme wind and tornado, (iii) flooding and extreme showers, (iv) external fire.Aircraft or other flying objects crash that caused accidents in Ignalina NPP will have local character because of its big territory. According to the Lithuanian civil aviation data, it has been assumed that average congestion is up to 50000 flights per one year within the 50-kilometer zone around NPP. Three zones have been defined by a radius up to 15, 50, and 85 meters around the reactor in the territory at Ignalina NPP (15—according to reactor dimensions, 85—according to reactor building size). Probability of air crash on a 85-meter zone around the reactor center, assuming that aircraft weight is 5700 kg as well as assuming that half of these flights carry out planes of western manufacturers and other half—Soviet, is 2.06 · 10 - 9 1/year. 
Even under more conservative assumptions (with the crash frequency of heavy aircraft set equal to that of light aircraft), the probability of an aircraft crash within the 85-metre zone around the reactor centre is 1.64 · 10⁻⁷ per year. The heavy-aircraft crash probabilities obtained are lower than those obtained in the probabilistic analyses for the majority of Western European and American NPPs.

A tornado may cause huge damage and destruction. Of all the buildings of the nuclear power plant, the tornado is most dangerous for the service water supply building, because it is located in open territory on the lake shore. Tornado and hurricane winds do not endanger the reactor buildings and technical systems; moreover, the probability of tornado and hurricane winds is 5.3 · 10⁻⁶ per year, so their influence on reactor safety is insignificant.

A rise of the water level in Lake Druksiai represents the greatest danger to the pump station on the lake, since the service water system is the NPP structure nearest to the lake. An elevation of the Lake Druksiai water level to 144.1 m is practically impossible, so there is no danger of the pump station flooding. The platform of the other Ignalina NPP structures is located at a level of 148-149 m above sea level; a rise of the lake level to such a mark is impossible, so flooding poses no direct danger to the Ignalina NPP.

Besides the lake, another external flooding source is extreme showers. The Ignalina NPP territory has a drainage system, and all compartments located below the critical level mark are connected to it, so that water drains away in case of internal flooding; extreme showers therefore do not cause external flooding of the reactor building. For the probabilistic external flooding analysis, a mathematical model was developed to assess the peak water level elevations of Lake Druksiai, and a probabilistic assessment of the water level elevation in the lake was performed. The probability of the maximum precipitation (not less than 279.7 mm in 12 hours) is 1 · 10⁻⁶ per year; such an event would not affect reactor safety.

Finally, consider the probabilistic analysis of external fire. The Ignalina NPP is situated in a region where 30% of the territory is occupied by forests (40% is grassland and 30% is occupied by lakes and swamps). The edge of the closest forest is less than one kilometre from the Ignalina NPP territory; on the territory itself there are only separate trees and grass. A large forest fire with a strong wind towards the NPP could cover the Ignalina NPP territory with smoke; the smoke would not influence the operation of the reactor equipment but would complicate the work of the personnel. The probability of a fire in the forest within the 10-kilometre zone around the Ignalina NPP, which contains more than 2000 ha of woods, is 2.7 · 10⁻³ per year. This is a high probability, but no such fire can considerably affect the safety of the reactor.
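As illustrative arithmetic only, not the PSA methodology, the point estimates quoted in this section can be set against the criteria it cites. Note that the level 2 figure exceeds the operating-plant release criterion, which is why the text frames that comparison against Western plants of the same vintage rather than against the criterion itself.

```matlab
% Reported point estimates versus the cited international criteria, 1/yr.
cdf       = 6e-6;     % core damage frequency after improvements
cdf_limit = 1e-4;     % criterion for operating plants
lrf       = 3.8e-6;   % large release frequency from the level 2 PSA
lrf_limit = 1e-6;     % criterion for operating plants
fprintf('CDF %.1e vs limit %.1e -> meets criterion: %d\n', cdf, cdf_limit, cdf < cdf_limit)
fprintf('LRF %.1e vs limit %.1e -> meets criterion: %d\n', lrf, lrf_limit, lrf < lrf_limit)

% External-event frequencies quoted in the text, 1/yr. A crude screening
% bound is their sum; the forest fire is excluded because the text argues
% it cannot considerably affect the reactor.
f_aircraft = 1.64e-7;   % conservative aircraft crash estimate, 85 m zone
f_tornado  = 5.3e-6;    % tornado and hurricane winds
f_shower   = 1e-6;      % extreme precipitation (at least 279.7 mm in 12 h)
fprintf('combined rare-hazard frequency <= %.1e 1/yr\n', f_aircraft + f_tornado + f_shower)
```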
During implementation of the project, they have been described more than 50 systems of normal operation, safety important systems, and auxiliary systems. Also analysis of these systems has been performed, considering compliance of these systems to the Lithuanian standards and rules as well to practice of safety used in the West. Analyzing systems, the attention has been concentrated on their consistency to criterion of single failure, as well as to auxiliary safety aspects: maintenance, inspections, and impact of external factors (fire, flooding by water). This analysis of systems has defined the main lacks of systems and has developed conditions for elimination of the deficiencies. The performed review on operation and safety has allowed to identify all possible malfunctions, which can potentially cause an emergency situation.In the safety analysis report of the Ignalina NPP Unit 1, the comprehensive accident analysis and equipment assessment have been provided; discussed questions concerning equipment ageing, investigated topics related to operators action, and power plant control provided conclusions about safety of Ignalina NPP (NPP safety level was assessed realistically); main lacks have been defined and measures for elimination of the deficiencies have been foreseen. It is the first western-type report on safety for nuclear power plants with RBMK reactors.One of the basic conclusions in this safety analysis report was such that in this case there was no problem, which would demand immediate shutdown of the Ignalina NPP. Detailed accident analysis (accidents because of different pipelines ruptures, reactivity initiating accidents, equipment failures, transients with additional failure of reactor shutdown system, and fuel channel ruptures in the reactor cavity) has shown that accident occurring because of equipment failures does not cause such condition of the plant station which would cause violation of acceptance criteria; safety system ensures a safe condition of the plant even doing the assumption that operator does not take any action for 10 minutes from the beginning of accident to mitigate an emergency situation. Because of reactivity initiating accidents (exactly such type of initiating event became the reason of accident in the Chernobyl NPP), acceptance criteria of power plant also are not violated, even postulating single failures additionally. It has been shown that Ignalina NPP is reliably protected against loss of the coolant accidents if ruptures of pipelines do not cause local stagnation of flow. In case of one steam line rupture, the acceptance criteria will not be exceeded. But there are two steam lines located in the shaft at the Ignalina NPP; thus, rupture of one steam line can cause rupture of other steam lines, and in this case radiological dozes can be exceeded. Being based on these results of accident analysis, the recommendations for modifications of activation algorithms for reactor shutdown and emergency core cooling systems have been prepared.It is necessary to note that in parallel with the Ignalina NPP Unit 1 safety analysis report in 1995–1997 it was performed independent Review of the Ignalina Nuclear Power Plant Safety Analysis Report [4]. This study was performed by experts from USA, Great Britain, France, Germany, Italy, Russia, and Lithuania. 
Independent Review has confirmed the main conclusions of safety analysis report.In recommendations of Ignalina NPP Unit 1 safety analysis report, it has been shown that Ignalina NPP will be reliably protected from any ruptures of pipelines and steam lines after improving of activation algorithms for reactor shutdown and emergency core cooling systems. According to these algorithms the system will automatically activate on coolant flow rate decrease in single Group Distribution Header (GDH) and sharp pressure decrease in drum-separators. These modifications have been implemented in both Ignalina NPP units. Safety justification of these modifications has been performed in Lithuanian Energy Institute (LEI). Further discussed situation, when conditions for local flow stagnation because of GDH rupture in the fuel channels connected to this affected GDH, is developed [5]. The flow stagnation occurs in the case of the certain size break in GDH. Due to discharge of a part of the coolant through this break, the zero gradient of pressure is developed in fuel channels (7—see Figure 1), that is, pressure in a bottom of the channel is close to pressure in drum separators (1). Coolant flow rate stagnation in fuel channels can be destroyed only in case of early activation of Emergency Core Cooling System (ECCS) (see Figure 2(a)). Thus if ECCS would operate according to design algorithm (reactor cooling water started to supply only after approximately 400 seconds from the beginning of accident); acceptance criteria for both fuel rod cladding and fuel channel walls temperatures in high-power channel would be exceeded (see Figure 2(b) and Figure 2(c)). After implementation of ECCS activation algorithms according to coolant flow rate decrease in separate group distribution headers, water from ECCS starts to supply already after 5–10 seconds from the beginning of flow stagnation. Thus stagnation is broken and fuel channels, connected to affected GDH are reliably cooled (see Figure 2). These modifications of activation algorithms for reactor shutdown and emergency core cooling systems are installed in power plant Unit 1 in 1999, and Unit 2 in 2000.Figure 1 Ignalina NPP reactor cooling circuit (one loop) and coolant flow diagram in case of partial GDH rupture: (1) drum-separators, (2) suction header, (3) main circulation pumps, (4) pressure header, (5) group distribution headers, (6) water supply from emergency core cooling system, and (7) affected fuel channelsAnalysis of partial GDH rupture considering modification of ECCS algorithm: (a) coolant flow rate through fuel channels, (b) fuel rod cladding temperature in high-power channel connected to ruptured GDH, and (c) behavior of fuel channel wall temperature. (a) (b) (c)In the Ignalina NPP Unit 1 safety analysis report, they have been investigated not only basic design accidents (discussed above) but also Anticipated Transients Without reactor Shutdown (ATWS). Investigations of such accidents are carried out at the licensing process for USA and Western Europe nuclear power plants; however, for the NPPs with RBMK-type reactors such analysis has been performed for the first time. Consequences of accident for RBMK-1500 reactor during which loss of preferred electrical power supply and failure of automatic reactor shutdown occur [6] are presented in Figure 3. Due to loss of preferred electrical power supply, all pumps are switched (see Figure 3(a)) off; therefore, the coolant circulation through fuel channels is terminated. 
Because of the lost circulation, fuel channels are not cooled sufficiently; therefore, temperature of the fuel channels walls starts to increase sharply. As it is seen from Figure 3(b), already after 40 seconds from the beginning of the accident, the peak fuel channel wall temperature in the high-power channels reaches acceptance criterion 650°C. It means that because of the further increase of temperature in fuel channels plastic deformations begin—the channels because of influence of internal pressure can be ballooned and ruptured. On the first seconds of accident the main electrical generators and turbines are switched off as well. Steam generated in the core is discharged through the steam discharge valves; however, their capacity is not sufficient. Therefore the pressure in reactor cooling circuit increases and approximately after 80 seconds from the beginning of accident reaches acceptance criterion 10.4 MPa (see Figure 3(c)). The further increase of pressure can lead to rupture of pipelines.Analysis of loss of preferred electrical power supply and simultaneous failure of design reactor shutdown system, when DAZ system was installed: (a) coolant flow rate through one main circulation pump, (b) the peak fuel channel wall temperature in the high-power channel, (c) pressure behavior in drum separators, (1) acceptance criterion, and (2) set points of DAZ system activation (reactor shutdown). (a) (b) (c)Thus the analysis of anticipated transients without shutdown has shown that in some cases the consequences can be dramatic enough. Therefore the priority recommendation has been formulated: to implement the second, based on other principles of operation, diverse shutdown system. However development, designing, and implementation of such system needed few years (in the Ignalina NPP Unit 2, this system was installed in 2004), so the compensating means, which were used in transition period while second diverse shutdown system was developed, has been implemented. This temporary system was called according Russian abbreviation “DAZ”, “Dopolnitelnaja avarijnaja začita”—“Additional emergency protection”. This system used the same control rods as well as design reactor shutdown system; however, signals for this system control were generated independently in respect of design reactor shutdown system. In Lithuanian Energy Institute for DAZ system, they have been selected not only set points of activation but also the safety justification was performed. Performed analysis has shown that after implementation of DAZ system the reactor is shut down in time and cooled reliably as well; acceptance criteria are not violated even in case of transients when design reactor shutdown system is not functioning. In Figure3 is shown the behavior of the main parameters of reactor cooling circuit in case of loss of preferred electrical power supply and simultaneous failure of design reactor shutdown system. In this case two signals for activation of DAZ system (reactor shutdown) are generated: on increase of pressure in drum separators and on decrease in the coolant flow rate through the main circulation pumps. In Unit 1 DAZ system was installed in 1999 and in Unit 2 in 2000.The Second Diverse Shutdown System (DSS) has been designed and installed in Ignalina NPP Unit 2 in 2004. In the first unit of Ignalina NPP this system has not been installed because reactor has been shut down in 2004. 
The Second Diverse Shutdown System (DSS) was designed and installed in Ignalina NPP Unit 2 in 2004; it was not installed in Unit 1 because that reactor was shut down in 2004. The Ignalina NPP reactor emergency protection (emergency shutdown) system therefore now consists of two independent shutdown systems. The first, BSM, controls the manual control rods and the shortened absorber rods, which are inserted into the core from the bottom; it performs the normal reactor shutdown function and can maintain the reactor in a subcritical state. The second system, AZ, controls 24 fast-acting reactor shutdown rods as well as an additional 49 rods that belong to both the BSM and AZ systems; it performs the emergency protection function. An Additional Hold-down System is also installed, which makes it possible to prepare and inject a mixture of water and the neutron absorber gadolinium into the control rod cooling circuit; the reactor thus remains subcritical even in the case of failure of the BSM system.

The DSS justification was one of the main projects raising the safety level of the plant. Specialists from LEI, together with experts from Western European countries, checked and assessed the design documentation and carried out independent calculations, thereby helping the Lithuanian regulatory body (VATESI) to make the appropriate decisions concerning implementation of this system at Ignalina NPP [7]. The review concluded that the second diverse reactor shutdown system protects the reactor in the case of failure of the design reactor shutdown system. Its implementation ensured that no initiating event can cause an accident with damage of the reactor core and reduced the core damage probability from 4·10⁻⁴ to 5·10⁻⁶.

In 2002 the safety analysis report for Ignalina NPP Unit 2 was developed. This report contains a description of the systems, the list of postulated accidents, an engineering assessment of the reactor cooling system, the accident analysis, assessments of the structural integrity of the fuel channels and of the acceptability of reactor safety, and other chapters. The accident analysis in this report was performed using the best estimate approach with uncertainty and sensitivity analysis. According to international practice, the best estimate approach is used mainly for the analysis of loss-of-coolant accidents in the reactor cooling system; in Lithuania it was successfully applied not only to loss-of-coolant accidents but also to reactor transients and to accident confinement system response analyses. The uncertainty and sensitivity analysis makes it possible to avoid unnecessary conservatism and to assess and address the existing safety margins. The safety analysis report and its review were the main documents required for the operating license of Ignalina NPP Unit 2; both demonstrated the increased safety level after implementation of the above-mentioned modifications and compliance with the requirements of the regulatory documents.

### 2.2. Ignalina NPP Probabilistic Safety Assessment

The first-level Ignalina NPP PSA project "BARSELINA" (1991–1996) was initiated in 1991 [8]. It was the first PSA for nuclear power plants with RBMK-type reactors. Initially the project was carried out by nuclear energy experts from Lithuanian, Russian, and Swedish institutions; since 1995 it was continued by experts from Lithuania (Ignalina NPP, LEI) and Sweden. Whereas the main objective of the deterministic analysis is to show that the plant reliably copes with accidents, the basic purpose of a PSA level 1 is to assess the probability of reactor core damage and to create a basis for severe accident risk assessment and management.
The Ignalina NPP PSA level 1 study proceeds from the assumption that the main source of radioactivity is the reactor core, and it was performed for the maximum permissible reactor operating power. Only internal initiating events were analyzed: transients, loss-of-coolant accidents, common cause failures, and internal hazards (fire, flooding, and missiles). The results showed that, after implementation of the recommendations from BARSELINA [8], the safety analysis report, and its independent review [3, 4], the probability of Ignalina NPP core damage is about 6·10⁻⁶ per year. According to international requirements, this parameter should not exceed 10⁻⁴ per year for operating nuclear power plants and 10⁻⁵ per year for new plants under construction; Ignalina NPP therefore fulfils this requirement. The analysis also showed that the risk profile of Ignalina NPP is dominated by transients rather than by loss-of-coolant accidents, with transients involving loss of long-term core cooling contributing most to the core damage risk. This is a favorable finding, since it means there is considerable time before the consequences of a severe accident develop; operators supervising reactor operation can undertake corrective measures, and Ignalina NPP therefore has great potential for implementing a severe accident management program. It should be noted that severe accident management procedures and means are already implemented at Ignalina NPP Unit 2 [9, 10].

According to international requirements, the probability of a large radioactivity release outside a nuclear power plant should not exceed 10⁻⁷ per year for new plants under construction and 10⁻⁶ per year for plants in operation. Scenarios and probabilities of a large radioactivity release outside the plant are the subject of PSA level 2. The Ignalina NPP PSA level 2 project was performed in 1999–2001 [11]; it was the first project of this type for nuclear power plants with RBMK reactors and was carried out by experts from Lithuania (LEI) and Sweden. The PSA level 2 used the level 1 results as its input data: based on the consequences of the accident scenarios investigated in PSA level 1 and on their similarity with respect to radioactive contamination, the reactor damage states were developed and the possibilities for accident management were assessed. The PSA level 2 results showed that the barrier factor against a large radioactivity release after core damage, that is, the factor by which the large-release frequency is reduced relative to the core-damage frequency, is about 1.5; in modern nuclear power plants with a full containment function this factor reaches 10 and more. Based on conservative assumptions and parameter estimates, the PSA level 2 calculated an overall large release frequency of 3.8·10⁻⁶ per year. Therefore, in terms of the probability of a large radioactivity release outside the plant, Ignalina NPP is not the worst in comparison with the plants of the USA and Western Europe constructed in the same years.
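As a quick plausibility check, the quoted frequencies can be compared mechanically against the quoted criteria; in the Python sketch below, all numbers are taken from the text above, while the comparison logic itself is only illustrative.

```python
# Frequencies and limits (per year) as quoted in the text; the comparison
# logic is illustrative and not part of the original PSA.
results = {
    "core damage frequency":   6.0e-6,
    "large release frequency": 3.8e-6,
}
limits_operating_plants = {
    "core damage frequency":   1e-4,
    "large release frequency": 1e-6,
}

for quantity, value in results.items():
    limit = limits_operating_plants[quantity]
    verdict = "meets" if value <= limit else "exceeds"
    print(f"{quantity}: {value:.1e}/yr {verdict} the {limit:.0e}/yr criterion")

# Ratio of the two frequencies, consistent with the barrier factor of ~1.5:
ratio = results["core damage frequency"] / results["large release frequency"]
print(f"implied barrier factor: {ratio:.1f}")
```

Note that the large-release frequency lies above the 10⁻⁶ per year criterion for operating plants, which is why the comparison above is drawn with Western plants of the same vintage rather than with the criterion alone; the ratio of the two frequencies, about 1.6, agrees with the barrier factor of 1.5 quoted above.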
In a complex analysis of the influence of external events on the safety of the Ignalina NPP units [12], LEI investigated the following external events: (i) aircraft crash, (ii) extreme wind and tornado, (iii) flooding and extreme showers, and (iv) external fire.

An aircraft or other flying object crash causing an accident at Ignalina NPP would have a local character because of the plant's large territory. According to Lithuanian civil aviation data, it was assumed that average traffic within the 50-kilometer zone around the NPP is up to 50,000 flights per year. Three zones with radii of up to 15, 50, and 85 meters around the reactor were defined on the Ignalina NPP territory (15 m corresponds to the reactor dimensions, 85 m to the reactor building size). The probability of an aircraft crash within the 85-meter zone around the reactor center, assuming an aircraft weight of 5700 kg and that half of the flights are performed by aircraft of Western manufacturers and the other half by Soviet ones, is 2.06·10⁻⁹ 1/year. Even under more conservative assumptions (the crash frequency of heavy aircraft set equal to that of light aircraft), the probability of a crash within the 85-meter zone around the reactor center is 1.64·10⁻⁷ 1/year. The obtained heavy aircraft crash probabilities are lower than those obtained in the probabilistic analyses for the majority of Western European and American NPPs.

A tornado may cause substantial damage and destruction. Of all the plant buildings, a tornado is most dangerous for the service water supply building, because it stands in open territory on the lake shore. Tornado and hurricane winds do not endanger the reactor buildings and technical systems; moreover, the probability of tornado and hurricane winds is 5.3·10⁻⁶ 1/year, so their influence on reactor safety can be considered insignificant.

A rise of the water level in Lake Druksiai poses the greatest danger to the pump station on the lake, since the service water system is the NPP structure nearest to the lake. A water level rise in Lake Druksiai to 144.1 m is practically impossible; therefore, there is no danger of flooding the pump station. The platforms of the other Ignalina NPP structures lie at 148-149 m above sea level; a rise of the lake level to such a mark is impossible, so flooding poses no direct danger to Ignalina NPP. Besides the lake, another external flooding source is extreme showers. The territory of Ignalina NPP has a drainage system, and all compartments located below the critical level mark are connected to it, so water drains away in the case of flooding; extreme showers therefore do not cause external flooding of the reactor building. For the probabilistic external flooding analysis, a mathematical model for assessing peak water level elevations of Lake Druksiai was developed and a probabilistic assessment of the water level elevation in the lake was performed. The probability of the maximum precipitation (at least 279.7 mm in 12 hours) is 1·10⁻⁶ 1/year; such an event would not influence reactor safety.

Probabilistic analysis of external fire: Ignalina NPP is situated in a region where 30% of the territory is occupied by forests (40% is grassland and 30% is occupied by lakes and swamps). The edge of the closest forest is less than one kilometre from the Ignalina NPP territory; on the plant territory itself there are only scattered trees and grass. A large forest fire with a strong wind toward the NPP could cover the plant territory with smoke; the smoke would not affect the operation of reactor equipment but would complicate the work of the personnel. The fire probability for the forest within the 10-kilometer zone around Ignalina NPP (more than 2000 ha of woods) is 2.7·10⁻³ 1/year. This probability is high, but no such fire can considerably affect the safety of the reactor.
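The external-event frequencies quoted above can be collected into a simple screening table. The sketch below does this in Python; the screening threshold of 10⁻⁷ per year is an assumption made here for illustration, since the study's actual screening criteria are not stated in the text.

```python
# Event frequencies (1/year) as quoted in the text above.
EXTERNAL_HAZARDS = {
    "aircraft crash (conservative, 85 m zone)": 1.64e-7,
    "tornado / hurricane winds":                5.3e-6,
    "extreme precipitation (>=279.7 mm/12 h)":  1.0e-6,
    "forest fire within 10 km":                 2.7e-3,
}

SCREENING_FREQUENCY = 1e-7  # assumed illustrative threshold, 1/year

# Print the hazards from most to least frequent with a screening verdict.
for hazard, freq in sorted(EXTERNAL_HAZARDS.items(), key=lambda kv: -kv[1]):
    flag = "retain for analysis" if freq > SCREENING_FREQUENCY else "screen out"
    print(f"{freq:8.1e}/yr  {hazard}: {flag}")
```

Frequency alone is, of course, not decisive: the forest fire is by far the most frequent event, yet the analysis above retains it only to conclude that it cannot considerably affect reactor safety.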
## 3. Ignalina NPP Safety Assessment in Case of Specific RBMK Problems

When the safety of RBMK-type nuclear power plants is discussed, three vulnerabilities are most often mentioned: (i) the containment issue, (ii) the problem of gas gap closing between fuel channels and graphite blocks, and (iii) the problem of multiple fuel channel ruptures. Below, the specifics of the RBMK-1500 with respect to these problems are discussed.

### 3.1. RBMK Reactor Containment Issue

In the case of an accident at a nuclear power plant (rupture of reactor cooling circuit pipelines), coolant carrying radioactive materials spreads into the reactor and the compartments enclosing the reactor cooling circuit. In many (but not all) reactors of the USA and Western Europe, the containment function is performed by the familiar dome-shaped protective enclosure, visible from afar. The absence of such a containment is usually treated as a deficiency of RBMK reactors; however, a containment of the kind used for vessel-type reactors is technically impossible to implement for RBMK reactors. At the Ignalina NPP the function of containing accidentally released radioactive material is accomplished by an extensive system of interconnected, steel-lined, reinforced concrete compartments called the Accident Localization System (ALS). The ALS uses the "pressure suppression" principle employed by G.E.-designed boiling water reactors. It encloses the large Ignalina NPP reactor core, the coolant pumps, and all of the piping providing coolant to the core. It is not necessary to enclose the pipes above the reactor core, which carry the exiting two-phase (steam-water) mixture to the drum separators, because if one of them is breached, coolant flow to the fuel channels (provided by pipes entering the core from below) will not be interrupted. Significant amounts of radioactive material can escape only if fuel rods overheat; breaches in the exiting pipes do not reduce coolant flow, so the fuel rods will not overheat.

The effectiveness of the ALS has been verified by extensive international analytical and experimental programs. They all show that even if events leading to a release of radioactive materials are postulated, these materials will be contained by the ALS; thus, the ALS performs the function of containment [13]. The minimal amounts (due primarily to non-condensable noble gases) that would eventually reach the environment would not exceed the amounts that would be released by Western-built reactors provided with the more familiar, prominently visible "dome containments".

### 3.2. Problem of Gas Gap Closing between Fuel Channels and Graphite Blocks

The fuel channels of an RBMK-type reactor are separated from the graphite bricks by gaps maintained by graphite rings. These rings are arranged next to one another in such a manner that one is in contact with the channel and the next with the graphite stack block (see Figure 4). As a result of exposure to neutron radiation and temperature, the inner diameters of the graphite columns decrease while the fuel channel tubes expand; thus, the gap between them decreases.

Figure 4 Fuel channel and graphite column interaction. All measurements are in millimeters.

The existence of the gap between the graphite bricks and the fuel channels is the main condition limiting the operation of RBMK-type reactors.
These gaps between graphite and fuel channel tubes allow: (i) unimpeded (axial and radial) thermal expansion and contraction of the fuel channels; (ii) predictable noncontacting heat transfer across the gaps from the graphite bricks (at temperatures above 500°C) to the fuel channels (at 300–320°C); and (iii) circulation of the helium-nitrogen mixture, which provides heat transfer from graphite to coolant and protects the graphite against oxidation; furthermore, the helium-nitrogen mixture is part of the fuel channel integrity monitoring system.

The gap between fuel channels and graphite blocks at Ignalina NPP Units 1 and 2 has been monitored from the beginning of operation, and the plant now holds the largest database and assessment experience among all RBMK-type reactors. After gap closure, some monitoring functions are lost and the characteristics of the reactor worsen: the probabilities of channel damage and graphite deformation increase, withdrawal of a channel from the reactor (if necessary) becomes complicated, and the temperatures of the graphite and the fuel channel change. In the Ignalina NPP Unit 1 reactor, the average gap between fuel channels and graphite had decreased three to four times from its initial value (3–2.7 mm) by the final shutdown of the reactor; in Unit 2 the decrease is insignificant. Estimating such a small gap is very sensitive to measurement errors, to the uncertainties of the models used, and to the strategy for selecting fuel channels for measurement.

After signing the agreement with the EBRD Nuclear Safety Account in 1994, Lithuania undertook not to change fuel channels and not to operate the Ignalina NPP reactor after the closing of even one gas gap between the graphite stack and a fuel channel. The Ignalina NPP in-depth safety report [3], prepared by international experts in 1996, predicted that at Ignalina NPP Unit 1 this would happen no later than the beginning of 1999. At the Lithuanian Energy Institute, comprehensive investigations of the gap closure problem between fuel channels and graphite blocks at Ignalina NPP have been carried out. Assessment of the gap between the graphite stack and the fuel channels is of great importance, because its results directly affect decisions on the duration of Ignalina NPP operation. In developing the gap assessment technique and measurement strategy, thermal-hydraulic, structural, and probabilistic calculations were performed. A detailed analysis [14] showed that in the Ignalina NPP in-depth safety analysis report [3] the gap at the Unit 1 reactor had been assessed using simplified deterministic calculations; the results obtained were therefore too pessimistic and conservative, predicting closure of the gap in a set of channels in 1998–2000. The LEI specialists developed an integrated technique for assessing and controlling the risk of gas gap reduction. This made it possible to develop a strategy for measuring the bore diameters of the graphite columns and for replacing fuel channels. This strategy ensured that a gap remained in the Unit 1 reactor up to its final shutdown and thereby allowed the operation of Ignalina NPP Unit 1 to be considerably prolonged (until the end of 2004). The evolution of the gas gap in the second unit differs strongly from that in the first, because the Unit 2 reactor uses zirconium fuel channel tubes with a different surface hardening, whose ballooning rate is two times slower than that of the tubes in the Unit 1 reactor; the trends in graphite stack bore diameters in the second unit are, however, very similar to those in the first.
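At the heart of any such assessment is the extrapolation of measured gap trends forward in time. The sketch below illustrates the idea with a simple least-squares fit in Python; the measurement values and dates are hypothetical, chosen only to be roughly consistent with the initial 3–2.7 mm gap and its three- to four-fold reduction reported above.

```python
import numpy as np

# Hypothetical average-gap measurements (mm) vs. calendar year, loosely
# consistent with an initial ~2.85 mm gap shrinking 3-4x over ~20 years.
years = np.array([1984, 1990, 1996, 2000, 2003], dtype=float)
gaps_mm = np.array([2.85, 2.30, 1.70, 1.25, 0.90])

# Least-squares linear trend: gap ~ a * year + b
a, b = np.polyfit(years, gaps_mm, 1)
closure_year = -b / a  # year at which the fitted gap reaches zero
print(f"fitted closure rate: {abs(a):.3f} mm/year; "
      f"extrapolated closure: {closure_year:.0f}")
```

A purely deterministic extrapolation of this kind is what made the early predictions [3] overly pessimistic; the LEI technique instead treated measurement errors and model uncertainties probabilistically and used the results to steer which channels to measure and which to replace.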
### 3.3. Problem of Multiple Fuel Channel Ruptures

In the case of a fuel channel rupture, a two-phase flow is discharged into the gaps of the graphite stack. Some graphite blocks can be damaged and cracked by coolant jet impingement, graphite columns can be displaced, and coolant passes into the reactor cavity. Because the graphite stack is hotter than the coolant, the pressure in the leak-tight reactor cavity increases. The leak-tight Reactor Cavity (RC) performs the function of containment in the region immediately surrounding the nuclear fuel and graphite. The RC is formed by a cylindrical metal structure together with bottom and top metal plates, and it confines the steam released in the case of fuel channel ruptures. The steam-water-gas mixture from the reactor cavity is directed via the Reactor Cavity Venting System (RCVS) pipelines to two steam distribution devices of the 5th (upper) condensing tray of the Accident Localization System (Figure 5). Two pipelines of d = 400 mm coming from a branch pipe of d = 600 mm located above the top plate of the RC are interconnected into a d = 600 mm pipe that connects to one steam distribution device [1]; in the same way, the other two d = 400 mm pipelines from the top plate of the RC are connected to the second steam distribution device. Along their way these pipelines have branches, which are interconnected in a leak-tight corridor and terminate with three Membrane Safety Devices (MSDs). The blowdown pipes from the bottom of the RC pass directly to the leak-tight corridor and also terminate with three MSDs.

Figure 5 Simplified schematic of the reactor cavity venting system: (1) reactor, (2) the fifth ALS suppression pool, (3) suppression pools 1–4, (4) steam distribution devices, and (5) membrane safety devices (350 mm diameter).

In the case of multiple fuel channel tube ruptures, if the RCVS cannot relieve the steam-water-gas mixture from the RC, the pressure increase will lift the top plate of the RC; the structural integrity of the RC and of the remaining fuel channels would then be lost. Such an event would have very severe consequences, similar to the Chernobyl accident. It is therefore important to maintain RC integrity, which is assured as long as the pressure in the RC stays below the permissible pressure (314 kPa, abs), that is, the pressure corresponding to the weight of the upper plate of the biological reactor shielding [15]. Rupture of one fuel channel is a design basis accident for RBMK-1500 reactors; the probability of such a rupture is about 10⁻² 1/year. According to the design, the reactor cavity venting system assured the integrity of the RC for ruptures of up to 3 fuel channels; the system was modernized in 1996 as shown in Figure 5. In 1996, specialists of the Moscow Research and Design Institute for Power Engineering (RDIPE), the designer and developer of RBMK reactors, analyzed the pressure behavior in the reactor cavity in the case of multiple fuel channel ruptures [15].
According to the RDIPE calculations, the acceptance criterion for the maximum permissible load on the upper reactor cavity plate (310 kPa) would be exceeded in the case of a rupture of 9 fuel channels. In these calculations the coolant discharge through the rupture was conservatively assumed equal to 32 kg/s per fuel channel and constant in time. Under such conservative assumptions the amount of coolant discharged into the reactor cavity is largest, and the number of channels for which the permissible reactor cavity pressure is not exceeded is minimal.

Such an analysis is conservative and strongly affected by uncertainties. A best estimate analysis of the Ignalina NPP response to multiple fuel channel tube ruptures, together with a sensitivity and uncertainty analysis, was performed at the Lithuanian Energy Institute [16]. The analysis took into account that the results can be influenced by uncertainties in the plant initial conditions assumed in the modeling, as well as in the assumptions and correlations of the CONTAIN code. Summarizing the results of the uncertainty and sensitivity analysis, it was concluded that the capacity of the RCVS covers from 11 up to 19 ruptured fuel channels, that is, 15 ± 4 channels (Figure 6).

Figure 6 Pressure in the reactor cavity as a function of the number of ruptured fuel channels.

It should be noted that the analysis was performed for the case of a reactor cooling system filled with coolant (nominal water levels in the drum separators); thus, after the fuel channel ruptures, a steam-water mixture is discharged into the gaps of the graphite stack. If the "dropout" model is used in the CONTAIN 1.1 code, all of the liquid fraction of the water released from the ruptured fuel channels is assumed to leave the RC through the water drain. If the "dropout" model is not used, part of the unevaporated water is assumed to remain in a dispersed condition and can be carried through the RC and the pipelines into the ALS; this assumption leads to a higher calculated pressure in the RC (see Figure 6).

It should also be noted that during the operation of RBMK reactors there have been only three cases of rupture of a single fuel channel: (i) at Leningrad NPP Unit 1 in 1975, (ii) at Chernobyl NPP Unit 1 in 1982, and (iii) at Leningrad NPP Unit 3 in 1992. In none of these cases were adjacent channels damaged; thus, the so-called "cascade rupture of fuel channels", in which the rupture of one channel causes ruptures of other channels, has never occurred in practice. Experiments on the large-scale TKR test facility at the Electrogorsk Research and Engineering Center for NPP Safety [17] have also shown that a cascade rupture of fuel channels is impossible.
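The flavor of such an uncertainty analysis can be conveyed by a toy Monte Carlo study: sample the uncertain inputs, evaluate a pressure model for an increasing number of ruptured channels, and record the largest number that keeps the cavity below the 314 kPa limit. In the Python sketch below, the surrogate pressure model and the parameter ranges are invented stand-ins for the CONTAIN calculations and are not calibrated to reproduce the 15 ± 4 result.

```python
import random

P_LIMIT_KPA = 314.0    # permissible absolute RC pressure (from the text)
P_INITIAL_KPA = 100.0  # assumed initial cavity pressure -- illustrative
N_SAMPLES = 1000

def peak_cavity_pressure(n_channels: int, discharge_kg_s: float,
                         vent_factor: float) -> float:
    """Toy surrogate for a CONTAIN-type run: the pressure rise grows with
    the total discharged coolant and is moderated by venting efficiency."""
    return P_INITIAL_KPA + 20.0 * n_channels * (discharge_kg_s / 32.0) / vent_factor

random.seed(0)
capacities = []
for _ in range(N_SAMPLES):
    discharge = random.uniform(20.0, 32.0)  # kg/s per channel; 32 = conservative bound
    venting = random.uniform(0.8, 1.2)      # relative RCVS effectiveness -- invented range
    n = 1
    while peak_cavity_pressure(n + 1, discharge, venting) < P_LIMIT_KPA:
        n += 1
    capacities.append(n)

capacities.sort()
lo, hi = capacities[25], capacities[975]  # approximate 95% interval
print(f"tolerable ruptured channels: {lo}..{hi}, median {capacities[500]}")
```

The real analysis propagated uncertainties in the plant initial conditions and in the CONTAIN code's assumptions and correlations in essentially this spirit, but through full containment calculations rather than a closed-form surrogate.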
## 4. Conclusions

The safety requirements for nuclear power plants depend on the accumulated experience, on the constantly rising technical level of society, and on the position of the state. Concern about the safety level of Ignalina NPP arose after the Chernobyl accident in 1986, when the first reactor modernizations were implemented. RDIPE, the designer and developer of RBMK reactors, prepared the first safety justification for the operating power plant in 1989. When Lithuania assumed control of the Ignalina NPP in 1991, a large number of studies of its safety level were conducted, notably the Safety Analysis Reports for Ignalina NPP Units 1 and 2 and the safety justifications of the Reactor Cooling System and the Accident Localization System. The Ignalina nuclear power plant is distinguished among all RBMK-type plants by the many international studies performed to investigate its design parameters and its level of risk. Ignalina NPP, its design, and its operational data have been completely open and accessible to Western experts. Effective initial help in questions of nuclear safety was provided first by Sweden and later by other countries (Germany, United Kingdom, USA, etc.) capable of performing expert reviews of the safety analyses. A public list of EC Phare projects supporting the modernization of Ignalina NPP is available at http://ie.jrc.ec.europa.eu/dissem/. The detailed accident analysis has shown that design basis accidents do not lead to plant conditions in which the acceptance criteria are violated.
Moreover, the plant safety systems ensure a safe plant condition even under the assumption that the operator takes no action to mitigate the emergency for 30 minutes from the beginning of the accident. The Probabilistic Safety Analyses of levels 1 and 2 made it possible to compare the safety level of Ignalina NPP with that achieved at other nuclear power plants and to plan improvements of the plant's safety systems and operational procedures. The investigations have shown that, in terms of the probability of a large radioactivity release outside the plant, Ignalina NPP is not the worst in comparison with the plants of the USA and Western Europe constructed in the same years. On the basis of the performed investigations, recommendations on safety improvement were developed by local and foreign experts. These recommendations were incorporated into the Ignalina NPP Safety Improvement Programs (SIP-1, SIP-2, and SIP-3), whose implementation was strictly checked by the Lithuanian regulatory body VATESI. These measures have allowed the safety level of the Ignalina NPP to be improved continually, and this work has not stopped even with the final shutdown of the plant approaching. As the outcome of the last significant project, the Severe Accident Management Guide has been developed and is now being implemented at Ignalina NPP. It will supplement the Symptom-Oriented Emergency Operating Procedures and will provide for the safe mitigation of accident consequences over the full range of accidents.

---

*Source: 102078-2010-02-01.xml*
--- ## Abstract Ignalina NPP is the only nuclear power plant in Lithuania consisting of two units, commissioned in 1983 and 1987. Unit 1 of Ignalina NPP was shut down for decommissioning at the end of 2004 and Unit 2 is to be operated until the end of 2009. Both units are equipped with channel-type graphite-moderated boiling water reactors RBMK-1500. The paper summarizing the results of deterministic and probabilistic analyses is developed within 1991–2007 by specialists from Lithuanian Energy Institute. The main operational safety aspects, including analyses performed according the Ignalina Safety Improvement Programs, development and installation of the Second Shutdown System and Guidelines on Severe Accidents Management are discussed. Also the phenomena related to the closure of the gap between fuel channel and graphite bricks, multiple fuel channel tube rupture, and containment issues as well as implication of the external events to the Ignalina NPP safety are discussed separately. --- ## Body ## 1. Introduction: Historical Context Preparatory works of construction of the Ignalina NPP have been started in 1974, and the first unit of Ignalina NPP was commissioned in December 31, 1983. At the same time the second unit was under construction and construction of the third unit began. The second unit was planned to start to operate in 1986, but because of accident in Chernobyl, works on preparation to operate this unit have been rescheduled. Second unit was commissioned in August 31, 1987. At that time 60% of the third unit have already been constructed, but later construction was suspended and terminated soon. Nowadays because of political reasons, the first unit of Ignalina NPP is shut down; the second unit is planned to shutdown at the end of 2009.Ignalina NPP with RBMK-1500 reactors belongs to the second generation of RBMK-type reactors (it means that this is most advanced version of RBMK reactor design series in comparison with other RBMK-type nuclear power plants). In comparison with infamous Chernobyl NPP, Ignalina NPP reactors are by a third more powerfully and already from the beginning of operation substantially advanced emergency protection systems (e.g., emergency core cooling and accident localization systems) [1].After 1990 Lithuania declared its independence; Ignalina NPP with two largest in the world RBMK-1500 reactors came under authority of the Lithuania Republic; however, nobody in the world did not know about the real safety level of these reactors. The first Safety Justification of Ignalina NPP has been prepared by Russian experts of Research and Design Institute for Power Engineering (RDIPE), organization—designer and developer of RBMK reactors, after Chernobyl NPP accident. In this document the analysis of all design basis accidents (except partial breaks of pipes) is presented in sufficient details. The analysis is performed using at that time existing tool—quasistationary derivative approximation method, being based on conservative assumptions and existing experimental data. From the present-day viewpoint such safety justification [2] has lacks.(i) It was limited only to the systems description and the analysis of design basis accidents. (ii) Computer codes, developed in Russia, have been used for simulations, but these codes have not been extensively verified and validated. 
(iii) The independent expertise of safety analysis has not been performed.Therefore, at the beginning of the 90s of the last century there were reasonably doubts how such safety justification of Ignalina NPP, presented in the first safety justification, corresponded to the real situation. In 1992 at G7 Munich Summit the decision of closing Soviet-design nuclear power plants, at first of all the nuclear power plants with RBMK and VVER-440/230 reactor types, was accepted. In 1994 Lithuania signed the agreement with the European Bank for Reconstruction and Development (EBRD) Account of Nuclear Safety by which it had undertaken to perform in-depth safety analysis of the Ignalina NPP and not to change fuel channels in the reactor.Right from the start, when Lithuania assumed control of the Ignalina NPP, the plant, its design, and operational data have been completely open and accessible to Western experts. A large number of international and local studies have been conducted to verify the operational characteristics of the Ignalina NPP and analyze its level of risk. Ignalina NPP is unique nuclear power plant of RBMK type about which information was collected, checked, systematized, and made accessible. Collected and verified database has allowed(i) to assess present safety level of NPP, (ii) to compare its level with other RBMK-type NPPs safety level, (iii) to plan improvements of plant equipment and operating procedures increasing safety of the NPP.Below, the results of the State of the Art deterministic and probabilistic safety analyses for Ignalina NPP, developed within 1991–2007 by specialists from Lithuanian Energy Institute, are discussed. ## 2. Deterministic and Probabilistic Ignalina NPP Safety Analyses In this Section the main Ignalina NPP safety analyses, performed since 1991 till these days, are discussed:(i) Ignalina NPP Units 1 and 2 safety analysis reports and their review, (ii) modifications of activation algorithms for reactor shutdown and emergency core cooling systems, (iii) second diverse reactor shutdown system development, safety justification, and implementation, (iv) studies of Ignalina NPP 1 and 2 levels of Probabilistic Safety Assessment (PSA), (v) external events at Ignalina NPP Analysis. ### 2.1. Deterministic Ignalina NPP Safety Justification In 1995-1996 was prepared In-depth Ignalina NPP Unit 1 Safety Analysis Report, using USA and Western Europe methodology and computer codes for providing safety analysis [3]. It was comprehensive international study sponsored by EBRD. The purpose of this international study was to provide a comprehensive overview of plant status with special emphasis placed on its safety aspects. Specialists from the Ignalina NPP, Russia (RDIPE), Canada, and Sweden contributed. During implementation of the project, they have been described more than 50 systems of normal operation, safety important systems, and auxiliary systems. Also analysis of these systems has been performed, considering compliance of these systems to the Lithuanian standards and rules as well to practice of safety used in the West. Analyzing systems, the attention has been concentrated on their consistency to criterion of single failure, as well as to auxiliary safety aspects: maintenance, inspections, and impact of external factors (fire, flooding by water). This analysis of systems has defined the main lacks of systems and has developed conditions for elimination of the deficiencies. 
The performed review on operation and safety has allowed to identify all possible malfunctions, which can potentially cause an emergency situation.In the safety analysis report of the Ignalina NPP Unit 1, the comprehensive accident analysis and equipment assessment have been provided; discussed questions concerning equipment ageing, investigated topics related to operators action, and power plant control provided conclusions about safety of Ignalina NPP (NPP safety level was assessed realistically); main lacks have been defined and measures for elimination of the deficiencies have been foreseen. It is the first western-type report on safety for nuclear power plants with RBMK reactors.One of the basic conclusions in this safety analysis report was such that in this case there was no problem, which would demand immediate shutdown of the Ignalina NPP. Detailed accident analysis (accidents because of different pipelines ruptures, reactivity initiating accidents, equipment failures, transients with additional failure of reactor shutdown system, and fuel channel ruptures in the reactor cavity) has shown that accident occurring because of equipment failures does not cause such condition of the plant station which would cause violation of acceptance criteria; safety system ensures a safe condition of the plant even doing the assumption that operator does not take any action for 10 minutes from the beginning of accident to mitigate an emergency situation. Because of reactivity initiating accidents (exactly such type of initiating event became the reason of accident in the Chernobyl NPP), acceptance criteria of power plant also are not violated, even postulating single failures additionally. It has been shown that Ignalina NPP is reliably protected against loss of the coolant accidents if ruptures of pipelines do not cause local stagnation of flow. In case of one steam line rupture, the acceptance criteria will not be exceeded. But there are two steam lines located in the shaft at the Ignalina NPP; thus, rupture of one steam line can cause rupture of other steam lines, and in this case radiological dozes can be exceeded. Being based on these results of accident analysis, the recommendations for modifications of activation algorithms for reactor shutdown and emergency core cooling systems have been prepared.It is necessary to note that in parallel with the Ignalina NPP Unit 1 safety analysis report in 1995–1997 it was performed independent Review of the Ignalina Nuclear Power Plant Safety Analysis Report [4]. This study was performed by experts from USA, Great Britain, France, Germany, Italy, Russia, and Lithuania. Independent Review has confirmed the main conclusions of safety analysis report.In recommendations of Ignalina NPP Unit 1 safety analysis report, it has been shown that Ignalina NPP will be reliably protected from any ruptures of pipelines and steam lines after improving of activation algorithms for reactor shutdown and emergency core cooling systems. According to these algorithms the system will automatically activate on coolant flow rate decrease in single Group Distribution Header (GDH) and sharp pressure decrease in drum-separators. These modifications have been implemented in both Ignalina NPP units. Safety justification of these modifications has been performed in Lithuanian Energy Institute (LEI). Further discussed situation, when conditions for local flow stagnation because of GDH rupture in the fuel channels connected to this affected GDH, is developed [5]. 
The flow stagnation occurs in the case of the certain size break in GDH. Due to discharge of a part of the coolant through this break, the zero gradient of pressure is developed in fuel channels (7—see Figure 1), that is, pressure in a bottom of the channel is close to pressure in drum separators (1). Coolant flow rate stagnation in fuel channels can be destroyed only in case of early activation of Emergency Core Cooling System (ECCS) (see Figure 2(a)). Thus if ECCS would operate according to design algorithm (reactor cooling water started to supply only after approximately 400 seconds from the beginning of accident); acceptance criteria for both fuel rod cladding and fuel channel walls temperatures in high-power channel would be exceeded (see Figure 2(b) and Figure 2(c)). After implementation of ECCS activation algorithms according to coolant flow rate decrease in separate group distribution headers, water from ECCS starts to supply already after 5–10 seconds from the beginning of flow stagnation. Thus stagnation is broken and fuel channels, connected to affected GDH are reliably cooled (see Figure 2). These modifications of activation algorithms for reactor shutdown and emergency core cooling systems are installed in power plant Unit 1 in 1999, and Unit 2 in 2000.Figure 1 Ignalina NPP reactor cooling circuit (one loop) and coolant flow diagram in case of partial GDH rupture: (1) drum-separators, (2) suction header, (3) main circulation pumps, (4) pressure header, (5) group distribution headers, (6) water supply from emergency core cooling system, and (7) affected fuel channelsAnalysis of partial GDH rupture considering modification of ECCS algorithm: (a) coolant flow rate through fuel channels, (b) fuel rod cladding temperature in high-power channel connected to ruptured GDH, and (c) behavior of fuel channel wall temperature. (a) (b) (c)In the Ignalina NPP Unit 1 safety analysis report, they have been investigated not only basic design accidents (discussed above) but also Anticipated Transients Without reactor Shutdown (ATWS). Investigations of such accidents are carried out at the licensing process for USA and Western Europe nuclear power plants; however, for the NPPs with RBMK-type reactors such analysis has been performed for the first time. Consequences of accident for RBMK-1500 reactor during which loss of preferred electrical power supply and failure of automatic reactor shutdown occur [6] are presented in Figure 3. Due to loss of preferred electrical power supply, all pumps are switched (see Figure 3(a)) off; therefore, the coolant circulation through fuel channels is terminated. Because of the lost circulation, fuel channels are not cooled sufficiently; therefore, temperature of the fuel channels walls starts to increase sharply. As it is seen from Figure 3(b), already after 40 seconds from the beginning of the accident, the peak fuel channel wall temperature in the high-power channels reaches acceptance criterion 650°C. It means that because of the further increase of temperature in fuel channels plastic deformations begin—the channels because of influence of internal pressure can be ballooned and ruptured. On the first seconds of accident the main electrical generators and turbines are switched off as well. Steam generated in the core is discharged through the steam discharge valves; however, their capacity is not sufficient. 
Therefore the pressure in reactor cooling circuit increases and approximately after 80 seconds from the beginning of accident reaches acceptance criterion 10.4 MPa (see Figure 3(c)). The further increase of pressure can lead to rupture of pipelines.Analysis of loss of preferred electrical power supply and simultaneous failure of design reactor shutdown system, when DAZ system was installed: (a) coolant flow rate through one main circulation pump, (b) the peak fuel channel wall temperature in the high-power channel, (c) pressure behavior in drum separators, (1) acceptance criterion, and (2) set points of DAZ system activation (reactor shutdown). (a) (b) (c)Thus the analysis of anticipated transients without shutdown has shown that in some cases the consequences can be dramatic enough. Therefore the priority recommendation has been formulated: to implement the second, based on other principles of operation, diverse shutdown system. However development, designing, and implementation of such system needed few years (in the Ignalina NPP Unit 2, this system was installed in 2004), so the compensating means, which were used in transition period while second diverse shutdown system was developed, has been implemented. This temporary system was called according Russian abbreviation “DAZ”, “Dopolnitelnaja avarijnaja začita”—“Additional emergency protection”. This system used the same control rods as well as design reactor shutdown system; however, signals for this system control were generated independently in respect of design reactor shutdown system. In Lithuanian Energy Institute for DAZ system, they have been selected not only set points of activation but also the safety justification was performed. Performed analysis has shown that after implementation of DAZ system the reactor is shut down in time and cooled reliably as well; acceptance criteria are not violated even in case of transients when design reactor shutdown system is not functioning. In Figure3 is shown the behavior of the main parameters of reactor cooling circuit in case of loss of preferred electrical power supply and simultaneous failure of design reactor shutdown system. In this case two signals for activation of DAZ system (reactor shutdown) are generated: on increase of pressure in drum separators and on decrease in the coolant flow rate through the main circulation pumps. In Unit 1 DAZ system was installed in 1999 and in Unit 2 in 2000.The Second Diverse Shutdown System (DSS) has been designed and installed in Ignalina NPP Unit 2 in 2004. In the first unit of Ignalina NPP this system has not been installed because reactor has been shut down in 2004. Therefore, nowadays Ignalina NPP reactor emergency protection (emergency shutdown) system consists of two independent shutdown systems: first, BSM controls manual control rods and shortened absorber rods, which are inserted into the core from bottom. This system performs the normal reactor shutdown function and can maintain a reactor in subcritical state. Second system AZ controls 24 fast acting reactor shutdown rods as well as additionally 49 rods, which belong to both—BSM and AZ systems. AZ system performs emergency protection function. Also the Additional Hold-down System of the reactor is installed. This system allows to prepare and inject water and neutron absorber gadolinium mixture into control rods cooling circuit. 
Thus, the reactor remains in subcritical state even in the case of failure of BSM system.DSS justification was one of the main projects increasing a level of NPP safety. Specialists from LEI together with experts from the countries of Western Europe checked and have assessed the design documentation, carrying out independent calculations, thus helping Lithuanian regulatory body (VATESI) to make the appropriate decisions concerning implementation of mentioned system at Ignalina NPP [7]. In conclusions of review it has been shown that implementation of second diverse reactor shutdown system protects a reactor in case of failure of design reactor shutdown system. Implementation of this system has ensured that any initiating event cannot cause accident with damage of the reactor core as well as decreases core damage probability from 4 · 10 - 4 up to 5 · 10 - 6.In 2002 the safety analysis report for Ignalina NPP Unit 2 has been developed. This report contains the description of systems, list of postulated accidents, engineering assessment of reactor cooling system, accident analysis, assessment of fuel channels structural integrity, assessment of reactor safety acceptability, and other chapters. The accident analysis in this report was performed using best estimate approach with uncertainty and sensitivity analysis. According to the international practice, the best estimate approach is used mainly for analysis of loss of coolant accidents in reactor cooling system. In Lithuania the best estimate approach was successfully applied not only for loss of coolant accidents but also for reactor transients and accident confinement system response analyses. The uncertainty and sensitivity analysis allows to avoid the unnecessary conservatisms as well as to assess and address the existing safety margins. The safety analysis report and its review were the main documents required for license for Ignalina NPP Unit 2. Both documents demonstrated the increased safety level after implementation of above mentioned modifications and satisfaction to requirements of regulating documents. ### 2.2. Ignalina NPP Probabilistic Safety Assessment The Ignalina NPP first-level PSA “BARSELINA” project (1991–1996) was initiated in 1991 [8]. It was the first PSA for nuclear power plants with RBMK-type reactors. From the beginning this project was carried out by nuclear energy experts from Lithuanian, Russian, and Swedish institutions, and since 1995 it was carried out by efforts of experts from Lithuania (Ignalina NPP, LEI) and Sweden. Main objective of deterministic analysis was to show that nuclear power plant reliably copes with accidents, and basic purpose of PSA 1 level is to assess probability of reactor core damage to create a basis for severe accident risk assessment and management. Performed Ignalina NPP PSA 1 level study is predicted by assumption that the main radioactive source is reactor core. This PSA is performed for maximum permissible reactor operating power. Only internal initiating events have been analyzed—transients, loss of the coolant accidents, common cause failure, and internal hazards (fire, flooding, and missiles). Results of the analysis have shown that after implementation of recommendations from BARSELINA [8], safety analysis report, and its independent review [3, 4], probability of Ignalina NPP core damage is about 6 · 10 - 6. 
According to international requirements, this parameter should not exceed 10⁻⁴ per year for operating nuclear power plants and 10⁻⁵ per year for new NPPs under construction; Ignalina NPP therefore fulfils this requirement. The analysis showed that the risk profile of Ignalina NPP is dominated by transients rather than by loss of coolant accidents, and that the core damage risk is increased most of all by transients with loss of long-term core cooling. This is a positive finding, because it means that there is enough time before the consequences of a severe accident develop: the operators supervising the reactor can undertake corrective measures, so Ignalina NPP has considerable potential for implementing a severe accident management programme. It should be noted that severe accident management procedures and means have already been implemented at Ignalina NPP Unit 2 [9, 10].

According to international requirements, the probability of a large radioactivity release outside the nuclear power plant should not exceed 10⁻⁷ per year for new NPPs under construction and 10⁻⁶ per year for operating NPPs. Scenarios and probabilities of a large radioactivity release outside the plant are the subject of a level 2 PSA. The Ignalina NPP level 2 PSA project was performed in 1999–2001 [11]; it was the first project of this type for nuclear power plants with RBMK reactors and was carried out by experts from Lithuania (LEI) and Sweden. The level 2 PSA used the results of the level 1 study as initial data. Based on the consequences of the accident scenarios investigated in the level 1 PSA and on their similarity criteria regarding radioactive contamination, reactor damage states were developed and the possibilities of accident management were assessed. The level 2 PSA results showed that the barrier factor against a large radioactivity release after core damage is about 1.5; this is smaller than for modern nuclear power plants with a containment, where the factor reaches 10 and more. Based on conservative assumptions and parameter estimates, the level 2 PSA calculated an overall large release frequency of 3.8 · 10⁻⁶ per year. With respect to the probability of a large radioactivity release outside the plant, Ignalina NPP is therefore not the worst in comparison with the plants of the USA and Western Europe constructed in the same years.

In a complex analysis of external influences on the safety of the Ignalina NPP units [12], LEI investigated the following external events: (i) aircraft crash, (ii) extreme wind and tornado, (iii) flooding and extreme showers, and (iv) external fire.

An accident at Ignalina NPP caused by the crash of an aircraft or another flying object would have a local character because of the large territory of the plant. Based on Lithuanian civil aviation data, an average traffic of up to 50,000 flights per year within the 50-kilometer zone around the plant was assumed. Three zones with radii of 15, 50, and 85 meters around the reactor were defined (15 m corresponding to the reactor dimensions, 85 m to the size of the reactor building). Assuming an aircraft weight of 5,700 kg and that half of these flights are performed by aircraft of Western manufacturers and the other half by Soviet-built aircraft, the probability of an aircraft crash within the 85-meter zone around the reactor center is 2.06 · 10⁻⁹ per year.
Even under more conservative assumptions (the crash frequency of heavy aircraft set equal to that of light aircraft), the probability of an aircraft crash within the 85-meter zone around the reactor center is 1.64 · 10⁻⁷ per year. The heavy aircraft crash probabilities obtained are lower than those obtained in the probabilistic analyses for the majority of West European and American NPPs.

A tornado may cause severe damage and destruction. Of all the buildings of the plant, a tornado is most dangerous for the service water supply building, because it is located in open territory on the lake shore. Tornadoes and hurricane winds do not endanger the reactor buildings and technical systems, and their probability is only 5.3 · 10⁻⁶ per year; it can therefore be concluded that their influence on reactor safety is insignificant.

A rise of the water level in Lake Druksiai presents the greatest danger to the pump station on the lake, since the service water system is the NPP structure nearest to the lake. A rise of the lake level to 144.1 m is practically impossible, so there is no danger of flooding the pump station. The platform of the other Ignalina NPP structures is located at a level of 148–149 m above sea level; a rise of the lake to such a mark is impossible, so flooding does not present a direct danger to Ignalina NPP.

Besides the lake, another external flooding source is extreme showers. The territory of Ignalina NPP has a drainage system, and all compartments located below the critical level mark are connected to it, so water is drained away in case of flooding. Thus, extreme showers cannot cause external flooding of the reactor building. For the probabilistic external flooding analysis, a mathematical model for assessing peak water level elevations of Lake Druksiai was developed and a probabilistic assessment of the lake water level was performed. The probability of the maximum precipitation (not less than 279.7 mm in 12 hours) is 1 · 10⁻⁶ per year; such an event would not influence reactor safety.

Probabilistic analysis of external fire: Ignalina NPP is situated in a region where 30% of the territory is occupied by forests (40% is grassland and 30% is occupied by lakes and swamps). The edge of the closest forest is less than one kilometer from the plant territory, while on the site itself there are only separate trees and grass. A large forest fire with a strong wind towards the plant could cover the territory of Ignalina NPP with smoke. The smoke would not affect the operation of the reactor equipment but would complicate the work of the personnel. The probability of a fire in the forest within the 10-kilometer zone around Ignalina NPP (more than 2,000 ha of woods) is 2.7 · 10⁻³ per year. Although this probability is high, no such fire could considerably affect the safety of the reactor.
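As a compact summary of the probabilistic results quoted in this section, the short Python sketch below compares the computed frequencies against the international targets for operating plants cited above. Only the numbers restated from the text are used; the pass/fail comparison itself is an illustration, not part of the cited PSA studies.

```python
# Frequencies quoted in Section 2.2 compared against the international
# targets cited in the text (operating plants: core damage < 1e-4/yr,
# large release < 1e-6/yr). The comparison logic is illustrative only.

TARGETS_OPERATING = {
    "core damage frequency (PSA level 1)": 1e-4,    # per year
    "large release frequency (PSA level 2)": 1e-6,  # per year
}

COMPUTED = {
    "core damage frequency (PSA level 1)": 6e-6,      # after modifications
    "large release frequency (PSA level 2)": 3.8e-6,  # conservative estimate
}

for name, limit in TARGETS_OPERATING.items():
    value = COMPUTED[name]
    verdict = "meets" if value <= limit else "exceeds"
    print(f"{name}: {value:.1e}/yr {verdict} the {limit:.0e}/yr target")
```

Run as-is, the sketch shows that the core damage frequency meets the operating-plant target with a wide margin, while the conservatively estimated large release frequency exceeds the modern 10⁻⁶ per year target; this matches the text's more careful statement that, by this measure, Ignalina NPP is comparable to Western plants of the same vintage rather than to new designs.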
## 3. Ignalina NPP Safety Assessment in Case of Specific RBMK Problems

When the safety of RBMK-type nuclear power plants is discussed, three vulnerabilities are most often mentioned: (i) the containment issue, (ii) the problem of gas gap closing between the fuel channels and the graphite blocks, and (iii) the problem of multiple fuel channel ruptures. Below, the specifics of the RBMK-1500 with respect to these problems are discussed.

### 3.1. RBMK Reactor Containment Issue

In case of an accident at a nuclear power plant (rupture of reactor cooling circuit pipelines), the coolant, carrying radioactive material, spreads into the compartments enclosing the reactor and the reactor cooling circuit. In many (but not all) reactors of the USA and Western Europe, the containment function is performed by the photogenic, dome-shaped protective enclosure visible from afar. The absence of such a containment is usually treated as a deficiency of RBMK reactors; however, a containment of the kind used for vessel-type reactors is technically impossible to implement for an RBMK. At Ignalina NPP, the function of containing accidentally released radioactive material is accomplished by an extensive system of interconnected, steel-lined, reinforced concrete compartments called the Accident Localization System (ALS). The ALS uses the "pressure suppression" principle employed by G.E.-designed boiling-water reactors. It encloses the large Ignalina NPP reactor core, the coolant pumps, and all of the piping providing coolant to the core. It is not necessary to enclose the pipes above the reactor core, which carry the exiting two-phase (steam-water) mixture to the drum separators, because if one of them is breached, coolant flow to the fuel channels (provided by pipes entering the core from below) is not interrupted. Significant amounts of radioactive material can escape only if fuel rods overheat; since breaches in the exiting pipes do not reduce coolant flow, the fuel rods will not overheat.

The effectiveness of the ALS has been verified by extensive international analytical and experimental programmes. They all show that even if events leading to a release of radioactive material are postulated, this material is contained by the ALS; thus, the ALS performs the function of a containment [13]. The minimal amounts (due primarily to non-condensable noble gases) that would eventually reach the environment would not exceed the amounts that would be released by Western-built reactors provided with the more familiar, prominently visible "dome containments".

### 3.2. Problem of Gas Gap Closing between Fuel Channels and Graphite Blocks

The fuel channels of an RBMK-type reactor are separated from the graphite bricks by gaps maintained by graphite rings. These rings are arranged next to one another in such a manner that one is in contact with the channel and the next with the graphite stack block (see Figure 4). As a result of exposure to neutron radiation and temperature, the bore diameters of the graphite columns decrease while the fuel channel tubes expand; thus, the gap between them decreases.

Figure 4: Fuel channel and graphite column interaction. All measurements are in millimeters.

The availability of the gap between the graphite bricks and the fuel channels is the main condition limiting the operation of RBMK-type reactors.
These fuel channel to graphite gaps allow (i) unimpeded (axial and radial) thermal expansion and contraction of the fuel channels; (ii) predictable, non-contacting heat transfer across the gaps from the graphite bricks (temperature above 500°C) to the fuel channels (temperature 300–320°C); and (iii) the passage of a helium-nitrogen mixture, which provides heat transfer from the graphite to the coolant and protects the graphite against oxidation; furthermore, the helium-nitrogen mixture is part of the fuel channel integrity monitoring system.

The gap between the fuel channels and the graphite blocks at Ignalina NPP Units 1 and 2 has been monitored from the beginning of their operation, and the largest database and assessment experience among all RBMK-type reactors has been accumulated there. After gap closure, some monitoring functions are lost and the characteristics of the reactor worsen: the probabilities of channel damage and of graphite deformation increase, withdrawal of a channel from the reactor, if necessary, becomes complicated, and the temperatures of the graphite and of the fuel channel change. In the Ignalina NPP Unit 1 reactor, the average gap between the fuel channels and the graphite had decreased by the final shutdown of the reactor to a third or a quarter of its initial value (2.7–3 mm); in Unit 2 the decrease is insignificant. The estimation of such a small gap is very sensitive to measurement errors, to the uncertainties of the models used, and to the strategy for selecting fuel channels for measurement.

After signing the agreement with the EBRD Nuclear Safety Account in 1994, Lithuania undertook not to replace fuel channels and not to operate the Ignalina NPP reactors after the closure of even one gas gap between the graphite stack and a fuel channel. The Ignalina NPP in-depth safety report [3], prepared by international experts in 1996, predicted that at Unit 1 this would happen no later than the beginning of 1999.

The Lithuanian Energy Institute carried out comprehensive investigations of the gap closure problem at Ignalina NPP. Assessment of the gap between the graphite stack and the fuel channels was of great importance, because its results directly influenced the decision on the duration of plant operation. In developing the gap assessment technique and the measurement strategy, thermal-hydraulic, structural, and probabilistic calculations were performed. A detailed analysis [14] showed that in the Ignalina NPP in-depth safety analysis report [3] the gap at the Unit 1 reactor had been assessed using simplified deterministic calculations; the results obtained were therefore too pessimistic and conservative, predicting closure of the gap in a set of channels in 1998–2000.

The LEI specialists developed an integrated technique for assessing and controlling the risk of gas gap reduction, which made it possible to develop a strategy for measuring the bore diameters of the graphite columns and for replacing fuel channels. This strategy ensured that the gap existed in the Unit 1 reactor up to its final shutdown and thereby made it possible to considerably prolong the operation of Ignalina NPP Unit 1 (until the end of 2004).
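As a rough illustration of why the predicted closure date depends so strongly on the assumed degradation rate, consider a linear extrapolation sketch. Only the initial gap value (about 3 mm) comes from the text; the closure rates below are invented for illustration.

```python
# Rough illustration of gap-closure extrapolation. Only the initial gap
# (about 3 mm, from the text) is a documented value; the closure rates are
# hypothetical, chosen to show how sensitive the predicted closure date is
# to the assumed rate (cf. the pessimistic 1998-2000 predictions above).

INITIAL_GAP_MM = 3.0


def years_to_closure(initial_gap_mm: float, closure_rate_mm_per_year: float) -> float:
    """Years until a linearly shrinking gap reaches zero."""
    return initial_gap_mm / closure_rate_mm_per_year


for rate in (0.10, 0.15, 0.25):  # hypothetical closure rates, mm/year
    years = years_to_closure(INITIAL_GAP_MM, rate)
    print(f"closure rate {rate:.2f} mm/yr -> gap closed after about {years:.0f} years")
```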
The evolution of the gas gap in the Unit 2 reactor differs greatly from that in Unit 1, because Unit 2 uses zirconium fuel channel tubes with differently hardened surfaces, whose rate of ballooning is two times slower than that of the tubes in the Unit 1 reactor. The tendencies of the change of the graphite stack bore diameters in Unit 2 are very similar to those in Unit 1.

### 3.3. Problem of Multiple Fuel Channel Ruptures

In case of a fuel channel rupture, a two-phase flow is discharged into the gaps of the graphite stack. Part of the graphite blocks can be damaged and cracked by coolant jet impingement, graphite columns can be displaced, and coolant passes into the reactor cavity. Because the graphite stack is hotter than the coolant, the pressure in the leak-tight reactor cavity increases. The leak-tight Reactor Cavity (RC) performs the function of containment in the region immediately surrounding the nuclear fuel and graphite. The RC is formed by a cylindrical metal structure together with bottom and top metal plates, and it confines the steam released in case of fuel channel ruptures. The steam-water-gas mixture from the reactor cavity is directed via the pipelines of the Reactor Cavity Venting System (RCVS) to two steam distribution devices of the 5th (upper) condensing tray of the Accident Localization System (Figure 5). Two pipelines of 400 mm diameter, coming from a 600 mm branch pipe located above the top plate of the RC, are connected to a 600 mm pipe that leads to one steam distribution device [1]; in the same way, two further 400 mm pipelines from the top plate of the RC are connected to the second steam distribution device. Along their way these pipelines have branches, which are interconnected in a leak-tight corridor and terminate in three Membrane Safety Devices (MSDs). The blowdown pipes from the bottom of the RC pass directly to the leak-tight corridor and also terminate in three MSDs.

Figure 5: Simplified schematic of the reactor cavity venting system: (1) reactor, (2) the fifth ALS suppression pool, (3) suppression pools 1–4, (4) steam distribution devices, and (5) membrane safety devices (350 mm diameter).

In the case of multiple fuel channel tube ruptures, if the RCVS cannot relieve the steam-water-gas mixture from the RC, the pressure increase will lift the top plate of the RC; the structural integrity of the RC and of the remaining fuel channels would then be lost as well. Such an event would have very severe consequences, similar to the Chernobyl accident. It is therefore important to maintain RC integrity, which is assured as long as the pressure in the RC remains below the permissible pressure (314 kPa absolute), that is, the pressure corresponding to the weight of the upper plate of the biological reactor shielding [15].

Rupture of one fuel channel is a design basis accident for RBMK-1500 reactors; the probability of such a rupture is about 10⁻² per year. According to the design, the reactor cavity venting system assured the integrity of the RC for up to 3 ruptured fuel channels. This system was modernized in 1996 as shown in Figure 5. In 1996, specialists of the Moscow Research and Design Institute for Power Engineering (RDIPE), the designer and developer of the RBMK reactors, analyzed the pressure behavior in the reactor cavity in case of multiple fuel channel ruptures [15].
The results of these calculations showed that the acceptance criterion, the maximum permissible load on the upper reactor cavity plate (310 kPa), would be exceeded in case of a rupture of 9 fuel channels. In the RDIPE calculations, the coolant discharge through the rupture was conservatively assumed to be 32 kg/s per fuel channel and constant in time. With such conservative assumptions, the amount of coolant discharged into the reactor cavity is the largest, and the number of ruptured channels for which the permissible pressure in the reactor cavity is not exceeded is minimal.

Such an analysis is conservative and affected by uncertainties. A best estimate analysis of the Ignalina NPP response to multiple fuel channel tube ruptures, including a sensitivity and uncertainty analysis, was therefore performed at the Lithuanian Energy Institute [16]. The analysis took into account that the results can be influenced by uncertainties in the plant initial conditions assumed in the modeling, as well as by the assumptions and correlations of the CONTAIN code. Summarizing the results of the uncertainty and sensitivity analysis, it was concluded that the capacity of the RCVS corresponds to between 11 and 19 ruptured fuel channels, that is, 15 ± 4 channels (Figure 6).

Figure 6: Pressure in the reactor cavity as a function of the number of ruptured fuel channels.

It should be noted that the analysis was performed for the case of the reactor cooling system filled with coolant (nominal water levels in the drum separators), so that after the fuel channel ruptures a steam-water mixture is discharged into the gaps of the graphite stack. If the "dropout" model of the CONTAIN 1.1 code is used, it is assumed that all the water released from the ruptured fuel channels in the liquid fraction leaves the RC through the water drain. If the "dropout" model is not used, it is assumed that the water that does not evaporate remains in a dispersed condition and may be transferred through the RC and the pipelines into the ALS; this assumption leads to a higher calculated pressure in the RC (see Figure 6).

It should also be noted that during the whole operating history of RBMK reactors there have been only three ruptures of individual fuel channels: (i) at Leningrad NPP Unit 1 in 1975, (ii) at Chernobyl NPP Unit 1 in 1982, and (iii) at Leningrad NPP Unit 3 in 1992. In none of these cases were adjacent channels damaged; thus, in reality there has been no so-called "cascade rupture of fuel channels", in which the rupture of one channel causes ruptures of other channels. Experiments performed on the large-scale TKR test facility at the Electrogorsk Research and Engineering Center for NPP Safety [17] have also shown that a cascade rupture of fuel channels is impossible.
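The capacity estimate illustrated by Figure 6 can be expressed as a simple threshold search: the RCVS capacity is the largest number of simultaneously ruptured channels for which the peak reactor cavity pressure stays below the permissible value. In the Python sketch below, the pressure curve points are invented for illustration; only the permissible pressure (314 kPa absolute) and the best-estimate result (11–19 channels) come from the text.

```python
# Sketch of reading an RCVS capacity off a pressure-vs-ruptured-channels
# curve such as Figure 6. The curve points are invented for illustration;
# only the permissible pressure (314 kPa abs) and the best-estimate capacity
# band (11-19 channels) come from the text. A conservative curve (e.g. the
# RDIPE assumptions) would cross the limit at a smaller channel count.

PERMISSIBLE_PRESSURE_KPA = 314.0

# Hypothetical peak RC pressure (kPa abs) for a given number of
# simultaneously ruptured fuel channels (monotonically increasing).
peak_pressure_kpa = {
    3: 180.0, 6: 220.0, 9: 255.0, 12: 285.0, 15: 310.0, 18: 335.0, 21: 360.0,
}


def rcvs_capacity(curve: dict, limit_kpa: float) -> int:
    """Largest channel count whose peak pressure stays below the limit."""
    ok = [n for n, p in sorted(curve.items()) if p < limit_kpa]
    return max(ok) if ok else 0


if __name__ == "__main__":
    capacity = rcvs_capacity(peak_pressure_kpa, PERMISSIBLE_PRESSURE_KPA)
    print(f"Estimated RCVS capacity: {capacity} channels "
          f"(best-estimate study: 15 +/- 4 channels)")
```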
## 4. Conclusions

The requirements on nuclear power plant safety depend on accumulated experience, on the continually rising technical level of society, and on the position of the state. Concern about the safety level of Ignalina NPP arose after the Chernobyl accident in 1986, and the first modernizations of the reactors were implemented at that time. Experts of RDIPE, the designer and developer of the RBMK reactors, prepared the first safety justification for the operating plant in 1989. After Lithuania assumed control of Ignalina NPP in 1991, a large number of studies of its safety level were conducted; notable among them are the Safety Analysis Reports for Ignalina NPP Units 1 and 2 and the safety justifications of the Reactor Cooling System and the Accident Localization System. Ignalina NPP is distinguished from all other RBMK-type plants by the many international studies performed to investigate its design parameters and their level of risk. Ignalina NPP, its design, and its operational data have been completely open and accessible to Western experts. Effective help in nuclear safety matters was provided first by Sweden and later by other countries (Germany, the United Kingdom, the USA, etc.) capable of performing expert reviews of the safety analyses. A public list of EC Phare projects supporting the modernization of Ignalina NPP is available at http://ie.jrc.ec.europa.eu/dissem/.

The detailed accident analysis showed that design basis accidents do not lead to plant conditions in which the acceptance criteria would be violated.
Moreover, the safety systems of the plant ensure a safe plant condition even under the assumption that the operator takes no action to mitigate the emergency for 30 minutes from the beginning of the accident.

The performed level 1 and level 2 Probabilistic Safety Analyses made it possible to compare the safety level of Ignalina NPP with the level reached at other nuclear power plants and to plan improvements of the NPP safety systems and operational procedures. The investigations showed that, with respect to the probability of a large radioactivity release outside the plant, Ignalina NPP is not the worst in comparison with the plants of the USA and Western Europe constructed in the same years.

On the basis of the performed investigations, recommendations on safety improvement were developed jointly by local and foreign experts. These recommendations were incorporated into the Ignalina NPP Safety Improvement Programs (SIP-1, SIP-2, and SIP-3), whose implementation was strictly checked by the Lithuanian regulatory body VATESI. These measures allowed the safety level of Ignalina NPP to be improved continuously, and this work has not stopped even with the approaching final shutdown of the plant. As the outcome of the last significant project, a Severe Accident Management Guide was developed and is now being implemented at Ignalina NPP. It will supplement the symptom-oriented emergency operating procedures and will provide for the safe elimination of accident consequences over the whole range of accidents.

---
*Source: 102078-2010-02-01.xml*
2010
# Myocardial Viability: From Proof of Concept to Clinical Practice

**Authors:** Aditya Bhat; Gary C. H. Gan; Timothy C. Tan; Chijen Hsu; Alan Robert Denniss
**Journal:** Cardiology Research and Practice (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1020818

---

## Abstract

Ischaemic left ventricular (LV) dysfunction can arise from myocardial stunning, hibernation, or necrosis. Imaging modalities have become front-line methods in the assessment of viable myocardial tissue, with the aim to stratify patients into optimal treatment pathways. Initial studies, although favorable, lacked sufficient power and sample size to provide conclusive outcomes of viability assessment. Recent trials, including the STICH and HEART studies, have failed to confer prognostic benefits of revascularisation therapy over standard medical management in ischaemic cardiomyopathy. In light of these recent findings, assessment of myocardial viability should therefore not be the sole factor in the choice of therapy. Optimization of medical therapy is paramount, and physicians should feel comfortable in deferring coronary revascularisation in patients with coronary artery disease with reduced LV systolic function. Newer trials are currently underway and will hopefully provide a more complete understanding of the pathophysiology and management of ischaemic cardiomyopathy.

---

## Body

## 1. Introduction

Ischaemic heart disease (IHD) is the leading cause of morbidity and mortality in Western society, with an overrepresentation in the primary healthcare burden [1–3]. In patients with coronary artery disease (CAD), left ventricular (LV) function remains one of the most robust prognostic determinants of survival [4–6], also impacting total hospital separations and many defined quality of life indicators (including physical and social functioning, energy, and general health perception) [7, 8].

The myocardium is exquisitely sensitive to ischemia, with contractile dysfunction occurring shortly after an ischaemic stimulus. The degree of contractile impairment remains strongly under the influence of the severity and duration of the ischaemic event, with irreversible myocardial necrosis representing the end pathway of prolonged and significant coronary ischemia [9]. Hence, the priority in the management of acute coronary syndromes is to limit the extent of myocardial necrosis via reperfusion therapies, such as primary angioplasty and thrombolysis, particularly in the setting of electrocardiographic evidence of transmural ischemia.

Despite early intervention, patients with IHD have a predisposition to develop structural heart disease, with impairment of myocardial function leading to cardiac failure, a condition termed "ischaemic cardiomyopathy" [10]. Given that progressive reductions in LV systolic function secondary to the ischaemic substrate have been shown to be associated with poor outcomes, these aberrations represent a theoretically salvageable pathway via revascularisation. The ability to distinguish whether dysfunctional myocardium is "viable", and thus able to recover following revascularisation, however, presents a clinical challenge in current practice.

This review examines the concept of myocardial viability, with a focus on imaging modalities and principal outcome trials.

## 2. Myocardial Viability: Theoretical Precepts
Myocardial Viability: Theoretical Precepts Viability of myocardial tissue is the central principle that underpins reperfusion therapies, whether in the acute phase following myocardial infarction or in chronic ischemia-mediated LV dysfunction. Should “viable” myocardial tissue be present, restoration of adequate coronary blood flow should in theory improve myocardial performance and LV ejection fraction (EF), with the hope of translating into improved long-term outcomes. ### 2.1. Myocardial Stunning Early work into CAD and myocardial flow limitation supported the hypothesis that myocardial ischemia results in significant myocyte injury [11]. Heyndrickx and coinvestigators first demonstrated the impact of reversible ischemia on myocardial contractile reserve. Utilising animal models, they demonstrated that short (5- or 15-minute) induced episodes of myocardial ischemia, with a subsequent reperfusion period (lasting 6 hours for a 5-minute episode of ischemia, and >24 hours following a 15-minute episode), resulted in regional deficits in contractile function that persisted despite reperfusion [12]. This phenomenon, termed *myocardial stunning*, was defined as a prolonged and completely reversible dysfunction of the ischaemic myocardium that continued after restoration of coronary arterial flow [12]. Stunned myocardium was found to be responsive to inotropes in these early studies, with an increase in contractile function in response to exogenous catecholamines [12]. Myocardial stunning has also been observed in clinical practice, particularly in settings of increased myocardial demand or reduced coronary supply such as following coronary artery spasm, postmyocardial infarction, or postcardiopulmonary bypass secondary to “cardiac off-time.” Myocardial stunning is also prominent in patients following successful revascularisation postinfarct, wherein there is prolonged systolic dysfunction which takes several days to normalise after the incident event [13–15]. ### 2.2. Myocardial Hibernation Myocardial hibernation represents a condition of sustained depression of myocardial function in the setting of CAD, which is amenable to improvement in function postrevascularisation. The term was first coined by Diamond and colleagues in 1978 [11] and later popularised by the work of Rahimtoola [16]. This sustained depression in myocardial function is hypothesised to be mediated by fundamental changes in myocardial energetics and metabolism, which are both reduced to match a concomitant reduction in coronary flow reserve. An alternate hypothesis offered for the mechanism of sustained contractile depression is the *repetitive stunning* hypothesis. In this theory, multiple bouts of demand ischemia in the context of flow limitation result in repetitive episodes of ischaemic myocardial dysfunction (or stunning), which eventually create an environment of sustained depression of contractile function [17]. ### 2.3. Stunning versus Hibernation Resting myocardial perfusion is normal or near normal in stunning but is reduced in hibernation. Stunning of the myocardium frequently presents as a transient regional LV wall motion abnormality persisting for hours to days following reperfusion after short-term but significant impairment of coronary blood flow.
Hibernating myocardium, on the other hand, is a state of persistently impaired myocardial performance at rest due to a chronic reduction in coronary blood flow that can be restored by favorably altering the supply/demand relationship of the myocardium [18]. Although traditionally described as two separate entities, stunned and hibernating myocardium may in fact represent stages on a continuum of LV dysfunction resulting from repeated ischaemic episodes (as per the repetitive stunning hypothesis). Identifying myocardial hibernation is of clinical relevance, as it represents potentially salvageable myocardial tissue. Coronary revascularisation in this context is likely to improve contractile performance, LV systolic function, and, in turn, overall morbidity and mortality. However, hibernating myocardium, if left untreated, has the potential to transform into clinically overt heart failure. Revascularisation, via either percutaneous angioplasty or coronary bypass surgery, is the primary avenue of restoring coronary blood flow, unless natural collaterals have formed from the primary diseased vessel. ## 3. Methods of Viability Assessment ### 3.1. Electrocardiography Pathologic Q waves, deep initial negative deflections of the QRS complex, were traditionally thought to be secondary to chronic transmural ischemia and representative of “dead myocardium.” On subsequent analysis, it has been demonstrated that the presence of pathologic Q waves correlates poorly with the lack of residual viable myocardial tissue, with a relatively low sensitivity (41–65%) and specificity (69–79%) relative to other imaging modalities [19, 20]. Exercise electrocardiography improves viability detection, with elevation of the ST segment during exercise in infarct-related leads being representative of viable myocardium (sensitivity 82% and specificity 100%) [21]. A similar finding is appreciated when evaluating reciprocal ST segment depression associated with exercise-induced ST elevation, with comparable sensitivity and specificity in viability recognition (84% and 100%, resp.) [22]. Use of normalisation of abnormal T waves during exercise electrocardiography for viability assessment, on the other hand, has conflicting reports in the literature [23, 24], with more recent trials showing poorer sensitivities [25, 26].
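The sensitivity and specificity figures quoted throughout this review follow the usual definitions against a reference standard of postrevascularisation functional recovery. As a concrete reminder of what these percentages mean, the following minimal Python sketch computes both from a 2×2 confusion matrix; the counts are hypothetical and not drawn from any cited study.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of truly viable segments that the test calls viable."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Proportion of truly nonviable segments that the test calls nonviable."""
    return tn / (tn + fp)


# Hypothetical segment counts for one modality, judged against actual
# contractile recovery after revascularisation.
tp, fn, fp, tn = 82, 18, 31, 69
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 82%
print(f"specificity = {specificity(tn, fp):.0%}")  # 69%
```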
### 3.2. Echocardiography #### 3.2.1. Echocardiography: LV Morphology Assessment of echocardiographic parameters at rest is important in the assessment of viability. Severe dilatation of the LV is a marker of nonviable myocardium, with higher end-systolic volume indices associated with poor ventricular functional recovery [27]. These findings portend a poorer prognosis, with left ventricular end-systolic volumes ≥130 mL having a reduced 3-year survival rate [27]. The thickness of the LV wall has also been shown to be predictive of viability, with a thin LV wall representative of nonviable tissue or scar in patients with CAD [28]. Studies have shown that an end-diastolic wall thickness of less than 5-6 mm indicates lack of contractile reserve [28], with end-diastolic wall thickness ≥5 mm on two-dimensional echocardiographic measurements having a sensitivity of 100% and specificity of 28% in predicting improvement in contractile function twelve months following surgical revascularisation in patients with LV impairment (LVEF < 50%) [29]. In keeping with these findings, Cwajg and colleagues (2000) also found that an end-diastolic wall thickness >6 mm was predictive of contractile recovery following revascularisation with a sensitivity of 94% and specificity of 48%, while segments with an end-diastolic thickness of <6 mm rarely have contractile reserve [29]. #### 3.2.2. Echocardiography: Dobutamine Stress Echocardiography Dobutamine stress echocardiography (DSE) is a valuable tool in the assessment of viability of the myocardium. Classically, four patterns of response to dobutamine are noted in dysfunctional myocardium. These are as follows [30–33] (a schematic decision rule is sketched at the end of this subsection): (i) Biphasic response: low-dose dobutamine (defined as 5–10 μg/kg/min) can increase contractility in dysfunctional segments which are still viable. At higher doses (10–40 μg/kg/min), wall motion in these segments may further improve or paradoxically diminish, reflecting tachycardia-induced ischemia. This phenomenon is referred to as a *biphasic response* and has been shown to be highly predictive of functional recovery postrevascularisation. This finding is suggestive of limited, but present, myocardial reserve in the hibernating myocardium. (ii) Worsening contractile function with lack of initial improvement with dobutamine: this response is suggestive of a hibernating myocardium which is supplied by a critically limited arterial supply, with no contractile reserve. (iii) Sustained improvement with increasing dobutamine dose: this response is traditionally seen in the setting of myocardial stunning. (iv) No response to dobutamine: this response is indicative of a lack of functional reserve and, thus, a lack of viable myocardial tissue. Dysfunctional areas with a resting end-diastolic wall thickness of less than 6 mm are thought to reflect significant scar; they are not known to show functional improvement with DSE and do not improve postrevascularisation. DSE has been shown to have a sensitivity and specificity range for prediction of contractile recovery that is modestly high (71–97% and 63–95%, resp.), with the biphasic response having the greatest predictive capability of the four responses [33].
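The four classical DSE response patterns above amount to a simple decision rule on the change in wall motion at low and at high dobutamine dose. The sketch below is a schematic illustration only; the signed "change in contractility" encoding is an assumption made for this example, not a published scoring system.

```python
def classify_dse_response(delta_low: float, delta_high: float) -> str:
    """Classify a dysfunctional segment's response to dobutamine.

    delta_low  -- change in contractility at low dose (5-10 ug/kg/min)
    delta_high -- change at high dose (10-40 ug/kg/min), both relative to
                  rest; positive means improvement, negative means worsening.
    """
    if delta_low > 0 and delta_high < 0:
        return "biphasic: viable, hibernating (best predictor of recovery)"
    if delta_low > 0:
        return "sustained improvement: typical of stunned myocardium"
    if delta_high < 0:
        return "worsening without initial improvement: critically limited supply"
    return "no response: no contractile reserve, nonviable tissue"
```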
### 3.2.3. Echocardiography: Myocardial Contrast Echocardiography Myocardial contrast echocardiography (MCE) utilises acoustically reflective, high-molecular-weight inert gases which form microbubbles and act as a contrast agent. These bubbles remain within the intravascular space and help delineate the borders of the left ventricle. Tissue capillary blood flow, a determinant of myocardial perfusion, is the product of capillary blood volume and myocardial blood velocity. Once the microbubbles reach a steady-state concentration, a high-power ultrasound burst is used to destroy the microbubbles, with the subsequent rate of replenishment within myocardial segments over the following cardiac cycles reflecting myocardial blood velocity. Segments are deemed viable if there is homogeneity of contrast intensity, which is in keeping with intact myocardial microvasculature. Nonviable segments, however, lack contrast enhancement, reflecting necrotic myocardial cells causing obstruction and collapse of the microcirculation [33–35]. MCE has been shown to have a high sensitivity (78–90%) but low specificity (51–72%) for myocardial contractile recovery postrevascularisation relative to DSE (which on average has a relatively high specificity but lower sensitivity) [36–39]. A combination of the two modalities appears optimal in echocardiographic assessment of myocardial viability (sensitivity 96% and specificity 63%) [33]. #### 3.2.4. Echocardiography: Strain Analysis Myocardial deformation indices, including tissue Doppler imaging (TDI) and strain assessment, are newer echocardiographic modalities in the assessment of myocardial function, which allow a more complete appraisal of myocardial motion and overcome traditional challenges of two-dimensional echocardiography with regard to regional myocardial assessment [40, 41]. Strain is defined as the deformation of an object relative to its original length, with strain rate being the temporal rate of that deformation (equivalently, the spatial gradient of velocities between two locations). This information can be quantified via TDI or two-dimensional speckle tracking. Myocardial deformation (strain) and deformation rate (strain rate) provide multidimensional evaluation of myocardial mechanics (longitudinal, radial, and circumferential function) and have the added advantage of being able to detect subtle wall motion abnormalities of regional function that do not decrease global LVEF [42, 43]. This, in part, reflects the fact that strain rate imaging has lower load-dependence and hence provides a better measure of contractility. Additionally, it is not affected by global myocardial displacement or the tethering effect of neighboring wall segments, which encumber standard two-dimensional visual assessments. Both TDI and speckle-tracking echocardiography have been shown to be useful in the prediction of myocardial viability. This is of relevance given the limitations of subjective assessment of wall thickness as well as the operator dependence of traditional two-dimensional stress echocardiographic methods. Bansal and colleagues (2010) revealed that longitudinal and circumferential strain and strain rate measurements at rest and at low-dose dobutamine concentrations were predictive of functional recovery postrevascularisation using strain-based imaging; however, only tissue velocity imaging was found to have incremental value over wall motion analysis [44]. In a study by Hoffmann et al. (2002), an increase of peak systolic strain rate greater than or equal to 0.23/s had a sensitivity of 83% and specificity of 84% in discerning viable myocardium as determined by 18FDG [45]. Additionally, radial strain >9.5% was associated with a sensitivity of 83.9% and specificity of 81.4%, whereas a change in longitudinal strain >14.6% provided a sensitivity of 86.7% and specificity of 90.2% in detection of viable myocardium using strain imaging with adenosine stress echocardiography in a small trial by Ran and colleagues (2012) [46]. Further work in the field is in progress, with several larger trials underway.
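Because strain and strain rate reduce to simple arithmetic on segment length over time, they can be illustrated directly; the following sketch uses invented segment-length samples purely for demonstration.

```python
import numpy as np


def lagrangian_strain(lengths: np.ndarray, l0: float) -> np.ndarray:
    """Strain: deformation relative to the original length, (L - L0) / L0."""
    return (lengths - l0) / l0


def strain_rate(strain: np.ndarray, dt: float) -> np.ndarray:
    """Strain rate: temporal derivative of strain, in 1/s."""
    return np.gradient(strain, dt)


# Hypothetical myocardial segment lengths (mm) sampled every 50 ms in systole.
l0 = 10.0
lengths = np.array([10.0, 9.6, 9.1, 8.7, 8.5])
eps = lagrangian_strain(lengths, l0)  # negative values indicate shortening
print(eps)                     # [ 0.   -0.04 -0.09 -0.13 -0.15]
print(strain_rate(eps, 0.05))  # peak systolic strain rate ~ -0.9 1/s here
```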
Advantages of echocardiography include ease of procedure, widespread availability, and its noninvasive nature. Furthermore, with DSE, there is an ability to monitor the functional response to careful uptitration of inotropic therapy. Limitations of echocardiography include its high operator dependency, with resultant inter- and intraobserver variability. Comorbidities such as obesity, chronic obstructive airflow limitation, and thoracic chest wall abnormalities limit the acoustic window and thus impair LV views. Furthermore, with respect to DSE, assessment relies heavily on subjective visual interpretation of wall motion abnormalities. ### 3.3. Single-Photon Emission CT Single-photon emission CT (SPECT) is a modality which utilises radionuclide-labeled tracer compounds to measure myocardial uptake. Initial acquisition signifies delivery of the tracer throughout the circulation. The images acquired following this (usually 4–24 hours later) reflect myocardial sarcolemmal integrity [47]. Primary tracers include Tc-99m sestamibi, Tc-99m tetrofosmin, and thallium-201. These molecules are lipophilic and permeate myocardial cellular membranes via passive diffusion or active uptake via Na+/K+ ATPase systems. Intracellular retention, however, requires intact mitochondrial function with preservation of the membrane potential, and as such serves as a marker of viability. These tracer agents emit high-energy photons, which are captured via gated SPECT and provide information on global LV function and viability of the myocardium [47, 48]. Viability assessment with SPECT can be performed at rest or following physical exercise or pharmacological coronary stress. With stress testing, physical exertion or chemical agents (specifically, dipyridamole or adenosine) are used. Imaging is performed immediately following the test, with delayed imaging repeated 3 to 4 hours later, allowing for adequate redistribution of the tracer agent. If warranted, imaging may be repeated 24 hours after stress (termed *late redistribution imaging*) [49]. Viability is seen in myocardial segments which reveal defective uptake immediately following stress, with subsequent replenishment of uptake at 3 to 4 hours. Critically hypoperfused myocardial segments may still be viable if defective uptake is seen at this delayed time-point, warranting repeat imaging at 24 hours after stress to allow for redistribution of the tracer to significantly hypoperfused myocardial regions. Nonviable myocardium reveals fixed defective uptake throughout a 24-hour imaging cycle [49]. SPECT has been shown to provide higher sensitivity (64–72%) but lower specificity (45–88%) than modalities based on evaluation of residual contractile recovery [49, 50]. Primary limitations include cost, ionising radiation exposure, low spatial resolution, and attenuation artefacts; the latter can be mitigated via integration of multislice CT with SPECT [50].
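The stress/redistribution protocol just described is, in effect, a lookup on tracer uptake at up to three time-points. A schematic sketch follows; the boolean encoding of "preserved uptake" and the indeterminate branch are simplifying assumptions for illustration.

```python
from typing import Optional


def classify_spect_segment(uptake_stress: bool,
                           uptake_4h: bool,
                           uptake_24h: Optional[bool] = None) -> str:
    """Classify a myocardial segment from tracer uptake (True = preserved)."""
    if uptake_stress:
        return "normal perfusion"
    if uptake_4h:
        return "viable: defect fills in on 3-4 h redistribution imaging"
    if uptake_24h is None:
        return "indeterminate: repeat imaging at 24 h (late redistribution)"
    if uptake_24h:
        return "viable: late redistribution in critically hypoperfused segment"
    return "nonviable: fixed defect throughout the 24 h imaging cycle"
```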
### 3.4. Positron Emission Tomography Positron emission tomography (PET) imaging is based on a shift in myocardial energetics, whereby chronically underperfused myocardial tissue shifts from utilization of free fatty acids (which require high oxygenation) to glucose metabolism, a more anaerobic process at the expense of poor energetic efficiency. This translates into preserved uptake of the glucose tracer in myocardial segments which are hypoperfused but viable. Tracers in standard practice include 13N-labeled ammonia (13NH3) for perfusion and 18F-fluorodeoxyglucose (18FDG) for metabolism. Regions are classified according to the degree of “flow-metabolism” matching, reflected by the concordance between myocardial blood flow and 18FDG uptake. Regions of myocardium where there is a concordant reduction of myocardial blood flow and 18FDG uptake (flow-metabolism match) reflect irreversible myocardial injury. In contrast, areas where FDG uptake (reflective of metabolism) is preserved or increased despite perfusion deficits reflect viable myocardium [51] (Figure 1). Figure 1: PET assessment. A 59-year-old male with known ischaemic heart disease (previous bypass grafting) presenting for PET assessment in the context of new-onset angina. There is scintigraphic evidence of a reversible perfusion defect of the mid third of the anterior wall, suggesting a high-grade stenosis of the supplying vessel. Gated data show normal left ventricular systolic function at rest, with an inducible wall motion abnormality and a significant fall in LVEF with pharmacological stress. Primary advantages of PET over SPECT include better spatial resolution and superior average sensitivity and specificity (88% and 73%, resp.) [34]. Reduced availability of PET scanners and the variability of FDG uptake are the primary limitations. Many factors, including cardiac output, sympathetic activity, heart failure status, and degree of ischemia, impact FDG uptake and, thus, scan quality [49, 51].
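The flow-metabolism rule lends itself to the same schematic treatment. A minimal sketch, assuming each segment's perfusion and FDG uptake have already been graded as reduced or preserved:

```python
def classify_pet_segment(perfusion_reduced: bool, fdg_reduced: bool) -> str:
    """Apply the PET flow-metabolism matching rule to one segment."""
    if not perfusion_reduced:
        return "normal segment"
    if fdg_reduced:
        # Concordant reduction in flow and metabolism: a "match".
        return "flow-metabolism match: irreversible injury (scar)"
    # Metabolism preserved or increased despite a perfusion deficit.
    return "flow-metabolism mismatch: viable (hibernating) myocardium"
```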
### 3.5. Cardiovascular Magnetic Resonance Cine cardiovascular magnetic resonance (CMR) sequencing provides information on global left ventricular function and regional wall motion. It can be used in conjunction with dobutamine stress and gadolinium-chelated contrast. Gadolinium-chelated contrast agents have been utilised to detect perfusion deficits, microvascular obstruction, and myocardial scarring. Accumulated contrast agent has a paramagnetic effect, producing bright signal intensities in areas of accumulation. These agents are unable to penetrate cardiac myocytes with intact membranes; however, they diffuse readily into, and accumulate within, extracellular spaces with an increased volume of distribution (e.g., myocardial fibrosis) or regions of ruptured cellular membranes (e.g., acute myocardial infarction) during the “late” steady-state phase [52]. The transmural extent of scarring is inversely correlated with functional recovery of the dysfunctional myocardium postrevascularisation, whereas the absence of late gadolinium enhancement in a hypokinetic myocardium is associated with functional recovery postrevascularisation [52, 53] (Figure 2). Figure 2: CMR assessment. A 51-year-old female following an inferior ST segment elevation myocardial infarction. CMR revealed hyperintensity in the midinferior wall on T2-weighted images. There is 100% transmural late gadolinium enhancement of the midinferior wall, indicating nonviability of this region of myocardium. Of note, an area of hypoenhancement is also present in the middle of the hyperenhanced region, indicating microvascular obstruction. There is also late gadolinium enhancement affecting part of the posterior papillary muscle. Benefits of CMR over alternate imaging modalities include excellent spatial imaging, the ability to discern transmural variations in viability, and accurate quantification of nonviable or necrotic tissue. The ability of CMR to detect scar (nonviable tissue) is robust, with a sensitivity of 83% and specificity of 88% [49, 54]. Primary limitations of CMR include cost, poor availability, and prolonged study periods requiring patient immobility and breath holding. A summary of trials evaluating the utility of different imaging modalities in viability assessment is shown in Table 1 [29, 55–64].

Table 1: Summary of studies evaluating improvement in segmental myocardial function with revascularisation.

| Study | Period | Study design | Setting (center) | Patients (n) | Modality of viability assessment | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|
| Arnese et al. [55] | 1995 | Prospective | Single | 38 | Stress TTE, PET | 74, 89 | 95, 48 |
| Cornel et al. [56] | 1998 | Prospective | Multi | 61 | Stress TTE | 89 | 81 |
| Pagano et al. [57] | 1998 | Prospective | Single | 30 | Stress TTE, PET | 60, 99 | 33, 62 |
| Bax et al. [58] | 1999 | Prospective | Single | 68 | Stress TTE | 89 | 74 |
| Pasquet et al. [59] | 1999 | Prospective | Single | 94 | Stress TTE, PET | 69, 84 | 78, 37 |
| Baer et al. [60] | 2000 | Prospective | Single | 103 | CMR, Stress TOE | 86, 82 | 92, 83 |
| Wiggers et al. [61] | 2000 | Prospective | Single | 46 | PET, Stress TTE | 81, 51 | 56, 89 |
| Cwajg et al. [29] | 2000 | Prospective | Single | 45 | PET, Stress TTE | 91, 94 | 50, 48 |
| Schmidt et al. [62] | 2004 | Prospective | Single | 40 | CMR, PET | 96, 100 | 87, 73 |
| Hanekom et al. [63] | 2005 | Prospective | Single | 55 | SRI, TTE | 78, 73 | 77, 77 |
| Slart et al. [64] | 2006 | Prospective | Single | 47 | DISA SPECT, PET | 89, 90 | 86, 86 |

TTE, transthoracic echocardiography; TOE, transesophageal echocardiography; PET, positron emission tomography; CMR, cardiac magnetic resonance imaging; SRI, strain rate imaging echocardiography; DISA SPECT, dual-isotope simultaneous acquisition SPECT.
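Table 1 reports sensitivity and specificity, but the probability that an individual "viable" call is correct also depends on how common true viability is in the population studied. The following Bayes-rule sketch makes this concrete; the 60% prevalence figure is an arbitrary illustration, not a value from the cited trials.

```python
def ppv_npv(sens: float, spec: float, prevalence: float) -> tuple[float, float]:
    """Positive and negative predictive values via Bayes' rule."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)


# CMR-like performance (sensitivity 0.83, specificity 0.88) if 60% of
# dysfunctional segments in the study population are truly viable:
ppv, npv = ppv_npv(0.83, 0.88, 0.60)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 91%, NPV = 78%
```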
## 4. Prognostic Value of Viability Testing Numerous nonrandomized retrospective studies in the early 1990s evaluated the value of viability testing. A meta-analysis of these trials revealed a significant association between revascularisation and improvement in mortality when viability testing was used in patients with known ischaemic cardiomyopathy. This finding held irrespective of the imaging modality chosen [55].
Primary limitations of these studies, however, included a lack of standardisation of, and adherence to, optimal medical therapy during this period, with outcome reviews having been retrospective in nature. Furthermore, medical treatment of cardiac failure has advanced since these studies, as have techniques of coronary revascularisation. There was significant clinical uncertainty regarding the impact of viability on survival, given the lack of large, adequately powered randomized trials. These questions were largely addressed in the Surgical Treatment for Ischaemic Heart Failure (STICH) trial (2011), which was designed to evaluate the impact of coronary artery bypass grafting (CABG) in the management of patients with CAD and reduced LVEF. ## 5. The STICH Trial In this multicenter (127 clinical sites), nonblinded, randomized trial, 1212 participants were enrolled, with 601 undergoing myocardial viability assessment. Participants were enrolled on the basis of echocardiographic evidence of LV systolic dysfunction (defined as LVEF ≤ 35%) and coronary angiography revealing CAD amenable to surgical intervention. Myocardial viability assessment was performed via DSE (n=130), SPECT (n=321), or both (n=150). Of the viability subgroup, 298 participants were randomly assigned to receive medical therapy plus surgical revascularisation (coronary bypass) and 303 received medical management alone. Participants were followed up at intervals (at discharge or 30 days, every 4 months within the first year, and every 6 months thereafter), with a median follow-up of 56 months (minimum 12 months, maximum 100 months) [56]. Despite an association between viable myocardium and likelihood of survival in this cohort, multivariate analysis did not find a statistically significant mortality benefit with surgical intervention (p=0.21). Furthermore, assessment of myocardial viability did not provide a differential benefit for surgical intervention (p=0.53); that is, viability assessment did not identify participants who would benefit from CABG relative to medical therapy [56]. Secondary endpoints were more favorable towards revascularisation, with bypass surgery associated with a significant reduction in cardiovascular mortality (28% versus 33%; p=0.05) and in the composite of death from any cause and hospitalization from cardiovascular causes (58% versus 68%; p<0.001). Long-term follow-up (>4 years) of both cohorts revealed a reduction in all-cause mortality in the surgical revascularisation cohort compared with medical therapy alone; however, this finding was not statistically significant (p=0.12). These positive secondary findings should be interpreted with caution given the negative primary outcome measure [56]. This trial was not, however, without its limitations. Firstly, randomization was not performed on the basis of viability, which represented a potential selection bias. Secondly, there was a differential effect on participant profile and viability, with a high proportion of participants (81%) in the viability subgroup having single-vessel disease. Given the scope of the trial (medical therapy versus surgical intervention), this differential profile may have selected participants for whom viability assessment may not have been required. Thirdly, analysis in this study was limited to the DSE and SPECT modalities, with no analysis of PET or CMR for viability assessment. This creates difficulty in extrapolating these results to other imaging modalities of viability assessment. Despite these limitations, this study represents the largest analysis to date of the influence of myocardial viability on clinical endpoints in persons with ischaemic cardiomyopathy, and was the first to assess the differential effect of viability on revascularisation versus medical management.
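To put the STICH secondary endpoints in concrete terms, the cardiovascular mortality figures quoted above (28% with CABG versus 33% with medical therapy) imply the following absolute risk reduction and number needed to treat; this is back-of-envelope arithmetic on the quoted percentages, ignoring follow-up time and censoring.

```python
def arr_nnt(risk_control: float, risk_treatment: float) -> tuple[float, float]:
    """Absolute risk reduction and the corresponding number needed to treat."""
    arr = risk_control - risk_treatment
    return arr, 1.0 / arr


arr, nnt = arr_nnt(risk_control=0.33, risk_treatment=0.28)
print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")  # ARR = 5%, NNT = 20
```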
## 6. The HEART Trial The Heart Failure Revascularisation Trial (HEART) (2011) was a multicenter study comparing the efficacy of surgical revascularisation with optimal medical treatment in the management of persons with clinically diagnosed cardiac failure with reduced EF (LVEF < 35%) and evidence of CAD. Participants were screened for viable myocardium via DSE; an inclusion prerequisite was the presence of at least 5 viable LV segments with reduced contractility using a 17-segment model [57]. 138 participants were randomized to the interventional (n=65) and medical (n=69) arms and followed up over a five-year period. The primary outcome revealed noninferiority of medical therapy. This study was, however, underpowered owing to a relatively small sample size. Furthermore, the primary modality of viability assessment was DSE, which has a lower sensitivity for viability detection relative to other imaging modalities. Additionally, randomization did not occur prior to viability assessment, clouding the impact of viability assessment on treatment outcomes [57]. ## 7. PARR-2 Trial The PET and Recovery Following Revascularisation-2 (PARR-2) trial (2007) evaluated the efficacy of perfusion and FDG PET imaging in risk stratification and in identifying patients who would most benefit from revascularisation [58]. The study enrolled 430 participants, with inclusion criteria of LVEF < 35% and suspected or confirmed CAD. Participants were randomly assigned to receive FDG and perfusion PET imaging versus standard care (i.e., no FDG imaging). PET-assisted management showed a nonsignificant trend towards a reduction in the predefined composite endpoint (cardiac death, myocardial infarction, or cardiac rehospitalization) at one year (Hazard Ratio 0.78, 95% CI 0.58 to 1.1; p=0.15), with post hoc analysis showing a statistically significant reduction in adverse events in the FDG PET-assisted group (Hazard Ratio 0.62, 95% CI 0.42 to 0.93; p=0.019) [58]. The key limitation of the study was poor adherence to the therapeutic strategy, with only 75% of participants treated according to the viability imaging results.
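Hazard ratios such as PARR-2's post hoc result (HR 0.62, 95% CI 0.42 to 0.93) can be sanity-checked by recovering the standard error of log(HR) implied by the confidence interval. A minimal sketch, assuming a standard Wald construction of the interval:

```python
import math


def se_from_ci(lo: float, hi: float, z: float = 1.96) -> float:
    """Standard error of log(HR) implied by a 95% CI on the HR scale."""
    return (math.log(hi) - math.log(lo)) / (2 * z)


def wald_p(hr: float, se: float) -> float:
    """Two-sided Wald p-value for the null hypothesis HR = 1."""
    z = abs(math.log(hr)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))


se = se_from_ci(0.42, 0.93)
print(round(wald_p(0.62, se), 3))  # ~0.018, consistent with the reported p=0.019
```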
The results of this substudy illustrated the prognostic benefit of FDG PET viability imaging in ischaemic cardiomyopathy when used in centers with experience in PET imaging [59].

Despite the relatively disappointing results of the aforementioned trials, the 2013 American Heart Association/American College of Cardiology guidelines for the management of heart failure remain unaltered in their Class IIa (Level of Evidence B) recommendation for viability testing in the work-up for revascularisation in patients with ischaemic cardiomyopathy. This is in keeping with the belief that there may still be diagnostic and prognostic benefit from viability studies that has not become apparent given the limitations of the aforementioned primary trials.

## 9. Conclusion

Ischaemic LV dysfunction can arise from myocardial stunning, hibernation, or necrosis. In line with technological advances, noninvasive imaging modalities have become front-line methods in the assessment of viable myocardial tissue, with each modality conferring a variable advantage in terms of sensitivity and specificity, culminating in the overriding goal of accurate stratification of patients into optimal treatment pathways. Despite determined research efforts, however, many questions remain unanswered with regard to myocardial viability. Initial studies, although favorable, lacked sufficient power and sample size to provide conclusive outcomes of viability assessment. More recent trials, including the STICH and HEART studies, have failed to demonstrate prognostic benefits of revascularisation therapy over standard medical management in ischaemic cardiomyopathy, but they have their own limitations. In light of these recent findings, assessment of myocardial viability should not be the arbitrating factor for therapy choice. Optimization of medical therapy for all patients is paramount, and physicians should feel comfortable in deferring coronary revascularisation in patients with CAD and reduced LVEF at present. It is clear that further trials are needed to better our understanding of the mechanistic underpinnings of the viable myocardium as well as the underlying pathophysiology of ischaemic cardiomyopathy. Newer trials such as the AIMI-HF (Alternative Imaging Modalities in Ischaemic Heart Failure) study, the largest randomized trial to date evaluating the role of imaging in the treatment of ischaemic cardiomyopathy, are currently underway and will hopefully resolve some of these uncertainties [65].

---

*Source: 1020818-2016-05-29.xml*
# Myocardial Viability: From Proof of Concept to Clinical Practice

**Authors:** Aditya Bhat; Gary C. H. Gan; Timothy C. Tan; Chijen Hsu; Alan Robert Denniss

**Journal:** Cardiology Research and Practice (2016)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2016/1020818
---

## Abstract

Ischaemic left ventricular (LV) dysfunction can arise from myocardial stunning, hibernation, or necrosis. Imaging modalities have become front-line methods in the assessment of viable myocardial tissue, with the aim of stratifying patients into optimal treatment pathways. Initial studies, although favorable, lacked sufficient power and sample size to provide conclusive outcomes of viability assessment. Recent trials, including the STICH and HEART studies, have failed to demonstrate prognostic benefits of revascularisation therapy over standard medical management in ischaemic cardiomyopathy. In light of these recent findings, assessment of myocardial viability should not be the sole factor for therapy choice. Optimization of medical therapy is paramount, and physicians should feel comfortable in deferring coronary revascularisation in patients with coronary artery disease with reduced LV systolic function. Newer trials are currently underway and will hopefully provide a more complete understanding of the pathophysiology and management of ischaemic cardiomyopathy.

---

## Body

## 1. Introduction

Ischaemic heart disease (IHD) is the leading cause of morbidity and mortality in Western society, accounting for a disproportionate share of the primary healthcare burden [1–3]. In patients with coronary artery disease (CAD), left ventricular (LV) function remains one of the most robust prognostic determinants of survival [4–6], also impacting total hospital separations and many defined quality of life indicators (including physical and social functioning, energy, and general health perception) [7, 8].

The myocardium is exquisitely sensitive to ischemia, with contractile dysfunction occurring shortly after an ischaemic stimulus. The degree of contractile impairment remains strongly under the influence of the severity and duration of the ischaemic event, with irreversible myocardial necrosis representing the end pathway of prolonged and significant coronary ischemia [9]. Hence, the primary priority in the management of acute coronary syndromes is to limit the extent of myocardial necrosis via reperfusion therapies, such as primary angioplasty and thrombolysis, particularly in the setting of electrocardiographic evidence of transmural ischemia.

Despite early intervention, patients with IHD have a predisposition to develop structural heart disease, with impairment of myocardial function leading to cardiac failure, a condition termed “ischaemic cardiomyopathy” [10]. Given that progressive reductions in LV systolic function secondary to the ischaemic substrate have been shown to be associated with poor outcomes, these aberrations represent a theoretically salvageable pathway via revascularisation. The ability to distinguish whether dysfunctional myocardium is “viable” and thus able to recover following revascularisation, however, presents a clinical challenge in current practice. This review examines the concept of myocardial viability, with a focus on imaging modalities and principal outcome trials.

## 2. Myocardial Viability: Theoretical Precepts

Viability of myocardial tissue is the central principle which underpins reperfusion therapies, whether in the acute phase following myocardial infarction or in chronic ischemia-mediated LV dysfunction. Should “viable” myocardial tissue be present, restoration of adequate coronary blood flow should in theory improve myocardial performance and LV ejection fraction (EF), with the hope of translating into improved long-term outcomes.
### 2.1. Myocardial Stunning

Early work into CAD and myocardial flow limitation supported the hypothesis that myocardial ischemia results in significant myocyte injury [11]. Heyndrickx and coinvestigators first demonstrated the impact of reversible ischemia on myocardial contractile reserve. Utilising animal models, they demonstrated that short (5- or 15-minute) induced episodes of myocardial ischemia, with a subsequent reperfusion period (lasting 6 hours for a 5-minute episode of ischemia, and >24 hours following a 15-minute episode), resulted in regional deficits in contractile function that persisted despite reperfusion [12]. This phenomenon, termed *myocardial stunning*, was defined as a prolonged and completely reversible dysfunction of the ischaemic myocardium that continued after restoration of coronary arterial flow [12]. Stunned myocardium was found to be responsive to inotropes in these early studies, with an increase in contractile function in response to exogenous catecholamines [12].

Myocardial stunning has also been observed in clinical practice, particularly in the setting of increased myocardial demand or reduced coronary supply, such as following coronary artery spasm, postmyocardial infarction, or postcardiopulmonary bypass secondary to “cardiac off-time.” Myocardial stunning is also prominent in patients following successful revascularisation postinfarct, wherein there is prolonged systolic dysfunction which takes several days to normalise after the incident event [13–15].

### 2.2. Myocardial Hibernation

Myocardial hibernation represents a condition of sustained depression of myocardial function in the setting of CAD, which is amenable to improvement in function postrevascularisation. This term was first coined by Diamond and colleagues in 1978 [11] and was later popularised by the works of Rahimtoola [16]. This sustained depression in myocardial function is hypothesised to be mediated by fundamental changes in myocardial energetics and metabolism, which are both reduced to match a concomitant reduction in coronary flow reserve.

An alternate hypothesis offered for the mechanism of sustained contractile depression is the *repetitive stunning* hypothesis. In this theory, multiple bouts of demand ischemia in the context of flow limitation result in repetitive episodes of ischaemic myocardial dysfunction (or stunning), which eventually creates an environment of sustained depression of contractile function [17].

### 2.3. Stunning versus Hibernation

Resting myocardial perfusion is normal or near normal in stunning but is reduced in hibernation. Stunning of the myocardium frequently presents as a transient regional LV wall motion abnormality persisting for hours to days following reperfusion after short-term but significant impairment of coronary blood flow. Hibernating myocardium, on the other hand, is a state of persistently impaired myocardial performance at rest due to a chronic reduction in coronary blood flow that can be restored by favorably altering the supply/demand relationship of the myocardium [18]. Although traditionally described as two separate entities, stunned and hibernating myocardium may in fact represent stages on a continuum of LV dysfunction resulting from repeated ischaemic episodes (as per the repetitive stunning hypothesis).

Identifying myocardial hibernation is of clinical relevance, as it represents potentially salvageable myocardial tissue.
Coronary revascularisation in this context is likely to improve contractile performance, LV systolic function, and, in turn, overall morbidity and mortality. However, hibernating myocardium, if left untreated, has the potential to transform into clinically overt heart failure. Revascularisation, via either percutaneous angioplasty or coronary bypass surgery, is the primary avenue for restoring coronary blood flow, unless natural collaterals have formed from the primary diseased vessel.
## 3. Methods of Viability Assessment

### 3.1. Electrocardiography

Pathologic Q waves, deep initial negative deflections of the QRS complex, were traditionally thought to be secondary to chronic transmural ischemia and representative of “dead myocardium.” On subsequent analysis, it has been demonstrated that the presence of pathologic Q waves correlates poorly with the lack of residual viable myocardial tissue, with a relatively low sensitivity (41–65%) and specificity (69–79%) relative to other imaging modalities [19, 20].

The use of exercise electrocardiography improves viability detection, with elevation of the ST segment during exercise in infarct-related leads being representative of viable myocardium (sensitivity 82% and specificity 100%) [21]. A similar finding is appreciated when evaluating reciprocal ST segment depression associated with exercise-induced ST elevation, with comparable sensitivity and specificity in viability recognition (84% and 100%, resp.) [22]. The use of normalisation of abnormal T waves during exercise electrocardiography for viability assessment, on the other hand, has conflicting reports in the literature [23, 24], with more recent trials showing poorer sensitivities [25, 26].

### 3.2. Echocardiography

#### 3.2.1. Echocardiography: LV Morphology

Assessment of echocardiographic parameters at rest is important in the assessment of viability. Severe dilatation of the LV is a marker of nonviable myocardium, with higher end-systolic volume indices associated with poor ventricular functional recovery [27]. These findings portend a poorer prognosis, with left ventricular end-systolic volumes ≥130 mL being associated with a reduced 3-year survival rate [27]. The thickness of the LV wall has also been shown to be predictive of viability, with a thin LV wall representative of nonviable tissue or scar in patients with CAD [28]. Studies have shown that an end-diastolic wall thickness less than 5-6 mm indicates a lack of contractile reserve [28], with an end-diastolic wall thickness ≥5 mm on two-dimensional echocardiographic measurements having a sensitivity of 100% and specificity of 28% in predicting improvement in contractile function twelve months following surgical revascularisation in patients with LV impairment (LVEF < 50%) [29].
In keeping with these findings, Cwajg and colleagues (2000) also found that an end-diastolic wall thickness >6 mm was predictive of contractile recovery following revascularisation with a sensitivity of 94% and specificity of 48%, while segments with an end-diastolic thickness of <6 mm rarely have contractile reserve [29].

#### 3.2.2. Echocardiography: Dobutamine Stress Echocardiography

Dobutamine stress echocardiography (DSE) is a valuable tool in the assessment of viability of the myocardium. Classically, four responses of dysfunctional myocardium to dobutamine are noted. These are as follows [30–33]:

(i) *Biphasic response*: low-dose dobutamine (defined as 5–10 μg/kg/min) can increase contractility in dysfunctional segments which are still viable. At higher doses (10–40 μg/kg/min), wall motion in these segments may further improve or paradoxically diminish, reflecting tachycardia-induced ischemia. This phenomenon is referred to as a biphasic response and has been shown to be highly predictive of functional recovery postrevascularisation. This finding is suggestive of limited, but present, myocardial reserve in the hibernating myocardium.

(ii) *Worsening contractile function with lack of initial improvement with dobutamine*: this response is suggestive of a hibernating myocardium which is supplied by a critically limited arterial supply, with no contractile reserve.

(iii) *Sustained improvement with increasing dobutamine dose*: this response is traditionally seen in the setting of myocardial stunning.

(iv) *No response to dobutamine*: this response is indicative of a lack of functional reserve and, thus, a lack of viable myocardial tissue.

Dysfunctional areas with a resting end-diastolic wall thickness of less than 6 mm are thought to reflect significant scar. They are not known to show functional improvement with DSE and do not improve postrevascularisation. DSE has been shown to have a sensitivity and specificity range for prediction of contractile recovery that is modestly high (71–97% and 63–95%, resp.), with the biphasic response having the greatest predictive capability of the four responses [33].

#### 3.2.3. Echocardiography: Myocardial Contrast Echocardiography

Myocardial contrast echocardiography (MCE) utilises acoustically reflective high-molecular-weight inert gases which form microbubbles and act as a contrast agent. These bubbles remain within the intravascular space and help delineate the borders of the left ventricle. Tissue capillary blood flow, a determinant of myocardial perfusion, is the product of capillary blood volume and myocardial blood velocity. Once the microbubbles reach a steady-state concentration, high-burst ultrasonography is used to displace them, with subsequent replenishment within myocardial segments over the following cardiac cycles reflecting myocardial blood velocity. Segments are deemed viable if there is homogeneity of contrast intensity, which is in keeping with intact myocardial microvasculature. Nonviable segments, however, lack contrast enhancement, representing necrotic myocardial cells causing obstruction and collapse of the microcirculation [33–35]. MCE has been shown to have a high sensitivity (78–90%) but low specificity (51–72%) for prediction of myocardial contractile recovery postrevascularisation relative to DSE (which on average has a relatively high specificity but lower sensitivity) [36–39].
A combination of the two modalities seems to be optimal in echocardiographic assessment of myocardial viability (sensitivity 96% and specificity 63%) [33].

#### 3.2.4. Echocardiography: Strain Analysis

Myocardial deformation indices, including tissue Doppler imaging (TDI) and strain assessment, are newer echocardiographic modalities in the assessment of myocardial function, which allow for a more complete appraisal of myocardial motion and overcome traditional challenges of two-dimensional echocardiography with regard to regional myocardial assessment [40, 41]. Strain is defined as the deformation of an object relative to its original dimensions, with strain rate derived from the gradient of velocities between two locations. This information can be quantified via TDI or two-dimensional speckle tracking. Myocardial deformation (strain) and deformation rate (strain rate) provide multidimensional evaluation of myocardial mechanics (longitudinal, radial, and circumferential function) and have the added advantage of being able to detect subtle wall motion abnormalities of regional function that do not decrease global LVEF [42, 43]. This, in part, reflects the fact that strain rate imaging is less load-dependent and hence provides a better measure of contractility. Additionally, it is not affected by global myocardial displacement or the tethering effect of neighboring wall segments, which encumber standard two-dimensional visual assessments.

Both TDI and speckle-tracking echocardiography have been shown to aid the prediction of myocardial viability. This is of relevance given the limitations of subjective assessment of wall thickness as well as operator dependence with traditional two-dimensional stress echocardiographic methods. Bansal and colleagues (2010) revealed that longitudinal and circumferential strain and strain rate measurements at rest and at low-dose dobutamine concentrations were predictive of functional recovery postrevascularisation using strain-based imaging; furthermore, only tissue velocity imaging was found to have incremental value over wall motion analysis [44]. Based on a study by Hoffmann et al. (2002), an increase in peak systolic strain rate greater than or equal to 0.23/s had a sensitivity of 83% and specificity of 84% in discerning viable myocardium as determined by 18FDG [45]. Additionally, radial strain >9.5% was associated with a sensitivity of 83.9% and specificity of 81.4%, whereas a change in longitudinal strain >14.6% provided a sensitivity of 86.7% and specificity of 90.2% in the detection of viable myocardium using strain imaging with adenosine stress echocardiography in a small trial by Ran and colleagues (2012) [46]. Further work in this field is in progress, with several larger trials underway.

Advantages of echocardiography include ease of procedure and widespread availability as well as its noninvasive qualities. Furthermore, with DSE, there is an ability to monitor functional response to accurate uptitration of inotropic therapy. Limitations of echocardiography include its high operator dependency, with resultant inter- and intraobserver variability. Comorbidities such as obesity, chronic obstructive airflow limitation, and thoracic chest wall abnormalities limit the acoustic window and thus impair LV views. Furthermore, with respect to DSE, assessment relies heavily on subjective visual interpretation of wall motion abnormalities.
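For readers less familiar with deformation imaging, the quantities above have compact textbook definitions. The sketch below states the standard one-dimensional (Lagrangian) forms and is illustrative only; L, L0, v1, v2, and d are generic symbols, not parameters from the studies cited above:

```latex
% Lagrangian strain: fractional length change relative to the
% original (end-diastolic) length L_0 of a myocardial segment.
\varepsilon = \frac{L - L_0}{L_0}

% Strain rate: the temporal derivative of strain. In tissue Doppler
% practice it is estimated from the spatial gradient of myocardial
% velocities v_1 and v_2 sampled a distance d apart along the wall.
\mathrm{SR} = \frac{d\varepsilon}{dt} \approx \frac{v_2 - v_1}{d}
```

On this convention, systolic shortening yields negative longitudinal strain and strain rate, so published cutoffs such as the 0.23/s increase quoted above are typically reported as magnitudes of change.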
### 3.3. Single-Photon Emission CT

Single-photon emission CT (SPECT) is a modality which utilises radionuclide-labeled tracer compounds to measure myocardial uptake. Initial acquisition signifies delivery of the tracer throughout the circulation. The images acquired following this (usually 4–24 hours later) reflect myocardial sarcolemmal integrity [47]. Primary tracers include Tc99m-sestamibi, Tc99m-tetrofosmin, and 201Thallium. These molecules are lipophilic and permeate myocardial cellular membranes via passive diffusion or active uptake via Na+/K+ ATPase systems. Intracellular retention, however, requires intact mitochondrial function with preservation of the membrane potential, and as such serves as a marker of viability. These tracer agents emit high-energy photons, which are captured via gated SPECT and provide information on global LV function and viability of the myocardium [47, 48].

Viability assessment with SPECT can be performed at rest or following physical exercise or chemical coronary stress. With stress testing, physical exertion or chemical agents (specifically, dipyridamole or adenosine) are used. Imaging is performed immediately following the test, with delayed imaging repeated 3 to 4 hours later, allowing for adequate redistribution of the tracer agent. If warranted, imaging may be repeated at 24 hours after stress (termed *late distribution imaging*) [49]. Viability is indicated by myocardial segments which reveal defective uptake immediately following stress, with subsequent replenishment of uptake at 3 to 4 hours. Critically hypoperfused myocardial segments may still be viable if defective uptake persists at this delayed time-point, warranting repeat imaging at 24 hours after stress to allow for redistribution of the tracer to significantly hypoperfused myocardial regions. Nonviable myocardium reveals fixed defective uptake throughout a 24-hour imaging cycle [49].

SPECT has been shown to provide a higher sensitivity (64–72%) but lower specificity (45–88%) than modalities based on evaluation of contractile reserve [49, 50]. Primary limitations include cost, ionising radiation exposure, low spatial resolution, and attenuation artefacts; the latter can be corrected via integration of multislice CT with SPECT [50].
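The redistribution logic described above can be condensed into a small lookup; this simply restates the patterns from this section rather than adding new data:

| Uptake at stress | Redistribution at 3–4 h | Imaging at 24 h | Interpretation |
|---|---|---|---|
| Defect | Replenished | Not required | Viable myocardium |
| Defect | Persistent defect | Replenished | Viable but critically hypoperfused |
| Defect | Persistent defect | Persistent defect | Nonviable (fixed defect) |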
### 3.4. Positron Emission Tomography

Positron emission tomography (PET) imaging is based on a shift in myocardial energetics, whereby chronically underperfused myocardial tissue shifts from the utilization of free fatty acids (which require high oxygenation) to glucose metabolism, a more anaerobic process at the expense of poor energetic efficiency. This translates into uptake of the metabolic tracer in myocardial segments which are hypoperfused yet viable. In standard practice, the perfusion tracer 13N-labeled ammonia (13NH3) is used together with the metabolic tracer 18F-fluorodeoxyglucose (18FDG).

Regions are classified according to the degree of “flow-metabolism” matching, which is reflected by the concordance between myocardial blood flow and 18FDG uptake. Regions of myocardium where there is a concordant reduction of myocardial blood flow and 18FDG uptake (flow-metabolism match) reflect irreversible myocardial injury. In contrast, areas where FDG uptake (reflective of metabolism) is preserved or increased despite perfusion deficits (flow-metabolism mismatch) reflect viable myocardium [51] (Figure 1).

Figure 1. PET assessment. Comment: 59-year-old male with known ischaemic heart disease (requiring bypass grafting) presenting for PET assessment in the context of new-onset angina. There is scintigraphic evidence of a reversible perfusion defect of the mid third of the anterior wall. The gated data suggest a high-grade stenosis supplying this region. Normal left ventricular systolic function was noted at rest, with an inducible wall motion abnormality and a significant fall in LVEF with pharmacological stress.

Primary advantages of PET over SPECT include better spatial resolution and superior average sensitivity and specificity (88% and 73%, resp.) [34]. Reduced availability of PET scanners and the variability of FDG uptake are the primary limitations. Many factors, including cardiac output, sympathetic activity, heart failure status, and degree of ischemia, impact FDG uptake and, thus, scan quality [49, 51].

### 3.5. Cardiovascular Magnetic Resonance

Cine cardiovascular magnetic resonance (CMR) sequencing provides information on global left ventricular function and regional wall motion. It can be used in conjunction with dobutamine stress and gadolinium-chelated contrast. Gadolinium-chelated contrast agents have been utilised to detect perfusion deficits, microvascular obstruction, and myocardial scarring. Accumulated contrast agent has a paramagnetic effect, forming bright signal intensities in areas of accumulation. These agents are unable to penetrate cardiac myocytes with intact membranes; however, during the “late” steady-state phase they easily diffuse into and accumulate in extracellular spaces with an increased volume of distribution (e.g., myocardial fibrosis) or regions with ruptured cellular membranes (e.g., acute myocardial infarction) [52].

The transmural extent of scarring is inversely correlated with functional recovery of the dysfunctional myocardium postrevascularisation, whereas the absence of late gadolinium enhancement in a hypokinetic myocardium is associated with functional recovery postrevascularisation [52, 53] (Figure 2).

Figure 2. CMR assessment. Comment: 51-year-old female following an inferior ST segment elevation myocardial infarction. CMR revealed hyperintensity in the midinferior wall on T2-weighted images. There is 100% transmural late gadolinium enhancement of the midinferior wall, indicating nonviability of this region of myocardium. Of note, an area of hypoenhancement is also present in the middle of the hyperenhancement region, indicating microvascular obstruction. There is also late gadolinium enhancement affecting part of the posterior papillary muscle.

Benefits of CMR over alternate imaging modalities include excellent spatial imaging, the ability to discern transmural variations in viability, and accurate quantification of nonviable or necrotic tissue. The ability of CMR to detect scar (nonviable tissue) is robust, with a sensitivity of 83% and specificity of 88% [49, 54]. Primary limitations of CMR include cost, poor availability, and prolonged study periods requiring patient immobility and breath holding.

A summary of trials evaluating the utility of different imaging modalities in viability assessment is shown in Table 1 [29, 55–64].

**Table 1.** Summary of studies evaluating improvement in segmental myocardial function with revascularisation.

| Study | Year | Study design | Setting (center) | Patients (n) | Modality of viability assessment | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|
| Arnese et al. [55] | 1995 | Prospective | Single | 38 | Stress TTE, PET | 74, 89 | 95, 48 |
| Cornel et al. [56] | 1998 | Prospective | Multi | 61 | Stress TTE | 89 | 81 |
| Pagano et al. [57] | 1998 | Prospective | Single | 30 | Stress TTE, PET | 60, 99 | 33, 62 |
| Bax et al. [58] | 1999 | Prospective | Single | 68 | Stress TTE | 89 | 74 |
| Pasquet et al. [59] | 1999 | Prospective | Single | 94 | Stress TTE, PET | 69, 84 | 78, 37 |
| Baer et al. [60] | 2000 | Prospective | Single | 103 | CMR, Stress TOE | 86, 82 | 92, 83 |
| Wiggers et al. [61] | 2000 | Prospective | Single | 46 | PET, Stress TTE | 81, 51 | 56, 89 |
| Cwajg et al. [29] | 2000 | Prospective | Single | 45 | PET, Stress TTE | 91, 94 | 50, 48 |
| Schmidt et al. [62] | 2004 | Prospective | Single | 40 | CMR, PET | 96, 100 | 87, 73 |
| Hanekom et al. [63] | 2005 | Prospective | Single | 55 | SRI, TTE | 78, 73 | 77, 77 |
| Slart et al. [64] | 2006 | Prospective | Single | 47 | DISA SPECT, PET | 89, 90 | 86, 86 |

TTE, transthoracic echocardiography; TOE, transesophageal echocardiography; PET, positron emission tomography; CMR, cardiac magnetic resonance imaging; SRI, strain rate imaging echocardiography; DISA SPECT, dual-isotope simultaneous acquisition (DISA) SPECT. Where two modalities are listed, the paired sensitivity and specificity values are given in the same order.
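Since the comparisons in Table 1 and throughout this section are framed as sensitivity and specificity for predicting functional recovery, the standard definitions are worth restating. This is a generic sketch in which TP, FN, TN, and FP are hypothetical counts of true/false positives/negatives against the recovery outcome, not values taken from the cited studies:

```latex
% Sensitivity: fraction of segments that recovered function that
% the test had prospectively labeled viable.
\mathrm{Sensitivity} = \frac{TP}{TP + FN}

% Specificity: fraction of segments that failed to recover that
% the test had prospectively labeled nonviable.
\mathrm{Specificity} = \frac{TN}{TN + FP}
```

Read this way, a high-sensitivity/low-specificity modality (e.g., the MCE figures quoted earlier) misses few viable segments but labels some nonrecovering segments as viable, whereas the reverse trade-off holds for a high-specificity test.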
## 4. Prognostic Value of Viability Testing

Numerous nonrandomized retrospective studies in the early 1990s evaluated the value of viability testing. A meta-analysis of these trials revealed a significant association between revascularisation and improvement in mortality when viability testing was utilised in patients with known ischaemic cardiomyopathy. This finding held irrespective of the imaging modality chosen [55]. Primary limitations of these studies, however, included a lack of standardisation of, and adherence to, optimal medical therapy during this period, with outcome reviews having been retrospective in nature. Furthermore, medical treatment of cardiac failure has advanced since these studies were conducted, as have techniques of coronary revascularisation. There was significant clinical uncertainty with regard to the impact of viability on survival given the lack of large, adequately powered randomized trials. These questions were largely addressed in the Surgical Treatment for Ischaemic Heart Failure (STICH) trial (2011). The STICH trial was designed to evaluate the impact of coronary artery bypass grafting (CABG) in the management of patients with CAD and reduced LVEF.

## 5. The STICH Trial

In this multicenter (127 clinical sites), nonblinded, randomized trial, 1212 participants were enrolled, with 601 undergoing myocardial viability assessment.
Participants were enrolled on the basis of echocardiographic evidence of LV systolic dysfunction (defined as LVEF ≤ 35%) and coronary angiography revealing CAD amenable to surgical intervention. Myocardial viability assessment was performed via DSE (n=130), SPECT (n=321), or both (n=150). Of the viability subgroup, 298 participants were randomly assigned to receive medical therapy plus surgical revascularisation (cardiac bypass) and 303 received medical management alone. Participants were followed up at intervals (at discharge or at 30 days, every 4 months within the first year, and every 6 months thereafter), with a median follow-up of 56 months (minimum 12 months, maximum 100 months) [56].

Despite an association between viable myocardium and likelihood of survival in this cohort, multivariate analysis did not find a statistically significant mortality benefit with surgical intervention (p=0.21). Furthermore, assessment of myocardial viability did not provide a differential benefit for surgical intervention (p=0.53); that is, viability assessment did not identify participants who would benefit from CABG relative to medical therapy [56].

Secondary endpoints were more forgiving towards revascularisation, with bypass surgery achieving a significant reduction in cardiovascular mortality (28% versus 33%; p=0.05) and in the composite of death from any cause and hospitalization for cardiovascular causes (58% versus 68%; p<0.001). Long-term follow-up (>4 years) of both cohorts revealed a reduction in all-cause mortality in the surgical revascularisation cohort compared with medical therapy alone; however, this finding was not statistically significant (p=0.12). These positive secondary findings should be interpreted with caution given the negative primary outcome measure [56].

This trial was not, however, without its limitations. Firstly, randomization was not performed on the basis of viability, which represents a potential selection bias. Secondly, there was a differential effect on participant profile and viability, with a high proportion of participants (81%) in the viability subgroup having single-vessel disease. Given the scope of the trial (medical therapy versus surgical intervention), this differential profile may have selected participants for whom viability assessment may not have been required. Thirdly, analysis in this study was limited to DSE and SPECT, with no analysis of PET or CMR for viability assessment, which makes it difficult to extrapolate these results to other imaging modalities.

Despite these limitations, this study represents the largest analysis to date of the influence of myocardial viability on clinical endpoints in persons with ischaemic cardiomyopathy, and it was the first to assess the differential effect of viability on revascularisation versus medical management.

## 6. The HEART Trial

The Heart Failure Revascularisation Trial (HEART) (2011) was a multicenter study comparing the efficacy of surgical revascularisation with optimal medical treatment in the management of persons with clinically diagnosed cardiac failure with reduced EF (LVEF < 35%) and evidence of CAD. Participants were screened for viable myocardium via DSE; an inclusion prerequisite was the presence of at least 5 viable LV segments with reduced contractility on a 17-segment model [57]. 138 participants were randomized to the interventional (n=65) and medical (n=69) arms and followed up over a five-year period.
The primary outcome revealed noninferiority of medical therapy. This study was, however, underpowered secondary to a relatively small sample size. Furthermore, the primary modality of viability assessment was DSE, which has a lower sensitivity for viability detection relative to other imaging modalities. Additionally, randomization did not occur prior to viability assessment, clouding the impact of viability assessment on treatment outcomes [57].

## 7. PARR-2 Trial

The PET and Recovery Following Revascularisation-2 (PARR-2) trial (2007) evaluated the efficacy of FDG and perfusion PET imaging in risk stratification and in identifying patients who would benefit most from revascularisation [58]. The study enrolled 430 participants, with inclusion criteria of LVEF < 35% and suspected or confirmed CAD. Participants were randomly assigned to receive FDG and perfusion PET imaging versus standard care (i.e., no FDG imaging). FDG PET-assisted management showed a nonsignificant trend towards a reduction in the predefined composite endpoint (cardiac death, myocardial infarction, or cardiac rehospitalization) at one year (hazard ratio 0.78, 95% CI 0.58 to 1.1; p=0.15), with post hoc analysis showing a statistically significant reduction in adverse events in the FDG PET-assisted group (hazard ratio 0.62, 95% CI 0.42 to 0.93; p=0.019) [58]. The key limitation of the study was poor adherence to the therapeutic strategy, with only 75% of participants treated in accordance with viability imaging.

## 8. Ottawa-FIVE Substudy

The Ottawa-FIVE substudy (of the PARR-2 trial) (2010) evaluated 111 participants with LV systolic dysfunction (specifically, LVEF < 35%) and suspected or confirmed CAD in a single center experienced in FDG PET imaging [59]. A statistically significant reduction in the primary composite endpoint (cardiac death, myocardial infarction, or cardiac rehospitalization) was found in the FDG PET-guided therapy group compared with the standard-therapy arm (19% versus 41%; hazard ratio 0.34, 95% CI 0.16 to 0.72; p=0.005). The results of this substudy illustrate a prognostic benefit of FDG PET viability imaging in ischaemic cardiomyopathy when used in centers with experience in PET imaging [59].

Despite the relatively disappointing results of the aforementioned trials, the 2013 American Heart Association/American College of Cardiology guidelines for the management of heart failure remain unaltered in their Class IIa (Level of Evidence B) recommendation for viability testing in the work-up for revascularisation in patients with ischaemic cardiomyopathy. This is in keeping with the belief that there may still be diagnostic and prognostic benefit in viability studies that has not yet become apparent given the limitations of the primary trials to date.

## 9. Conclusion

Ischaemic LV dysfunction can arise from myocardial stunning, hibernation, or necrosis. In line with technological advances, noninvasive imaging modalities have become front-line methods in the assessment of viable myocardial tissue, with each modality conferring a variable advantage in terms of sensitivity and specificity, toward the overriding goal of accurately stratifying patients into optimal treatment pathways. Despite determined research efforts, however, many questions remain unanswered with regard to myocardial viability.
Initial studies, although favorable, lacked sufficient power and sample size to provide conclusive outcomes of viability assessment. More recent trials, including the STICH and HEART studies, have failed to demonstrate a prognostic benefit of revascularisation therapy over standard medical management in ischaemic cardiomyopathy, although they have their own limitations. In light of these recent findings, assessment of myocardial viability should not be the arbitrating factor for the choice of therapy. Optimization of medical therapy for all patients is paramount, and physicians should feel comfortable, at present, in deferring coronary revascularisation in patients with CAD and reduced LVEF.

It is clear that further trials are needed to better our understanding of the mechanistic underpinnings of the viable myocardium as well as the underlying pathophysiology of ischaemic cardiomyopathy. Newer trials such as the AIMI-HF (Alternative Imaging Modalities in Ischaemic Heart Failure) study, the largest randomized trial to date evaluating the role of imaging in the treatment of ischaemic cardiomyopathy, are currently underway and will hopefully decipher some of these uncertainties [65].

---

*Source: 1020818-2016-05-29.xml*
# Joint Detection of Serum IgM/IgG Antibody Is an Important Key to Clinical Diagnosis of SARS-CoV-2 Infection

**Authors:** Fang Hu; Xiaoling Shang; Meizhou Chen; Changliang Zhang
**Journal:** Canadian Journal of Infectious Diseases and Medical Microbiology (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1020843

---

## Abstract

Background. This study aimed to investigate the application of SARS-CoV-2 IgM and IgG antibodies in the diagnosis of COVID-19 infection. Method. This study enrolled a total of 178 patients at Huangshi Central Hospital from January to February 2020. Among them, 68 patients were SARS-CoV-2 infected, confirmed with nucleic acid testing (NAT) and CT imaging. Nine patients were in the suspected group (NAT negative) with fever and other respiratory symptoms. 101 patients were in the control group, with other diseases and negative for SARS-CoV-2 infection. After serum samples were collected, SARS-CoV-2 IgG and IgM antibodies were tested by chemiluminescence immunoassay (CLIA) for all patients. Results. The specificity of serum IgM and IgG antibodies to SARS-CoV-2 was 99.01% (100/101) and 96.04% (97/101), respectively, and the sensitivity was 88.24% (60/68) and 97.06% (66/68), respectively. The combined detection rate of SARS-CoV-2 IgM and IgG antibodies was 98.53% (67/68). Conclusion. Combined detection of serum SARS-CoV-2 IgM and IgG antibodies had better sensitivity compared with single IgM or IgG antibody testing and can be used as an important diagnostic tool for SARS-CoV-2 infection and a screening tool for potential SARS-CoV-2 carriers in clinics, hospitals, and accredited scientific laboratories.

---

## Body

## 1. Introduction

The novel coronavirus (SARS-CoV-2) is a new virus responsible for an outbreak of respiratory illness known as COVID-19, which is now a global pandemic affecting more than 215 countries [1]. The current standard method for the diagnosis of COVID-19 is detection of viral nucleic acid by RT-PCR [2]. However, real-time PCR detection has some limitations: it is time-consuming, involves complicated operation with specialized equipment, and requires dedicated detection sites, all of which limited its application during the COVID-19 outbreak [3]. Therefore, a simple, sensitive, and accurate test was urgently needed to identify SARS-CoV-2-infected patients in a COVID-19 outbreak area.

Based on diagnostic experience from many clinical cases, detection of antibodies against the novel coronavirus can be used as an auxiliary diagnosis of novel coronavirus pneumonia [4, 5]. After SARS-CoV-2 infection, the body's immune system mounts a response to fight the virus and produces specific antibodies. In general virology, the immunoglobulin M (IgM) antibody, produced early after infection, indicates current or recent infection. Immunoglobulin G (IgG) is also an important antibody produced by the immune system, indicating that the disease is in its middle-to-late stage or that infection occurred in the past. Therefore, combined detection of IgM and IgG can be used not only in the early diagnosis of infectious diseases but also in assessing the stage of infection.

Chemiluminescence immunoassay (CLIA) has been developed as an effective combination of immunoassay and chemiluminescence systems [6], and it has recently been used in SARS-CoV-2 diagnosis. In this study, we used a CLIA test product that can detect IgM and IgG in human serum within 30 minutes.
Our aim was to investigate the clinical value of CLIA for the diagnosis of SARS-CoV-2 infection. This CLIA method demonstrated good sensitivity and specificity in our study and can be used not only in hospitals and accredited laboratories but also in airports, border ports, seaports, and train stations. This CLIA method has the potential to be a powerful weapon against the COVID-19 pandemic.

## 2. Materials and Methods

### 2.1. Patients

This study enrolled a total of 178 patients who visited Huangshi Central Hospital in Hubei Province, China, between January and February 2020. The patients included 91 males (51.1%) and 87 females (48.9%) with a mean age of 54.3 years (range, 2 months to 94 years). Among them, the SARS-CoV-2 group had 68 patients, 36 males and 32 females (range, 30 to 90 years); the suspected group had 9 patients, 7 males and 2 females (range, 2 months to 64 years); and the negative group had 101 patients, 48 males and 53 females (range, 2 to 94 years). This study is in compliance with ICC clinical trial specifications and the Declaration of Helsinki.

### 2.2. Serologic Tests

Serum was collected from all patients. Serum SARS-CoV-2 IgG and IgM were tested with CLIA kits on the iFlash 3000 fully automated CLIA analyzer obtained from Shenzhen YHLO Biotech Co., Ltd (China). In brief, serum was separated by centrifugation at 2500 g for 5 min within 12 hours of collection. The magnetic beads of these CLIA assays are coated with two antigens of SARS-CoV-2 (nucleocapsid protein (N protein) and spike protein (S protein)). SARS-CoV-2 IgM/IgG titers (in arbitrary units, AU/ml) were calculated automatically by the CLIA analyzer based on relative light units (RLU); the viral antibody titer is positively associated with RLU. The cutoff value for a positive result is 10 AU/ml for both SARS-CoV-2 IgM and IgG.

### 2.3. SARS-CoV-2 Nucleic Acid Test

RT-PCR was used to detect the open reading frame 1ab (ORF1ab) and nucleocapsid protein (N) genes in the SARS-CoV-2 genome. Ct value interpretation of test results is based on the manufacturer's instructions. Confirmation of COVID-19 positivity is based on at least one target-specific RT-PCR-positive result for the ORF1ab and N genes of SARS-CoV-2 in the same specimen.

### 2.4. Data Analysis

Statistical analysis was performed using SPSS 19.0 statistical software (IBM SPSS, Chicago, IL, USA). The kappa coefficient was calculated: kappa ≥ 0.75 indicates good consistency, 0.75 ≥ kappa > 0.4 medium consistency, and kappa < 0.4 poor consistency. The specificity and sensitivity of the CLIA test kits were calculated according to the following equations:

$$\text{sensitivity} = 100\% \times \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}, \qquad \text{specificity} = 100\% \times \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}}. \tag{1}$$
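To make equation (1) concrete, here is a minimal sketch (ours, not the authors' SPSS workflow) that reproduces the headline figures from the counts reported in Sections 3.1 and 3.2.

```python
# Minimal sketch of equation (1), checked against the counts reported in
# Sections 3.1 and 3.2 of this study.
def sensitivity(tp: int, fn: int) -> float:
    return 100.0 * tp / (tp + fn)   # sensitivity (%) = 100 * TP / (TP + FN)

def specificity(tn: int, fp: int) -> float:
    return 100.0 * tn / (tn + fp)   # specificity (%) = 100 * TN / (TN + FP)

print(f"IgM sensitivity: {sensitivity(60, 8):.2f}%")    # 88.24% (60/68)
print(f"IgG sensitivity: {sensitivity(66, 2):.2f}%")    # 97.06% (66/68)
print(f"IgM specificity: {specificity(100, 1):.2f}%")   # 99.01% (100/101)
print(f"IgG specificity: {specificity(97, 4):.2f}%")    # 96.04% (97/101)
```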
## 3. Results

### 3.1. Specificity of SARS-CoV-2 IgG/IgM Antibody Test

Samples from both NAT-negative patients (suspected group, 9 subjects) and patients with other diseases (control group, 101 subjects) were used to assess the clinical specificity of the assay (Table 1). Among the 101 patients in the control group, 100 were negative for SARS-CoV-2 IgM antibody, giving a clinical specificity of 99.01% (100/101), and 97 were negative for SARS-CoV-2 IgG antibody, giving a clinical specificity of 96.04% (97/101). In the suspected group, all 9 patients had negative antibody test results. The false-positive results for SARS-CoV-2 IgM and IgG antibodies may be caused by auto-antibodies, heterophilic antibodies, and other factors.

Table 1 Clinical specificity of SARS-CoV-2 IgM and SARS-CoV-2 IgG.

| Group | Number of samples | IgM N | IgM P | IgM Clin Spe (%) | IgM 95% CI | IgG N | IgG P | IgG Clin Spe (%) | IgG 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| Suspected group | 9 | 9 | 0 | 100.00 | (70.1%, 100.0%) | 9 | 0 | 100.00 | (70.1%, 100.0%) |
| Control group | 101 | 100 | 1 | 99.01 | (94.6%, 99.8%) | 97 | 4 | 96.04 | (90.3%, 98.4%) |

N: negative; P: positive; Clin Spe: clinical specificity.

### 3.2. Detection Sensitivity of SARS-CoV-2 IgG/IgM Antibody Test

Samples from 68 SARS-CoV-2-infected patients (confirmed with RT-PCR) were used to evaluate the clinical sensitivity of the assays (Table 2). We analyzed the clinical sensitivity of both SARS-CoV-2 IgM and IgG antibodies over three time periods since the onset of symptoms: before 7 days, 7–14 days, and after 14 days. Over these periods, SARS-CoV-2 IgM demonstrated clinical sensitivities of 75.00%, 88.00%, and 93.55%, respectively, and SARS-CoV-2 IgG demonstrated 83.33%, 100.00%, and 100.00%, respectively. The total clinical sensitivity of SARS-CoV-2 IgM and IgG for SARS-CoV-2 infection was 88.24% (60/68) and 97.06% (66/68), respectively.

Table 2 Clinical sensitivity of SARS-CoV-2 IgM and SARS-CoV-2 IgG.

| Days | Number of samples | IgM N | IgM P | IgM Clin Sen (%) | IgM 95% CI | IgG N | IgG P | IgG Clin Sen (%) | IgG 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| <7 days | 12 | 3 | 9 | 75.00 | (46.8%, 91.1%) | 2 | 10 | 83.33 | (55.2%, 95.3%) |
| 7–14 days | 25 | 3 | 22 | 88.00 | (70.0%, 95.8%) | 0 | 25 | 100.00 | (86.7%, 100.0%) |
| >14 days | 31 | 2 | 29 | 93.55 | (79.3%, 98.2%) | 0 | 31 | 100.00 | (89.0%, 100.0%) |
| Total | 68 | 8 | 60 | 88.24 | (78.5%, 93.9%) | 2 | 66 | 97.06 | (89.9%, 99.2%) |

Days: days since the onset of symptoms; N: negative; P: positive; Clin Sen: clinical sensitivity.
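The text does not state how the 95% CIs in Tables 1 and 2 were computed; they are numerically consistent with Wilson score intervals (e.g., 60/68 gives (78.5%, 93.9%), as in Table 2), so here is a short sketch under that assumption.

```python
import math

# Wilson score interval for a binomial proportion k/n; this method is our
# assumption -- the paper does not name its CI procedure -- but it matches
# the intervals printed in Tables 1 and 2.
def wilson_ci_95(k: int, n: int) -> tuple[float, float]:
    z = 1.959964  # two-sided 95% normal quantile
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return 100 * (centre - half), 100 * (centre + half)

print(wilson_ci_95(60, 68))    # ~ (78.5, 93.9): total IgM sensitivity, Table 2
print(wilson_ci_95(100, 101))  # ~ (94.6, 99.8): control IgM specificity, Table 1
```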
### 3.3. Comparison of SARS-CoV-2 Antibody Test and SARS-CoV-2 Nucleic Acid Test (NAT)

The comparison between the SARS-CoV-2 IgM/IgG antibody test and NAT for the 178 patients is shown in Table 3. The positive predictive value of SARS-CoV-2 IgM/IgG antibody detection was 93.06% (67/72), and the negative predictive value was 99.06% (105/106). The positive predictive value of the NAT for SARS-CoV-2 was 100% (68/68), and the negative predictive value was 96.36% (106/110).

Table 3 Comparison of SARS-CoV-2 IgM/IgG antibody detection and SARS-CoV-2 nucleic acid detection.

| Nucleic acid | Antibody positive | Antibody negative | Total | Predictive value of NAT (%) |
|---|---|---|---|---|
| Positive | 67 | 1 | 68 | 100.00 (positive) |
| Negative | 5 | 105 | 110 | 96.36 (negative) |
| Total | 72 | 106 | 178 | |

Positive predictive value of the antibody test: 93.06%; negative predictive value of the antibody test: 99.06%.
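As a worked example (ours, not the authors' code), the following sketch rederives the predictive values from the Table 3 counts, computes the Cohen's kappa that Section 2.4 mentions (the paper does not report its value; these counts give about 0.93, i.e., "good" consistency), and shows via Bayes' rule how the predictive values shift with prevalence, a caveat raised in the Discussion.

```python
# 2x2 counts from Table 3 (antibody test vs NAT, n = 178).
tp, fn = 67, 1     # NAT-positive patients: antibody-positive / -negative
fp, tn = 5, 105    # NAT-negative patients: antibody-positive / -negative
n = tp + fn + fp + tn                      # 178

ppv = tp / (tp + fp)                       # 67/72   -> 93.06%
npv = tn / (tn + fn)                       # 105/106 -> 99.06%

# Cohen's kappa between the two tests (value not reported in the paper;
# computed here from the published counts).
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (po - pe) / (1 - pe)               # ~ 0.93

# Predictive values at another prevalence, via Bayes' rule.
sens, spec = tp / (tp + fn), tn / (tn + fp)
def ppv_at(prev: float) -> float:
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(f"PPV {100*ppv:.2f}%  NPV {100*npv:.2f}%  kappa {kappa:.2f}  "
      f"PPV at 1% prevalence {100*ppv_at(0.01):.1f}%")
```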
## 4. Discussion

Novel coronavirus disease is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [7]. SARS-CoV-2 belongs to the subfamily Coronavirinae (genus Betacoronavirus), and its genome is a single-stranded positive-sense RNA [8]. Distinct from MERS-CoV and SARS-CoV, SARS-CoV-2 is the seventh member of the family of coronaviruses that infect humans [9]. The disease (COVID-19) has been spreading rapidly for the past five months and has now been found in more than 215 countries. By June 19, 2020, over 8,300,000 confirmed COVID-19 patients had been reported, with more than 450,000 deaths. Currently, the SARS-CoV-2 NAT is the routine confirmation test for the clinical diagnosis of COVID-19 [2]. However, not all clinical COVID-19 patients have positive SARS-CoV-2 NAT results. Reasons for false-negative NAT results in COVID-19 include sample collection and storage, the condition of the NAT laboratory, and the quality of the test kit [10]. Therefore, nucleic acid detection, CT imaging, routine blood examination, and other methods can be used together for the comprehensive diagnosis of COVID-19.

Since February 2020, several SARS-CoV-2 IgM and IgG antibody immunoassay kits have been developed in China. Antibody detection is a new detection method for SARS-CoV-2, so the clinical specificity and sensitivity of such tests must be carefully validated [11]. Our study demonstrated that the SARS-CoV-2 IgM and IgG CLIA kits (YHLO Biotech, Shenzhen, China) had high clinical specificity, reaching 99.01% and 96.04%, respectively. Therefore, SARS-CoV-2 IgM and IgG antibody detection reagents have high clinical specificity and can meet the screening and diagnosis requirements for SARS-CoV-2.

In COVID-19 cases, the clinical sensitivity of SARS-CoV-2 IgM detection was 88.24%, while the clinical sensitivity of SARS-CoV-2 IgG detection was 97.06%. The CLIA system can simultaneously process 150–300 clinical samples, making it a good tool for the screening and diagnosis of the novel coronavirus pneumonia caused by SARS-CoV-2. Our results showed that the combined detection of SARS-CoV-2 IgM and IgG antibodies is an effective tool to improve diagnostic sensitivity and specificity and to reduce the impact of false-negative NAT results. We demonstrate that antibody detection can be used as one of the effective methods of clinical COVID-19 detection.

In the NAT-confirmed group, serum from 68 COVID-19 cases was tested for SARS-CoV-2 IgM and IgG. SARS-CoV-2 IgM antibodies were detected in 75.00% of patients within 7 days of symptom onset, and the positive rate reached 88.00% at 7–14 days and then increased to 93.55% after 14 days.
The positive rate of SARS-CoV-2 IgG was 83.33% within 7 days of symptom onset, reached 100.00% at 7–14 days, and remained 100% after 14 days. In general, the immune response to infection by pathogenic microorganisms is first expressed as a rise in the IgM antibody titer, followed by a rapid decrease until it disappears, while the IgG antibody titer normally rises in the middle and late stages of infection and can remain positive for a long time, even after recovery. According to the results of this study, the positive rate of IgM in SARS-CoV-2-infected patients is lower than that of IgG because most of the infected patients were in the middle stage of infection or in the recovery stage. Interestingly, we observed that SARS-CoV-2 IgM and IgG antibodies developed almost simultaneously, an observation consistent with some recent studies [12]. Further studies are needed to verify this phenomenon in the diagnosis and prognosis of COVID-19.

We found false-negative IgM/IgG results in the NAT-positive group. There may be three reasons. First, false-negative results may be due to low antibody titer: when IgM and IgG titers are below the detection limit, the test result will be negative. Second, differences in individual immune response and antibody production could explain false-negative results in COVID-19 patients. Third, the IgM antibody may decrease or even disappear after 15 days; in each individual case it is difficult to know exactly when, or for how long, the patient has been infected, and some patients may have an IgM titer below the detection limit. In the joint detection of SARS-CoV-2 IgM and IgG, there was only one negative patient (male, 77 years old), who had respiratory failure, chronic obstructive pulmonary disease, coronary atherosclerosis, acute myocardial infarction, and heart failure in addition to SARS-CoV-2 infection. In the control group, 5 cases were positive on antibody detection (1 case for IgM and 4 cases for IgG). These results suggest that patients with other diseases, including tumors, leukemia, diabetes, hypertension, coronary atherosclerosis, bronchitis, or lung infections, might be more susceptible to SARS-CoV-2 infection, leading to positive antibody detection; there might also be false-negative nucleic acid results or recovered/mild/asymptomatic SARS-CoV-2 patients among them. In addition, it is well known that positive and negative predictive values are not only intrinsic to the test but also depend on the prevalence [13]. Therefore, the predictive values shown in Table 3 are only valid for the sample used in this study, not for other facilities or for the general population. All of these cases will provide a valuable reference for follow-up study and the clinical diagnosis of COVID-19.

Our study also has some limitations. For example, we did not investigate cross-reaction with other pathogens (e.g., hCoV-NL63, MERS-CoV, SARS-CoV, or others) or with auto-antibodies that could interfere with the immunoassay. Also, we did not perform dynamic monitoring of changes in antibody titer for an in-depth study.

## 5. Conclusions

Overall, testing SARS-CoV-2 IgG and IgM by the CLIA method is convenient for sampling and highly efficient. The results of this study indicate that combined detection of serum IgM and IgG antibodies to SARS-CoV-2 has better sensitivity and specificity compared with single IgM or IgG antibody testing.
Therefore, the serological test results can be used as an effective diagnostic tool for SARS-CoV-2 infection. They can also serve as an efficient supplement to RNA detection for confirmation of SARS-CoV-2 infection in clinics, hospitals, and accredited scientific laboratories.

---

*Source: 1020843-2020-09-24.xml*
# Increased CCR7loPD-1hiCXCR5+CD4+ T Cells in Peripheral Blood Mononuclear Cells Are Correlated with Immune Activation in Patients with Chronic HBV Infection

**Authors:** Ya-Xin Huang; Qi-Yi Zhao; Li-Li Wu; Dong-Ying Xie; Zhi-Liang Gao; Hong Deng
**Journal:** Canadian Journal of Gastroenterology and Hepatology (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1020925

---

## Abstract

T follicular helper cells (Tfh cells) play an essential role in the immune pathogenesis of chronic hepatitis B virus (HBV) infection. The CCR7loPD-1hi Tfh subset has a partial Tfh effector phenotype and is associated with active Tfh differentiation, whereas the CCR7hiPD-1lo Tfh subset has a resting phenotype. We recruited 20 healthy volunteers and 77 patients with chronic HBV infection, including those in the immune tolerant (IT) phase (n=19), immune clearance (IC) phase (n=20), low replicative (LR) phase (n=18), and reactivation (RA) phase (n=20). The expression of CD4, CXCR5, PD-1, and CCR7 on T cells from peripheral blood was detected by flow cytometry. The frequency of the CCR7loPD-1hi T subset was significantly higher in the patients than in the healthy controls (14.92±4.87% vs 12.23±2.95%, p=0.018). The frequency of this Tfh subset in the IC group (18.42±3.08%) was increased compared with the IT group (11.94±2.87%, p=0.001) and LR group (13.65±4.93%, p=0.031) and was higher in the RA group than in the IT group (16.03±5.37% vs 11.94±2.87%, p=0.030). We observed a weak positive correlation between the CCR7loPD-1hi Tfh subset frequency and the alanine transaminase (ALT) level (r=0.370, p=0.001). The CCR7loPD-1hi Tfh subset in the chronic HBV-infected patients was elevated to various degrees among the different immune phases. CCR7loPD-1hiCXCR5+CD4+ T cells are correlated with the immune status of patients with chronic HBV infection and may be developed as a potential indicator for antiviral treatment.

---

## Body

## 1. Introduction

HBV infection remains among the most serious issues in global public health despite extensive vaccination and effective antiviral treatments. A total of 250 million people suffer from chronic hepatitis B virus (HBV) infection worldwide, most of whom live in Africa and Asia [1, 2]. HBV-associated diseases, such as liver failure, cirrhosis, and hepatocellular carcinoma, contribute to the deaths of 1 million people per year [3].

Our understanding of the natural history of HBV infection and the resultant disease is continuously improving. Complex interactions between the virus and the host immune system participate in disease progression, allowing for HBV penetration into host cells, establishment of persistence, and chronization of HBV infection, or complete elimination of the virus [4, 5]. Although various clinical and experimental investigations have helped diagnose, treat, and prevent hepatitis B, the exact mechanism underlying the host immune reactions remains unclear.

According to the complex interactions between the virus, hepatocytes, and the host immune system, the natural course of chronic HBV infection is usually stratified into 4 phases: the immune tolerant (IT) phase, the immune clearance (IC) phase, the low replicative (LR) phase, and the reactivation (RA) phase [6].

Some HBV proteins can modulate immunity and enable immune escape. In the course of the disease, a better prognosis can be achieved if HBeAg seroconversion occurs early.
The prevalence of cirrhosis and hepatocellular carcinoma declines in patients who seroconvert early. In addition, HBsAg loss and/or seroconversion is considered the ideal goal of treatment and a milestone of effective treatment response in both HBeAg-positive and HBeAg-negative patients [7]. The production of antibodies plays an indispensable role in both HBeAg and HBsAg seroconversion [8]. Circulating CXCR5+CD4+ T cells, the counterpart of T follicular helper (Tfh) cells in the peripheral blood, have been reported to play a significant role in accelerating HBeAg seroconversion in chronic HBV-infected patients [9].

Tfh cells are considered to be a subset of CD4+ T cells in secondary lymphoid tissues that express CXC-chemokine receptor 5 (CXCR5), which helps Tfh cells localize to B cell follicles. Studies have reported that CXCR5+CD4+ T cells are more efficient than CXCR5−CD4+ T cells in inducing B cells to secrete antibodies and switch antibody classes [10–12]. Tfh cells coexpress programmed cell death protein 1 (PD-1) and inducible T cell co-stimulator (ICOS) and downregulate CC-chemokine receptor 7 (CCR7) [13–15]. Several investigations have found elevated frequencies of circulating CXCR5+CD4+ T cells in patients with autoimmune diseases (such as systemic lupus erythematosus (SLE) and Sjogren's syndrome) [16, 17] and infectious diseases (such as hepatitis B and C) [18, 19]. However, He J et al. found no increase in the frequency of circulating CXCR5+CD4+ T cells in SLE patients [20], which is inconsistent with previous investigations. In addition, one study showed no difference in circulating CXCR5+CD4+ T cell frequency between healthy controls and HCV patients; interestingly, this study also found that CXCR5+CD4+ T cells were efficient in supporting B cell responses [21]. Based on current evidence, there is no clear correlation between the activity of CXCR5+CD4+ T cells and their frequency in peripheral blood.

Tfh cells comprise various subsets with different phenotypes and functions [22]. He J et al. reported that CCR7loPD-1hiCXCR5+CD4+ T cells have a partial Tfh effector phenotype exhibiting active Tfh differentiation in lymphoid tissues, whereas the CCR7hiPD-1lo Tfh subset has a resting phenotype [20]. Studies in mice found that CXCR5hiPD-1hi germinal center Tfh cells likely downregulate CXCR5, PD-1, and BCL-6, re-express CCR7, IL-7Rα, and CD62L, and thus differentiate into memory cells that persist for a long time [22, 23]. IL-7 possibly increases the level of Tfh cells in patients with chronic hepatitis B [24]. Studies of cystic echinococcosis also reported that CCR7loPD-1hiCXCR5+CD4+ T cells were increased in patients [25].

The CCR7loPD-1hi and CCR7hiPD-1lo Tfh subsets in the peripheral blood have not been comprehensively investigated during the complex immunologic progression of chronic HBV infection. We hypothesize that these two Tfh subsets play a larger role in the immune response to chronic HBV infection than the total Tfh population, which contains multifarious subsets. The objective of this study was to measure the frequencies of CCR7loPD-1hiCXCR5+CD4+ T cells and CCR7hiPD-1loCXCR5+CD4+ T cells in peripheral blood mononuclear cells (PBMCs) from patients with chronic HBV infection and compare these frequencies to those in non-HBV-infected controls.
Furthermore, the correlations between the frequencies of the two subsets and the alanine transaminase (ALT) level, a marker of the liver injury that accompanies the host response to HBV replication, and the HBsAg level were evaluated. These findings provide new insights into the correlation between the frequencies of the two CXCR5+CD4+ T cell subsets and the immune reaction in chronic HBV infection.

## 2. Materials and Methods

### 2.1. Patients and Controls

A total of 77 patients with chronic HBV infection were recruited from the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) for this cross-sectional study. These patients had been HBsAg-seropositive for longer than 6 months. The patients were divided into immune tolerant (IT) phase (n=19), immune clearance (IC) phase (n=20), low replicative (LR) phase (n=18), and reactivation (RA) phase (n=20) groups according to the Asian Pacific Association for the Study of the Liver guidelines [6]. In addition, 20 healthy individuals were enrolled from the physical examination center. All healthy individuals were free of HBV, HCV, and HIV infection and had normal ALT and aspartate aminotransferase (AST) levels.

The exclusion criteria for this study included coinfection with hepatitis virus A, C, D, or E or with HIV. Patients with autoimmune diseases, drug-induced liver injury, decompensated or compensated cirrhosis, malignant comorbidities within the prior 5 years, or previous antiviral or immunomodulatory drug treatment were also excluded.

This study was approved by the Human Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) and conducted in accordance with the Declaration of Helsinki. All subjects provided written informed consent before the blood samples were collected.

### 2.2. Peripheral Blood Mononuclear Cell Separation

Peripheral venous blood samples were collected from all subjects into 5 mL tubes containing EDTA as the anticoagulant. Within 4 hours of collection, the PBMCs were separated from the samples by Ficoll separation (Axis-Shield PoC AS, Oslo, Norway). Approximately 5 × 10^6 PBMCs were collected from each sample and frozen at -80°C until analysis.

### 2.3. Analysis of Cell Surface Molecule Expression by Flow Cytometry

The cells were thawed and incubated at 37°C and 5% CO2 in RPMI-1640 with 10% FCS (cell culture medium) for 4 hours. Then, the cells were stained with anti-CD3 FITC (clone: SK7), anti-CD4 eFluor® (clone: OKT4), anti-CXCR5 APC (clone: MU5UBEE), anti-PD-1 PE-Cy7 (clone: J105), anti-CCR7 PE (clone: 3D12), and isotype control antibodies (all from eBioscience, San Diego, CA, USA). The cells were washed, and marker expression was detected by flow cytometry (Gallios, Beckman Coulter, Inc., CA, USA). The samples were analyzed within 4 hours of staining. The data were analyzed using FlowJo 10.0 (Tree Star Inc., Ashland, OR, USA).

### 2.4. Laboratory Indices

The following indices were quantified by Elecsys assays (Roche Diagnostics GmbH, Mannheim, Germany) with the noted reference ranges: HBsAb, 0-10 IU/L; HBeAg, <1.0 cut-off index (COI); HBeAb, >1.0 COI; and HBcAb, >1.0 COI. The HBsAg titers were quantified using Elecsys HBsAg II Quant reagent kits (Roche Diagnostics, Indianapolis, IN, USA). The detection limit of the kit was 20 IU/mL.
The HBV-DNA levels were quantified by real-time quantitative polymerase chain reaction (Daan Gene, Guangzhou, China). The detection limit of the assay was 100 IU/mL. The biochemical indices were measured using an automatic biochemical analyzer (HITACHI 7180, Tokyo, Japan). The reference ranges for ALT and AST were 3-35 U/L and 13-35 U/L, respectively.

### 2.5. Statistical Analysis

All statistical analyses were performed using SPSS 24.0 software for Windows (SPSS Inc., Chicago, IL, USA). The data are presented as the median (minimum, maximum) for age, ALT, AST, HBV DNA, and HBsAg, and as the mean ± standard deviation for cell frequencies. Multiple comparisons were performed using nonparametric Kruskal-Wallis tests, with Bonferroni correction for the pairwise subanalyses. Statistical significance between two groups was determined with the Mann–Whitney U test. The correlation between the CCR7^lo PD-1^hi CXCR5+CD4+ T cell frequency and clinical parameters was examined using Spearman's rank correlation. All statistical tests were two-tailed. Differences were considered statistically significant at p<0.050.
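The same analysis pipeline can be reproduced outside SPSS. The following minimal Python sketch applies the Kruskal-Wallis test across the four immune-phase groups, pairwise Mann–Whitney U tests with Bonferroni correction, and Spearman's rank correlation against ALT; the group names mirror the study design, but the numeric values are placeholders, not study data:

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-patient frequencies (%) of the CCR7^lo PD-1^hi subset
# among CXCR5+CD4+ T cells, grouped by immune phase (placeholder values).
groups = {
    "IT": [11.2, 9.8, 13.5, 12.0],
    "IC": [18.1, 19.4, 17.2, 18.9],
    "LR": [13.0, 14.8, 12.1, 14.2],
    "RA": [15.5, 17.0, 16.3, 14.9],
}

# Omnibus nonparametric comparison across the 4 phases.
h_stat, p_omnibus = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h_stat:.3f}, p={p_omnibus:.4f}")

# Pairwise Mann-Whitney U tests; Bonferroni correction multiplies each
# raw p-value by the number of comparisons (capped at 1).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{a} vs {b}: raw p={p_raw:.4f}, adjusted p={p_adj:.4f}")

# Spearman's rank correlation between subset frequency and ALT (placeholders).
freq = [14.9, 11.2, 18.4, 16.0, 12.5]
alt = [75.0, 27.0, 293.0, 332.0, 25.0]
rho, p_corr = stats.spearmanr(freq, alt)
print(f"Spearman: r={rho:.3f}, p={p_corr:.4f}")
```

Note that the Kruskal-Wallis and Mann–Whitney tests operate on ranks, which matches the skewed, non-normal distribution of clinical indices such as ALT reported in this study.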
## 3. Results

### 3.1. Study Subjects' Characteristics

The subjects in this study included 77 treatment-naive patients who had been HBsAg-positive for longer than 6 months and 20 healthy volunteers with normal ALT and AST (Table 1). According to the immune phases, the 77 HBV-infected patients were further classified as follows: IT phase (n=19), IC phase (n=20), LR phase (n=18), and RA phase (n=20) (Table 2).

Table 1. Clinical characteristics of the patients with chronic HBV infection and healthy controls.

| | Patients | HC | p-value |
| --- | --- | --- | --- |
| Subjects, n | 77 | 20 | / |
| Gender, males/females | 45/32 | 10/10 | 0.499 |
| Age, y | 35 (18-71) | 27 (18-52) | 0.053 |
| ALT, IU/L | 75 (15-942) | 15 (7-31) | <0.001 |
| AST, IU/L | 51 (13-915) | 14.5 (7-34) | <0.001 |
| HBV DNA, log10 IU/mL | 6.72 (0-8.38) | NA | / |
| HBsAg, log10 IU/mL | 3.50 (1.30-4.72) | NA | / |
| HBeAg, positive/negative, n | 39/38 | 0/20 | <0.001 |
| ALT, elevated/normal, n | 40/37 | 0/20 | <0.001 |

Abbreviations. ALT: alanine transaminase; AST: aspartate aminotransferase; HBeAg: hepatitis B e antigen; HBsAg: hepatitis B s antigen; HBV: hepatitis B virus; HC: healthy controls; NA: not applicable. (a) Values are expressed as the median (minimum-maximum) for age, ALT, AST, and HBV DNA. (b) HBV DNA <100 IU/mL was treated as 0 log10 IU/mL.

Table 2. Clinical characteristics of the 4 subgroups of patients with chronic HBV infection.
| | IT | IC | LR | RA | p-value |
| --- | --- | --- | --- | --- | --- |
| Subjects, n | 19 | 20 | 18 | 20 | / |
| Gender, males/females | 10/9 | 12/8 | 11/7 | 12/8 | 0.950 |
| Age, y | 29 (18-52) | 29 (18-56) | 41.5 (18-71) | 42 (24-65) | 0.002 |
| ALT, IU/L | 27 (16-38) | 293 (85-939) | 25.5 (15-34) | 332 (57-942) | <0.001 |
| AST, IU/L | 28 (13-38) | 129 (51-491) | 22 (19-34) | 177.5 (39-915) | <0.001 |
| HBV DNA, log10 IU/mL | 7.87 (6.45-8.38) | 7.76 (3.72-8.23) | 2.36 (0-3.20) | 5.51 (2.00-8.23) | <0.001 |
| HBsAg, log10 IU/mL | 4.557 (3.495-4.54) | 3.85 (2.85-4.60) | 2.69 (1.35-3.53) | 3.32 (1.30-4.61) | <0.001 |
| HBeAg, positive/negative, n | 19/0 | 20/0 | 0/18 | 0/20 | <0.001 |
| ALT, elevated/normal, n | 0/19 | 20/0 | 0/18 | 20/0 | <0.001 |

(a) Values are expressed as the median (minimum-maximum) for age, ALT, AST, and HBV DNA. (b) HBV DNA <100 IU/mL was treated as 0 log10 IU/mL. (c) HBsAg <20 IU/mL was treated as 20 IU/mL.

### 3.2. Frequencies of Circulating CXCR5+CD4+ T Cells and Subsets in Peripheral Blood Mononuclear Cells

The frequency of circulating CXCR5+CD4+ T cells in the PBMC samples was measured by flow cytometry (Figure 1). The CXCR5+CD4+ T cell frequency in the patients with chronic HBV infection was higher than that in the non-HBV-infected individuals, but not significantly (20.01±6.76% vs 19.26±3.93%, p=0.705, Figure 2(a)). Nevertheless, the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was significantly higher in the patients than in the healthy controls (14.92±4.87% vs 12.23±2.95%, p=0.018, Figure 2(b)). In addition, the frequency of CCR7^hi PD-1^lo CXCR5+CD4+ T cells was lower in the patients with chronic HBV infection, but not significantly (p=0.715).

Figure 1. Gating strategy and representative dot plots of CCR7 and PD-1 expression on CXCR5+CD4+ T cells from chronic HBV-infected patients. (a) Living lymphocyte gate. (b) CD4+CD3+ cell gate. (c) CXCR5+CD4+CD3+ cell gate. (d) CCR7^lo PD-1^hi and CCR7^hi PD-1^lo Tfh cell gates, set according to the isotype controls (e). At least 100,000 events were analyzed in each sample.

Figure 2. Frequencies of CXCR5+CD4+ T cells and CCR7^lo PD-1^hi CXCR5+CD4+ T cells in chronic HBV patients (n=77) and healthy controls (n=20). (a) Frequency of CXCR5+CD4+ cells among all CD4+CD3+ cells. (b) Frequency of CCR7^lo PD-1^hi CXCR5+CD4+ cells among all CXCR5+CD4+CD3+ cells. Horizontal lines show the median.

We further investigated the association among the frequencies of the CXCR5+CD4+ T cells, the CCR7^lo PD-1^hi Tfh subset, and HBV by stratifying the patients according to their immune status (IT, IC, LR, or RA). Based on the Kruskal-Wallis tests, although no significant difference was observed in the frequency of CXCR5+CD4+ T cells among the 4 groups (p=0.885), the CCR7^lo PD-1^hi Tfh subset differed among the groups (p<0.001). After Bonferroni correction, we found that the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was higher in the IC group (18.42±3.08%) than in the IT group (11.94±2.87%, p=0.001) and the LR group (13.65±4.93%, p=0.031). In addition, the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was higher in the RA group than in the IT group (16.03±5.37% vs 11.94±2.87%, p=0.030, Figure 3(a)). Although the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells in the IT group was lower than that in the LR group, the difference was not significant (11.941±2.868% vs 13.648±4.930%, p=0.169) (Figure 3(b)).

Figure 3. Frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells among all CXCR5+CD4+CD3+ cells in the patients with chronic HBV infection, and comparison between the IT and LR groups.
(a) The differences between the IT group (n=19) and IC group (n=20), between the IT group and RA group (n=20), and between the IC group and LR group (n=18) were significant. Statistical comparisons were performed with Bonferroni correction. (b) The difference between the IT group (n=19) and LR group (n=18) was not significant (p=0.169). The horizontal lines show the median.

The frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was also compared between subjects with raised ALT and those with normal ALT; it was significantly higher in the raised ALT group (16.91±4.77% vs 12.58±3.68%, p<0.001, Figure 4).

Figure 4. Frequencies of CCR7^lo PD-1^hi CXCR5+CD4+ T cells among all CXCR5+CD4+CD3+ cells in subjects with normal ALT and raised ALT. The difference between the two groups was significant (p<0.001). The horizontal lines show the median.

### 3.3. Correlation between the Two Tfh Cell Subsets and Clinical Parameters of the Chronic HBV-Infected Patients

The correlations of the CCR7^lo PD-1^hi CXCR5+CD4+ T cell and CCR7^hi PD-1^lo CXCR5+CD4+ T cell populations in the PBMCs with the patients' ALT, HBV DNA load, HBsAg level, age, and gender were investigated. Based on Spearman's rank correlation analysis, there was a positive correlation between the CCR7^lo PD-1^hi CXCR5+CD4+ T cell population and the ALT level (r=0.370, p=0.001, Figure 5(a)); however, this correlation was weak. In addition, no correlation was observed between the frequency of CCR7^hi PD-1^lo CXCR5+CD4+ T cells and the ALT level (r=-0.143, p>0.050). Furthermore, neither CCR7^lo PD-1^hi CXCR5+CD4+ T cells (r=-0.028, p>0.050, Figure 5(b)) nor CCR7^hi PD-1^lo CXCR5+CD4+ T cells (r=-0.160, p>0.050) were correlated with the HBV DNA load. Neither the CCR7^lo PD-1^hi (r=0.008, p>0.050, Figure 5(c)) nor the CCR7^hi PD-1^lo Tfh subset (p>0.050) was correlated with HBsAg.

Figure 5. Correlations between the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells and clinical indices. (a) Correlation between the frequency of CCR7^lo PD-1^hi Tfh cells and the ALT level (r=0.370, p=0.001). (b) Correlation between the frequency of CCR7^lo PD-1^hi Tfh cells and the HBV DNA load (p>0.05). (c) Correlation between the frequency of CCR7^lo PD-1^hi Tfh cells and HBsAg (p>0.05). Statistical comparisons were performed using Spearman's rank correlation analysis.

Although a negative correlation was observed between the CCR7^lo PD-1^hi CXCR5+CD4+ T cell frequency and age in the patients with chronic HBV infection (r=-0.264, p=0.020), no evidence was found of a correlation between the CCR7^hi PD-1^lo CXCR5+CD4+ T cell frequency and age among the patients (r=0.182, p=0.114). Further analysis showed a significant difference in age among the patient subgroups (p=0.002), with younger subjects more likely to be in the IT and IC phases than in the LR and RA phases. However, no association between age and the frequency of CCR7^hi PD-1^lo CXCR5+CD4+ T cells was detected among the healthy controls. Furthermore, no difference was observed in the gender ratios in either the patients or the healthy controls.

Correlations between the two Tfh subsets and the clinical characteristics were also analyzed within each subgroup of patients with chronic HBV infection, but no significant result was observed (Tables 3 and 4).

Table 3. Correlations between the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells and clinical characteristics in the 4 subgroups of patients with chronic HBV infection.
| Group | ALT r | ALT p | HBV DNA r | HBV DNA p | HBsAg r | HBsAg p |
| --- | --- | --- | --- | --- | --- | --- |
| IT | 0.022 | 0.930 | 0.238 | 0.326 | -0.300 | 0.213 |
| IC | -0.149 | 0.530 | -0.145 | 0.543 | 0.192 | 0.461 |
| LR | -0.206 | 0.412 | 0.145 | 0.567 | 0.220 | 0.381 |
| RA | 0.189 | 0.425 | -0.222 | 0.348 | -0.273 | 0.258 |

ALT is given in U/L; HBV DNA and HBsAg are given in log10 IU/mL.

Table 4. Correlations between the frequency of CCR7^hi PD-1^lo CXCR5+CD4+ T cells and clinical characteristics in the 4 subgroups of patients with chronic HBV infection.

| Group | ALT r | ALT p | HBV DNA r | HBV DNA p | HBsAg r | HBsAg p |
| --- | --- | --- | --- | --- | --- | --- |
| IT | -0.069 | 0.780 | -0.344 | 0.149 | -0.067 | 0.784 |
| IC | 0.171 | 0.470 | -0.384 | 0.095 | 0.235 | 0.363 |
| LR | 0.065 | 0.797 | -0.072 | 0.777 | -0.156 | 0.536 |
| RA | 0.287 | 0.220 | 0.087 | 0.716 | 0.055 | 0.822 |

ALT is given in U/L; HBV DNA and HBsAg are given in log10 IU/mL.
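As a minimal sketch of how the per-subgroup correlations in Tables 3 and 4 could be computed, the following Python fragment applies the preprocessing rules stated in the table notes (HBV DNA below the 100 IU/mL detection limit treated as 0 log10 IU/mL; HBsAg below the 20 IU/mL detection limit floored at 20 before the log10 transform) and then runs Spearman's rank correlation within each immune-phase group. The column names and example records are assumptions for illustration, not study data:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical patient records; all column names and values are assumptions.
df = pd.DataFrame({
    "phase":    ["IT", "IT", "IT", "IC", "IC", "IC",
                 "LR", "LR", "LR", "RA", "RA", "RA"],
    "tfh_freq": [11.2, 12.9, 10.5, 18.1, 19.0, 17.4,
                 13.5, 14.0, 12.8, 16.2, 15.8, 17.1],   # % CCR7^lo PD-1^hi
    "alt":      [27.0, 30.0, 22.0, 293.0, 120.0, 450.0,
                 25.0, 28.0, 20.0, 332.0, 60.0, 150.0],  # U/L
    "hbv_dna":  [8.1e7, 5.0e6, 2.3e8, 6.0e7, 8.0e3, 1.2e6,
                 50.0, 900.0, 400.0, 3.0e5, 1.0e8, 2.0e4],  # IU/mL
    "hbsag":    [3.2e4, 1.1e4, 2.5e4, 7.0e3, 900.0, 4.0e3,
                 15.0, 300.0, 120.0, 2.0e3, 4.0e4, 800.0],  # IU/mL
})

# Preprocessing rules from the table notes: values below the detection
# limit are mapped to 0 log10 IU/mL (HBV DNA) or floored at 20 (HBsAg).
df["log_dna"] = np.where(df["hbv_dna"] < 100, 0.0, np.log10(df["hbv_dna"]))
df["log_hbsag"] = np.log10(df["hbsag"].clip(lower=20))

# Spearman's rank correlation of subset frequency vs each index, per phase.
for phase, grp in df.groupby("phase"):
    for col in ["alt", "log_dna", "log_hbsag"]:
        r, p = spearmanr(grp["tfh_freq"], grp[col])
        print(f"{phase}: tfh_freq vs {col}: r={r:.3f}, p={p:.3f}")
```

With only 18-20 patients per subgroup, correlations of the magnitude seen in Tables 3 and 4 are well within sampling noise, which is consistent with none of them reaching significance.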
## 4. Discussion

The host immune mechanism, including innate and adaptive immunity, is important in the pathogenesis of hepatitis B. HBeAg seroconversion, driven by viral and host immune factors, is an essential milestone in the progression of chronic HBV infection and is associated with a reduced risk of progressive liver inflammation, liver cirrhosis, and liver cancer [6, 26]. In the natural history of chronic HBV infection, patients who have successfully undergone seroconversion usually become inactive HBsAg carriers with anti-HBe antibodies in the blood [27, 28].
A high frequency of circulating CXCR5+CD4+ T cells has been shown to promote HBeAg seroconversion in chronic HBV patients [7].

The main function of Tfh cells is to support B cell maturation and differentiation. Tfh cells and B cells interact repeatedly and intimately in the germinal center, where Tfh cells deliver important survival and differentiation signals to B cells that shape the affinity, isotype class, and potency of the ensuing antibody response [29]. Nevertheless, various studies have reported conflicting results: many studies have found increased frequencies of circulating Tfh cells in SLE, but one study of SLE reported no increase in the frequency of circulating CXCR5+CD4+ T cells [20].

Our study found that the circulating CXCR5+CD4+ T cell frequency was higher, but not significantly, in patients with chronic HBV infection than in non-HBV-infected individuals. This finding is inconsistent with several previous studies. The frequency of total CXCR5+CD4+ T cells may therefore be too coarse a measure to capture the difference in immune responses between chronically HBV-infected patients and healthy people.

Tfh cells are heterogeneous [30]. According to the surface expression of CCR7 and PD-1, two major subsets can be clearly identified within the circulating CXCR5+CD4+ T cells. A study of repeated implantation failure reported that the proportion of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was positively correlated with IL-21 [31]. Another study found that IL-21 was positively correlated with the CCR7^lo PD-1^hi Tfh subset in the transitional phase of cystic echinococcosis [25]. IL-21, the main cytokine secreted by Tfh cells, is a critical immunomodulatory cytokine with broad effects on all lymphocyte populations: it promotes Tfh cell differentiation, regulates B cell differentiation and proliferation, and induces plasma cell differentiation and immunoglobulin production [32].

Our investigation demonstrated that the frequency of the CCR7^lo PD-1^hi subset was increased in the chronically HBV-infected patients and weakly but positively correlated with ALT. Most peripheral CXCR5+CD4+ T cells are resting cells, and only a very small population expressing ICOS and very high levels of PD-1 is activated [33]. He et al. showed that the CCR7^lo PD-1^hi subset has a Tfh precursor phenotype, whereas the phenotype of the CCR7^hi PD-1^lo subset is characteristic of resting cells [20].

PD-1 is a negative regulatory molecule that is upregulated on activated T cells, B cells, monocytes, natural killer cells, and dendritic cells and is particularly highly expressed on Tfh cells. The PD-1 ligands, PD-L1 and PD-L2, are extensively expressed on various cells, including T cells, B cells, dendritic cells, and macrophages [34, 35]. CCR7 is a homing molecule expressed on the T cell surface and is essential for the migration of naive T cells through specialized high endothelial venules (HEVs). In addition, B cells exploit CCR7 to efficiently enter lymph nodes [36–38].

In our study, a stronger immune response was observed in the chronically HBV-infected patients, who had a higher frequency of the CCR7^lo PD-1^hi Tfh subset than the healthy controls. We hypothesize that the CCR7^lo PD-1^hi Tfh subset, as an effector phenotype, may contribute to the immune response in chronic HBV infection.
Furthermore, in chronic HBV infection, the frequency of the CCR7^lo PD-1^hi Tfh subset was higher in the IC phase than in the IT and LR phases, suggesting that CCR7^lo PD-1^hi CXCR5+CD4+ T cells may indicate the level of the immune response more precisely than total CXCR5+CD4+ T cells and are related to immune status. This hypothesis was further supported by the higher frequency of the CCR7^lo PD-1^hi Tfh subset in the RA phase than in the IT phase. However, no significant difference in the frequency of the CCR7^lo PD-1^hi Tfh subset was observed between the IT and LR groups; further investigation is needed.

Currently, the indications for antiviral treatment in chronic HBV infection are mainly based on a combination of the following three criteria: HBV DNA load, ALT level, and severity of liver disease [6]. By preventing the progression of liver disease and early liver-related deaths, timely and valid therapy can be highly beneficial for improving quality of life and survival [39]. We speculate that the frequency of the CCR7^lo PD-1^hi Tfh subset in the blood, which was correlated with immune status in chronic HBV infection, may help physicians determine when to initiate antiviral treatment. Certainly, further investigations are needed to study the factors influencing the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells.

## 5. Conclusion

In conclusion, the CCR7^lo PD-1^hi CXCR5+CD4+ T cell frequency is higher in patients with chronic HBV infection than in healthy individuals and is positively correlated with serum ALT levels in patients, indicating that CCR7^lo PD-1^hi CXCR5+CD4+ T cells may be involved in HBV-related immune responses. Moreover, different frequencies of CCR7^lo PD-1^hi CXCR5+CD4+ T cells are observed in patients at different immune phases. These findings might improve our understanding of the immunological pathogenesis of chronic HBV infection and may provide a novel indication for antiviral treatment.
1020925-2018-10-08_1020925-2018-10-08.md
39,617
Increased CCR7
Ya-Xin Huang; Qi-Yi Zhao; Li-Li Wu; Dong-Ying Xie; Zhi-Liang Gao; Hong Deng
Canadian Journal of Gastroenterology and Hepatology (2018)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1020925
1020925-2018-10-08.xml
--- ## Abstract T follicular helper cells (Tfh cells) affect essential immune pathogenesis in chronic hepatitis B virus (HBV) infection. The CCR7 l oPD-1 h i Tfh subset has a partial Tfh effector phenotype and is associated with active Tfh differentiation, whereas the CCR7 h iPD-1 l o Tfh subset is a resting phenotype. We recruited 20 healthy volunteers and 77 patients with chronic HBV infection, including those in the immune tolerant (IT) phase (n=19), immune clearance (IC) phase (n=20), low replicative (LR) phase (n=18), and reactivation (RA) phase (n=20). The expression of CD4, CXCR5, PD-1, and CCR7 was detected in T cells from peripheral blood by flow cytometry. The frequency of the CCR7 l oPD-1 h i T subset was significantly higher in the patients than in the healthy controls (14.92±4.87% vs 12.23±2.95%, p=0.018). The frequency of this Tfh subset in the IC group (18.42%±3.08) was increased compared with the IT group (11.94±2.87%, p=0.001) and LR group (13.65±4.93%, p=0.031) and was higher in the RA group than in the IT group (16.03±5.37% vs 11.94±2.87%, p=0.030). We observed a weak positive correlation between the CCR7 l oPD-1 h i Tfh subset population and the alanine transaminase (ALT) level (r=0.370, p=0.001). The CCR7 l oPD-1 h Tfh subset in the chronic HBV-infected patients was elevated to various degrees among the different immune phases. CCR7 l oPD-1 h iCXCR5+CD4+ T cells are correlated with the immune status of chronic HBV infection patients and may be developed as a potential indicator for antiviral treatment. --- ## Body ## 1. Introduction HBV infection remains among the most serious issues in global public health despite extensive vaccination and effective antiviral treatments. A total of 250 million people suffer from chronic hepatitis B virus (HBV) infection worldwide, most of whom live in Africa and Asia [1, 2]. HBV-associated diseases, such as liver failure, cirrhosis, and hepatocellular carcinoma, contribute to the deaths of 1 million people per year [3].Our understanding of the natural history of HBV infection and the resultant disease is continuously improving. Complex interactions between the viral and host immune systems participate in disease progression, allowing for HBV penetration into host cells, formation of persistence, and chronization of HBV infection or complete elimination of the virus [4, 5]. Although various clinical and experimental investigations have helped diagnose, treat, and prevent hepatitis B, the exact mechanism underlying the host immune reactions remains unclear.According to the complex interactions between the virus, hepatocytes, and the host immune system, the natural course of chronic HBV infection is usually stratified into 4 phases, the immune tolerant (IT) phase, the immune clearance (IC) phase, the low replicative (LR) phase, and the reactivation (RA) phase [6].Proteins of partial HBV can modulate immunity and enable immune escape. In the course of the disease, a better prognosis can be achieved if HBeAg seroconversion occurs early. The prevalence of cirrhosis and hepatocellular carcinoma in patients during this time declines. In addition, HBsAg loss and/or seroconversion is considered the ideal goal of treatment and a milestone in effective treatment response in both HBeAg-positive and HBeAg-negative patients [7].The production of antibodies plays an indispensable role in both HBeAg and HBsAg seroconversion[8]. 
Circulating CXCR5+CD4+ T cells, which are the counterpart of T follicular helper (Tfh) cells in the peripheral blood, have been reported to play a significant role in accelerating HBeAg seroconversion in chronic HBV-infected patients [9].Tfh cells are considered to be a subset of CD4+ T cells in secondary lymphoid tissues that express CXC-chemokine receptor 5 (CXCR5), which helps Tfh cells localize to B cell follicles. Studies have reported that CXCR5+CD4+T cells are more efficient than CXCR5−CD4+ T cells in inducing B cells to secrete antibodies and switch antibody classes [10–12]. Tfh cells coexpress programmed cell death protein 1 (PD-1) and inducible T cell co-stimulator (ICOS) and downregulate CC-chemokine receptor 7 (CCR7) [13–15]. Several investigations have found elevated expression of circulating CXCR5+CD4+ T cells in patients with autoimmune diseases (such as systemic lupus erythematosus (SLE) and Sjogren’s syndrome)[16, 17] and infectious diseases (such as hepatitis B and C)[18, 19]. However, He J et al. found no increase in the frequency of circulating CXCR5+CD4+ T cells in SLE patients [20], which was inconsistent with previous investigations. In addition, a study showed that there was no difference in the circulating CXCR5+CD4+ T cell frequency between healthy controls and HCV patients. Interestingly, this study also found that CXCR5+CD4+ T cells were efficient in supporting B cell responses [21]. Based on current evidence, there is no clear correlation between the activity of CXCR5+CD4+ T cells and their frequency in peripheral blood.Tfh cells are comprised of various subsets with different phenotypes and functions [22]. He J et al. reported that CCR7 l oPD-1 h iCXCR5+CD4+ T cells have a partial Tfh effector phenotype exhibiting active Tfh differentiation in lymphoid tissues. In contrast, the CCR7 h iPD-1 l o Tfh subset has a resting phenotype [20]. Studies in mice found that CXCR5 h iPD-1 h i germinal center Tfh cells likely downregulate CXCR5, PD-1, and BCL-6, re-express CCR7, IL-7Rα, and CD62L, and thus differentiate into memory cells and persist for a long time [22, 23]. IL-7 possibly increases the level of Tfh cells in the patients with chronic hepatitis B[24]. Studies investigating cystic echinococcosis also reported that CCR7 l oPD-1 h iCXCR5+CD4+ T cells were increased in patients [25].The CCR7 l oPD-1 h i and CCR7 h iPD-1 l o Tfh subsets in the peripheral blood have not been comprehensively investigated during the complex immunologic progression of chronic HBV infection. We hypothesize that these two Tfh subsets play a larger role in the immune response of chronic HBV infection than Tfh cells, containing multifarious subsets. The objective of this study was to detect the frequencies of CCR7 l oPD-1 h iCXCR5+CD4+ T cells and CCR7 h iPD-1 l oCXCR5+CD4+ T cells in peripheral blood mononuclear cells (PBMCs) from patients with chronic HBV infection and compare these frequencies to those in non-HBV infected controls. Furthermore, the correlations between the frequencies of the two subsets and alanine transaminase (ALT), which is the consequence of HBV replication, and the HBsAg level were evaluated. These findings provide new insights into the correlation between the frequencies of the two CXCR5+CD4+ T subsets and the immune reaction in chronic HBV infection. ## 2. Materials and Methods ### 2.1. 
Patients and Controls A total of 77 patients with chronic HBV infection were recruited from the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) for this cross-sectional study. These patients were HBsAg-seropositive for longer than 6 months. The patients were divided into immune tolerant (IT) phase group (n=19), immune clearance (IC) phase group (n=20), low replicative (LR) phase group (n=18), and reactivation (RA) phase group (n=20) according to the Asian Pacific Association for the Study of Liver guidelines [6]. In addition, 20 healthy individuals were enrolled from the physical examination center. All healthy individuals were non-HBV infected, HCV infected, or HIV infected and tested normal for ALT and aspartate aminotransferase (AST).The exclusion criteria for this study included coinfection with hepatitis viruses A, C, D, or E or HIV. Patients with autoimmune diseases, drug-induced liver injury, decompensated or compensated cirrhosis, malignant comorbidities within the prior 5 years, or previous antiviral or immunomodulatory drug treatments were also excluded.This study was approved by the Human Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) and conducted in accordance with the Declaration of Helsinki guidelines. All subjects provided written informed consent before collecting the blood samples. ### 2.2. Peripheral Blood Mononuclear Cell Separation Peripheral venous blood samples were collected from all subjects into 5 mL tubes containing EDTA as the anticoagulant. Within 4 hours of the collection, the PBMCs were separated from the samples by Ficoll separation (Axis-Shield PoC AS, Oslo, Norway). Approximately 5 × 10∧6 PBMCs were collected from each sample and frozen at -80°C until analysis. ### 2.3. Analysis of Cell Surface Molecule Expression by Flow Cytometry The cells were thawed and incubated at 37°C and 5% CO2 in RPMI-1640 with 10% FCS (cell culture media) for 4 hours. Then, the cells were stained with anti-CD3 FITC (clone:SK7, eBioscience, San Diego, CA, USA), anti-CD4 eFluor® (clone: OKT4, eBioscience, San Diego, CA, USA), anti-CXCR5 APC (clone: MU5UBEE, eBioscience, San Diego, CA, USA), anti-PD-1 PE-Cy7 (clone: J105, eBioscience, San Diego, CA, USA), anti-CCR7 PE (clone: 3D12, eBioscience, San Diego, CA, USA), and isotype antibodies (eBioscience, San Diego, CA, USA). The cells were washed, and the marker expression was detected by flow cytometry (Beckman Gallios Coulter, Inc., CA, USA). The samples underwent detection within 4 hours. The data were analyzed using FlowJo 10.0 (Tree Star Inc., Ashland, Or, USA). ### 2.4. Laboratory Indices The quantitative values of the following indices were tested by Elecsys (Roche Diagnostics GmbH, Mannheim, Germany) at the noted reference ranges: HBsAb, 0 - 10 IU/L; HBeAg, <1.0 cut-off index (COI); HBeAb, >1.0 COI; and HBcAb, >1.0 COI. The HBsAg titers were quantified using Elecsys HBsAg II Quant reagent kits (Roche Diagnostics, Indianapolis, IN, USA). The detection limit of the kit was 20 IU/mL. The HBV-DNA levels were quantitated by performing real-time quantitative polymerase chain reaction (Daan GENE, Guangzhou, China). The detection limit of the assay was 100 IU/mL. The biochemical indices were detected using an autobiochemical analyzer (HITACHI 7180, Tokyo, Japan). ALT and AST were within the reference ranges of 3-35 U/L and 13-35 U/L, respectively. ### 2.5. 
Statistical Analysis All statistical analyses were performed using SPSS 24.0 software for Windows (SPSS Inc., Chicago, IL, USA), and the data were presented as the median (minimum, maximum) (age, ALT, AST, HBV DNA, and HBsAg) or the mean ± standard deviation (frequencies of cells). Multiple comparisons were performed using nonparametric Kruskal-Wallis tests with Bonferroni correction for the sub-analyses. The statistical significance between two groups was determined by performing a Mann–Whitney U test. The correlation between the CCR7 l oPD-1 h iCXCR5+CD4+ T cell frequency and clinical parameters was examined by performing Spearman’s rank correlation. All statistical tests were two-tailed. The differences were considered statistically significant at p<0.050. ## 2.1. Patients and Controls A total of 77 patients with chronic HBV infection were recruited from the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) for this cross-sectional study. These patients were HBsAg-seropositive for longer than 6 months. The patients were divided into immune tolerant (IT) phase group (n=19), immune clearance (IC) phase group (n=20), low replicative (LR) phase group (n=18), and reactivation (RA) phase group (n=20) according to the Asian Pacific Association for the Study of Liver guidelines [6]. In addition, 20 healthy individuals were enrolled from the physical examination center. All healthy individuals were non-HBV infected, HCV infected, or HIV infected and tested normal for ALT and aspartate aminotransferase (AST).The exclusion criteria for this study included coinfection with hepatitis viruses A, C, D, or E or HIV. Patients with autoimmune diseases, drug-induced liver injury, decompensated or compensated cirrhosis, malignant comorbidities within the prior 5 years, or previous antiviral or immunomodulatory drug treatments were also excluded.This study was approved by the Human Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University (Guangzhou, China) and conducted in accordance with the Declaration of Helsinki guidelines. All subjects provided written informed consent before collecting the blood samples. ## 2.2. Peripheral Blood Mononuclear Cell Separation Peripheral venous blood samples were collected from all subjects into 5 mL tubes containing EDTA as the anticoagulant. Within 4 hours of the collection, the PBMCs were separated from the samples by Ficoll separation (Axis-Shield PoC AS, Oslo, Norway). Approximately 5 × 10∧6 PBMCs were collected from each sample and frozen at -80°C until analysis. ## 2.3. Analysis of Cell Surface Molecule Expression by Flow Cytometry The cells were thawed and incubated at 37°C and 5% CO2 in RPMI-1640 with 10% FCS (cell culture media) for 4 hours. Then, the cells were stained with anti-CD3 FITC (clone:SK7, eBioscience, San Diego, CA, USA), anti-CD4 eFluor® (clone: OKT4, eBioscience, San Diego, CA, USA), anti-CXCR5 APC (clone: MU5UBEE, eBioscience, San Diego, CA, USA), anti-PD-1 PE-Cy7 (clone: J105, eBioscience, San Diego, CA, USA), anti-CCR7 PE (clone: 3D12, eBioscience, San Diego, CA, USA), and isotype antibodies (eBioscience, San Diego, CA, USA). The cells were washed, and the marker expression was detected by flow cytometry (Beckman Gallios Coulter, Inc., CA, USA). The samples underwent detection within 4 hours. The data were analyzed using FlowJo 10.0 (Tree Star Inc., Ashland, Or, USA). ## 2.4. 
Laboratory Indices The quantitative values of the following indices were tested by Elecsys (Roche Diagnostics GmbH, Mannheim, Germany) at the noted reference ranges: HBsAb, 0 - 10 IU/L; HBeAg, <1.0 cut-off index (COI); HBeAb, >1.0 COI; and HBcAb, >1.0 COI. The HBsAg titers were quantified using Elecsys HBsAg II Quant reagent kits (Roche Diagnostics, Indianapolis, IN, USA). The detection limit of the kit was 20 IU/mL. The HBV-DNA levels were quantitated by performing real-time quantitative polymerase chain reaction (Daan GENE, Guangzhou, China). The detection limit of the assay was 100 IU/mL. The biochemical indices were detected using an autobiochemical analyzer (HITACHI 7180, Tokyo, Japan). ALT and AST were within the reference ranges of 3-35 U/L and 13-35 U/L, respectively. ## 2.5. Statistical Analysis All statistical analyses were performed using SPSS 24.0 software for Windows (SPSS Inc., Chicago, IL, USA), and the data were presented as the median (minimum, maximum) (age, ALT, AST, HBV DNA, and HBsAg) or the mean ± standard deviation (frequencies of cells). Multiple comparisons were performed using nonparametric Kruskal-Wallis tests with Bonferroni correction for the sub-analyses. The statistical significance between two groups was determined by performing a Mann–Whitney U test. The correlation between the CCR7 l oPD-1 h iCXCR5+CD4+ T cell frequency and clinical parameters was examined by performing Spearman’s rank correlation. All statistical tests were two-tailed. The differences were considered statistically significant at p<0.050. ## 3. Results ### 3.1. Study Subjects’ Characteristics The subjects in this study included 77 treatment-naive patients who had been HBsAg-positive for longer than 6 months and 20 healthy volunteers with normal ALT and AST (Table1). According to the immune phases, the 77 HBV-infected patients were further classified as follows: IT phase (n=19), IC phase (n=20), LR phase (n=18), and RA phase (n=20) (Table 2).Table 1 Clinical characteristics of the patients with chronic HBV infection and healthy controls. Patients HC p-value Subjects, n 77 20 / Gender, males/females 45/32 10/10 0.499 Age, y 35(18-71) 27(18-52) 0.053 ALT, IU/L 75(15-942) 15(7-31) <0.001 AST, IU/L 51(13-915) 14.5(7-34) <0.001 HBV DNA, log10 IU/mL 6.72(0-8.38) NA / HBsAg, log10 IU/mL 3.50(1.30-4.72) NA / HBeAg, positive/negative, n 39/38 0/20 <0.001 ALT, elevated/normal, n 40/37 0/20 <0.001 Abbreviations. ALT: alanine transaminase; AST: aspartate aminotransferase; HBeAg: hepatitis B e antigen; HBsAg: hepatitis B s antigen; HBV: hepatitis B virus; HC: healthy controls; NA: not applicable. (a) Values are expressed as the median (minimum-maximum) for age, ALT, AST, and HBV DNA. (b) HBV DNA<100 was treated as 0 log10 IU/ml.Table 2 Clinical characteristics of 4 subgroups of patients with chronic HBV infection. IT IC LR RA p-value Subjects, n 19 20 18 20 / Gender, males/females 10/9 12/8 11/7 12/8 0.950 Age, y 29(18-52) 29(18-56) 41.5(18-71) 42(24-65) 0.002 ALT, IU/L 27(16-38) 293(85-939) 25.5(15-34) 332(57-942) <0.001 AST, IU/L 28(13-38) 129(51-491) 22(19-34) 177.5(39-915) <0.001 HBV DNA, log10 IU/mL 7.87(6.45-8.38) 7.76(3.72-8.23) 2.36(0-3.20) 5.51(2.00-8.23) <0.001 HBsAg, log10 IU/mL 4.557(3.495-4.54) 3.85(2.85-4.60) 2.69(1.35-3.53) 3.32(1.30-4.61) <0.001 HBeAg, positive/negative, n 19/0 20/0 0/18 0/20 <0.001 ALT, elevated/normal, n 0/19 20/0 0/18 20/0 <0.001 (a) Values are expressed as median (minimum-maximum) for age, ALT, AST, and HBV DNA. (b) HBV DNA<100 was treated as 0 log10 IU/ml. 
(c) HBsAg<20 was treated as 20. ### 3.2. Frequencies of Circulating CXCR5+CD4+ T Cells and Subsets in Peripheral Blood Mononuclear Cells The frequency of circulating CXCR5+CD4+ T cells in the PBMC samples was detected by flow cytometry (Figure 1). The CXCR5+CD4+ T cell frequency in the patients with chronic HBV infection was higher than that in the non-HBV infected individuals, but not significantly (20.01±6.76% vs 19.26±3.93%, p=0.705, Figure 2(a)). Nevertheless, the frequency of CCR7 l oPD-1 h i CXCR5+CD4+ T cells was significantly higher in the patients than in the healthy controls (14.92±4.87% vs 12.23±2.95%, p=0.018, Figure 2(b)). In addition, the frequency of the CCR7 h iPD-1 l o CXCR5+CD4+ T cells was lower in the patients with chronic HBV infection, but not significantly (p=0.715).Figure 1 Gating strategy and representative dot plots of CCR7 and PD-1 expression on CXCR5+CD4+ T cells from chronic HBV infected patients. (a) Living lymphocytes gate. (b) CD4+CD3+ cells gate. (c) CXCR5+CD4+CD3+ cells gate. (d) CCR7 l oPD-1 h i Tfh cells gate and CCR7 h iPD-1 l o Tfh cells gate were set according the isotypes (e). At least approximately 100,000 events were analyzed in each sample. (a) (b) (c) (d) (e)Figure 2 Frequencies of CXCR5+CD4+ T cells and CCR7 l oPD-1 h i CXCR5+CD4+ T cells in chronic HBV patients (n=77) and healthy controls (n=20). (a) Frequency of CXCR5+CD4+ among all CD4+CD3+ cells. (b) Frequency of CCR7 l oPD-1 h iCXCR5+CD4+ among all CXCR5+CD4+CD3+ cells. Horizontal lines show the median. (a) (b)We further investigated the association among the frequencies of the CXCR5+CD4+ T cells, CCR7 l oPD-1 h i Tfh subset, and HBV by stratifying the patients according to their immune status (IT, IC, LR, or RA). Based on the Kruskal-Wallis tests, although no significant difference was observed in the frequency of the CXCR5+CD4+ T cells among the 4 groups (p=0.885), differences in the CCR7 l oPD-1 h i Tfh subset were observed in the groups (p<0.001). After conducting the Bonferroni correction, we found that the frequency of CCR7 l oPD-1 h i CXCR5+CD4+ T cells was higher in the IC group (18.42%±3.08) than in the IT group (11.94±2.87%, p=0.001) and LR group (13.65±4.93%, p=0.031). In addition, the frequency of the CCR7 l oPD-1 h i CXCR5+CD4+ T cells was higher in the RA group than in the IT group (16.03±5.37% vs 11.94±2.87%, p=0.030, Figure 3(a)). Although frequency of CCR7 l oPD-1 h i CXCR5+CD4+ T cells of IT group was lower than LR group, the difference was not significant (11.941±2.868 % vs 13.648±4.930%, p=0.169) (Figure 3(b)).Figure 3 Frequency of CCR7 l oPD-1 h iCXCR5+CD4+ T cells among all CXCR5+CD4+CD3+ cells in the patients with chronic HBV infection and between IT group and LR group. (a)Differences between the IT group (n=19) and IC group (n=20), between the IT group and RA group (n=20), and between the IC group and LR group (n=18) were significant. Statistical comparison was performed using a Bonferroni correction. (b)Difference between the IT group (n=19) and LR group (n=18) was not significant (p=0.169). The horizontal lines show the median. (a) (b)The comparison between people with raised ALT and normal ALT has been conducted and the difference was significant (16.91±4.77% vs 12.58±3.68%, p<0.001). The frequency of the CCR7 l oPD-1 h i CXCR5+CD4+ T cells was higher in the raised ALT group relative to the normal ALT group (Figure 4).Figure 4 Frequencies of CCR7 l oPD-1 h iCXCR5+CD4+ T cells among all CXCR5+CD4+CD3+ cells in the people with normal ALT and raised ALT. 
Difference between two group was significant (p<0.001). The horizontal lines show the median. ### 3.3. Correlation between the Two Tfh Cell Subsets and Clinical Parameters of the Chronic HBV Infected Patients The correlations between the CCR7 l oPD-1 h iCXCR5+CD4+ T cell and the CCR7 h iPD-1 l oCXCR5+CD4+ T cell populations in the PBMCs and the patients’ ALT, HBV DNA load, HBsAg level, age, and gender were investigated. Based on Spearman’s rank correlation analysis, there was a positive correlation between the CCR7 l oPD-1 h iCXCR5+CD4+ T cell populations and levels of ALT (r=0.370, p=0.001, Figure 5(a)). However, the correlation was weak and not convincing enough. Besides, no correlation was observed between the frequency of the CCR7 h iPD-1 l oCXCR5+CD4+ T cells and the ALT levels (r=-0.143, p>0.050). Furthermore, CCR7 l oPD-1 h iCXCR5+CD4+ T cells (r=-0.028, p>0.005, Figure 5(b)) or CCR7 h iPD-1 l oCXCR5+CD4+ T cells (r=-0.160, p>0.005) had no correlation with HBV DNA. Neither of the CCR7 l oPD-1 h i (r= 0.008, p>0.050, Figure 5(c)) or the CCR7 h iPD-1 l o Tfh subsets (p>0.050) were correlated with HBsAg.Figure 5 Correlations between frequency of CCR7 l oPD-1 h iCXCR5+CD4+ T cells and clinical indices. (a) Correlation between frequency of CCR7 l oPD-1 h i Tfh cells and ALT level (r=0.370, p=0.001). (b) Correlation between frequency of CCR7 l oPD-1 h i Tfh cells and HBV DNA load (p>0.05). (c) Correlation between frequency of CCR7 l oPD-1 h i Tfh cells and HBsAg (p>0.05). Statistical comparison was performed using Spearman’s rank correlation analysis. (a) (b) (c)Although a negative correlation was observed between the CCR7 l oPD-1 h iCXCR5+CD4+ T cells and age in the patients with chronic HBV infection (r=-0.264, p=0.020), no evidence was found supporting a correlation between the CCR7 h iPD-1 l oCXCR5+CD4+ T cells and age among the patients (r=0.182, p=0.114). Further analysis showed significant difference in ages among the patients (p=0.002), and younger subjects were more likely to be in the IT and IC than in the LR and RA phases. However, no significant difference was detected in age and frequency of CCR7 h iPD-1 l oCXCR5+CD4+ T cells among the healthy controls. Furthermore, no difference was observed in the gender ratios in either the patients or healthy controls.Correlations among the two Tfh subsets and clinical characteristics were analyzed in each subgroup of patients with chronic HBV infection. However, no significant result was observed (Tables3 and 4).Table 3 Correlations between frequency of CCR7 l oPD-1 h iCXCR5+CD4+ T cells and clinical characteristics in 4 subgroups of patients with chronic HBV infection. Group/Clinical characteristics ALT(U/L) HBV DNA, log10 IU/ml HBsAg, log10IU/ml r-value p-value r-value p-value r-value p-value IT 0.022 0.930 0.238 0.326 -0.300 0.213 IC -0.149 0.530 -0.145 0.543 0.192 0.461 LR -0.206 0.412 0.145 0.567 0.220 0.381 RA 0.189 0.425 -0.222 0.348 -0.273 0.258Table 4 Correlations between frequency of CCR7 h iPD-1 l oCXCR5+CD4+ T cells and clinical characteristics in 4 subgroups of patients with chronic HBV infection. Group/Clinical characteristics ALT(U/L) HBV DNA, log10 IU/ml HBsAg, log10IU/ml r-value p-value r-value p-value r-value p-value IT -0.069 0.780 -0.344 0.149 -0.067 0.784 IC 0.171 0.470 -0.384 0.095 0.235 0.363 LR 0.065 0.797 -0.072 0.777 -0.156 0.536 RA 0.287 0.220 0.087 0.716 0.055 0.822 ## 3.1. 
## 4. Discussion

The host immune mechanism, including innate and adaptive immunity, is important in the pathogenesis of hepatitis B. HBeAg seroconversion, driven by viral and host immunity, is a key event in the course of chronic HBV infection and is associated with a reduced risk of progressive liver inflammation, liver cirrhosis, and liver cancer [6, 26]. In the natural history of chronic HBV infection, patients who have successfully undergone seroconversion usually become inactive HBsAg carriers with anti-HBe antibodies in the blood [27, 28]. A high frequency of circulating CXCR5+CD4+ T cells has been shown to promote HBeAg seroconversion in chronic HBV patients [7].

The main function of Tfh cells is to support B cell maturation and differentiation. Tfh cells and B cells interact repeatedly and intimately in the germinal center, where Tfh cells deliver key survival and differentiation signals to B cells, shaping the affinity, antibody isotype class, and potency of the ensuing antibody response [29]. Nevertheless, various studies have reported conflicting results: many have found increased frequencies of circulating Tfh cells in SLE, whereas one study of SLE reported no increase in the frequency of circulating CXCR5+CD4+ T cells [20].

Our study found that the circulating CXCR5+CD4+ T cell frequency was higher, but not significantly so, in patients with chronic HBV infection than in non-HBV-infected individuals. This finding is inconsistent with several earlier studies.
The overall CXCR5+CD4+ T cell frequency may therefore not be a sufficiently precise measure of the difference in immune response between chronic HBV-infected patients and healthy people.

Tfh cells are heterogeneous [30]. According to CCR7 and PD-1 expression on the cell surface, two major subsets can be clearly identified within the circulating CXCR5+CD4+ T cells. A study of repeated implantation failure reported that the proportion of CCR7^lo PD-1^hi CXCR5+CD4+ T cells was positively correlated with IL-21 [31]. Another study found that IL-21 was positively correlated with the CCR7^lo PD-1^hi Tfh subset in the transitional phase of cystic echinococcosis [25]. IL-21, the main cytokine secreted by Tfh cells, is a critical immunomodulatory cytokine with broad effects on all lymphocyte populations: it promotes Tfh cell differentiation, regulates B cell differentiation and proliferation, and induces plasma cell differentiation and immunoglobulin production [32].

Our investigation demonstrated that the frequency of the CCR7^lo PD-1^hi subset was increased in the chronic HBV-infected patients and weakly positively correlated with ALT. Most peripheral CXCR5+CD4+ T cells are resting cells, and a very small population expressing ICOS and very high levels of PD-1 are activated cells [33]. He et al. showed that the CCR7^lo PD-1^hi subset has a Tfh precursor phenotype, whereas the phenotype of the CCR7^hi PD-1^lo subset is characteristic of resting cells [20].

PD-1 is a negative regulatory molecule that is upregulated on activated T cells, B cells, monocytes, natural killer cells, and dendritic cells and is particularly highly expressed on Tfh cells. The PD-1 ligands, PD-L1 and PD-L2, are extensively expressed on various cells, including T cells, B cells, dendritic cells, and macrophages [34, 35]. CCR7 is a homing molecule expressed on the T cell surface and is essential for the migration of naive T cells through specialized high endothelial venules (HEVs). In addition, B cells exploit CCR7 to efficiently enter lymph nodes [36–38].

In our study, the chronic HBV-infected patients showed a higher frequency of the CCR7^lo PD-1^hi Tfh subset than the healthy controls, consistent with a stronger immune response. We hypothesize that the CCR7^lo PD-1^hi Tfh subset, as an effector phenotype, may contribute to the immune response in chronic HBV infection. Furthermore, in chronic HBV infection, the frequency of the CCR7^lo PD-1^hi Tfh subset was higher in the IC phase than in the IT and LR phases, suggesting that CCR7^lo PD-1^hi CXCR5+CD4+ T cells may reflect the level of the immune response more precisely than CXCR5+CD4+ T cells in general and are related to immune status. This hypothesis was further supported by the higher frequency of the CCR7^lo PD-1^hi Tfh subset in the RA phase than in the IT phase. However, no significant difference in the frequency of the CCR7^lo PD-1^hi Tfh subset was observed between the IT and LR groups; further investigation is needed.

Currently, the indications for antiviral treatment in chronic HBV infection are mainly based on a combination of three criteria: HBV DNA load, ALT level, and severity of liver disease [6]. By preventing the progression of liver disease and early liver-related deaths, timely and effective therapy can be highly beneficial for improving quality of life and survival [39].
We speculate that the frequency of the CCR7^lo PD-1^hi Tfh subset in the blood, which correlated with immune status in chronic HBV infection, may help physicians determine when to initiate antiviral treatment. Certainly, further investigations are needed to study the factors influencing the frequency of CCR7^lo PD-1^hi CXCR5+CD4+ T cells.

## 5. Conclusion

In conclusion, the CCR7^lo PD-1^hi CXCR5+CD4+ T cell frequency is higher in patients with chronic HBV infection than in healthy individuals and is positively correlated with serum ALT levels in patients, indicating that CCR7^lo PD-1^hi CXCR5+CD4+ T cells may be involved in HBV-related immune responses. Moreover, different frequencies of CCR7^lo PD-1^hi CXCR5+CD4+ T cells are observed in patients at different immune phases. These findings might improve our understanding of the immunological pathogenesis of chronic HBV infection and may provide a novel indication for antiviral treatment.

---
*Source: 1020925-2018-10-08.xml*
# Analysis of Influencing Factors of Compliance with Non-Vitamin K Antagonist Oral Anticoagulant in Patients with Nonvalvular Atrial Fibrillation and Correlation with the Severity of Ischemic Stroke

**Authors:** Li Zhu; Xiaodan Zhang; Jing Yang
**Journal:** Evidence-Based Complementary and Alternative Medicine (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1021127

---

## Abstract

Nonvalvular atrial fibrillation (NVAF) is associated with an increased risk of stroke and thrombosis, and anticoagulant therapy is a key link in the prevention of stroke. At present, the anticoagulation rate for atrial fibrillation in China is low, and many factors affect the adherence of patients with atrial fibrillation to anticoagulation. Non-vitamin K antagonist oral anticoagulants (NOACs) are anticoagulants with high application value due to their safety and low risk of intracranial hemorrhage, stroke, and death. However, compliance with NOACs is poor, and the current situation of anticoagulation in China is not optimistic. In this study, a total of 156 patients with NVAF who received NOAC anticoagulation therapy in our hospital from January 2018 to January 2019 were retrospectively analyzed. The results showed that education background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for NOAC compliance of NVAF patients. Pearson correlation analysis also showed a negative correlation (r = −0.465, P<0.001) between NOAC compliance and the severity of ischemic stroke in patients with NVAF. Therefore, clinical supervision and management of patients with NVAF receiving NOACs should be strengthened to improve their compliance, reduce the damage of ischemic stroke, and improve their prognosis.

---

## Body

## 1. Introduction

Atrial fibrillation (AF) is an atrial rhythm with ineffective contractions and chaotic excitation caused by disorders of the heart's electrical system [1]. It is the most common persistent arrhythmia in clinical practice. Ischemic stroke is one of the most dangerous complications of AF. Epidemiological statistics show that the prevalence of AF in China is 0.8%, and the incidence of subsequent ischemic stroke is up to 5% [2]. The annual incidence of ischemic stroke in patients with nonvalvular atrial fibrillation (NVAF) is 3–5 times that of patients without AF [2]. Non-vitamin K antagonist oral anticoagulants (NOACs) can effectively reduce the stroke risk of NVAF patients by 60%–70% [3].

However, in the actual application of anticoagulation therapy, under the influence of individualized differences in dosage, interactions between medicines and foods, insufficient clinical anticoagulation therapy, and patients' lack of awareness of the necessity of anticoagulation, the overall treatment rate and medication compliance of NOAC anticoagulation in NVAF patients are not high [4, 5]. This substantially increases the risk of ischemic stroke in Chinese patients with NVAF and also affects stroke severity and recurrence [6]. Based on this, the present study examines the influencing factors of NOAC anticoagulation compliance in NVAF patients and their correlation with the severity of ischemic stroke.
It is expected to provide a reference for improving compliance with NOAC anticoagulation therapy and for managing its safety and effectiveness in clinical NVAF patients.

## 2. Materials and Methods

### 2.1. General Information

156 patients with NVAF who received NOAC anticoagulation therapy in our hospital from January 2018 to January 2019 were retrospectively analyzed. There were 87 males and 69 females, with an average age of (59.87 ± 11.64) years.

### 2.2. Inclusion Criteria

(i) All patients met the diagnostic criteria for NVAF in the 2017 ACC Expert Consensus on Perioperative Anticoagulation Management Decisions in Patients with Nonvalvular Atrial Fibrillation [6]; (ii) the diagnosis of ischemic stroke followed the Chinese Guidelines for the Diagnosis and Treatment of Acute Ischemic Stroke (2018 edition); and (iii) patients met the CHA2DS2-VASc (congestive heart failure/left ventricular dysfunction, hypertension, age ≥75 years, diabetes, history of stroke/TIA/thromboembolism, vascular disease, and female sex) and HAS-BLED (hypertension, abnormal renal and liver function, stroke, bleeding, labile INRs, elderly, drugs, or alcohol) scoring criteria for NOAC anticoagulation [7].

### 2.3. Exclusion Criteria

(i) Other diseases requiring anticoagulation, such as valvular disease or thromboembolic disease; (ii) pregnant and/or lactating women; (iii) patients with a previous bleeding history; (iv) patients with other serious complications or major organ diseases; (v) patients with mental or consciousness disorders or communication difficulties; and (vi) patients who had joined other studies or received other anticoagulation therapy in the previous 3 months.

### 2.4. Method

#### 2.4.1. Collecting the Baseline Information

All patients were followed up for 2 years, at 2-month intervals for the first 6 months, 3-month intervals for the next 6 months, and 4-month intervals in the second year. Hospital review was the main method of follow-up.

Electronic medical records were used to collect baseline data on all patients, including gender, age, education, occupation, marriage, smoking history, drinking history, number of complications, number of drugs used for complications, CHA2DS2-VASc score, and HAS-BLED score. Personal health follow-up records were established from the collected data.

#### 2.4.2. Survey of Compliance

Design of the medication compliance questionnaire: patient compliance was measured by the following questions: (i) Can you take your medication as often as your doctor requires? (ii) Can you take the medication in the amount required by the doctor? (iii) Are you able to take your medication at the time and in the manner required by your doctor? (iv) Can you take your medication for a long period of time as required by your doctor? (v) Can you take your medication regularly according to your doctor's requirements? The options "not at all," "occasionally," "basically," and "completely" are scored 0–3 points, respectively.

The total score ranges from 0 to 12 points. A score of 12 indicates good medication adherence; a score below 12 indicates poor adherence. In this study, the Cronbach's α coefficient of the questionnaire was 0.867, indicating good reliability. A minimal sketch of this scoring rule is given below.
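The scoring rule can be expressed as a small helper (a hypothetical sketch, not from the paper's materials):

```python
# Sketch only: a hypothetical encoding of the questionnaire scoring rule.
OPTION_POINTS = {"not at all": 0, "occasionally": 1, "basically": 2, "completely": 3}

def compliance_score(answers):
    """Sum the 0-3 points of each questionnaire item (total range 0-12)."""
    return sum(OPTION_POINTS[a] for a in answers)

def adherence_group(score):
    """Only a perfect score of 12 counts as good adherence; <12 is poor."""
    return "good" if score == 12 else "poor"

answers = ["completely", "completely", "basically", "completely"]
score = compliance_score(answers)
print(score, adherence_group(score))  # -> 11 poor
```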
#### 2.4.3. Investigation of Clinical End Points

At the last follow-up, the numbers of ischemic strokes, hemorrhagic events, and deaths in the good compliance group and the poor compliance group were counted.

#### 2.4.4. Survey of Ischemic Stroke Severity

The severity of stroke in all patients with ischemic stroke was assessed using the NIH Stroke Scale (NIHSS) [8]. The full score of the NIHSS is 42; a score of 0–1 was classified as normal, 1–4 as mild/minor stroke, 5–15 as moderate stroke, 16–20 as moderate-severe stroke, and 21–42 as severe stroke.

### 2.5. Statistical Methods

All data were processed with SPSS 22.0 statistical software, and GraphPad Prism 8 was used to make statistical graphs. Measurement data are expressed as mean ± standard deviation (x̄ ± s) and compared between groups with an independent-sample t-test; count data are expressed as n (%) and compared with the chi-square (χ²) test. Factors significant in the univariate analysis were entered into a multiple logistic regression model. Correlation analysis was performed with Pearson correlation analysis. Differences were considered statistically significant at P<0.05.
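The two univariate comparisons named here can also be reproduced outside SPSS; the following is a minimal SciPy-based sketch (the per-patient values for the t-test are placeholders, while the 2×2 counts are taken from Table 2 in the Results):

```python
# Sketch only: not the authors' SPSS workflow.
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

# Independent-sample t-test, e.g., number of complications per group
# (placeholder values).
good = np.array([2, 3, 1, 4, 2, 3])
poor = np.array([3, 4, 3, 2, 4, 3])
t, p_t = ttest_ind(good, poor)
print(f"t={t:.3f}, p={p_t:.3f}")

# Pearson chi-square test on a 2x2 count table: residence vs compliance
# group (counts as reported in Table 2).
table = np.array([[49, 18],   # rural: good, poor
                  [10, 69]])  # cities and towns: good, poor
chi2, p_chi, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.3f}, dof={dof}, p={p_chi:.4f}")
```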
## 3. Results

### 3.1. Baseline Data and Compliance Scores of Follow-Up Patients

A total of 156 patients were followed up; 8 were lost to contact and 2 died of various causes during the follow-up period. The actual number of evaluable cases was therefore 146, an effective follow-up rate of 93.6%. The compliance score of the 146 patients was (10.5 ± 1.4) points; 59 patients (40.4%) formed the good compliance group and 87 patients (59.6%) the poor compliance group (Table 1).

Table 1: Baseline information and compliance scores of the 146 follow-up patients (n, x̄ ± s).

| Clinical information | Baseline information |
|---|---|
| Gender | 85 males (58.2%), 61 females (41.8%) |
| Age | 53 patients <65 years (36.3%), 93 patients ≥65 years (63.7%) |
| Educational background | 46 (31.5%) no education/primary school, 58 (39.7%) middle/high school, 42 (28.8%) university or above |
| Place of residence | 67 (45.9%) rural areas, 79 (54.1%) cities and towns |
| Marriage situation | 69 (47.3%) unmarried/widowed, 77 (52.7%) married |
| Smoking history | 51 (34.9%) yes, 95 (65.1%) no |
| Drinking history | 56 (38.4%) yes, 90 (61.6%) no |
| Number of complications | 0–6 (2.9 ± 1.6) |
| Number of concomitant medications | 0–8 (4.1 ± 2.1) |
| CHA2DS2-VASc score | 45 (30.8%) <2/3 (male/female), 101 (69.2%) ≥2/3 (male/female) |
| HAS-BLED score | 77 (52.7%) <3, 69 (47.3%) ≥3 |
| Compliance | (10.5 ± 1.4) points; good compliance 59 (40.4%), poor compliance 87 (59.6%) |

### 3.2. Univariate Analysis of Influencing Compliance with NOACs

Univariate analysis showed that age, educational background, place of residence, marriage situation, number of complications, CHA2DS2-VASc score, and HAS-BLED score were influencing factors for NOAC compliance of NVAF patients (P<0.05) (Table 2).
Table 2: Univariate analysis of factors influencing compliance with NOACs (n (%), x̄ ± s).

| Clinical information | Good compliance group (n = 59) | Poor compliance group (n = 87) | χ²/t | P |
|---|---|---|---|---|
| Gender: male | 35 (59.3) | 50 (57.5) | 0.050 | 0.824 |
| Gender: female | 24 (40.7) | 37 (42.5) | | |
| Age <65 years | 41 (69.5) | 40 (75.5) | 7.870 | 0.001 |
| Age ≥65 years | 18 (30.5) | 47 (72.3) | | |
| Education: none/primary school | 28 (47.5) | 18 (20.7) | 16.350 | ≤0.001 |
| Education: middle/high school | 18 (30.5) | 40 (46.0) | | |
| Education: university or above | 9 (15.3) | 33 (37.9) | | |
| Residence: rural area | 49 (83.1) | 18 (20.7) | 55.062 | ≤0.001 |
| Residence: cities and towns | 10 (16.9) | 69 (79.3) | | |
| Marriage: unmarried/widowed | 36 (61.0) | 33 (37.93) | 7.518 | 0.006 |
| Marriage: married | 23 (29.87) | 54 (70.13) | | |
| Smoking history: yes | 25 (42.4) | 26 (29.9) | 2.412 | 0.120 |
| Smoking history: no | 34 (57.6) | 61 (70.1) | | |
| Drinking history: yes | 18 (30.5) | 33 (37.9) | 0.852 | 0.356 |
| Drinking history: no | 41 (69.5) | 54 (62.1) | | |
| Number of complications | 2.6 ± 1.4 | 3.1 ± 0.9 | 2.627 | 0.012 |
| Number of concomitant medications | 4.3 ± 2.1 | 4.6 ± 2.2 | 0.823 | 0.412 |
| CHA2DS2-VASc <2/3 (male/female) | 40 (67.80) | 5 (5.75) | 63.484 | ≤0.001 |
| CHA2DS2-VASc ≥2/3 (male/female) | 19 (32.20) | 82 (94.25) | | |
| HAS-BLED <3 | 51 (86.44) | 16 (18.39) | 49.313 | ≤0.001 |
| HAS-BLED ≥3 | 8 (13.56) | 51 (58.62) | | |

### 3.3. Multivariate Analysis of Influencing Compliance with NOACs

Compliance was taken as the dependent variable, and the factors with significant differences in Table 2 were entered as independent variables into the logistic regression model (a computational sketch of this step follows Table 4). The assignments of the dependent and independent variables are shown in Table 3.

Table 3: Variable assignment for the multivariate analysis of compliance with NOACs.

| Variable | Assignment |
|---|---|
| Compliance (dependent variable) | 0 = good, 1 = poor |
| Age | <60 years = 0, ≥60 years = 1 |
| Marriage situation | married = 0, unmarried/widowed = 1 |
| Educational background | university or above = 0, middle school/high school = 1, no education/primary school = 2 |
| Place of residence | cities and towns = 0, rural area = 1 |
| Number of complications | actual value |
| CHA2DS2-VASc score | ≥2/3 (male/female) = 0, <2/3 (male/female) = 1 |
| HAS-BLED score | ≥3 = 0, <3 = 1 |

Education background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for oral NOAC compliance of NVAF patients (Table 4).

Table 4: Multivariate analysis of factors influencing compliance with NOACs.

| Factor | β | SE | Wald | P | OR (95% CI) |
|---|---|---|---|---|---|
| Age ≥60 years | 0.943 | 1.117 | 4.297 | 0.174 | 3.116 (0.561–1.209) |
| Unmarried/widowed | 1.165 | 2.545 | 4.436 | 0.098 | 1.765 (0.361–1.935) |
| No education/primary school | 5.112 | 1.628 | 9.817 | 0.007 | 166.861 (6.796–4094.401) |
| Middle school/high school | 3.290 | 1.432 | 5.254 | 0.027 | 26.983 (1.610–451.128) |
| Rural area | 1.566 | 0.742 | 4.579 | 0.041 | 1.342 (0.571–0.478) |
| Number of complications | 0.928 | 0.454 | 4.136 | 0.047 | 2.538 (1.030–6.244) |
| CHA2DS2-VASc <2/3 (male/female) | 2.211 | 0.982 | 5.004 | 0.029 | 0.106 (0.012–0.756) |
| HAS-BLED ≥3 | 2.786 | 1.247 | 5.132 | 0.028 | 12.431 (0.964–38.657) |
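The multivariate step can be sketched with statsmodels using the codings of Table 3. The data below are synthetic stand-ins generated at random (the patient-level records are not published), so the fitted odds ratios will not match Table 4; the sketch only illustrates the model structure.

```python
# Sketch only: synthetic data, illustrating the Table 3 codings.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 146  # number of evaluable patients in the study

df = pd.DataFrame({
    "education":   rng.integers(0, 3, n),  # 0=university+, 1=middle/high, 2=none/primary
    "rural":       rng.integers(0, 2, n),  # 1 = rural residence
    "n_compl":     rng.integers(0, 7, n),  # number of complications (actual value)
    "chads_low":   rng.integers(0, 2, n),  # 1 = CHA2DS2-VASc < 2/3 (male/female)
    "hasbled_low": rng.integers(0, 2, n),  # 1 = HAS-BLED < 3
})

# Synthetic outcome loosely tied to the predictors so that the fit runs.
lin = (-1.0 + 1.2 * df["education"] + 0.8 * df["rural"] + 0.4 * df["n_compl"]
       - 1.0 * df["chads_low"] - 1.5 * df["hasbled_low"])
df["poor"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))  # 1 = poor compliance

X = sm.add_constant(df[["education", "rural", "n_compl",
                        "chads_low", "hasbled_low"]].astype(float))
fit = sm.Logit(df["poor"], X).fit(disp=False)

print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```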
### 3.4. Comparison of Clinical End Points between the Two Groups

The incidence of ischemic stroke in the good compliance group (5.1%) was lower than that in the poor compliance group (17.2%), and the difference was significant (P<0.05). However, there was no significant difference in the incidence of hemorrhagic events between the two groups (P>0.05). Of the patients who died, 2 in the poor compliance group died of stroke, while the remaining deaths were due to causes other than thromboembolic and hemorrhagic events (Table 5).

Table 5: Comparison of clinical end points between the two groups (n (%)).

| Group | Ischemic stroke | Hemorrhagic events |
|---|---|---|
| Good compliance group (n = 59) | 3 (5.1) | 6 (10.2) |
| Poor compliance group (n = 87) | 15 (17.2) | 10 (11.5) |
| χ² | 4.807 | 0.063 |
| P | 0.028 | 0.801 |

### 3.5. Correlation between NOAC Compliance and Severity of Ischemic Stroke

The strokes of the 3 patients in the good adherence group were all mild/minor strokes, whereas the strokes of the 12 patients in the poor adherence group were all moderate or severe (Table 6).

Table 6: Compliance scores and NIHSS scores of the 15 patients with ischemic stroke (points).

| Group | Patient no. | Compliance score | NIHSS score |
|---|---|---|---|
| Good compliance group (n = 3) | 7 | 12.0 | 3 |
| | 26 | 12.0 | 4 |
| | 69 | 12.0 | 2 |
| Poor compliance group (n = 12) | 4 | 9.3 | 3 |
| | 111 | 4.8 | 6 |
| | 28 | 10.5 | 22 |
| | 43 | 10.8 | 8 |
| | 47 | 11.0 | 15 |
| | 61 | 10.5 | 9 |
| | 70 | 10 | 23 |
| | 77 | 9.7 | 14 |
| | 85 | 4 | 19 |
| | 94 | 6.3 | 17 |
| | 97 | 5.5 | 24 |
| | 109 | 6.7 | 21 |

Note: a score of 0–1 was classified as normal, 1–4 as mild stroke/minor stroke, 5–15 as moderate stroke, 16–20 as moderate-severe stroke, and 21–42 as severe stroke.

Pearson correlation analysis showed a negative correlation (r = −0.465, P<0.001) between NOAC compliance and the severity of ischemic stroke in patients with NVAF (Figure 1; see the sketch following Figure 1).

Figure 1: Correlation between NOAC compliance and severity of ischemic stroke. There was a negative correlation (r = −0.465, P<0.001) between NOAC compliance and the severity of ischemic stroke in patients with NVAF.
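The severity binning and the correlation itself are straightforward to reproduce; the sketch below uses SciPy and resolves the overlapping 0–1/1–4 boundary as 0–1 = normal and 2–4 = mild. The paired values are placeholders rather than the exact Table 6 rows.

```python
# Sketch only: placeholder data, not the study's measurements.
from scipy.stats import pearsonr

def nihss_category(score):
    """Map an NIHSS score (0-42) to the severity bins used in this paper."""
    if score <= 1:
        return "normal"
    if score <= 4:
        return "mild/minor stroke"
    if score <= 15:
        return "moderate stroke"
    if score <= 20:
        return "moderate-severe stroke"
    return "severe stroke"

compliance = [12.0, 12.0, 12.0, 10.5, 9.7, 6.3, 5.5, 4.0]  # placeholder scores
nihss      = [3,    4,    2,    22,   14,  17,  24,  19]

r, p = pearsonr(compliance, nihss)
print(f"Pearson r={r:.3f}, p={p:.4f}")
for s in nihss:
    print(s, nihss_category(s))
```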
## 4. Discussion

As part of the cardiovascular epidemic of the 21st century, NVAF has become a major public health problem threatening citizens' health [9]. It occurs mainly in middle-aged and elderly people with organic heart disease, and its prevalence increases with age. The most serious complications of NVAF are thromboembolic events such as stroke [10]. According to statistics, about 13–26% of ischemic strokes are directly associated with NVAF, and in patients older than 80 years, AF is an even stronger risk factor for ischemic stroke [11].

At present, NOACs are an approved treatment for thromboembolic disease across multiple clinical indications [12]. However, surveys show that the drug intake rate of patients with NVAF in China is extremely low; a high prevalence of stroke combined with a low rate of anticoagulant therapy has become the new profile of atrial fibrillation patients in China [13].

With the wide application of risk stratification and scoring tools for atrial fibrillation thrombosis and bleeding, such as CHA2DS2-VASc and HAS-BLED, clinical practice has shown their value in identifying the risk of ischemic stroke and hemorrhagic transformation in NVAF patients before NOAC therapy. This is not only beneficial for correcting adverse events of NOACs but also strengthens confidence in their clinical use. In our study, 156 patients with NVAF who received NOACs in our institution were followed up for 2 years; 87 patients (59.6%) showed poor compliance.

Multiple logistic regression analysis further confirmed that education background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for NOAC compliance of NVAF patients. The reasons are manifold. First, in terms of educational background, patients with a higher educational level understand NOACs and individualized medication better, so their motivation for anticoagulation is also greater [14]. Second, compared with rural areas, where communication and medical infrastructure are less developed, urban residents may have advantages in regular monitoring, physician-patient interaction, and access to relevant knowledge [15].
Third, regarding the number of complications, most patients with NVAF have underlying conditions such as the "three highs" (hypertension, hyperglycemia, and hyperlipidemia), cardiovascular and cerebrovascular diseases, and liver and kidney diseases. These vary widely among individuals and strongly affect drug blood concentrations during treatment, so some patients are likely to stop taking their drugs or switch medicines midway. Finally, regarding the CHA2DS2-VASc and HAS-BLED scores, patients with a CHA2DS2-VASc score ≥2/3 (male/female) and a HAS-BLED score ≥3, facing a high stroke risk, may be more conscientious about taking their medication for fear of hemorrhagic transformation events [16].

In our study, the incidence of ischemic stroke in the good compliance group was lower than that in the poor compliance group, and the strokes of the 3 patients in the good compliance group were milder than those of the 12 patients in the poor compliance group. Moreover, Pearson correlation analysis showed that NOAC compliance in NVAF patients was negatively correlated with the severity of ischemic stroke [17]. These findings indicate that active and effective treatment with NOACs is an independent protective factor that effectively reduces the severity of ischemic stroke [18].

Some studies have pointed out that the risk of bleeding rises along with the benefit of NOACs [19]. In addition, because of differences in race, genetics, weight, and dietary structure, hemorrhagic events may be more frequent in Chinese patients. In practice, however, the benefits of NOAC therapy far outweigh the risks, provided that the relevant guidelines are strictly followed, indications are properly applied, embolism and bleeding risks are dynamically assessed, and coagulation function is closely monitored [20].

Notably, there was no significant difference in the incidence of hemorrhagic events between the good compliance group and the poor compliance group in our study. Possible reasons are the limited sample size and the wide age distribution; unlike previous studies, the sample was not limited to elderly patients, which may have affected the results [21].

In conclusion, a variety of factors contribute to poor adherence to NOACs in NVAF patients. Therefore, clinical supervision and management of patients with NVAF receiving NOACs should be strengthened to improve their compliance, reduce the damage of ischemic stroke, and improve their prognosis.

---
*Source: 1021127-2021-10-19.xml*
1021127-2021-10-19_1021127-2021-10-19.md
33,712
Analysis of Influencing Factors of Compliance with Non-Vitamin K Antagonist Oral Anticoagulant in Patients with Nonvalvular Atrial Fibrillation and Correlation with the Severity of Ischemic Stroke
Li Zhu; Xiaodan Zhang; Jing Yang
Evidence-Based Complementary and Alternative Medicine (2021)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1021127
1021127-2021-10-19.xml
--- ## Abstract Nonvalvular atrial fibrillation (NVAF) is associated with an increased risk of stroke and thrombus, and anticoagulant therapy is a key link in the prevention of stroke. At present, the anticoagulation rate of atrial fibrillation in China is low, and there are many factors affecting the adherence of patients with atrial fibrillation to anticoagulation. Non-vitamin K antagonist oral anticoagulants (NOACs) are anticoagulant with high application value due to their high safety and low risk of intracranial hemorrhage, stroke, and death. However, the compliance of NOACs is poor, and the current situation of anticoagulants in China is not optimistic. In this study, a total of 156 patients with NVAF who received NOAC anticoagulation therapy in our hospital from January 2018 to January 2019 were retrospectively analyzed. The results showed that education background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for NOACS compliance of NVAF patients. Also, the Pearson correlation analysis showed that there was a negative correlation (r = −0.465, P<0.001) between NOAC compliance and severity of ischemic stroke in patients with NVAF. Therefore, clinical supervision and management of patients with NVAF after NOACs should be strengthened to improve the compliance of patients with NVAF after NOACs, reduce the damage of ischemic stroke, and improve their prognosis. --- ## Body ## 1. Introduction Atrial fibrillation (AF) is an atrial rhythm with ineffective contractions and chaotic excitement caused by disorders of the heart’s electrical system [1]. This is the most common persistent arrhythmia in clinical practice. Ischemic stroke is one of the most dangerous complications of AF. Epidemiological statistics show that the prevalence of AF in our country is 0.8%, and the incidence of subsequent ischemic stroke is up to 5%. [2]. Also, the annual incidence of ischemic stroke in patients with nonvalvular atrial fibrillation (NVAF) is 3–5 times that of patients with non-AF [2]. The non-vitamin K antagonist oral anticoagulants (NOACs) can effectively reduce the stroke risk of NVAF patients by 60%∼70% [3].However, in the actual application of anticoagulation therapy, under the influence of individualized differences in dosages, cross reactions between medicines and foods, insufficient clinical anticoagulation therapy, and patients’ lack of awareness of the necessity of anticoagulation, the overall treatment rate and medication compliance of NOAC anticoagulation in NVAF patients are not high [4, 5]. It largely increases the risk of ischemic stroke in patients with NVAF in China, and the severity of stroke and its recurrence rate are also affected [6]. Based on this, this research discusses the influencing factors of NOAC anticoagulation compliance in NVAF patients and their correlation with the severity of ischemic stroke. It is expected to provide relevant references for the improvement of compliance with NOAC anticoagulation therapy and the management of safety and effectiveness in clinical NVAF patients. ## 2. Materials and Methods ### 2.1. General Information 156 patients with NVAF who received NOAC anticoagulation therapy in our hospital from January 2018 to January 2019 were retrospectively analyzed. There were 87 males and 69 females, with an average age of (59.87 ± 11.64) years. ### 2.2. 
Inclusion Criteria (i) All patients met the diagnostic criteria for NVAF in the 2017ACC Expert Consensus on Perioperative Anticoagulation Management Decisions in Patients with Nonvalvular Atrial Fibrillation [6]; (ii) the diagnostic criteria for patients with ischemic stroke were referred to the Chinese Guidelines for the Diagnosis and Treatment of Acute Ischemic Stroke (2018 edition); and (iii) patients who met the CHA2DS2-VASc (Including congestive heart failure/left ventricular dysfunction, hypertension, age ≥ 75 years, diabetes, history of stroke/TIA/thromboembolism, vascular disease, and female) and HAS-BLED (including hypertension, abnormal renal and liver function, stroke, bleeding, labile INRS, elderly, drugs, or alcohol) scoring criteria for NOACs anticoagulation [7]. ### 2.3. Exclusion Criteria (i) Accompanied by other diseases such as valvular disease and thromboembolic disease that require anticoagulation; (ii) pregnant and/or lactating women; (iii) patients with previous bleeding history; (iv) with other serious complications or important organ diseases; (v) patients with mental or consciousness disorders and communication difficulties; and (vi) patients who have joined other studies or received other anticoagulation therapy in recent 3 months. ### 2.4. Method #### 2.4.1. Collecting the Baseline Information All patients were followed up for 2 years, with an interval of 2 months in the first 6 months, an interval of 3 months in the next 6 months, and an interval of 4 months in the second year. Hospital review was the main method of follow-up.The electronic medical records were used to collect baseline data on all patients, including gender, age, diploma, occupation, marriage, smoking history, drinking history, number of complications, number of drugs used for complications, CHA2DS2-VASc score, and HAS-BLED score. Also, personal health follow-up records were established according to the collected data. #### 2.4.2. Survey of Compliance Design of the medication compliance questionnaire: patient compliance was measured by the following four questions: (i) Can you take your medication as often as your doctor requires? (ii) Can you take the medication in the amount required by the doctor? (iii) Are you able to take your medication at the time and in the manner required by your doctor? (vi) Can you take your medication for a long period of time as required by your doctor? (v) Can you take your medication regularly according to your doctor’s requirements? The options are “not at all,” “occasionally,” “basically,” and “completely,” respectively, 0∼3 points.The total score ranges from 0 to 12 points. If the score is 12, the medication adherence is good. If the score is <12, the medication adherence is poor. In this study, the Cronbach’sα coefficient of the questionnaire was 0.867, indicating good reliability. #### 2.4.3. Investigation of Clinical End Points At the last follow-up, the number of ischemic stroke, hemorrhagic events, and death cases in the good compliance group and poor compliance group was counted. #### 2.4.4. Survey of Ischemic Stroke Severity The severity of stroke in all patients with ischemic stroke was assessed using the NIH Stroke Scale (NIHSS) [8]. The full score of the NIHSS scale was 42; a score of 0-1 was classified as normal, 1∼4 as mild stroke/minor stroke, 5–15 as moderate stroke, 16–20 as moderate severe stroke, and 21∼42 as severe stroke. ### 2.5. 
Statistical Methods All data were processed with SPSS 22.0 statistical software, and GraghPad prism 8 was used to make statistical graphs. Measurement data are expressed as mean ± standard deviation (x¯±s), an independent-sample t-test is used for comparison between groups, count data are expressed as [n (%)], and the chi-square (χ2) test is performed. Factors significant in univariate analysis were subjected to multiple logistic regression model analysis. Correlation analysis was performed by Pearson correlation analysis. The difference is statistically significant when P<0.05. ## 2.1. General Information 156 patients with NVAF who received NOAC anticoagulation therapy in our hospital from January 2018 to January 2019 were retrospectively analyzed. There were 87 males and 69 females, with an average age of (59.87 ± 11.64) years. ## 2.2. Inclusion Criteria (i) All patients met the diagnostic criteria for NVAF in the 2017ACC Expert Consensus on Perioperative Anticoagulation Management Decisions in Patients with Nonvalvular Atrial Fibrillation [6]; (ii) the diagnostic criteria for patients with ischemic stroke were referred to the Chinese Guidelines for the Diagnosis and Treatment of Acute Ischemic Stroke (2018 edition); and (iii) patients who met the CHA2DS2-VASc (Including congestive heart failure/left ventricular dysfunction, hypertension, age ≥ 75 years, diabetes, history of stroke/TIA/thromboembolism, vascular disease, and female) and HAS-BLED (including hypertension, abnormal renal and liver function, stroke, bleeding, labile INRS, elderly, drugs, or alcohol) scoring criteria for NOACs anticoagulation [7]. ## 2.3. Exclusion Criteria (i) Accompanied by other diseases such as valvular disease and thromboembolic disease that require anticoagulation; (ii) pregnant and/or lactating women; (iii) patients with previous bleeding history; (iv) with other serious complications or important organ diseases; (v) patients with mental or consciousness disorders and communication difficulties; and (vi) patients who have joined other studies or received other anticoagulation therapy in recent 3 months. ## 2.4. Method ### 2.4.1. Collecting the Baseline Information All patients were followed up for 2 years, with an interval of 2 months in the first 6 months, an interval of 3 months in the next 6 months, and an interval of 4 months in the second year. Hospital review was the main method of follow-up.The electronic medical records were used to collect baseline data on all patients, including gender, age, diploma, occupation, marriage, smoking history, drinking history, number of complications, number of drugs used for complications, CHA2DS2-VASc score, and HAS-BLED score. Also, personal health follow-up records were established according to the collected data. ### 2.4.2. Survey of Compliance Design of the medication compliance questionnaire: patient compliance was measured by the following four questions: (i) Can you take your medication as often as your doctor requires? (ii) Can you take the medication in the amount required by the doctor? (iii) Are you able to take your medication at the time and in the manner required by your doctor? (vi) Can you take your medication for a long period of time as required by your doctor? (v) Can you take your medication regularly according to your doctor’s requirements? The options are “not at all,” “occasionally,” “basically,” and “completely,” respectively, 0∼3 points.The total score ranges from 0 to 12 points. If the score is 12, the medication adherence is good. 
## 3. Results
### 3.1. Baseline Data and Compliance Scores of Follow-Up Patients

A total of 156 patients entered follow-up; 8 were lost to contact and 2 died of various causes during the follow-up period. The number of cases with valid follow-up was therefore 146, a follow-up rate of 93.6%. The compliance score of the 146 patients was (10.5 ± 1.4) points; 59 patients (40.4%) formed the good compliance group and 87 patients (59.6%) the poor compliance group (Table 1).

Table 1: Baseline information and compliance scores of the 146 follow-up patients (n, x̄ ± s).

| Clinical information | Baseline data |
|---|---|
| Gender | 85 males (58.2%), 61 females (41.8%) |
| Age | <65 years: 53 (36.3%); ≥65 years: 93 (63.7%) |
| Educational background | No educational background/primary school: 46 (31.5%); middle school/high school: 58 (39.7%); university or above: 42 (28.8%) |
| Place of residence | Rural areas: 67 (45.9%); cities and towns: 79 (54.1%) |
| Marriage situation | Unmarried/widowed: 69 (47.3%); married: 77 (52.7%) |
| Smoking history | Yes: 51 (34.9%); no: 95 (65.1%) |
| Drinking history | Yes: 56 (38.4%); no: 90 (61.6%) |
| Number of complications | 0–6 (2.9 ± 1.6) types |
| Number of concomitant medications | 0–8 (4.1 ± 2.1) types |
| CHA2DS2-VASc score | <2/3 (male/female): 45 (30.8%); ≥2/3 (male/female): 101 (69.2%) |
| HAS-BLED score | <3: 77 (52.7%); ≥3: 69 (47.3%) |
| Compliance | (10.5 ± 1.4) points; good compliance: 59 (40.4%); poor compliance: 87 (59.6%) |

### 3.2. Univariate Analysis of Factors Influencing Compliance with NOACs

Univariate analysis showed that age, educational background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score influenced NOAC compliance in NVAF patients (P < 0.05) (Table 2).

Table 2: Univariate analysis of factors influencing compliance with NOACs (n, %, x̄ ± s).

| Clinical information | Good compliance group (n = 59) | Poor compliance group (n = 87) | χ²/t | P |
|---|---|---|---|---|
| Gender: male | 35 (59.3) | 50 (57.5) | 0.050 | 0.824 |
| Gender: female | 24 (40.7) | 37 (42.5) | | |
| Age: <65 years | 41 (69.5) | 40 (75.5) | 7.870 | 0.001 |
| Age: ≥65 years | 18 (30.5) | 47 (72.3) | | |
| Education: none/primary school | 28 (47.5) | 18 (20.7) | 16.350 | ≤0.001 |
| Education: middle school/high school | 18 (30.5) | 40 (46.0) | | |
| Education: university or above | 9 (15.3) | 33 (37.9) | | |
| Residence: rural area | 49 (83.1) | 18 (20.7) | 55.062 | ≤0.001 |
| Residence: cities and towns | 10 (16.9) | 69 (79.3) | | |
| Marriage: unmarried/widowed | 36 (61.0) | 33 (37.93) | 7.518 | 0.006 |
| Marriage: married | 23 (29.87) | 54 (70.13) | | |
| Smoking history: yes | 25 (42.4) | 26 (29.9) | 2.412 | 0.120 |
| Smoking history: no | 34 (57.6) | 61 (70.1) | | |
| Drinking history: yes | 18 (30.5) | 33 (37.9) | 0.852 | 0.356 |
| Drinking history: no | 41 (69.5) | 54 (62.1) | | |
| Number of complications | 2.6 ± 1.4 | 3.1 ± 0.9 | 2.627 | 0.012 |
| Number of concomitant medications | 4.3 ± 2.1 | 4.6 ± 2.2 | 0.823 | 0.412 |
| CHA2DS2-VASc: <2/3 (male/female) | 40 (67.80) | 5 (5.75) | 63.484 | ≤0.001 |
| CHA2DS2-VASc: ≥2/3 (male/female) | 19 (32.20) | 82 (94.25) | | |
| HAS-BLED: <3 | 51 (86.44) | 16 (18.39) | 49.313 | ≤0.001 |
| HAS-BLED: ≥3 | 8 (13.56) | 51 (58.62) | | |

### 3.3. Multivariate Analysis of Factors Influencing Compliance with NOACs

Compliance was taken as the dependent variable, and the factors showing significant differences in Table 2 were entered as independent variables into the logistic regression model.
The assignments of the dependent and independent variables are shown in Table 3. A regression sketch following this coding is given at the end of this section.

Table 3: Variable assignment for the multivariate analysis of factors influencing compliance with NOACs.

| Variable | Assignment |
|---|---|
| Compliance (dependent variable) | Good = 0, poor = 1 |
| Age | <60 years = 0, ≥60 years = 1 |
| Marriage situation | Married = 0, unmarried/widowed = 1 |
| Educational background | University or above = 0, middle school/high school = 1, no educational background/primary school = 2 |
| Place of residence | Cities and towns = 0, rural area = 1 |
| Number of complications | Actual value |
| CHA2DS2-VASc score | ≥2/3 (male/female) = 0, <2/3 (male/female) = 1 |
| HAS-BLED score | ≥3 = 0, <3 = 1 |

Educational background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for oral NOAC compliance in NVAF patients (Table 4).

Table 4: Multivariate analysis of factors influencing compliance with NOACs.

| Factors | β | SE | Wald | P | OR (95% CI) |
|---|---|---|---|---|---|
| Age ≥ 60 years | 0.943 | 1.117 | 4.297 | 0.174 | 3.116 (0.561–1.209) |
| Unmarried/widowed | 1.165 | 2.545 | 4.436 | 0.098 | 1.765 (0.361–1.935) |
| No educational background/primary school | 5.112 | 1.628 | 9.817 | 0.007 | 166.861 (6.796–4094.401) |
| Middle school/high school | 3.290 | 1.432 | 5.254 | 0.027 | 26.983 (1.610–451.128) |
| Rural area | 1.566 | 0.742 | 4.579 | 0.041 | 1.342 (0.571–0.478) |
| Number of complications | 0.928 | 0.454 | 4.136 | 0.047 | 2.538 (1.030–6.244) |
| CHA2DS2-VASc score <2/3 (male/female) | 2.211 | 0.982 | 5.004 | 0.029 | 0.106 (0.012–0.756) |
| HAS-BLED score ≥3 | 2.786 | 1.247 | 5.132 | 0.028 | 12.431 (0.964–38.657) |

### 3.4. Comparison of Clinical End Points between the Two Groups

The incidence of ischemic stroke in the good compliance group (5.1%) was lower than that in the poor compliance group (17.2%), and the difference was significant (P < 0.05). There was, however, no significant difference in the incidence of hemorrhagic events between the two groups (P > 0.05). Among the patients who died, 2 patients in the poor compliance group died of stroke, while the remaining patients died of causes other than thromboembolism and hemorrhagic events (Table 5).

Table 5: Comparison of clinical end points between the two groups (n, %).

| Group | Ischemic stroke | Hemorrhagic events |
|---|---|---|
| Good compliance group (n = 59) | 3 (5.1) | 6 (10.2) |
| Poor compliance group (n = 87) | 15 (17.2) | 10 (11.5) |
| χ² | 4.807 | 0.063 |
| P | 0.028 | 0.801 |

### 3.5. Correlation between NOAC Compliance and Severity of Ischemic Stroke

The strokes in the 3 patients of the good adherence group were all mild/minor strokes, whereas the strokes in the 12 patients of the poor adherence group were all of moderate or greater severity (Table 6).

Table 6: Compliance scores and NIHSS scores of the 15 patients with ischemic stroke (cases, points).

| Group | Patient number | Compliance score | NIHSS score |
|---|---|---|---|
| Good compliance group (n = 3) | 7 | 12.0 | 3 |
| | 26 | 12.0 | 4 |
| | 69 | 12.0 | 2 |
| Poor compliance group (n = 12) | 4 | 9.3 | 31 |
| | 11 | 4.8 | 6 |
| | 28 | 10.5 | 22 |
| | 43 | 10.8 | 8 |
| | 47 | 11.0 | 15 |
| | 61 | 10.5 | 9 |
| | 70 | 10 | 23 |
| | 77 | 9.7 | 14 |
| | 85 | 4.1 | 9 |
| | 94 | 6.3 | 17 |
| | 97 | 5.5 | 24 |
| | 109 | 6.7 | 21 |

Note: a score of 0–1 was classified as normal, 1–4 as mild/minor stroke, 5–15 as moderate stroke, 16–20 as moderate-severe stroke, and 21–42 as severe stroke.

Pearson correlation analysis showed a negative correlation (r = −0.791, P < 0.001) between NOAC compliance and the severity of ischemic stroke in patients with NVAF (Figure 1).

Figure 1: Correlation between NOAC compliance and severity of ischemic stroke; a negative correlation (r = −0.465, P < 0.001) between NOAC compliance and ischemic stroke severity in patients with NVAF.
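For readers who want to reproduce the shape of this analysis, the sketch below fits a logistic regression on a data frame coded as in Table 3, using statsmodels instead of SPSS. The data are randomly generated placeholders and every column name is our own invention; the point is the coding scheme and the odds-ratio extraction, not the numbers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 146  # same size as the follow-up cohort

# Hypothetical data frame coded per Table 3.
df = pd.DataFrame({
    "poor_compliance": rng.integers(0, 2, n),  # good = 0, poor = 1 (dependent)
    "age_ge60":        rng.integers(0, 2, n),  # <60 years = 0, >=60 years = 1
    "unmarried":       rng.integers(0, 2, n),  # married = 0, unmarried/widowed = 1
    "education":       rng.integers(0, 3, n),  # 0 university+, 1 middle/high, 2 none/primary
    "rural":           rng.integers(0, 2, n),  # cities/towns = 0, rural = 1
    "complications":   rng.integers(0, 7, n),  # actual count (0-6)
    "chadsvasc_low":   rng.integers(0, 2, n),  # >=2/3 (m/f) = 0, <2/3 (m/f) = 1
    "hasbled_low":     rng.integers(0, 2, n),  # >=3 = 0, <3 = 1
})

# C(education) dummy-codes the three education levels against the
# university-or-above reference, matching the per-level ORs of Table 4.
model = smf.logit(
    "poor_compliance ~ age_ge60 + unmarried + C(education) + rural"
    " + complications + chadsvasc_low + hasbled_low",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, analogous to Table 4.
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci.round(3))
```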
## 4. Discussion

As one of the cardiovascular epidemics of the 21st century, NVAF has become a major public health problem threatening the safety of citizens [9]. It occurs mainly in middle-aged and elderly people with organic heart disease, and its prevalence increases with age. The most serious complications of NVAF are thromboembolic events such as stroke [10]. According to statistics, about 13–26% of ischemic strokes are directly associated with NVAF, and in patients older than 80 years, AF is an even stronger risk factor for concurrent ischemic stroke [11].

At present, NOACs are an approved treatment for thromboembolic disease across multiple clinical indications [12]. However, surveys have shown that the proportion of NVAF patients in China who take anticoagulants is extremely low; a high prevalence of stroke combined with a low rate of anticoagulant therapy has become a defining feature of atrial fibrillation patients in China [13].

With the wide application of risk stratification and scoring tools for atrial fibrillation thrombosis and bleeding, such as CHA2DS2-VASc and HAS-BLED, clinical practice has found it valuable to assess the risk of ischemic stroke and hemorrhagic transformation in NVAF patients before initiating NOACs. This not only helps to prevent adverse events of NOACs but also strengthens clinicians' confidence in using them. In our study, 156 patients with NVAF who received NOACs in our institution were followed up for 2 years; 87 patients (59.6%) showed poor compliance.

Multiple logistic regression analysis further confirmed that educational background, place of residence, number of complications, CHA2DS2-VASc score, and HAS-BLED score were independent influencing factors for NOAC compliance in NVAF patients. Several explanations are plausible. First, regarding educational background, patients with a higher educational level understand NOACs and individualized medication better, so their subjective initiative toward anticoagulation is greater [14]. Second, compared with rural areas, where communication and medical infrastructure are less developed, urban residents may have advantages in regular monitoring, physician-patient interaction, and access to relevant knowledge [15]. Third, regarding the number of complications, most patients with NVAF have underlying conditions such as hypertension, hyperglycemia, and hyperlipidemia, cardiovascular and cerebrovascular diseases, or liver and kidney diseases; these vary considerably between individuals and strongly affect plasma drug concentrations, so some patients discontinue or switch medications midway. Finally, regarding the CHA2DS2-VASc and HAS-BLED scores, patients with a CHA2DS2-VASc score ≥2/3 (male/female) and a HAS-BLED score ≥3 may be more conscientious about taking their medication because, facing a high stroke risk, they fear hemorrhagic conversion events [16].

In our study, the incidence of ischemic stroke in the good compliance group was lower than that in the poor compliance group, and the strokes of the 3 patients in the good compliance group were milder than those of the 12 patients in the poor compliance group. Moreover, Pearson correlation analysis showed that NOAC compliance in NVAF patients was negatively correlated with the severity of ischemic stroke [17].
These findings indicate that active and effective treatment with NOACs is an independent protective factor that can reduce the severity of ischemic stroke [18]. Some studies have pointed out that the risk of bleeding rises as the benefit of NOACs increases [19]. In addition, owing to differences in race, genetics, body weight, and dietary structure, hemorrhagic events may be more frequent in Chinese patients. In practice, however, the benefits of NOAC therapy far outweigh the risks, provided that the relevant guidelines are strictly followed, the indications are properly applied, embolism and bleeding risks are dynamically assessed, and coagulation function is closely monitored [20]. Notably, there was no significant difference in the incidence of hemorrhagic events between the good compliance group and the poor compliance group in our study. Possible reasons are the limited sample size and the wide age distribution of our sample, which, unlike previous studies, was not restricted to elderly patients; these factors may have influenced the results [21].

In conclusion, a variety of factors contribute to poor adherence to NOACs in NVAF patients. Clinical supervision and management of NVAF patients on NOACs should therefore be strengthened to improve compliance, reduce the damage caused by ischemic stroke, and improve prognosis.

---
*Source: 1021127-2021-10-19.xml*
# Osteoprotegerin, Soluble Receptor Activator of Nuclear Factor-κB Ligand, and Subclinical Atherosclerosis in Children and Adolescents with Type 1 Diabetes Mellitus

**Authors:** Irene Lambrinoudaki; Emmanouil Tsouvalas; Marina Vakaki; George Kaparos; Kimon Stamatelopoulos; Areti Augoulea; Paraskevi Pliatsika; Andreas Alexandrou; Maria Creatsa; Kyriaki Karavanaki

**Journal:** International Journal of Endocrinology (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/102120

---

## Abstract

Aims. To evaluate carotid intima-media thickness (cIMT) and biomarkers of the osteoprotegerin/receptor activator of nuclear factor-κB ligand (OPG/RANKL) system in children and adolescents with type 1 diabetes (T1DM) and controls. Subjects and Methods. Fifty-six T1DM patients (mean ± SD age: 12.0 ± 2.7 years, diabetes duration: 5.42 ± 2.87 years, and HbA1c: 8.0 ± 1.5%) and 28 healthy matched controls were studied with anthropometric and laboratory measurements, including serum OPG, soluble RANKL (sRANKL), and cIMT. Results. Anthropometric, laboratory, and cIMT measurements were similar between T1DM youngsters and controls. However, patients with longer diabetes duration (≥7.0 years) had indicatively higher cIMT (0.49 vs 0.44 mm, P = 0.072) and triglyceride levels (93.7 vs 64.6 mg/dL, P = 0.025) than the rest of the patients. Both in the total study population (β = 0.418, P = 0.027) and among T1DM patients separately (β = 0.604, P = 0.013), BMI was the only factor associated with cIMT. BMI was further associated with OPG in both groups (β = −0.335, P = 0.003 and β = −0.356, P = 0.008, resp.), while sRANKL levels were not associated with any factor. Conclusions. BMI was the strongest independent predictor of cIMT in the whole population, and especially in diabetics, suggesting a possible synergistic effect of diabetes and adiposity on atherosclerotic burden. BMI was overall strongly associated with circulating OPG, but the causes of this association remain unclear.

---

## Body

## 1. Introduction

Atherosclerosis is a chronic progressive inflammatory process that begins with lipid deposits and fatty streaks on the arterial intima, which advance to atheromatous plaques [1]. Early atherosclerotic signs are already present in childhood and adolescence [2–4], particularly in subjects with risk factors such as a family history of early cardiac events, sedentary lifestyle, smoking, dyslipidemia, hypertension, obesity, and diabetes [5, 6]. Typically, both subclinical and clinical atherosclerotic disease have an earlier onset in patients diagnosed with type 1 diabetes mellitus (T1DM), and the atherosclerotic lesions are more severe and extended [3, 5, 7].

Endothelial dysfunction is the earliest detectable manifestation of diabetic atherosclerotic vascular disease. Therefore, diabetic children with endothelial dysfunction are considered to be at especially high risk of early structural atherosclerotic vascular changes [8]. Nowadays, cardiovascular disease has also become the primary cause of mortality among young adults with T1DM [3].
Thus, primary prevention and the early detection of subclinical atherosclerotic signs with imaging techniques are of great importance, while surrogate biomarkers are also being evaluated for their clinical relevance.

Ultrasound measurement of the carotid intima-media thickness (cIMT) has been widely utilized as a screening method for nonsymptomatic atherosclerotic lesions and plaques [1, 9]. Increased cIMT has been shown to correlate with vascular risk factors and also with the extent and severity of coronary artery disease [2]. Although clear-cut normative data do not yet exist for the pediatric population [9], cIMT in children and adolescents has been reported to be increased in the presence of hypercholesterolemia, hypertension, and obesity [2, 8]. However, previous studies in children and adolescents with T1DM have yielded conflicting results regarding the presence or absence of early atherosclerotic lesions [2, 10].

Vascular calcification is strongly correlated with plaque rupture, a process which involves cells responsive to bone-controlling cytokines. In this context, bone-regulating molecules, including the receptor activator of nuclear factor-κB and its ligand (RANK and RANKL, resp.) as well as RANKL's inhibitor, osteoprotegerin (OPG), are increasingly being investigated as markers of cardiovascular risk [11] and as a link between bone metabolism and vascular calcification. Circulating OPG has been suggested to have a possible role in atherosclerosis; this role could be either mediating or, most probably, anti-inflammatory and compensatory [11]. RANKL promotes osteoclastogenesis and bone resorption, while OPG inhibits RANKL-mediated actions [11]. There are very limited studies on the use of OPG as an index of endothelial dysfunction in adult patients with T1DM or T2DM [11–14], and only one previous study on OPG in T1DM children and adolescents, which, however, examined OPG only in relation to bone status [12], not endothelial dysfunction.

Furthermore, owing to the scarcity of cardiovascular events at an early age, diagnostic and treatment algorithms for subclinical atherosclerosis in children and adolescents have not been standardized [3, 9, 15]. In this context, we undertook the present study in order to evaluate subclinical atherosclerosis and biomarkers of the OPG/RANK/RANKL system in association with anthropometric characteristics and laboratory measurements in T1DM children and adolescents in comparison to nondiabetic controls.

## 2. Materials and Methods

### 2.1. Subjects

We studied 56 Greek children and adolescents with type 1 diabetes, already diagnosed and being followed for diabetes, and 28 healthy controls matched for age, gender, and body mass index (BMI) (2 patients : 1 control). The inclusion criteria for diabetic children were age ≤18 years, diabetes duration ≥2 years, normal arterial pressure (systolic and diastolic, according to the 95th percentile for age), and no chronic disease other than associated autoimmune diseases (autoimmune thyroiditis, celiac disease, and autoimmune gastritis).
The criteria for the diagnosis of T1DM were fasting plasma glucose ≥126 mg/dL (≥7.0 mmol/L), or symptoms of hyperglycemia (polyuria, polydipsia, and unexplained weight loss) with a random plasma glucose >200 mg/dL (11.1 mmol/L), or a two-hour plasma glucose ≥200 mg/dL (11.1 mmol/L) during an oral glucose tolerance test [13].

None of the diabetic patients was taking chronic medications other than daily insulin. Patients were consecutively recruited from the outpatient diabetic clinic of the Second University Department of Pediatrics, “P&A Kyriakou” Children’s Hospital of Athens. Control children were recruited among the staff of the hospital following an invitation to their parents. The study was approved by the hospital ethics committee, and all parents gave their informed consent.

Participants went through a single-day structured examination program including medical history recording and cardiovascular risk factor evaluation. Weight was measured on the same electronic scale, and height was recorded using a stadiometer in the upright position. BMI was calculated as weight (kg)/height² (m²). Blood pressure was measured in duplicate, and the mean value was recorded for each individual. Values below the 95th percentile for age and gender were considered normal. Fasting venous blood samples were obtained after an overnight fast of 8 hours, with minimal stasis, from an antecubital vein. Centrifugation was performed within one hour, and serum was stored at −80°C. Finally, patients were evaluated by carotid ultrasound.

### 2.2. Laboratory Analyses

Glucose was measured using a standard enzymatic method on the Biochemical INTEGRA 800 Analyzer (ROCHE). The same analyzer was used to measure ultrasensitive C-reactive protein (CRP) (particle-enhanced immunoturbidimetric assay), creatinine (kinetic Jaffé method), and urea (kinetic test with urease and glutamate dehydrogenase). Serum total cholesterol (Tchol), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG), and high-density lipoprotein cholesterol (HDL-C) were measured with an enzymatic colorimetric method, while Apolipoprotein A1 (ApoA1) and Apolipoprotein B (ApoB) were measured using an immunonephelometric BN II method, all on the same analyzer. HbA1c was measured on a DCA 2000 analyzer; the normal range for HbA1c in our laboratory was 4.4%–6.2%. Complete blood cell counts were measured using the fluorescent flow cytometry method (Sysmex XT1800i Analyzer).

Estimated Glomerular Filtration Rate (eGFR) was calculated using the following formulas: for children <13 years and girls 13–18 years, eGFR = 0.55 × height (cm)/creatinine (mg/dL); for boys 13–18 years, eGFR = 0.70 × height (cm)/creatinine (mg/dL); for diabetic boys, eGFR = 186 × creatinine^(−1.154) × age^(−0.203); and for diabetic girls, eGFR = 186 × creatinine^(−1.154) × age^(−0.203) × 0.742 (a programmatic sketch of these formulas is given at the end of this section).

Serum osteoprotegerin was measured by a commercially available kit (BioVendor-Laboratorni medicina a.s.), using the BioVendor Human Osteoprotegerin ELISA. The intra-assay coefficient of variation was 4.5% and the interassay variation was 7.8%, as provided by the manufacturer. Serum sRANKL (total) was measured by a commercially available kit (BioVendor-Laboratorni medicina a.s.), using the BioVendor Human sRANKL (Total) ELISA technique. The intra-assay coefficient of variation was 8.8% and the interassay variation was 11%, as provided by the manufacturer.
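Written as code, the eGFR formulas above look as follows. This is a minimal sketch under two assumptions: the pediatric formulas are the height/creatinine (Schwartz-type) expressions exactly as printed, and the diabetic-subgroup formulas take the standard MDRD form with negative exponents; the function names are hypothetical.

```python
def egfr_pediatric(height_cm: float, creatinine_mg_dl: float, k: float) -> float:
    """Schwartz-type pediatric eGFR: k * height / creatinine.

    k = 0.55 for children <13 years and for girls 13-18 years;
    k = 0.70 for boys 13-18 years.
    """
    return k * height_cm / creatinine_mg_dl


def egfr_mdrd(creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """MDRD-type eGFR used here for the diabetic subgroups:
    186 * creatinine^(-1.154) * age^(-0.203), times 0.742 for girls.
    """
    egfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * 0.742 if female else egfr


# Example: a 10-year-old girl, height 140 cm, creatinine 0.6 mg/dL.
print(f"{egfr_pediatric(140, 0.6, k=0.55):.1f} mL/min")
```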
### 2.3. Ultrasound Studies

The B-mode ultrasound scans of the carotid arteries were performed using a Logiq 7 GE medical ultrasound machine (10 MHz linear transducer). The right and left common carotid arteries were imaged in the neck: a longitudinal section of the common carotid artery, 1 cm proximal to the carotid bulb, was imaged. Six carotid intima-media thickness (cIMT) measurements of the far wall of each artery (right and left), at 3 mm intervals, were obtained, starting 1 cm proximal to the bulb and moving proximally, as previously reported [14]. The reported cIMT measurement for each artery is the average of these 6 measurements, and the combined cIMT value is the average of the two (right and left) arteries (this averaging scheme is sketched in code after Section 2.4). All ultrasound scans were performed by a single experienced sonographer, who had no knowledge of the clinical or laboratory profile of the study subjects. Intraobserver coefficients of variation were 4.3% for left and 3.1% for right cIMT measurements.

### 2.4. Data Analysis

Data analysis was performed using SPSS version 19.0 (SPSS, Chicago, IL, USA). Variables were normally distributed, except OPG and RANKL, which were log-transformed for statistical analysis. Mean values of demographic, anthropometric, and serologic parameters, as well as levels of OPG and sRANKL and cIMT measurements, were compared between diabetic patients and controls. Univariate comparisons between groups were performed using analysis of variance (ANOVA) and the t-test, while simple correlations for continuous variables were assessed using Pearson's correlation coefficient. Furthermore, T1DM patients were divided into two subgroups according to diabetes duration (cut-off point predefined as mean duration + 1 SD, which resulted in 7.16 years, i.e., 7 years): (a) longer diabetes duration (≥7 years) and (b) moderate/shorter duration (<7 years). These subgroups were also compared in terms of cIMT measurements and levels of OPG or sRANKL, in order to detect any abnormalities in the high-risk subgroup with longer diabetes duration. Multiple stepwise linear regression analysis was performed to examine the factors significantly affecting cIMT measurements and serum levels of OPG and sRANKL, a priori including possible and known confounders and allowing for the addition of other (significant) independent factors. First, the total study sample (diabetic patients plus controls) was examined using regression models, with age, gender, and presence of T1DM treated as probable confounders, while cIMT measurements were also examined treating levels of OPG as an additional a priori confounder. In order to clarify whether the relationships differ for young T1DM patients, these were separately examined in similar regression models, additionally adjusting for years of diabetes and HbA1c as a priori possible confounders. The latter were not added to the total study sample models, in order to be parsimonious. Statistical significance was set at the 0.05 level.
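The cIMT protocol of Section 2.3 reduces to a simple averaging scheme, sketched below with numpy on hypothetical measurement arrays (not study data).

```python
import numpy as np

# Six far-wall IMT measurements (mm) per artery, taken at 3 mm intervals
# starting 1 cm proximal to the bulb (hypothetical values).
right = np.array([0.44, 0.45, 0.43, 0.46, 0.44, 0.45])
left = np.array([0.46, 0.47, 0.45, 0.46, 0.47, 0.46])

r_cimt = right.mean()                # reported right cIMT
l_cimt = left.mean()                 # reported left cIMT
c_cimt = np.mean([r_cimt, l_cimt])   # combined cIMT: mean of the two arteries

print(f"RcIMT = {r_cimt:.3f} mm, LcIMT = {l_cimt:.3f} mm, CcIMT = {c_cimt:.3f} mm")
```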
## 3. Results

Diabetic and control children and adolescents in our study were of similar demographic and anthropometric characteristics; matching, therefore, was considered successful (Table 1).
In the diabetic group, mean HbA1c, blood glucose, and urea levels were significantly higher compared to those of the control group. TG levels in the diabetic group were marginally higher, although the difference did not reach statistical significance. All other measurements, including serum levels of OPG and sRANKL and cIMT measurements, were similar between patients and controls (Table 1).

Table 1: Comparison of demographic, anthropometric, and biochemical parameters and sonographic findings in children and adolescents between the diabetic and control groups.

| Parameters | T1DM (N = 56), mean ± SD or % | Controls (N = 28), mean ± SD or % | P value |
|---|---|---|---|
| Age (years) | 12.0 (2.7) | 12.1 (3.3) | 0.426 |
| Male | 30 (53.4%) | 15 (53.5%) | 0.812 |
| BMI (kg/m²) | 20.9 (3.8) | 19.6 (3.4) | 0.155 |
| SDS BMI* | 69.9 | 70.3 | 0.943 |
| HbA1c (%) | 8.02 (1.52) | 4.12 (0.93) | 0.001 |
| Glucose (mg/dL) | 143.4 (84.1) | 81.9 (10.3) | 0.001 |
| Diabetes duration (years) | 5.42 (2.87) | — | — |
| Urea (mg/dL) | 29.65 (8.17) | 24.00 (4.16) | 0.001 |
| Creatinine (mg/dL) | 0.70 (0.15) | 0.62 (0.17) | 0.134 |
| eGFR (mL/min) | 132.1 (23.1) | 136.3 (14.4) | 0.529 |
| CRP (mg/dL) | 0.84 (1.27) | 0.46 (0.52) | 0.304 |
| Tchol (mg/dL) | 160.6 (20.1) | 157.4 (25.0) | 0.683 |
| TG (mg/dL) | 72.00 (4.52) | 57.69 (17.03) | 0.060 |
| HDL-C (mg/dL) | 59.13 (10.02) | 59.54 (6.97) | 0.803 |
| LDL-C (mg/dL) | 90.91 (20.86) | 90.46 (24.65) | 0.952 |
| ApoA1 (mg/dL) | 154.8 (18.3) | 149.2 (16.0) | 0.279 |
| ApoB (mg/dL) | 65.33 (12.47) | 64.54 (14.25) | 0.856 |
| WBC (cells/μL) | 6,775 (1,592) | 6,323 (1,538) | 0.356 |
| OPG (pmol/L) | 2.80 (0.80) | 2.66 (0.60) | 0.365 |
| sRANKL (pmol/L) | 303.5 (223.77) | 354.9 (259.4) | 0.379 |
| RcIMT (mm) | 0.44 (0.06) | 0.46 (0.05) | 0.504 |
| LcIMT (mm) | 0.46 (0.05) | 0.47 (0.05) | 0.916 |
| CcIMT (mm) | 0.45 (0.05) | 0.47 (0.05) | 0.680 |

BMI: body mass index; eGFR: Estimated Glomerular Filtration Rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: Apolipoprotein A1; ApoB: Apolipoprotein B; WBC: white blood cells; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness; SDS BMI: standardized BMI (age- and sex-adjusted percentile).

In the subgroup of patients with longer diabetes duration (≥7 years), cIMT measurements were indicatively higher in both carotid arteries when compared to those of the subgroup with shorter diabetes duration (<7 years; P = 0.086, 0.094, and 0.072 for right, left, and combined measurements, resp.; Table 2). No statistically significant difference in OPG or sRANKL levels was observed between the two subgroups of patients with T1DM. Patients in the subgroup with longer diabetes duration were indicatively older and had higher triglyceride levels (P = 0.025) and higher BMI (P = 0.044), but similar glycaemic control (HbA1c, P = 0.133), when compared to the subgroup of T1DM patients with shorter diabetes duration. Serum levels of OPG and sRANKL did not exhibit significant associations with any variable in this subgroup analysis.

Table 2: Comparison of demographic, anthropometric, and biochemical parameters and sonographic findings in children and adolescents according to T1DM duration.
| Parameters | Duration <7 years (N = 41), mean ± SD | Duration ≥7 years (N = 15), mean ± SD | P value |
|---|---|---|---|
| Age (years) | 11.53 (2.7) | 13.32 (3.3) | 0.076 |
| BMI (kg/m²) | 20.34 (3.82) | 22.57 (3.26) | 0.044 |
| HbA1c (%) | 7.83 (1.23) | 8.55 (2.13) | 0.133 |
| Glucose (mg/dL) | 123.02 (68.86) | 203.29 (98.18) | 0.001 |
| Diabetes duration (years) | 4.06 (1.54) | 9.39 (2.03) | 0.001 |
| Urea (mg/dL) | 28.85 (7.72) | 32.01 (9.27) | 0.217 |
| Creatinine (mg/dL) | 0.68 (0.14) | 0.79 (0.15) | 0.018 |
| eGFR (mL/min) | 131.41 (24.33) | 133.93 (19.82) | 0.729 |
| CRP (mg/dL) | 0.88 (1.43) | 0.71 (0.61) | 0.682 |
| Tchol (mg/dL) | 160.66 (21.14) | 160.29 (32.30) | 0.961 |
| TG (mg/dL) | 64.59 (30.46) | 93.71 (63.03) | 0.025 |
| HDL-C (mg/dL) | 60.39 (10.48) | 55.43 (7.71) | 0.069 |
| LDL-C (mg/dL) | 90.85 (19.80) | 91.07 (24.51) | 0.973 |
| ApoA1 (mg/dL) | 154.43 (18.89) | 155.86 (17.24) | 0.804 |
| ApoB (mg/dL) | 64.80 (11.10) | 66.86 (16.14) | 0.616 |
| WBC (cells/μL) | 6,908 (1,619) | 6,385 (1,499) | 0.294 |
| OPG (pmol/L) | 2.82 (0.87) | 2.74 (0.58) | 0.451 |
| sRANKL (pmol/L) | 308.5 (237.7) | 288.6 (183.8) | 0.758 |
| RcIMT (mm) | 0.42 (0.05) | 0.48 (0.04) | 0.094 |
| LcIMT (mm) | 0.45 (0.04) | 0.50 (0.05) | 0.086 |
| CcIMT (mm) | 0.44 (0.04) | 0.49 (0.04) | 0.072 |

BMI: body mass index; eGFR: Estimated Glomerular Filtration Rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: Apolipoprotein A1; ApoB: Apolipoprotein B; WBC: white blood cells; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness.

Simple correlation analysis in children and adolescents with T1DM revealed a significant negative association of OPG levels with age (P = 0.015) and BMI (P = 0.003), as well as a significant positive association of combined cIMT measurements with age (P = 0.046) and BMI (P = 0.027) (Table 3). It is noteworthy that no association between OPG levels and diabetes duration or HbA1c levels was observed.

Table 3: Correlation between serum OPG, sRANKL, sonographic findings, and demographic, anthropometric, and biochemical parameters in children and adolescents with type 1 diabetes mellitus.

| | OPG (pmol/L) | sRANKL (pmol/L) | RcIMT (mm) | LcIMT (mm) | CcIMT (mm) |
|---|---|---|---|---|---|
| Age (years) | −0.271* | −0.203 | 0.233 | 0.483** | 0.380* |
| BMI (kg/m²) | −0.331** | −0.107 | 0.294 | 0.489** | 0.418* |
| Diabetes duration (years) | −0.136 | 0.040 | 0.236 | 0.242 | 0.176 |
| HbA1c (%) | 0.015 | 0.054 | −0.309 | −0.134 | −0.241 |
| Glucose (mg/dL) | 0.175 | 0.043 | 0.213 | 0.112 | 0.178 |
| Urea (mg/dL) | 0.004 | 0.058 | −0.246 | −0.064 | −0.173 |
| Creatinine (mg/dL) | −0.200 | −0.182 | 0.128 | −0.355 | 0.255 |
| eGFR (mL/min) | −0.039 | −0.033 | 0.109 | 0.090 | 0.186 |
| CRP (mg/dL) | 0.077 | −0.007 | −0.265 | −0.352 | −0.331 |
| Tchol (mg/dL) | 0.214 | −0.012 | 0.003 | −0.197 | −0.099 |
| TG (mg/dL) | 0.048 | 0.083 | 0.247 | 0.201 | 0.244 |
| HDL-C (mg/dL) | 0.071 | −0.045 | 0.061 | 0.006 | 0.038 |
| LDL-C (mg/dL) | 0.197 | −0.025 | −0.079 | −0.267 | −0.182 |
| ApoA1 (mg/dL) | 0.157 | −0.096 | 0.073 | 0.143 | 0.115 |
| ApoB (mg/dL) | 0.235 | −0.033 | −0.047 | −0.296 | −0.179 |
| WBC (cells/μL) | −0.081 | 0.216 | −0.009 | −0.108 | −0.060 |
| PLT (cells/μL) | 0.117 | 0.127 | 0.107 | −0.165 | −0.024 |
| OPG (pmol/L) | — | 0.166 | −0.096 | −0.269 | −0.193 |
| sRANKL (pmol/L) | — | — | −0.221 | −0.290 | −0.231 |

*P value < 0.05 and **P value < 0.01.
BMI: body mass index; eGFR: Estimated Glomerular Filtration Rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: Apolipoprotein A1; ApoB: Apolipoprotein B; WBC: white blood cells; PLT: platelets; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness.

Finally, with the use of linear regression analysis including all children and adolescents, and a priori adjusting for age, gender, and the presence of T1DM, serum levels of OPG were significantly negatively associated with BMI (β = −0.335, P = 0.003), while cIMT measurements were positively associated with BMI (β = 0.418, P = 0.027); serum levels of OPG were a priori included as a confounder for cIMT. Furthermore, separately examining T1DM patients and a priori adjusting for age, gender, years of diabetes, and HbA1c, serum levels of OPG were, again, only significantly (and inversely) associated with BMI (β = −0.356, P = 0.008), while BMI was also the only variable displaying a significant (positive) effect on cIMT measurements (β = 0.604, P = 0.013); serum levels of OPG were a priori included in the cIMT model (Table 4). Therefore, results regarding serum levels of OPG and cIMT measurements followed a similar pattern, regardless of whether the total study group or only T1DM subjects were included, revealing a consistent association of BMI with both of these factors. It is noted that these effects became stronger at the multiple-analysis level, especially for cIMT measurements, when examined within T1DM patients (reflected by increasing values of correlation or β coefficients and a sustained level of significance), as compared to simple correlations for T1DM patients (Table 3) and to total-sample multiple regression associations (Table 4). It is also noted that the presence of T1DM itself did not have a significant effect on either serum levels of OPG or cIMT measurements in the total-sample models (Table 4). Serum levels of sRANKL did not present any significant association in multiple stepwise regression (results not shown).

Table 4: Stepwise linear regression with OPG and cIMT as dependent variables, adjusting for anthropometric and biochemical variables and diabetes status.

| Sample | Dependent variable | Model R² (P) | Independent variable | Standardized β | P value |
|---|---|---|---|---|---|
| T1DM + controls* (N = 84) | OPG (pmol/L) | 0.110 (0.003) | BMI | −0.335 | 0.003 |
| | | | Presence of T1DM | 0.140 | 0.197 |
| | | | Male (versus female) | 0.079 | 0.477 |
| | | | Age | −0.099 | 0.483 |
| | cIMT (mm) | 0.175 (0.027) | BMI | 0.418 | 0.027 |
| | | | Presence of T1DM | −0.196 | 0.282 |
| | | | Male (versus female) | −0.192 | 0.319 |
| | | | Age | 0.162 | 0.544 |
| | | | OPG | −0.012 | 0.953 |
| T1DM** (N = 56) | OPG (pmol/L) | 0.127 (0.008) | BMI | −0.356 | 0.008 |
| | | | Age | −0.076 | 0.645 |
| | | | Male (versus female) | 0.116 | 0.390 |
| | | | Years of diabetes | −0.033 | 0.807 |
| | | | HbA1c | 0.068 | 0.605 |
| | cIMT (mm) | 0.365 (0.013) | BMI | 0.604 | 0.013 |
| | | | Age | −0.373 | 0.321 |
| | | | Male (versus female) | −0.127 | 0.615 |
| | | | Years of diabetes | 0.238 | 0.281 |
| | | | HbA1c | −0.169 | 0.450 |
| | | | OPG | 0.105 | 0.718 |

DM: diabetes mellitus; BMI: body mass index; OPG: osteoprotegerin; cIMT: (combined) carotid intima-media thickness. Standardized β: mean effect on the dependent variable per unit increase of the independent variable. *Age, gender, and presence of type 1 diabetes mellitus a priori entered in the model; BMI entered through the stepwise procedure; OPG a priori entered when examining cIMT. **Age, gender, years of diabetes, and HbA1c a priori entered in the model; BMI entered through the stepwise procedure; OPG a priori entered when examining cIMT.
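The modeling strategy of Section 2.4, with OPG log-transformed, a priori confounders forced into the model, and BMI entering as a candidate predictor, can be approximated as follows with statsmodels OLS in place of SPSS. The data frame is a randomly generated placeholder and all column names are our own; a full stepwise selection step is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 84  # 56 T1DM patients + 28 controls

# Hypothetical data frame mirroring the variables of Table 4.
df = pd.DataFrame({
    "opg":  rng.lognormal(1.0, 0.3, n),        # OPG (pmol/L)
    "cimt": rng.normal(0.45, 0.05, n),         # combined cIMT (mm)
    "bmi":  rng.normal(20.5, 3.7, n),
    "age":  rng.normal(12.0, 2.9, n),
    "male": rng.integers(0, 2, n),
    "t1dm": np.r_[np.ones(56), np.zeros(28)],  # presence of T1DM
})
df["log_opg"] = np.log(df["opg"])              # OPG log-transformed, per Section 2.4

# OPG model: age, gender, and T1DM status a priori; BMI as candidate predictor.
opg_model = smf.ols("log_opg ~ bmi + t1dm + male + age", data=df).fit()

# cIMT model: same a priori confounders plus OPG entered a priori.
cimt_model = smf.ols("cimt ~ bmi + t1dm + male + age + log_opg", data=df).fit()

print(opg_model.summary().tables[1])
print(cimt_model.summary().tables[1])
```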
## 4. Discussion

The present study reports on the use of biomarkers of the OPG/sRANKL system as indices of subclinical atherosclerosis in children and adolescents with T1DM and matched controls, and on their correlations with sonographic indices of endothelial dysfunction and other associated factors. In children and adolescents with T1DM, previous studies report conflicting results on cIMT measurements, and there is no previous study in this age group on OPG and sRANKL levels as biochemical indices of endothelial dysfunction.

In the present study, no significant difference in cIMT measurements was found between children and adolescents with T1DM and nondiabetic controls, with the exception of the group with longer diabetes duration (where a nearly significant difference was present when compared to T1DM patients with shorter duration of diabetes, especially in the combined cIMT measurements). The latter is suggestive of the development of early atherosclerotic lesions in childhood diabetes in association with longer exposure to the disease. However, the cIMT differences between the two subgroups may have been confounded by other factors, such as age and BMI; stepwise regression analysis, adjusted for multiple factors, did not reveal significant associations between cIMT measurements and diabetes duration; thus, our univariate findings cannot support conclusions regarding the effect of glycaemic burden or diabetes duration on atherosclerosis. Interestingly, eight out of fourteen previous studies on T1DM children and adolescents have shown that cIMT was increased in comparison with controls, while the others report no significant difference [16].

In children and adolescents, cIMT has been linked to several factors, including hypercholesterolemia, hypertension, and obesity [2, 9]. Furthermore, there are conflicting reports regarding the effect of age [9] and the degree of glycaemic control [2, 7, 10, 15, 17, 18]. In agreement with the above, in the present study the subgroup with longer T1DM duration, which presented marginally increased cIMT, also had increased triglyceride levels, suggesting a possible synergistic effect of hyperlipidemia and diabetes on endothelial dysfunction. BMI in the same subgroup of patients with longer diabetes duration was higher than in the rest of the diabetic patients, which could also be indicative of an effect of increased body weight on endothelial dysfunction. Furthermore, in our study, BMI was the parameter most significantly associated with cIMT, even after adjusting for multiple confounders, whether examined in the total study sample or separately in T1DM patients. The fact that the BMI associations were stronger when examined at the multiple-analysis level separately for T1DM patients could also imply a synergistic effect of body weight and diabetes on sonographically assessed subclinical atherosclerosis. BMI, fat mass, and obesity have been linked to subclinical atherosclerosis in children and adolescents, as measured by IMT [1, 9, 19–21]. Moreover, the simultaneous coexistence of T1DM and insulin resistance, a “double diabetes” [22], occasionally detected in obese T1DM patients, seems to further raise the cardiovascular risk.

Serum OPG and sRANKL are molecules that have not only been associated with bone metabolism [11, 23] but have also been considered as indices of endothelial dysfunction and atherosclerotic plaque calcification [11].
Serum OPG levels have been shown to be significantly increased in adult patients with T1DM or T2DM [24–26] and in patients with previous gestational diabetes [27]. It is noteworthy that in children and adolescents with T1DM there are no previous studies on sRANKL and OPG levels as indices of endothelial dysfunction, and only one study on OPG levels as an index of bone metabolism [12]. In the latter study, prepubertal T1DM children had significantly increased OPG levels in comparison with non-diabetic controls, and these levels were also associated with HbA1c [12]. On the contrary, in the present study, no significant difference in OPG and sRANKL levels was observed between diabetic and control children, not even in the high-risk subgroup with longer diabetes duration.

In our study, serum levels of OPG were inversely associated with BMI and age, the latter only in the univariate analysis. The association between OPG and BMI was strong and consistent in the multivariate analysis. In accordance with our findings, recent studies [28, 29], including one conducted on children [30], have reported an inverse relationship between BMI and OPG, while most studies report a neutral effect of BMI on serum OPG [25, 31–34]. It is possible that being overweight is confounded by lack of physical activity, thus negatively influencing bone turnover and OPG production [28]. It is also possible that excess weight itself is correlated with lower bone mass at a young age, thus causing weaker osteoclast activity and lower counteracting levels of OPG [28], or that obesity induces decreased osteoblast production of OPG through hormonal paths, perhaps through leptin-mediated actions [30]. On the other hand, serum OPG has been linked to age, generally showing a positive trend in adult life [26, 29, 33–36], while contradictory results have been presented in children and adolescents [12, 25, 31, 32]. The negative association of OPG with age in our study could be due to the narrow age range of the participants and also to the stronger effect of other predictors, such as BMI.

Serum levels of OPG have been associated with cardiovascular disease and subclinical atherosclerosis in previous studies [11, 25, 26, 36, 37], including positive correlations with cIMT [38]. A recent study, however, reported a positive association of OPG levels with cIMT only in older adults, thus indicating an interaction with age [35]. In our study, serum levels of OPG were not related to cIMT in youngsters, not even in the high-risk subgroup with longer diabetes duration or poor metabolic control, possibly reflecting a true absence of a relationship due to the young age and the generally good condition of our patients.

Concerning circulating sRANKL levels, related data are generally sparse and contradictory [23, 29, 39, 40]. One cohort study demonstrated a relationship of sRANKL with the risk of cardiovascular events, but not with IMT, implying a different pathway to vascular damage [39]; the authors hypothesized either that RANKL was related to unstable plaques and not to atherosclerotic burden in general, or that the elevation of RANKL simply followed plaque inflammation.
Circulating OPG levels, therefore, seem to better reflect the activity of the OPG/RANK/RANKL system [39], while the serum levels of sRANKL cannot currently be proposed as a biomarker regarding atherosclerotic vascular damage.Among the strengths of our study were the multiple measurements at carotid far walls (at 6 different sites), which seem to be most accurate at assessing intima-media thickness in young age [9]. Moreover, all ultrasound measurements were performed by a single sonographer, so that inter-observer heterogeneity was avoided. Multiple laboratory measurements, including the relatively novel serum levels of OPG and sRANKL, were examined at the same time in a population at high risk of atherosclerosis, such as children and adolescents with T1DM.Limitations of our study include its cross-sectional nature and the limited sample size, restricting the impact of our findings; the fact that IMT was only measured on carotid walls and not on the aorta, a site that seems to be vulnerable to atherosclerosis in youngsters [9]; the fact that no data were recorded (and analysis could therefore not be adjusted) regarding puberty status, a factor which could possibly affect cIMT measurements and, especially, OPG/sRANKL levels; and finally the lack of other bone-specific measurements, in order to further investigate the source of circulating OPG and sRANKL.In conclusion, laboratory and sonographic findings of endothelial dysfunction in the children and adolescents with T1DM of our study were only suggestive of the progression of atherosclerosis in patients with longer disease duration, as reflected by higher cIMT measurements. Body weight seems to be strongly associated with atherosclerotic burden early in life.This association was stronger in the diabetic group, a finding supportive of the significance of controlling adiposity in childhood diabetes. Body weight also seems to correlate to circulating OPG in young age, but the origin and the causes of this association remain unclear. --- *Source: 102120-2013-10-30.xml*
# Osteoprotegerin, Soluble Receptor Activator of Nuclear Factor-κB Ligand, and Subclinical Atherosclerosis in Children and Adolescents with Type 1 Diabetes Mellitus

**Authors:** Irene Lambrinoudaki; Emmanouil Tsouvalas; Marina Vakaki; George Kaparos; Kimon Stamatelopoulos; Areti Augoulea; Paraskevi Pliatsika; Andreas Alexandrou; Maria Creatsa; Kyriaki Karavanaki

**Journal:** International Journal of Endocrinology (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/102120

---
## Abstract

Aims. To evaluate carotid intima-media thickness (cIMT) and biomarkers of the osteoprotegerin/receptor activator of nuclear factor-κB ligand (OPG/RANKL) system in type 1 diabetes (T1DM) children and adolescents and controls. Subjects and Methods. Fifty-six T1DM patients (mean ± SD age: 12.0 ± 2.7 years, diabetes duration: 5.42 ± 2.87 years, and HbA1c: 8.0 ± 1.5%) and 28 healthy matched controls were studied with anthropometric and laboratory measurements, including serum OPG, soluble RANKL (sRANKL), and cIMT. Results. Anthropometric, laboratory, and cIMT measurements were similar between T1DM youngsters and controls. However, patients with longer diabetes duration (≥7.0 years) had indicatively higher cIMT (cIMT = 0.49 vs. 0.44 mm, P = 0.072) and higher triglyceride levels than the rest of the patients (93.7 vs. 64.6 mg/dL, P = 0.025). Both in the total study population (β = 0.418, P = 0.027) and among T1DM patients separately (β = 0.604, P = 0.013), BMI was the only factor associated with cIMT. BMI was further associated with OPG in both groups (β = −0.335, P = 0.003 and β = −0.356, P = 0.008, resp.), while sRANKL levels were not associated with any factor. Conclusions. BMI was the strongest independent predictor of cIMT in the whole population, and especially in diabetics, suggesting a possible synergistic effect of diabetes and adiposity on atherosclerotic burden. BMI was overall strongly associated with circulating OPG, but the causes of this association remain unclear.

---

## Body

## 1. Introduction

Atherosclerosis is a chronic progressive inflammatory process that begins with lipid deposits and fatty streaks on the arterial intima that advance to atheromatous plaques [1]. Early atherosclerotic signs are already present in childhood and adolescence [2–4], particularly in subjects with risk factors such as family history of early cardiac events, sedentary lifestyle, smoking, dyslipidemia, hypertension, obesity, and diabetes [5, 6]. Typically, both subclinical and clinical atherosclerotic disease have an earlier onset in patients diagnosed with type 1 diabetes mellitus (T1DM), with the atherosclerotic lesions being more severe and extended [3, 5, 7]. Endothelial dysfunction is the earliest detectable manifestation of diabetic atherosclerotic vascular disease. Therefore, diabetic children with endothelial dysfunction are considered to be at especially high risk of having early structural atherosclerotic vascular changes [8]. Nowadays, cardiovascular disease has also become the primary cause of mortality among young adults with T1DM [3]. Thus, primary prevention and early detection of atherosclerosis are of great importance for the early identification of subclinical signs of atherosclerotic disease with the use of imaging techniques, while surrogate biomarkers are also being evaluated for their clinical relevance. Ultrasound measurement of the carotid intima-media thickness (cIMT) has been widely utilized as a screening method for nonsymptomatic atherosclerotic lesions and plaques [1, 9]. Increased cIMT has been shown to correlate with vascular risk factors and also with the extent and severity of coronary artery disease [2]. Although clear-cut normative data do not yet exist for the pediatric population [9], cIMT in children and adolescents has been reported to be increased in the presence of hypercholesterolemia, hypertension, and obesity [2, 8].
However, previous studies in children and adolescents with T1DM have conflicting results regarding the presence or absence of early atherosclerotic lesions [2, 10]. Vascular calcification is strongly correlated with plaque rupture, a process which involves cells responsive to bone-controlling cytokines. In this context, bone-regulating molecules including the receptor activator of nuclear factor-κB and its ligand (RANK and RANKL, resp.) as well as RANKL's inhibitor, osteoprotegerin (OPG), are increasingly being investigated as markers of cardiovascular risk [11], as well as a link between bone metabolism and vascular calcification. Circulating OPG has been suggested to have a possible role in atherosclerosis; this role could be either mediating or, most probably, anti-inflammatory and compensatory [11]. RANKL is associated with osteoclastogenesis and bone resorption, while OPG inhibits RANKL-mediated actions [11]. There are very limited studies on the use of OPG as an index of endothelial dysfunction in adult patients with T1DM or T2DM [11–14], while there is only one previous study on OPG in T1DM children and adolescents, which, however, examined OPG only in relation to bone status [12] and not to endothelial dysfunction. Furthermore, due to the scarcity of cardiovascular events at an early age, diagnostic and treatment algorithms for subclinical atherosclerosis in children and adolescents have not been standardized [3, 9, 15]. In this context, we undertook the present study in order to evaluate subclinical atherosclerosis and biomarkers of the OPG/RANK/RANKL system in association with anthropometric characteristics and laboratory measurements in T1DM children and adolescents in comparison to nondiabetic controls.

## 2. Materials and Methods

### 2.1. Subjects

We studied 56 Greek children and adolescents with type 1 diabetes, already diagnosed and being followed for diabetes, and 28 healthy controls matched for age, gender, and body mass index (BMI) (2 patients : 1 control). The inclusion criteria for diabetic children were age ≤18 years, diabetes duration ≥2 years, normal arterial pressure (below the 95th age percentile, systolic and diastolic), and no other chronic disease apart from associated autoimmune diseases (autoimmune thyroiditis, celiac disease, and autoimmune gastritis). The criteria for the diagnosis of T1DM were fasting plasma glucose levels ≥126 mg/dL (≥7.0 mmol/L) or symptoms of hyperglycemia (polyuria, polydipsia, and unexplained weight loss with a random plasma glucose >200 mg/dL (11.1 mmol/L) or a two-hour plasma glucose ≥200 mg/dL (11.1 mmol/L) during an oral glucose tolerance test) [13]. None of the diabetic patients was taking chronic medications other than daily insulin. Patients were consecutively recruited from the outpatient diabetic clinic of the Second University Department of Pediatrics, “P&A Kyriakou” Children’s Hospital of Athens. Control children were recruited among the staff of the hospital following an invitation to their parents. The study was approved by the hospital ethics committee and all parents gave their informed consent. Participants went through a single-day structured examination program including medical history recording and cardiovascular risk factor evaluation. Weight was measured on the same electronic scale and height was recorded using a stadiometer in the upright position. BMI was calculated as weight (kg)/height² (m²). Blood pressure was measured in duplicate and the mean value was recorded for each individual.
Values below the 95th percentile for age and gender were considered normal. Fasting venous blood samples were obtained after an overnight fast of 8 hours, with minimal stasis, from an antecubital vein. Centrifugation was performed within one hour and serum was stored at −80°C. Finally, patients were evaluated by carotid ultrasound.

### 2.2. Laboratory Analyses

Glucose was measured using a standard enzymatic method on the INTEGRA 800 biochemical analyzer (Roche). The same analyzer was used to measure ultrasensitive C-reactive protein (CRP) (with a particle-enhanced immunoturbidimetric assay), creatinine (using the kinetic Jaffé method), and urea (kinetic test with urease and glutamate dehydrogenase). Serum total cholesterol (Tchol), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG), and high-density lipoprotein cholesterol (HDL-C) were measured with an enzymatic colorimetric method, while apolipoprotein A1 (ApoA1) and apolipoprotein B (ApoB) were measured using an immunonephelometric BN II method, all on the same analyzer. HbA1c was measured on a DCA 2000 analyzer; the normal range for HbA1c in our laboratory was 4.4%–6.2%. Complete blood cell counts were measured using the fluorescent flow cytometry method (Sysmex XT1800i analyzer). The estimated glomerular filtration rate (eGFR) was calculated using the following formulas: for children <13 years and girls 13–18 years, eGFR = 0.55 × height (cm)/creatinine (mg/dL); for boys 13–18 years, eGFR = 0.70 × height (cm)/creatinine (mg/dL); for diabetic boys, eGFR = 186 × creatinine^(−1.154) × age^(−0.203); and for diabetic girls, eGFR = 186 × creatinine^(−1.154) × age^(−0.203) × 0.742.
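The last two formulas have the familiar MDRD form; in the source text their exponents were flattened into subtractions, and they are reconstructed here on that assumption. A minimal Python sketch of both calculations follows; the function names and the example values are ours, for illustration only:

```python
def egfr_height_creatinine(height_cm: float, creatinine_mg_dl: float, k: float) -> float:
    """Height/creatinine estimate: eGFR = k * height / serum creatinine.
    k = 0.55 for children <13 years and girls 13-18 years; k = 0.70 for boys 13-18 years."""
    return k * height_cm / creatinine_mg_dl

def egfr_mdrd(creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """MDRD-form estimate: 186 * Scr^-1.154 * age^-0.203, times 0.742 for girls."""
    gfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    return gfr * 0.742 if female else gfr

# Illustrative values only (not study data): a 13-year-old diabetic boy
# with serum creatinine 0.70 mg/dL.
print(round(egfr_mdrd(0.70, 13.0, female=False), 1))  # ~166.8
```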
Serum osteoprotegerin was measured by a commercially available kit (BioVendor – Laboratorni medicina a.s.), using the BioVendor Human Osteoprotegerin ELISA. The intra-assay coefficient of variation was 4.5% and the interassay variation was 7.8%, as provided by the manufacturer. Serum sRANKL (total) was measured by a commercially available kit (BioVendor – Laboratorni medicina a.s.), using the BioVendor Human sRANKL (Total) ELISA technique. The intra-assay coefficient of variation was 8.8% and the interassay variation was 11%, as provided by the manufacturer.

### 2.3. Ultrasound Studies

The B-mode ultrasound scans of the carotid arteries were performed using a Logiq 7 GE medical ultrasound machine (10 MHz linear transducer). The right and the left common carotid arteries were imaged in the neck: a longitudinal section of the common carotid artery, 1 cm proximal to the carotid bulb, was imaged. Six carotid intima-media thickness (cIMT) measurements of the far wall of each artery (right and left), at 3 mm intervals, were obtained, starting at 1 cm proximal to the bulb and moving proximally, as previously reported [14]. The reported cIMT measurement for each artery is the average of these 6 measurements. The combined cIMT value is the average of the two (right and left) arteries. All ultrasound scans were performed by a single experienced sonographer, who had no knowledge of the clinical or laboratory profile of the study subjects. Intraobserver coefficients of variation were 4.3% for left and 3.1% for right cIMT measurements.

### 2.4. Data Analysis

Data analysis was performed using SPSS version 19.0 (SPSS, Chicago, IL, USA). Variables were normally distributed, except OPG and sRANKL, which were log-transformed for statistical analysis. Mean values of demographic, anthropometric, and serologic parameters, as well as levels of OPG and sRANKL and cIMT measurements, were compared between diabetic patients and controls. Univariate comparisons between groups were performed using analysis of variance (ANOVA) and the t-test, while simple correlations for continuous variables were assessed using Pearson’s correlation coefficient. Furthermore, T1DM patients were divided into two subgroups according to diabetes duration (cut-off point predefined as mean duration + 1 SD, which resulted in 7.16 years, that is, 7 years): (a) longer diabetes duration (≥7 years) and (b) moderate/shorter duration (<7 years). These subgroups were also compared in terms of cIMT measurements and levels of OPG or sRANKL, in order to assess any abnormalities in the high-risk subgroup with longer diabetes duration. Multiple stepwise linear regression analysis was performed to examine the factors significantly affecting cIMT measurements and serum levels of OPG and sRANKL, a priori including possible and known confounders and allowing for the addition of other (significant) independent factors. At first, the total study sample (diabetic patients plus controls) was examined using regression models, with age, gender, and presence of T1DM treated as probable confounders, while cIMT measurements were also examined treating levels of OPG as an additional a priori confounder. In order to clarify whether the relationships are different for young T1DM patients, these were separately examined in similar regression models, additionally adjusting for years of diabetes and HbA1c as a priori possible confounders. The latter were not added in the total study sample models, in order to be parsimonious. Statistical significance was set at the 0.05 level.
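As a rough illustration of this pipeline (combined cIMT as the mean of the two arteries, log-transformation of the skewed biomarkers, and a linear model with the a priori confounders), here is a minimal Python sketch using pandas and statsmodels on synthetic data. The column names and values are illustrative, not the study's dataset, and SPSS's stepwise selection of candidate predictors is not reproduced:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 84  # 56 patients + 28 controls
df = pd.DataFrame({
    "age": rng.uniform(8, 18, n),
    "male": rng.integers(0, 2, n),
    "t1dm": np.repeat([1, 0], [56, 28]),
    "bmi": rng.normal(20.5, 3.7, n),
    "opg": rng.lognormal(1.0, 0.3, n),       # right-skewed, as in the study
    "cimt_right": rng.normal(0.45, 0.05, n),
    "cimt_left": rng.normal(0.46, 0.05, n),
})

# Combined cIMT: the average of the right and left artery means (Section 2.3).
df["cimt"] = df[["cimt_right", "cimt_left"]].mean(axis=1)
# OPG (and sRANKL) were log-transformed before analysis (Section 2.4).
df["log_opg"] = np.log(df["opg"])

# Total-sample model for cIMT: age, gender, T1DM status, and OPG are a priori
# confounders; BMI is the candidate predictor a stepwise procedure would add.
predictors = ["age", "male", "t1dm", "log_opg", "bmi"]
X = sm.add_constant(df[predictors])
fit = sm.OLS(df["cimt"], X).fit()
print(fit.summary())
# Standardized betas, as reported in Table 4, would additionally require
# z-scoring the dependent and independent variables before fitting.
```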
## 3. Results

Diabetic and control children and adolescents in our study had similar demographic and anthropometric characteristics; matching, therefore, was considered successful (Table 1). In the diabetic group, mean HbA1c, blood glucose, and urea levels were significantly higher compared to those of the control group. TG levels in the diabetic group were marginally higher, although the difference did not reach statistical significance. All other measurements, including serum levels of OPG and sRANKL and cIMT measurements, were similar between patients and controls (Table 1).

Table 1. Comparison of demographic, anthropometric, and biochemical parameters and sonographic findings between diabetic and control children and adolescents.
| Parameter | T1DM (N = 56), mean ± SD or % | Controls (N = 28), mean ± SD or % | P value |
|---|---|---|---|
| Age (years) | 12.0 (2.7) | 12.1 (3.3) | 0.426 |
| Male | 30 (53.4%) | 15 (53.5%) | 0.812 |
| BMI (kg/m²) | 20.9 (3.8) | 19.6 (3.4) | 0.155 |
| SDS BMI* | 69.9 | 70.3 | 0.943 |
| HbA1c (%) | 8.02 (1.52) | 4.12 (0.93) | 0.001 |
| Glucose (mg/dL) | 143.4 (84.1) | 81.9 (10.3) | 0.001 |
| Diabetes duration (years) | 5.42 (2.87) | — | — |
| Urea (mg/dL) | 29.65 (8.17) | 24.00 (4.16) | 0.001 |
| Creatinine (mg/dL) | 0.70 (0.15) | 0.62 (0.17) | 0.134 |
| eGFR (mL/min) | 132.1 (23.1) | 136.3 (14.4) | 0.529 |
| CRP (mg/dL) | 0.84 (1.27) | 0.46 (0.52) | 0.304 |
| Tchol (mg/dL) | 160.6 (20.1) | 157.4 (25.0) | 0.683 |
| TG (mg/dL) | 72.00 (4.52) | 57.69 (17.03) | 0.060 |
| HDL-C (mg/dL) | 59.13 (10.02) | 59.54 (6.97) | 0.803 |
| LDL-C (mg/dL) | 90.91 (20.86) | 90.46 (24.65) | 0.952 |
| ApoA1 (mg/dL) | 154.8 (18.3) | 149.2 (16.0) | 0.279 |
| ApoB (mg/dL) | 65.33 (12.47) | 64.54 (14.25) | 0.856 |
| WBC (cells/μL) | 6,775 (1,592) | 6,323 (1,538) | 0.356 |
| OPG (pmol/L) | 2.80 (0.80) | 2.66 (0.60) | 0.365 |
| sRANKL (pmol/L) | 303.5 (223.77) | 354.9 (259.4) | 0.379 |
| RcIMT (mm) | 0.44 (0.06) | 0.46 (0.05) | 0.504 |
| LcIMT (mm) | 0.46 (0.05) | 0.47 (0.05) | 0.916 |
| CcIMT (mm) | 0.45 (0.05) | 0.47 (0.05) | 0.680 |

BMI: body mass index; eGFR: estimated glomerular filtration rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: apolipoprotein A1; ApoB: apolipoprotein B; WBC: white blood cells; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness. *SDS BMI: standardized BMI value (age- and sex-adjusted percentile).

In the subgroup of patients with longer diabetes duration (≥7 years), cIMT measurements were indicatively higher in both carotid arteries when compared to those of the subgroup with shorter diabetes duration (<7 years; P = 0.086, 0.094, and 0.072 for right, left, and combined measurements, resp.; Table 2). No statistically significant difference in OPG or sRANKL levels was observed between the two subgroups of patients with T1DM. Patients in the subgroup with longer diabetes duration were indicatively older and had higher triglyceride levels (P = 0.025) and higher BMI (P = 0.044), but similar glycaemic control (HbA1c, P = 0.133), when compared to the subgroup of T1DM patients with shorter diabetes duration. Serum levels of OPG and sRANKL did not exhibit significant associations with any variable in this subgroup analysis.

Table 2. Comparison of demographic, anthropometric, and biochemical parameters and sonographic findings in children and adolescents according to T1DM duration.
| Parameter | Duration <7 years (N = 41), mean ± SD | Duration ≥7 years (N = 15), mean ± SD | P value |
|---|---|---|---|
| Age (years) | 11.53 (2.7) | 13.32 (3.3) | 0.076 |
| BMI (kg/m²) | 20.34 (3.82) | 22.57 (3.26) | 0.044 |
| HbA1c (%) | 7.83 (1.23) | 8.55 (2.13) | 0.133 |
| Glucose (mg/dL) | 123.02 (68.86) | 203.29 (98.18) | 0.001 |
| Diabetes duration (years) | 4.06 (1.54) | 9.39 (2.03) | 0.001 |
| Urea (mg/dL) | 28.85 (7.72) | 32.01 (9.27) | 0.217 |
| Creatinine (mg/dL) | 0.68 (0.14) | 0.79 (0.15) | 0.018 |
| eGFR (mL/min) | 131.41 (24.33) | 133.93 (19.82) | 0.729 |
| CRP (mg/dL) | 0.88 (1.43) | 0.71 (0.61) | 0.682 |
| Tchol (mg/dL) | 160.66 (21.14) | 160.29 (32.30) | 0.961 |
| TG (mg/dL) | 64.59 (30.46) | 93.71 (63.03) | 0.025 |
| HDL-C (mg/dL) | 60.39 (10.48) | 55.43 (7.71) | 0.069 |
| LDL-C (mg/dL) | 90.85 (19.80) | 91.07 (24.51) | 0.973 |
| ApoA1 (mg/dL) | 154.43 (18.89) | 155.86 (17.24) | 0.804 |
| ApoB (mg/dL) | 64.80 (11.10) | 66.86 (16.14) | 0.616 |
| WBC (cells/μL) | 6,908 (1,619) | 6,385 (1,499) | 0.294 |
| OPG (pmol/L) | 2.82 (0.87) | 2.74 (0.58) | 0.451 |
| sRANKL (pmol/L) | 308.5 (237.7) | 288.6 (183.8) | 0.758 |
| RcIMT (mm) | 0.42 (0.05) | 0.48 (0.04) | 0.094 |
| LcIMT (mm) | 0.45 (0.04) | 0.50 (0.05) | 0.086 |
| CcIMT (mm) | 0.44 (0.04) | 0.49 (0.04) | 0.072 |

BMI: body mass index; eGFR: estimated glomerular filtration rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: apolipoprotein A1; ApoB: apolipoprotein B; WBC: white blood cells; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness.

Simple correlation analysis in children and adolescents with T1DM revealed a significant negative association of OPG levels with age (P = 0.015) and BMI (P = 0.003), as well as a significant positive association of combined cIMT measurements with age (P = 0.046) and BMI (P = 0.027) (Table 3). It is noteworthy that no association between OPG levels and diabetes duration or HbA1c levels was observed.

Table 3. Correlations between serum OPG, sRANKL, sonographic findings, and demographic, anthropometric, and biochemical parameters in children and adolescents with type 1 diabetes mellitus.

| Parameter | OPG (pmol/L) | sRANKL (pmol/L) | RcIMT (mm) | LcIMT (mm) | CcIMT (mm) |
|---|---|---|---|---|---|
| Age (years) | −0.271* | −0.203 | 0.233 | 0.483** | 0.380* |
| BMI (kg/m²) | −0.331** | −0.107 | 0.294 | 0.489** | 0.418* |
| Diabetes duration (years) | −0.136 | 0.040 | 0.236 | 0.242 | 0.176 |
| HbA1c (%) | 0.015 | 0.054 | −0.309 | −0.134 | −0.241 |
| Glucose (mg/dL) | 0.175 | 0.043 | 0.213 | 0.112 | 0.178 |
| Urea (mg/dL) | 0.004 | 0.058 | −0.246 | −0.064 | −0.173 |
| Creatinine (mg/dL) | −0.200 | −0.182 | 0.128 | −0.355 | 0.255 |
| eGFR (mL/min) | −0.039 | −0.033 | 0.109 | 0.090 | 0.186 |
| CRP (mg/dL) | 0.077 | −0.007 | −0.265 | −0.352 | −0.331 |
| Tchol (mg/dL) | 0.214 | −0.012 | 0.003 | −0.197 | −0.099 |
| TG (mg/dL) | 0.048 | 0.083 | 0.247 | 0.201 | 0.244 |
| HDL-C (mg/dL) | 0.071 | −0.045 | 0.061 | 0.006 | 0.038 |
| LDL-C (mg/dL) | 0.197 | −0.025 | −0.079 | −0.267 | −0.182 |
| ApoA1 (mg/dL) | 0.157 | −0.096 | 0.073 | 0.143 | 0.115 |
| ApoB (mg/dL) | 0.235 | −0.033 | −0.047 | −0.296 | −0.179 |
| WBC (cells/μL) | −0.081 | 0.216 | −0.009 | −0.108 | −0.060 |
| PLT (cells/μL) | 0.117 | 0.127 | 0.107 | −0.165 | −0.024 |
| OPG (pmol/L) | — | 0.166 | −0.096 | −0.269 | −0.193 |
| sRANKL (pmol/L) | — | — | −0.221 | −0.290 | −0.231 |

*P value < 0.05; **P value < 0.01.
BMI: body mass index; eGFR: estimated glomerular filtration rate; Tchol: total cholesterol; TG: triglycerides; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; ApoA1: apolipoprotein A1; ApoB: apolipoprotein B; WBC: white blood cells; PLT: platelets; OPG: osteoprotegerin; sRANKL: serum receptor activator of nuclear factor-κB ligand; (R/L/C)cIMT: (right/left/combined) carotid intima-media thickness.

Finally, with the use of linear regression analysis including all children and adolescents, and a priori adjusting for age, gender, and the presence of T1DM, serum levels of OPG were significantly negatively associated with BMI (β = −0.335, P = 0.003), while cIMT measurements were positively associated with BMI (β = 0.418, P = 0.027); serum levels of OPG were a priori included as a confounder for cIMT. Furthermore, separately examining T1DM patients and a priori adjusting for age, gender, years of diabetes, and HbA1c, serum levels of OPG were, again, only significantly (and inversely) associated with BMI (β = −0.356, P = 0.008), while BMI was also the only variable displaying a significant (positive) effect on cIMT measurements (β = 0.604, P = 0.013); serum levels of OPG were a priori included in the cIMT model (Table 4). Therefore, the results regarding serum levels of OPG and cIMT measurements followed a similar pattern regardless of whether the total study group or only T1DM subjects were included, revealing a consistent association of BMI with both of these factors. It is noted that these effects became stronger at the multiple-analysis level, especially for cIMT measurements, when examined within T1DM patients (reflected by increasing β coefficients at a sustained level of significance), as compared to the simple correlations for T1DM patients (Table 3) and to the total-sample multiple regression associations (Table 4). It is also noted that the presence of T1DM itself did not have a significant effect on either serum levels of OPG or cIMT measurements in the total-sample models (Table 4). Serum sRANKL levels did not show any significant association in the multiple stepwise regression (results not shown).

Table 4. Stepwise linear regression with OPG and cIMT as dependent variables, adjusting for anthropometric and biochemical variables and diabetes status.

| Sample | Dependent variable | Independent variable | Standardized β | P value |
|---|---|---|---|---|
| T1DM + controls* (N = 84) | OPG (pmol/L); model R² = 0.110, P = 0.003 | BMI | −0.335 | 0.003 |
| | | Presence of T1DM | 0.140 | 0.197 |
| | | Male (vs. female) | 0.079 | 0.477 |
| | | Age | −0.099 | 0.483 |
| | cIMT (mm); model R² = 0.175, P = 0.027 | BMI | 0.418 | 0.027 |
| | | Presence of T1DM | −0.196 | 0.282 |
| | | Male (vs. female) | −0.192 | 0.319 |
| | | Age | 0.162 | 0.544 |
| | | OPG | −0.012 | 0.953 |
| T1DM** (N = 56) | OPG (pmol/L); model R² = 0.127, P = 0.008 | BMI | −0.356 | 0.008 |
| | | Age | −0.076 | 0.645 |
| | | Male (vs. female) | 0.116 | 0.390 |
| | | Years of diabetes | −0.033 | 0.807 |
| | | HbA1c | 0.068 | 0.605 |
| | cIMT (mm); model R² = 0.365, P = 0.013 | BMI | 0.604 | 0.013 |
| | | Age | −0.373 | 0.321 |
| | | Male (vs. female) | −0.127 | 0.615 |
| | | Years of diabetes | 0.238 | 0.281 |
| | | HbA1c | −0.169 | 0.450 |

The standardized β is the mean effect on the dependent variable per unit increase of the independent variable. DM: diabetes mellitus; BMI: body mass index; OPG: osteoprotegerin; cIMT: (combined) carotid intima-media thickness. *Age, gender, and presence of type 1 diabetes mellitus a priori entered in the model; BMI entered through the stepwise procedure; OPG a priori entered when examining cIMT. **Age, gender, years of diabetes, and HbA1c a priori entered in the model; BMI entered through the stepwise procedure; OPG a priori entered when examining cIMT.

## 4. Discussion
The present study reports on the use of biomarkers of the OPG/sRANKL system as indices of subclinical atherosclerosis in children and adolescents with T1DM and matched controls, and on their correlations with sonographic indices of endothelial dysfunction and other associated factors. In children and adolescents with T1DM, previous studies report conflicting results on cIMT measurements, while there is no previous study in this age group on OPG and sRANKL levels as biochemical indices of endothelial dysfunction. In the present study, no significant difference in cIMT measurements was found between children and adolescents with T1DM and nondiabetic controls, with the exception of the subgroup with longer diabetes duration (where a near-significant difference was present when compared to T1DM patients with shorter diabetes duration, especially in the combined cIMT measurements). The latter is suggestive of the development of early atherosclerotic lesions in childhood diabetes, in association with longer exposure to the disease. However, the cIMT differences between the two subgroups may have been confounded by other factors, such as age and BMI; stepwise regression analysis, adjusted for multiple factors, did not reveal significant associations between cIMT measurements and diabetes duration; thus, our univariate findings cannot support conclusions regarding the effect of glycaemic burden or diabetes duration on atherosclerosis. Interestingly, eight out of fourteen previous studies on T1DM children and adolescents have shown that cIMT was increased in comparison with controls, while the others report no significant difference [16]. In children and adolescents, cIMT has been linked to several factors including hypercholesterolemia, hypertension, and obesity [2, 9]. Furthermore, there are conflicting reports regarding the effect of age [9] and the degree of glycaemic control [2, 7, 10, 15, 17, 18]. In agreement with the above, in the present study the subgroup with longer T1DM duration, which presented marginally increased cIMT, also had increased triglyceride levels, suggesting a possible synergistic effect of hyperlipidemia and diabetes on endothelial dysfunction. BMI in the same subgroup of patients with longer diabetes duration was higher than in the rest of the diabetic patients, which could also be indicative of an effect of increased body weight on endothelial dysfunction. Furthermore, in our study, BMI was the only parameter significantly associated with cIMT, even after adjusting for multiple confounders, whether examined in the total study sample or separately in T1DM patients. The fact that the BMI associations were stronger when examined at the multiple-analysis level separately for T1DM patients could also imply a synergistic effect of body weight and diabetes on sonographically assessed subclinical atherosclerosis. BMI, fat mass, and obesity have been linked to subclinical atherosclerosis in children and adolescents, as measured by IMT [1, 9, 19–21]. Moreover, the simultaneous coexistence of T1DM and insulin resistance, a “double diabetes” [22] occasionally detected in obese T1DM patients, seems to further raise cardiovascular risk. Serum OPG and sRANKL are molecules that not only have been associated with bone metabolism [11, 23] but have also been considered indices of endothelial dysfunction and atherosclerotic plaque calcification [11].
Serum OPG levels have been shown to be significantly increased in adult patients with T1DM or T2DM [24–26] and in patients with previous gestational diabetes [27]. It is noteworthy that in children and adolescents with T1DM there are no previous studies on sRANKL and OPG levels as indices of endothelial dysfunction, while there is only one study on OPG levels as an index of bone metabolism [12]. In the latter study, prepubertal T1DM children had significantly increased OPG levels in comparison with nondiabetic controls, which were also associated with HbA1c levels [12]. In contrast, in the present study, no significant difference in OPG and sRANKL levels was observed between diabetic and control children, not even in the high-risk subgroup with longer diabetes duration. In our study, serum levels of OPG were inversely associated with BMI and age, the latter only in the univariate analysis. The association between OPG and BMI was strong and consistent in the multivariate analysis. In accordance with our findings, recent studies [28, 29], including one conducted in children [30], have reported an inverse relationship between BMI and OPG, while most studies report a neutral effect of BMI on serum OPG [25, 31–34]. It is possible that being overweight is confounded by lack of physical activity, thus negatively influencing bone turnover and OPG production [28]. It is also possible that excess weight itself is correlated with lower bone mass at a young age, thus causing weaker osteoclast activity and lower counteracting levels of OPG [28], or that obesity induces decreased osteoblast production of OPG through hormonal paths, perhaps through leptin-mediated actions [30]. On the other hand, serum OPG has been linked to age, generally showing a positive trend in adult life [26, 29, 33–36], while contradictory results have been presented in children and adolescents [12, 25, 31, 32]. The negative association of OPG with age in our study could be due to the narrow age range of the participants and also to the stronger effect of other predictors, such as BMI. Serum levels of OPG have been associated with cardiovascular disease and subclinical atherosclerosis in previous studies [11, 25, 26, 36, 37], including positive correlations with cIMT [38]. A recent study, however, reported a positive association of OPG levels with cIMT only in older adults, thus indicating an interaction with age [35]. In our study, serum levels of OPG were not related to cIMT in youngsters, not even in the high-risk subgroup with longer diabetes duration or poor metabolic control, possibly reflecting a true absence of a relationship due to the young age and the generally good condition of our patients. Concerning circulating sRANKL levels, related data are generally sparse and contradictory [23, 29, 39, 40]. One cohort study demonstrated a relationship between sRANKL and the risk of cardiovascular events, but not with IMT, implying a different pathway to vascular damage [39]; the authors hypothesized either that RANKL was related to unstable plaques and not to atherosclerotic burden in general, or that the elevation of RANKL simply followed plaque inflammation.
Circulating OPG levels, therefore, seem to better reflect the activity of the OPG/RANK/RANKL system [39], while serum levels of sRANKL cannot currently be proposed as a biomarker of atherosclerotic vascular damage. Among the strengths of our study were the multiple measurements at the carotid far walls (at 6 different sites), which seem to be most accurate for assessing intima-media thickness at a young age [9]. Moreover, all ultrasound measurements were performed by a single sonographer, so that interobserver heterogeneity was avoided. Multiple laboratory measurements, including the relatively novel serum levels of OPG and sRANKL, were examined at the same time in a population at high risk of atherosclerosis, namely, children and adolescents with T1DM. Limitations of our study include its cross-sectional nature and the limited sample size, restricting the impact of our findings; the fact that IMT was only measured on the carotid walls and not on the aorta, a site that seems to be vulnerable to atherosclerosis in youngsters [9]; the fact that no data were recorded (and the analysis could therefore not be adjusted) regarding pubertal status, a factor which could possibly affect cIMT measurements and, especially, OPG/sRANKL levels; and, finally, the lack of other bone-specific measurements, which would have helped to further investigate the source of circulating OPG and sRANKL. In conclusion, laboratory and sonographic findings of endothelial dysfunction in the children and adolescents with T1DM of our study were only suggestive of the progression of atherosclerosis in patients with longer disease duration, as reflected by higher cIMT measurements. Body weight seems to be strongly associated with atherosclerotic burden early in life. This association was stronger in the diabetic group, a finding supportive of the significance of controlling adiposity in childhood diabetes. Body weight also seems to correlate with circulating OPG at a young age, but the origin and the causes of this association remain unclear.

---

*Source: 102120-2013-10-30.xml*
# Neural Differentiation in HDAC1-Depleted Cells Is Accompanied by Coilin Downregulation and the Accumulation of Cajal Bodies in Nucleoli

**Authors:** Jana Krejčí; Soňa Legartová; Eva Bártová

**Journal:** Stem Cells International (2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1021240

---

## Abstract

Cajal bodies (CBs) are important compartments containing accumulated proteins that preferentially regulate RNA-related nuclear events, including splicing. Here, we studied the nuclear distribution pattern of CBs in neurogenesis. In adult brains, coilin was present at a high density, but CB formation was absent in the nuclei of the choroid plexus of the lateral ventricles. Cells of the adult hippocampus were characterized by a crescent-like morphology of coilin protein. We additionally observed a 70 kDa splice variant of coilin in adult mouse brains, different from embryonic brains and mouse pluripotent embryonic stem cells (mESCs), which are characterized by the 80 kDa standard variant of coilin. Here, we also showed that depletion of coilin is induced during neural differentiation and that HDAC1 deficiency in mESCs caused coilin accumulation inside the fibrillarin-positive region of the nucleoli. A similar distribution pattern was observed in adult brain hippocampi, characterized by lower levels of both coilin and HDAC1. In summary, we observed that neural differentiation and HDAC1 deficiency lead to coilin depletion and coilin accumulation in body-like structures inside the nucleoli.

---

## Body

## 1. Introduction

Cajal bodies (CBs) are striking nuclear structures consisting of accumulated proteins that play various roles in nuclear processes. These structures were designated Cajal’s accessory bodies (cuerpo accesorio) and were discovered for the first time in rat brain neurons [1]. The role of CBs during neurogenesis has also been studied extensively and summarized by Lafarga et al. [2] and Baltanás et al. [3]. It is now well established that the function of these structures is dynamic because CBs regulate RNA synthesis and the assembly of ribonucleoproteins (RNPs) [4]. Moreover, Tapia et al. [5] showed that the symmetrical dimethylation of arginines on coilin supports the formation of CBs positive for survival motor neuron (SMN) proteins and small nuclear ribonucleoproteins (snRNPs). These regulatory factors probably determine the association of CBs with the spliceosome and a role for CBs in pre-mRNA splicing [6]. Conversely, coilin hypomethylation impairs its function and causes the disintegration of canonical CBs into small microfoci. Unmethylated coilin does not support the formation of robust CBs but is located inside the dense fibrillar component of the nucleoli. In this form, there is no link between the coilin nuclear pattern and global transcription activity [5]. On the other hand, canonical CBs, which are nonmembrane nuclear components, are prominent structures in dividing cells with high transcriptional activity [4]. CBs have a diameter of 0.5–1.0 μm and contain many proteins, including the abovementioned p80 coilin, which becomes increasingly more phosphorylated during mitosis and, particularly in human embryonic stem cells, is present at high levels in the nucleoplasmic pool [7–12]. However, coilin is not completely essential, because knockout of coilin in mice is not lethal [13].
On the other hand, coilin-positive CBs play an important role in genome organization, in terms of gene expression and pre-mRNA splicing, via their association with many chromosomes. The periphery of these chromosomes represents a site of interaction for genes that are poised for transcription and thus associates with regulatory components. Human chromosome 1 is a key player in these processes, and its periphery is frequently occupied by CBs [14]. The use of chromosome conformation capture analysis (4C-seq), a novel molecular biology method, has revealed an association between highly expressed histone genes, sn/snoRNA coding loci, and CBs, which are involved in intra- and interchromosomal clusters [14, 15]. This interaction is of immense functional importance during transcription and especially splicing. CBs are also highly mobile structures, as revealed by single-particle tracking analysis and fluorescence recovery after photobleaching (FRAP) [10, 16–19]. For example, we recently demonstrated the constrained local motion of individual CBs after cell exposure to γ-radiation. Furthermore, in mouse embryonic stem cells (mESCs), coilin dispersed in the nucleoli and accumulated in CBs was characterized by a reduced mobile fraction compared to GFP-tagged coilin in the nucleoplasm [10]. FRET (fluorescence resonance energy transfer) analysis additionally revealed a specific interaction between coilin and SMN protein in CBs, as well as the appearance of coilin-coilin dimerization [17]. However, with regard to the DNA repair machinery, our experiments did not show coilin-SMN interaction or coilin dimerization in UVA-induced DNA lesions, which are characterized by pronounced coilin recruitment [10]. Together, the abovementioned results illustrate the dynamic behavior of coilin and CBs, which is required not only for optimal pre-mRNA processing but also for DNA repair [15]. Interestingly, in some tumor cells, the functional properties of coilin are associated with both CBs and nucleoli. The nucleoli contain many different proteins that play a role during the transcription of ribosomal genes or during DNA repair [18–20]. In UVA-damaged chromatin, we observed the recruitment of the upstream binding factor UBF, a major transcription factor for ribosomal genes, and we noted a similar response for coilin [10, 21]. As determined by Boulon et al. [22], UVA and UVC cause the disintegration of coilin-positive CBs, and ionizing irradiation has a similar, notable effect of CB disruption [23, 24]. Thus, nucleolar proteins, including coilin, which also appears in the nucleoli of tumor cells, appear to be involved in the DNA repair machinery, which is, for example, also activated in Purkinje cells during neurodegeneration, characterized by the disintegration of nucleoli and CBs [3]. In this study, we focused especially on the nuclear distribution patterns of the CBs, and we studied coilin levels in embryonic and adult mouse brains and during neural differentiation of mESCs. Based on the initial observations of Ramón y Cajal, who noted that CBs are striking nuclear components of the rat brain and, more specifically, of the pyramidal cells of the human cerebral cortex [1, 25], we analyzed the nuclear distribution patterns and formation of the CBs in the hippocampus and olfactory bulbs (OBs) of adult mouse brains. We also investigated the distribution of coilin in the ventricular ependyma of e15.5 embryonic brains.
Furthermore, to elucidate CB dynamics in neurogenesis, we analyzed the formation of CBs during the neural differentiation of wild-type (wt) and HDAC1 double-knockout (dn) mESCs. From the perspective of neural differentiation, it has been shown that embryonic neural progenitor stem cells are characterized by a high level of HDAC1, while HDAC2 is expressed during neural differentiation and pronouncedly in terminally differentiated neurons [26]. Differentiation processes in the brain are also regulated by HDAC3, as shown by Volmar and Wahlestedt [27]. Moreover, in neural progenitor stem cells, functional HDAC3 was found to be responsible for the balance between cell proliferation and differentiation [28]. Based on these data, we asked whether neural differentiation and HDAC1 depletion can affect the levels of coilin and the nuclear distribution of Cajal bodies; we expected that depletion of an HDAC induces chromatin relaxation, a nuclear event that could change the distribution pattern of CBs. We also analyzed HDAC1 depletion in order to show how changes in histone acetylation, a central epigenetic factor responsible for chromatin accessibility [29, 30], can change the level of coilin, which is methylated when it accumulates in CBs [5].

## 2. Results

### 2.1. The Nuclear Distribution Pattern of Cajal Bodies in the Embryonic and Adult Mouse Brain

We inspected sections of embryonic and adult mouse brains and observed the formation of single, robust CBs at the cortex periphery in embryonic brains at stage e15.5 after fertilization (Figures 1(a)–1(c)). We additionally found that in approximately 90% of the cell nuclei at the cortex periphery, the Cajal bodies (CBs) were located away from clusters of centromeric heterochromatin called chromocenters (Figure 1(c)).

Figure 1. Formation of Cajal bodies (CBs) in the cortex periphery of e15.5 mouse embryonic brains (a–c). CBs were visualized with Alexa 594 fluorescence (red), and DAPI (4′,6-diamidino-2-phenylindole) was used as a counterstain (blue). Arrows in (a) show individual CBs, and the frame in (b) shows a selected region in the cell nucleus magnified in (c). Red arrows in (c) indicate the clusters of centromeric heterochromatin (chromocenters), and white arrows show the selected CB.

The cell nuclei in adult brains were highly positive for the coilin protein, particularly in the choroid plexus of the lateral ventricle (Figure 2(a)). However, the cells in this region did not have easily discernible CBs. Next, we observed clustering of coilin inside the cell nuclei occupying the cortex periphery in adult mouse brains (Figure 2(b)). Analysis of the hippocampal blade (Figures 2(c)(A) and 2(c)(B)) revealed both the crescent-like accumulation of coilin and individual canonical CBs (Figures 2(d)–2(f)). Surprisingly, in olfactory bulbs (Figures 3(a)(A), 3(a)(B), and 3(b)(A)), high levels of coilin were noted in the highly DAPI-dense nuclear regions surrounding single CBs (Figures 3(b)(B)–3(b)(D)). This nuclear distribution pattern of coilin was observed in individual nuclei of the granular layer of the OBs in the adult brain (Figures 3(b)(A)–3(b)(C); see the magnification in Figure 3(b)(D) and quantification in Figure 3(b)(E)).

Figure 2. The nuclear distribution patterns of coilin in adult mouse brain sections. (a) The choroid plexus of the lateral ventricle. (b) Coilin distribution in the cortex periphery of an adult brain.
((c)(A), (c)(B)) Hippocampal regions visualized by hematoxylin-eosin staining; an image from the brain atlas (see [31]). (d–f) Accumulation of coilin in crescent-like foci in the hippocampal region of an adult brain. DAPI staining is used to visualize cell nuclei. Coilin (red) was labeled by a secondary antibody conjugated with Alexa 594. The nuclear distribution of coilin in cells 1 and 2 (e) is shown in graphs 1 and 2; fluorescence intensity along the white lines with arrows was measured using the ImageJ software (NIH freeware). (f) A high density of coilin in the hippocampus (hippocampal blade) of an adult mouse brain.

Figure 3. Coilin expression in the olfactory bulbs (OBs) of the adult brain. ((a)(A)) The OB regions of an adult mouse brain visualized by DAPI staining (blue) and an antibody against acetylated histone H3 (red; an antibody raised against H3K9ac [#06-942, Merck Millipore] was used to visualize the granular layer of the OB due to its high density). The morphology of the OB in ((a)(A)) is compared with the morphology of the OB according to ((a)(B)) the brain atlas (see [31]). ((b)(A)–(b)(D)) Coilin accumulation in the Cajal bodies. In adult OBs, CBs were surrounded by DAPI- and coilin-dense regions (red and blue) (see (b)(C)). ((b)(D)) A magnification of the cell nucleus from the OB. ((b)(E)) The density of coilin, visualized by Alexa 594 fluorescence, analyzed across the selected region delineated by a white arrow in ((b)(D)).

### 2.2. Levels and Nuclear Distribution Pattern of Coilin, Fibrillarin, and SC35 in Mouse Brain and Pluripotent or Differentiated mESCs

In comparison to nondifferentiated and differentiated wt mESCs, pan-acetylation of lysines was very high in HDAC1 dn mESCs and their differentiated counterpart (Figure 4(a)). In these experiments, we asked whether the hyperacetylated surroundings of CBs in HDAC1 dn mESCs could change the formation or maintenance of CBs, which is regulated by methylation-related processes [5].

Figure 4. The levels of coilin, HDAC1, and fibrillarin in pluripotent and differentiated mouse ESCs and in the mouse brain. (a) In comparison to nondifferentiated and differentiated wt mESCs, a very high level of lysine pan-acetylation was found in HDAC1 dn cells and in their differentiated counterpart. Neural differentiation was induced in both wt and HDAC1 dn cells by an identical differentiation protocol. (b) Western blot showing coilin and α-tubulin (reference protein) levels in nondifferentiated and differentiated (neuronal pathway) wt and HDAC1 dn mouse ESCs. HDAC1 depletion in these cells was first published by Lagger et al. [32]. (c) Western blot analysis of the coilin and fibrillarin levels in embryonic mouse brains at developmental stages e13.5, e15.5, and e18.5, in the whole adult brain, and in mESCs. Two exposures for fibrillarin were used in order to show the differences between the levels of fibrillarin in the adult mouse brain (ADL) and in mESCs. (d) The levels of coilin, fibrillarin, HDAC1, and α-tubulin in the following regions of the adult brain: the olfactory bulb (OB), the adult hippocampus (HIP), the brain cortex (CTX), and the whole adult mouse brain (ADL). (b–d) show the conclusions from three independent experiments, and the total loaded protein levels are also documented. (e)(A) Quantification of the results from (b); (B) quantification of (c); and (C) analysis of the HDAC1 level from (d). One asterisk (∗) denotes statistically significant results at p ≤ 0.05 and two (∗∗) at p ≤ 0.01.
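As an aside, the line-profile quantification described in the Figure 2 legend (fluorescence intensity along a drawn line, measured in ImageJ) can be approximated outside ImageJ. Below is a minimal Python sketch using scikit-image's `profile_line` on a synthetic single-channel image; the image and the line endpoints are invented placeholders, not the study's data:

```python
import numpy as np
from skimage.measure import profile_line

# Synthetic stand-in for a single-channel coilin (Alexa 594) image;
# a real analysis would load the red channel of the micrograph instead.
image = np.zeros((100, 100), dtype=float)
image[45:55, 40:60] = 1.0  # a bright, roughly crescent-like coilin focus

# Intensity profile along a line crossing the "nucleus", analogous to the
# ImageJ line-profile measurement; (row, col) endpoints are illustrative.
profile = profile_line(image, src=(50, 10), dst=(50, 90))
print(profile.max(), int(profile.argmax()))
```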
Here, western blot analysis revealed reduced levels of coilin (80 kDa) during the neural differentiation of wt mESCs (Figures 4(b) and 4(e)(A)). We also analyzed the levels of coilin in nondifferentiated and differentiated wt and HDAC1 dn mES cells. Our statistical analysis, using Student's t-test, documented significant changes (p ≤ 0.05) when we compared nondifferentiated and differentiated wt mESCs (Figures 4(b) and 4(e)(A)). In HDAC1-depleted cells, the difference was even more pronounced: a significantly different result (p ≤ 0.01) was found when we compared nondifferentiated and differentiated HDAC1 dn cells (Figures 4(b) and 4(e)(A)). We also examined the coilin levels in mouse brains at various developmental stages. We studied the whole brains of e13.5, e15.5, and e18.5 embryonic stages and of adult mice (Figure 4(c)). Compared to embryonic brains, which are characterized by the 80 kDa coilin variant, we observed a different splice variant of coilin (~70 kDa) in adult brains. During mouse brain development, coilin levels were stable at the e13.5, e15.5, and e18.5 developmental stages. Interestingly, mouse ESCs were characterized by a very low level of 80 kDa coilin in comparison to embryonic brains (Figure 4(c)). In parallel with coilin, we analyzed fibrillarin levels in the mouse brains because individual CBs colocalize with fibrillarin in many cell types (Figures 4(c), 5(a), and 5(b)). By western blot, we observed a very low level of fibrillarin in mouse adult brains (see the two western blot exposures in Figure 4(c)), especially compared to mouse embryonic stem cells (mESCs). In the samples shown in Figure 4(c), we found that when the level of coilin was high, the level of fibrillarin was low, and vice versa.
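The quantification behind Figure 4(e) — densitometry readings normalized to the α-tubulin loading control and compared with Student's t-test across the three independent experiments — could be sketched as follows in Python; the numbers are invented for illustration, not the measured band intensities:

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry values (arbitrary units) from three independent
# blots; these are illustrative placeholders, not the study's measurements.
coilin_undiff  = np.array([1.00, 0.95, 1.08])
coilin_diff    = np.array([0.61, 0.55, 0.67])
tubulin_undiff = np.array([1.02, 0.98, 1.01])
tubulin_diff   = np.array([0.99, 1.03, 0.97])

# Normalize each coilin band to the alpha-tubulin reference on the same blot.
ratio_undiff = coilin_undiff / tubulin_undiff
ratio_diff   = coilin_diff / tubulin_diff

# Two-sample Student's t-test, as used for the comparisons in Figure 4(e).
t_stat, p_value = stats.ttest_ind(ratio_undiff, ratio_diff)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```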
We also examined coilin levels in mouse brains at various developmental stages, studying whole brains at the e13.5, e15.5, and e18.5 embryonic stages and in adult mice (Figure 4(c)). In contrast to embryonic brains, which are characterized by the 80 kDa coilin variant, we observed a different splice variant of coilin (~70 kDa) in adult brains. During mouse brain development, coilin levels were stable across the e13.5, e15.5, and e18.5 developmental stages. Interestingly, mESCs were characterized by a very low level of 80 kDa coilin in comparison with embryonic brains (Figure 4(c)). In parallel with coilin, we analyzed fibrillarin levels in the mouse brains because individual CBs colocalize with fibrillarin in many cell types (Figures 4(c), 5(a), and 5(b)). By western blotting, we observed a very low level of fibrillarin in adult mouse brains (see the two western blot exposures in Figure 4(c)), especially compared with mESCs. In the samples shown in Figure 4(c), we found that when the level of coilin was high, the level of fibrillarin was low, and vice versa.

Figure 5. The spatial link between coilin and fibrillarin in HeLa cells and in pluripotent mESCs before and after neural differentiation. Arrows show fibrillarin and coilin occurrence in CBs. (a) HeLa cells, in which a CB (blue) colocalizes with fibrillarin foci (red) (white arrows). (b) In (A) wt and (B) HDAC1 dn pluripotent ESCs, CBs (green) were located in close proximity to the periphery of the fibrillarin-positive regions of the nucleoli (red). Examples of CBs are indicated by arrows. (c) The spatial link between CBs (green) and fibrillarin (red) in (A, B) wt mESCs and (C, D) HDAC1 dn mESCs undergoing neural differentiation (white arrows). Accumulated coilin (green) inside the nucleoli (red) was observed in HDAC1 dn cells (see (C) and (D)). DAPI staining (blue) was used to visualize the cell nuclei.

Using western blots, we also detected the levels of the 70 kDa coilin variant and of 39 kDa fibrillarin in the OBs of the adult brain, the adult hippocampus, the brain cortex, and the whole adult brain (Figures 4(d) and 4(e)(B)). Compared with the OBs, the hippocampus and the brain cortex were characterized by coilin depletion, which was accompanied by a decrease in the HDAC1 level when normalized to the total protein level and to α-tubulin (Figures 4(d), 4(e)(B), and 4(e)(C)). The fibrillarin levels were not substantially different among the brain regions analyzed (Figure 4(d)).

Here, we also compared the nuclear pattern of CBs in mESCs and in the human cancer cell line HeLa, which has been used by many authors for CB studies [17, 33]. In HeLa cells, the CBs were always positive for both coilin and fibrillarin (Figure 5(a)). We additionally studied the nuclear distribution pattern of CBs and fibrillarin in nondifferentiated mESCs and in mESCs undergoing neural differentiation (Figures 5(b) and 5(c)(A)–5(c)(D)). Wild-type and HDAC1 dn mESCs were characterized by a very subtle occurrence of fibrillarin in CBs (see white arrows in Figure 5(b)). However, in differentiated HDAC1 dn mESCs, robust foci of accumulated coilin appeared at the periphery of the nucleoli (Figure 5(c)(C); ~40% of cells), or high coilin positivity was found inside the nucleoli (Figure 5(c)(D); ~60% of cells). This nuclear distribution pattern of coilin was not observed in differentiated wt mESCs (Figures 5(c)(A) and 5(c)(B)).

Because CBs are nuclear regions associated with splicing processes, we additionally analyzed the spatial link between CBs and SC35-positive nuclear speckles (Figures 6(a)–6(f)). In mouse pluripotent ESC colonies, we observed high levels of coilin in the nuclear interior, and these regions were surrounded by the SC35 protein (Figures 6(a)–6(c)). We found that most of the CBs and SC35-positive nuclear speckles were spatially distinct, but some of them partially overlapped. This nuclear distribution pattern was identical in both wt and HDAC1 dn cells (Figures 6(a)–6(f)).

Figure 6. Spatial interactions between coilin and SC35-positive splicing speckles. (a) In wt mESCs, coilin (red) was distributed in the nuclear interior, and this coilin-positive region was surrounded by the SC35 protein (green). (b, c) The mutual interaction between coilin and SC35 changes during neural differentiation; many cells were characterized by the formation of SC35-positive CBs (red). The colocalization tool in the Leica software showed ~30% colocalization between CBs (red) and SC35-positive nuclear speckles (green) in both (d) nondifferentiated HDAC1 dn mESCs and (e, f) differentiated HDAC1-depleted cells. A 3D x-y-z projection of interphase nuclei is shown in all panels.
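The ~30% colocalization reported above was obtained with the colocalization tool of the Leica software. The sketch below is not that tool; it is a minimal NumPy illustration of how such an overlap percentage can be estimated from two thresholded channels, using synthetic data and hypothetical thresholds.

```python
import numpy as np

def overlap_percentage(red, green, thr_red, thr_green):
    """Percentage of red-positive (CB) pixels that are also
    green-positive (SC35), a Manders-style overlap estimate.
    Thresholds are hypothetical and would normally be chosen per
    image, e.g., from the background intensity."""
    cb_mask = red > thr_red
    sc35_mask = green > thr_green
    if cb_mask.sum() == 0:
        return 0.0
    return 100.0 * np.logical_and(cb_mask, sc35_mask).sum() / cb_mask.sum()

# Synthetic two-channel example.
rng = np.random.default_rng(0)
red = rng.poisson(5, (512, 512)).astype(float)
green = rng.poisson(5, (512, 512)).astype(float)
red[100:110, 100:110] += 50     # a CB-like focus
green[105:115, 105:115] += 50   # a partially overlapping speckle
print(f"{overlap_percentage(red, green, 20, 20):.1f}% overlap")
```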
## 3. Discussion

CBs, which were first described by Cajal [1], consist of several proteins, including p80 coilin. The functional properties of coilin in CBs were characterized by Andrade et al. [34] and Raška et al. [12]. CBs are also sites for various factors that play roles during pre-mRNA splicing, pre-ribosomal RNA processing, and histone pre-mRNA maturation [7, 33, 35]. Moreover, CBs are highly mobile structures, as demonstrated by photobleaching experiments [16, 17].

Here, we addressed the morphology of CBs in embryonic and adult brains and during the in vitro induction of mESC neural differentiation. Previously, in certain human and mouse ESCs (particularly at the periphery of mESC colonies), we observed the accumulation of coilin into visible CBs [10]. Conversely, human and mouse pluripotent ESCs, particularly those at the center of the colony, are highly positive for diffusely dispersed coilin protein (Figures 6(a) and 6(b); [10]). Thus, our results indicate that the peripheries of ESC colonies are more prone to spontaneous differentiation, which is characterized by the appearance of CBs [36]. Here, the formation of robust CBs or coilin-positive microfoci was more pronounced after induced neural differentiation, especially in HDAC1 dn ES cells (compare Figures 5(b) and 5(c)(A)–5(c)(D)). Our analyses confirmed that embryonic stem cells, characterized by an immense differentiation potential, are a good tool for studies of nuclear architecture. For example, Butler et al. [37] showed the formation of CBs as a consequence of the spontaneous differentiation that frequently appears at the periphery of human ESC colonies, which is also the case documented here. Another good experimental model for studying the formation of CBs is the embryonic brain, particularly the cells in prominent neurogenic regions destined for pronounced differentiation. For our analysis, we selected the hippocampus and the OBs (Figures 2(c)–2(f) and 3(b)(A)–3(b)(E)). Our data fit well with the original observations of Cajal, who noted the appearance of CBs in primary cells, such as the pyramidal cells of the human cerebral cortex (see also [8]), and in cells undergoing terminal differentiation (Figures 2(d), 3(b)(C), and 5(c)(A)–5(c)(D)). Here, for the first time, we show the accumulation of coilin in a crescent-like structure that is specific to the hippocampal regions of the adult brain (Figures 2(d) and 2(e)). Furthermore, the OBs were characterized by well-visible CBs surrounded by high levels of coilin (Figures 3(b)(C)–3(b)(E)). These results support the conclusions of other authors who noted cell-type specificity regarding the size, morphology, and number of CBs [15, 38–44].

Here, we additionally revealed a link among the focal accumulation of coilin in the nucleolus, decreased coilin levels, and HDAC1 depletion. This connection was particularly observed during the neural differentiation of mESCs and in the hippocampus (Figures 2(d), 2(e), 4(b), 4(d), 4(e)(A)–4(e)(C), 5(c)(C), and 5(c)(D)). In these cases, coilin was depleted and accumulated into robust CBs or microfoci inside the nucleoli of cells with HDAC1 deficiency or a decreased HDAC1 level. Thus, changes in histone acetylation, mediated by HDAC1 function, likely affected the interaction between coilin and chromatin-related factors. Accumulation of coilin in the nucleoli was previously found to be linked to coilin hypomethylation [5]. Interestingly, both HDAC1 depletion and coilin hypomethylation likely caused the transition of coilin to the fibrillarin-positive dense fibrillar component of the nucleoli (compare Figures 4(b) and 4(e)(A) with Figures 5(b), 5(c)(C), and 5(c)(D) and [5]). Moreover, it is possible that coilin is hypomethylated in the hyperacetylated genomic surroundings caused by HDAC1 depletion. This epigenetic nuclear event could also be a consequence of HDAC1-dependent changes in chromatin accessibility.

## 4. Conclusion

Nuclear bodies, including CBs, are functionally important nuclear compartments containing accumulated proteins that play roles in many nuclear processes, including transcription, splicing, and DNA repair. The morphology and nuclear distribution patterns of these nuclear bodies likely reflect their functional properties, which contribute to the molecular mechanisms that maintain the balance between cell physiology and pathophysiology.
We showed here that coilin is highly expressed in brain tissue, especially in the embryonic brain. Cajal bodies, recognized by accumulated coilin, were found to be localized inside nucleoli, especially in HDAC1-depleted cells, and this was accompanied by coilin downregulation. These results show that epigenetic events, such as histone acetylation (or lysine pan-acetylation), which affect the accessibility of regulatory elements in chromatin, may underlie changes in the nuclear distribution pattern of Cajal bodies.

## 5. Materials and Methods

### 5.1. Cell Cultivation

The nuclear distribution patterns of the coilin protein and its accumulation in CBs were analyzed in wt mESCs and HDAC1 dn mESCs (a generous gift from Dr. Christian Seiser, Max F. Perutz Laboratories, Vienna Biocenter, Austria) [32, 45]. Mouse ESCs were cultivated in DMEM (Thermo Fisher Scientific, USA) supplemented with 15% fetal bovine serum, 0.1 mM nonessential amino acids, 100 μM MTG, 1 ng/mL leukemia inhibitory factor (LIF), 10,000 IU/mL penicillin, and 10,000 μg/mL streptomycin. Culture dishes were coated with Matrigel (#354277, Corning, USA) according to the protocols described by Franek et al. [46]. Neural differentiation was induced in medium without LIF. After two days, the medium was replaced with serum-free commercial DMEM/F-12 (1 : 1) (GIBCO, UK) supplemented with insulin, transferrin, and selenium (ITS-100x, GIBCO, UK), 1 μg/mL fibronectin (Sigma-Aldrich, Czech Republic), and penicillin/streptomycin (according to Pacherník et al. [47], who described this DMEM/F-12/ITSF medium). Over the next two days, this medium was additionally supplemented with 0.5 μM all-trans retinoic acid (ATRA, Sigma-Aldrich, Czech Republic), which was replaced at day 4 by DMEM/F-12/ITSF medium. HeLa-Fucci cells were purchased and cultivated as previously described [48].

### 5.2. Tissue Sectioning and Immunostaining

Adult and embryonic mouse brains (at developmental stages e13.5, e15.5, and e18.5 after fertilization; mouse strain C57Bl6) were maintained in tissue freezing medium (OCT embedding matrix, Leica Microsystems, Germany) at −20°C. A Leica cryomicrotome (Leica CM 1800, Leica, Germany) was used for tissue sectioning. For immunostaining, tissue sections were washed in PBS and postfixed in 4% formaldehyde for 20 min. The tissues were permeabilized in 1% Triton X-100 and 0.1% saponin (Sigma-Aldrich, Czech Republic) dissolved in PBS. Immunohistochemistry was performed according to the protocols described by Bártová et al. [49]. We used primary antibodies raised against coilin (H-300) (#sc-32860, Santa Cruz, USA) and fibrillarin (#ab4566, Abcam, UK), together with a goat anti-rabbit Alexa Fluor 594 secondary antibody (#A11012, Invitrogen), goat anti-mouse Alexa Fluor 594 (#A11032, Invitrogen, USA), or anti-rabbit Alexa Fluor 488 (#ab150077, Abcam, UK). The primary antibodies were diluted 1 : 100, and the secondary antibodies were diluted 1 : 200 in PBS containing 1% BSA. The DNA was counterstained with DAPI (4′,6-diamidino-2-phenylindole) (Sigma-Aldrich, Czech Republic) dissolved in the Vectashield mounting medium (Vector Laboratories, USA). We additionally used an antibody raised against acetylated H3K9 (#06-942, Merck Millipore, Czech Republic) to visualize the granular layer of the OBs (Figure 3(a)(A)).

### 5.3. Western Blots

Western blotting was performed according to the protocols described by Krejčí et al. [50]. To analyze coilin levels, we used an antibody raised against coilin (#sc-32860, Santa Cruz, USA) at a dilution of 1 : 1000. Coilin levels were analyzed in nondifferentiated and differentiated mESCs as well as in embryonic and adult brains. In addition, we examined fibrillarin, histone deacetylase 1 (HDAC1), pan-acetylated lysine, and α-tubulin levels using the following antibodies: fibrillarin (#ab5821, Abcam, UK), HDAC1 (#sc7872, Santa Cruz Biotechnology, Inc., USA), anti-pan-acetylated lysine (#ab21623, Abcam, UK), and α-tubulin (#LF-PA0146, Thermo Fisher Scientific Inc., Czech Republic). The secondary antibody was a peroxidase-conjugated anti-rabbit IgG (#A-4914, Sigma, Munich, Germany) diluted 1 : 2000. Equal amounts of protein were loaded in each gel lane. Protein levels were normalized to the total protein levels measured with a μQuant spectrophotometer and the KCjunior software (BioTek Instruments, Inc., Winooski, VT, USA) or to total histone H3 levels (#ab1791, Abcam, UK).
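Section 5.3 normalizes band densities either to total protein or to a loading control such as α-tubulin. The following sketch shows this normalization arithmetic on hypothetical ImageJ densitometry values for the four brain samples of Figure 4(d); all numbers are invented for illustration.

```python
# Hypothetical ImageJ densitometry readouts for one blot.
coilin_density  = {"OB": 4200.0, "HIP": 1500.0, "CTX": 1700.0, "ADL": 2600.0}
tubulin_density = {"OB": 3000.0, "HIP": 2900.0, "CTX": 3100.0, "ADL": 3000.0}

# Normalize each coilin band to the alpha-tubulin loading control in
# the same lane, then express the result relative to the whole adult
# brain (ADL) sample.
normalized = {region: coilin_density[region] / tubulin_density[region]
              for region in coilin_density}
relative = {region: value / normalized["ADL"]
            for region, value in normalized.items()}

for region, value in relative.items():
    print(f"{region}: {value:.2f} x ADL")
```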
### 5.4. Confocal Microscopy and Image Analysis

Images were acquired with a Leica TCS SP5 X confocal microscope (Leica Microsystems, Germany). Image acquisition was performed using a white-light laser (WLL) with the following parameters: 1024 × 1024-pixel resolution, 400 Hz, bidirectional mode, and zoom 8–12. For 3D projections, we obtained 30–40 optical sections with axial steps of 0.3 μm. 3D projections were reconstructed using the Leica Application Suite (LAS) software. Larger biological objects, such as embryonic brain sections, were scanned in the tile-scanning mode of the Leica software, as previously described [49].
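The 3D projections described above were reconstructed in the Leica LAS software from 30–40 optical sections. As an illustration of one common way such a projection can be computed (not the LAS algorithm), the sketch below builds a maximum-intensity projection of a synthetic z-stack with NumPy.

```python
import numpy as np

# Synthetic confocal stack: 35 optical sections (0.3 um axial step),
# 512 x 512 pixels each, standing in for a real z-stack.
rng = np.random.default_rng(1)
stack = rng.poisson(8, (35, 512, 512)).astype(float)
stack[15:20, 240:260, 240:260] += 120  # a hypothetical nuclear body

# A maximum-intensity projection along the optical (z) axis collapses
# the stack into a single 2D x-y view.
mip = stack.max(axis=0)
print(mip.shape, mip.max())  # (512, 512) and the peak intensity
```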
### 5.5. Statistical Analysis

Excel software was used for data presentation. Fluorescence intensity and the density of western blot bands were quantified with the ImageJ software (NIH freeware). Statistically significant results at p ≤ 0.05 (p ≤ 0.01) are labeled with asterisks ∗ (∗∗). Statistical analysis was performed with Student's t-test in the Sigma Plot 8.0 software.
---

*Source: 1021240-2017-02-27.xml*
--- ## Abstract Cajal bodies (CBs) are important compartments containing accumulated proteins that preferentially regulate RNA-related nuclear events, including splicing. Here, we studied the nuclear distribution pattern of CBs in neurogenesis. In adult brains, coilin was present at a high density, but CB formation was absent in the nuclei of the choroid plexus of the lateral ventricles. Cells of the adult hippocampus were characterized by a crescent-like morphology of coilin protein. We additionally observed a 70 kDa splice variant of coilin in adult mouse brains, which was different to embryonic brains and mouse pluripotent embryonic stem cells (mESCs), characterized by the 80 kDa standard variant of coilin. Here, we also showed that depletion of coilin is induced during neural differentiation and HDAC1 deficiency in mESCs caused coilin accumulation inside the fibrillarin-positive region of the nucleoli. A similar distribution pattern was observed in adult brain hippocampi, characterized by lower levels of both coilin and HDAC1. In summary, we observed that neural differentiation and HDAC1 deficiency lead to coilin depletion and coilin accumulation in body-like structures inside the nucleoli. --- ## Body ## 1. Introduction Cajal bodies (CBs) are striking nuclear structures consisting of accumulated proteins that play various roles in nuclear processes. These structures were designated Cajal’s accessory bodies(cuerpo accesorio) and were discovered for the first time in rat brain neurons [1]. A role of the CBs during neurogenesis was also significantly studied and summarized by Lafarga et al. [2] and Baltanás et al. [3]. At this moment, it is well known that the function of these structures is dynamic because CBs regulate RNA synthesis and the assembly of ribonucleoproteins (RNPs) [4]. Moreover, Tapia et al. [5] showed that the symmetrical dimethylation of arginines on coilin supports the formation of CBs, positive on survival motor neuron (SMN) proteins and small nuclear ribonucleoproteins (snRNPs). These regulatory factors probably determine the association of CBs with the spliceosome and a role for CBs in pre-mRNA splicing [6]. Conversely, coilin hypomethylation depreciates its function and causes the disintegration of canonical CBs into small microfoci. Unmethylated coilin does not support the formation of robust CBs but is located inside the dense fibrillar component of the nucleoli. In this form, there is no link between the coilin nuclear pattern and global transcription activity [5]. On the other hand, canonical CBs, which are nonmembrane nuclear components, are prominent structures in dividing cells with high transcriptional activity [4]. CBs have a diameter of 0.5–1.0 μm and contain many proteins, including the abovementioned p80 coilin, which becomes increasingly more phosphorylated during mitosis and, particularly in human embryonic stem cells, is present at high levels in the nucleoplasmic pool [7–12]. However, coilin is not completely essential because knockout of coilin in mice is not lethal [13]. On the other hand, coilin-positive CBs play an important role in genome organization in terms of gene expression and pre-mRNA splicing via their association with many chromosomes. The periphery of these chromosomes represents a site of interaction for genes that are poised for transcription and thus associates with regulatory components. Human chromosome 1 is a key player in these processes, and its periphery is frequently occupied by CBs [14]. 
The use of chromosome conformation capture analysis (4C-seq), a novel molecular biology method, has revealed an association between highly expressed histone genes, sn/snoRNA coding loci, and CBs, which are involved in intra- and interchromosomal clusters [14, 15]. This interaction is of immense functional importance during transcription and especially splicing. CBs are also highly mobile structures, as revealed by single-particle tracking analysis and fluorescence recovery after photobleaching (FRAP) [10, 16–19]. For example, we recently demonstrated the constrained local motion of individual CBs after cell exposure to γ-radiation. Furthermore, in mouse embryonic stem cells (mESCs), the coilin dispersed in the nucleoli and accumulated in CBs was characterized by a reduced mobile fraction compared to the GFP-tagged coilin in the nucleoplasm [10]. FRET (fluorescence resonance energy transfer) analysis additionally revealed a specific interaction between coilin and SMN protein in CBs as well as the appearance of coilin-coilin dimerization [17]. However, as regards to DNA repair machinery, our experiments did not show coilin-SMN interaction or coilin dimerization in UVA-induced DNA lesions, which are characterized by pronounced coilin recruitment [10]. Together, the abovementioned results illustrate the dynamic behavior of coilin and CBs, which is required not only for optimal pre-mRNA processing but also for DNA repair [15].Interestingly, in some tumor cells, the functional properties of coilin are associated with both CBs and nucleoli. The nucleoli contain many different proteins that play a role during the transcription of ribosomal genes or during DNA repair [18–20]. In UVA-damaged chromatin, we observed the recruitment of the upstream binding factor UBF, a major transcription factor for ribosomal genes, and we noted a similar response for coilin [10, 21]. As determined by Boulon et al. [22], UVA and UVC cause the disintegration of coilin-positive CBs, and ionizing irradiation has a similar, notable effect of CB disruption [23, 24]. Thus, nucleolar proteins, including coilin that also appears in nucleoli of tumor cells, appear to be involved in the DNA repair machinery, which is, for example, also activated in Purkinje cells during neurodegeneration, characterized by the disintegration of nucleoli and CBs [3].In this study, we focused especially on the nuclear distribution patterns of the CBs, and we studied coilin levels in embryonic and adult mouse brains and during neural differentiation of mESCs. Based on the initial observations of Raymond Cajal, who noted that the CBs are striking nuclear components of the rat brain and, more specifically, the pyramidal cells of the human cerebral cortex [1, 25], we analyzed the nuclear distribution patterns and formation of the CBs in the hippocampus and olfactory bulbs (OBs) of adult mouse brains. We also investigated the distribution of coilin in the ventricular ependyma of e15.5 embryonic brains. Furthermore, to elucidate the CB dynamics in neurogenesis, we analyzed the formation of CBs during the neural differentiation of wild-type (wt) and HDAC1 double-knockout (dn) mESCs. From the view of neural differentiation, it was shown that embryonic neural progenitor stem cells are characterized by a high level of HDAC1, while HDAC2 is expressed during neural differentiation and pronouncedly in terminally differentiated neurons [26]. Differentiation processes in the brain are also regulated by HDAC3, as shown by Volmar and Wahlestedt [27]. 
Moreover, in neural progenitor stem cells, functional HDAC3 was found to be responsible for the balance between cell proliferation and differentiation [28]. Based on these data we addressed the following hypothesis: whether neural differentiation and HDAC1 depletion can affect the levels of coilin and the nuclear distribution of Cajal bodies because we expected that depletion of some HDAC induces chromatin relaxation; thus this nuclear event could change distribution pattern of CBs. We also analyzed HDAC1 depletion in order to show how changes in histone acetylation, a central epigenetic factor responsible for chromatin accessibility [29, 30], can change the level of coilin, which is methylated when it accumulates in CBs [5]. ## 2. Results ### 2.1. The Nuclear Distribution Pattern of Cajal Bodies in the Embryonic and Adult Mouse Brain We inspected sections of embryonic and adult mouse brains and observed the formation of single, robust CBs at the cortex periphery in embryonic brains at stage e15.5 after fertilization (Figures1(a)–1(c)). We additionally found that in approximately 90% of the cell nuclei at the cortex periphery, the Cajal bodies (CBs) were located away from clusters of centromeric heterochromatin called chromocenters (Figure 1(c)).Figure 1 Formation of Cajal bodies (CBs) in the cortex periphery of e15.5 mouse embryonic brains (a–c). CBs were visualized with Alexa 594 fluorescence (red), and DAPI (4′,6-diamidino-2-phenylindole) was used as a counterstain (blue). Arrows in (a) show individual CBs and the frame in (b) shows a selected region in the cell nucleus magnified in (c). Red arrows in (c) indicate the clusters of centromeric heterochromatin (chromocenters) and white arrows show the selected CB. (a) (b) (c)The cell nuclei in adult brains were highly positive for the coilin protein, particularly in the chondroid plexus of the lateral ventricle (Figure2(a)). However, the cells in this region did not have easily discernable CBs. Next, we observed clustering of coilin inside the cell nuclei occupying the cortex periphery in adult mouse brains (Figure 2(b)). Analysis of the hippocampal blade (Figures 2(c)(A) and 2(c)(B)) revealed both the crescent-like accumulation of coilin and individual canonical CBs (Figures 2(d)–2(f)). Surprisingly, in olfactory bulbs (Figures 3(a)(A), 3(a)(B), and 3(b)(A)), high levels of coilin were noted in the highly DAPI-dense nuclear regions surrounding single CBs (Figures 3(b)(B)–3(b)(D)). This nuclear distribution pattern of coilin was observed in individual nuclei of the granular layer of the OBs in adult brain (Figures 3(b)(A)–3(b)(C); see magnification in Figure 3(b)(D) and quantification in Figure 3(b)(E)).Figure 2 The nuclear distribution patterns of coilin in adult mouse brain sections. (a) shows the chondroid plexus of the lateral ventricle. (b) Coilin distribution in the cortex periphery of an adult brain. ((c)(A), (c)(B)) Hippocampal regions visualized by hematoxylin-eosin staining, an image from the brain atlas (see [31]). (d–f) Accumulation of coilin in crescent-like foci in the hippocampal region of an adult brain. DAPI staining is used to visualize cell nuclei. Coilin (red) was labeled by a secondary antibody conjugated with Alexa 594. Nuclear distribution of coilin in cells 1 and 2 (e) is shown in graphs 1 and 2. Fluorescence intensity along white lines with arrows was measured using the Image J software (NIH freeware). (f) shows a high density of coilin in the hippocampus (hippocampal blade) of an adult mouse brain. 
(a) (b) (c) (d) (e) (f)Figure 3 Coilin expression in the olfactory bulbs (OBs) of the adult brain. ((a)(A)) The OB regions of an adult mouse brain visualized by DAPI staining (blue) and an antibody against acetylated histone H3 (red; an antibody raised against H3K9ac [#06-942, Merck Millipore] was used to visualize the granular layer of OB due to its high density). The morphology of the OB in ((a)(A)) is compared with the morphology of the OB according to ((a)(B)) the brain atlas (see [31]). ((b)(A)–(b)(D)) show coilin accumulation in the Cajal bodies. In adult OBs, CBs were surrounded by DAPI- and coilin-dense regions (red and blue) (see (b)(C)). ((b)(D)) shows a magnification of the cell nucleus from OB. ((b)(E)) indicates the density of coilin, visualized by Alexa 594 fluorescence, analyzed across the selected region delineated by a white arrow in ((b)(D)). (a) (b) ### 2.2. Levels and Nuclear Distribution Pattern of Coilin, Fibrillarin, and SC35 in Mouse Brain and Pluripotent or Differentiated mESCs In comparison to nondifferentiated and differentiated wt mESCs, pan-acetylation of lysines was very high in HDAC1 dn mESCs and their differentiated counterpart (Figure4(a)). In these experiments, we addressed a question if hyperacetylated surroundings of CBs in HDAC1 dn mESCs could change formation or maintenance of CBs, which is regulated by methylation-related processes [5].Figure 4 The levels of coilin, HDAC1, and fibrillarin in pluripotent and differentiated mouse ESCs and in the mouse brain. (a) In comparison to nondifferentiated and differentiated wt mESCs, a very high level of lysine pan-acetylation was found in HDAC1 dn cells and in their differentiated counterpart. Neural differentiation was induced in both wt and HDAC1 dn cells by identical differentiation protocol. (b) Western blot shows coilin andα-tubulin (reference protein) levels in nondifferentiated and differentiated (neuronal pathway) wt and HDAC1 dn mouse ESCs. HDAC1 depletion in these cells was first published by Lagger et al. [32]. (c) Western blot analysis of the coilin and fibrillarin levels in embryonic mouse brains at developmental stages e13.5, e15.5, and e18.5 and in the whole adult brain as well as in mESCs. Two expositions for fibrillarin were used in order to show the differences between the levels of fibrillarin in the adult mouse brain (ADL) and mESCs. (d) The levels of coilin, fibrillarin, HDAC1, and α-tubulin in the following regions of adult brain: the olfactory bulb (OB), the adult hippocampus (HIP), the brain cortex (CTX), and the whole adult mouse brain (ADL). (b–d) show the conclusions from three independent experiments, and the total loaded protein levels are also documented. (e)(A) Quantification of the results from (b); (B) quantification of (c); and (C) analysis of the HDAC1 level from (d). Asterisk (∗) denotes statistically significant results at p≤0.05 and (∗∗) at p≤0.01. (a) (b) (c) (d) (e)Here, western blot analysis revealed reduced levels of coilin (80 kDa) during neural differentiation of wt mESCs (Figures4(b) and 4(e)(A)). We also analyzed the levels of coilin in nondifferentiated and differentiated wt and HDAC1 dn mES cells. Our statistical analysis, using Student’s t-test, documented significant changes at p∗≤0.05 when we compared nondifferentiated and differentiated wt mESCs (Figures 4(b) and 4(e)(A)). 
In HDAC1-depleted cells, the difference was even more pronounced: a significantly different result (at p∗∗≤0.01) was found when we compared nondifferentiated and differentiated HDAC1 dn cells (Figures 4(b) and 4(e)(A)). We also examined the coilin levels in mouse brains at various developmental stages. We studied the whole brains of e13.5, e15.5, and e18.5 embryonic stages and adult mice (Figure 4(c)). Compared to embryonic brains, which are characterized by the 80 kDa coilin variant, we observed a different splice variant of coilin (~70 kDa) in adult brains. During mouse brain development, coilin levels were stable at the e13.5, e15.5, and e18.5 developmental stages. Interestingly, mouse ESCs were characterized by a very low level of 80 kDa coilin in comparison to embryonic brains (Figure 4(c)). In parallel with coilin, we analyzed fibrillarin levels in the mouse brains because individual CBs colocalize with fibrillarin in many cell types (Figures 4(c), 5(a), and 5(b)). By western blots, in mouse adult brains, we observed a very low level of fibrillarin (see two western blot expositions in Figure 4(c)), especially compared to mouse embryonic stem cells (mESCs). In our samples, shown in Figure 4(c), we found that when the level of coilin was high, the level of fibrillarin was low and vice versa.Figure 5 The spatial link between coilin and fibrillarin in HeLa cells and mouse pluripotent mESCs before and after neural differentiation. Arrows show fibrillarin and coilin occurrence in CBs in (a) HeLa cells; CB (blue) colocalizes with fibrillarin foci (red) (white arrows). (b) In (A) wt and (B) HDAC1 dn pluripotent ESCs, CBs (green) were located in a close proximity to the periphery of the fibrillarin-positive regions of the nucleoli (red). An example of CBs is shown by arrows. (c) The spatial link between CBs (green) and fibrillarin (red) in (A, B) wt mESCs and (C, D) HDAC1 dn mESCs undergoing neural differentiation (white arrows). Accumulated coilin (green) inside the nucleoli (red) was observed in HDAC1 dn cells (see (C) and (D)). DAPI staining (blue) was used to visualize the cell nuclei. (a) (b) (c)Using western blot, we also detected the levels of 70 kDa coilin variant and 39 kDa fibrillarin in the OBs of the adult brain, the adult hippocampus, the brain cortex, and the whole adult brain (Figures4(d) and 4(e)(B)). Compared to OBs, the hippocampus and the brain cortex were characterized by coilin depletion, which was accompanied by a decrease in HDAC1 level when it was normalized to total protein level and α-tubulin (Figures 4(d), 4(e)(B), and 4(e)(C)). The fibrillarin levels were not substantially different in the brain regions analyzed (Figure 4(d)).Here, we also compared the nuclear pattern of CBs in mESCs and the human cancer cells line HeLa, which has been used by many authors for CBs studies [17, 33]. In HeLa cells, the CBs were always positive for both coilin and fibrillarin (Figure 5(a)). We additionally studied the nuclear distribution pattern of CBs and fibrillarin in nondifferentiated mESCs and mESCs undergoing neural differentiation (Figures 5(b) and 5(c)(A)–5(c)(D)). Wild-type and HDAC1 dn mESCs were characterized by a very subtle occurrence of fibrillarin in CBs (see white arrows in Figure 5(b)). However, in differentiated HDAC1 dn mESCs, robust foci of accumulated coilin appeared on the periphery of the nucleoli (Figure 5(c)(C); ~40% of cells) or high coilin positivity was found inside the nucleoli (Figure 5(c)(D); ~60% of cells). 
This nuclear distribution pattern of coilin was not observed in differentiated wt mESCs (Figures 5(c)(A) and 5(c)(B)).Because CBs are nuclear regions associated with splicing processes, we additionally analyzed the spatial link between CBs and SC35-positive nuclear speckles (Figures6(a)–6(f)). In mouse pluripotent ESC colonies, we observed high levels of coilin in the nuclear interior, and these regions were surrounded by the SC35 protein (Figures 6(a)–6(c)). We found that most of the CBs and SC35-positive nuclear speckles were spatially distinct, but some of them partially overlapped. This nuclear distribution pattern was identical in both wt and HDAC1 dn (Figures 6(a)–6(f)).Figure 6 Spatial interactions between coilin and SC35-positive splicing speckles. (a) In wt mESCs, coilin (red) was distributed in the nuclear interior, and this coilin-positive region was surrounded by SC35 protein (green). (b, c) show that the mutual interaction between coilin and SC35 is changed during neural differentiation. Many cells were characterized by the formation of SC35-positive CBs (red). The colocalization tool in the Leica software showed ~30% colocalization between CBs (red) and SC35-positive nuclear speckles (green) in both (d) nondifferentiated HDAC1 dn mESCs and (e, f) differentiated HDAC1-depleted cells. A 3D projectionx-y-z of interphase nuclei is documented in all panels. (a) (b) (c) (d) (e) (f) ## 2.1. The Nuclear Distribution Pattern of Cajal Bodies in the Embryonic and Adult Mouse Brain We inspected sections of embryonic and adult mouse brains and observed the formation of single, robust CBs at the cortex periphery in embryonic brains at stage e15.5 after fertilization (Figures1(a)–1(c)). We additionally found that in approximately 90% of the cell nuclei at the cortex periphery, the Cajal bodies (CBs) were located away from clusters of centromeric heterochromatin called chromocenters (Figure 1(c)).Figure 1 Formation of Cajal bodies (CBs) in the cortex periphery of e15.5 mouse embryonic brains (a–c). CBs were visualized with Alexa 594 fluorescence (red), and DAPI (4′,6-diamidino-2-phenylindole) was used as a counterstain (blue). Arrows in (a) show individual CBs and the frame in (b) shows a selected region in the cell nucleus magnified in (c). Red arrows in (c) indicate the clusters of centromeric heterochromatin (chromocenters) and white arrows show the selected CB. (a) (b) (c)The cell nuclei in adult brains were highly positive for the coilin protein, particularly in the chondroid plexus of the lateral ventricle (Figure2(a)). However, the cells in this region did not have easily discernable CBs. Next, we observed clustering of coilin inside the cell nuclei occupying the cortex periphery in adult mouse brains (Figure 2(b)). Analysis of the hippocampal blade (Figures 2(c)(A) and 2(c)(B)) revealed both the crescent-like accumulation of coilin and individual canonical CBs (Figures 2(d)–2(f)). Surprisingly, in olfactory bulbs (Figures 3(a)(A), 3(a)(B), and 3(b)(A)), high levels of coilin were noted in the highly DAPI-dense nuclear regions surrounding single CBs (Figures 3(b)(B)–3(b)(D)). This nuclear distribution pattern of coilin was observed in individual nuclei of the granular layer of the OBs in adult brain (Figures 3(b)(A)–3(b)(C); see magnification in Figure 3(b)(D) and quantification in Figure 3(b)(E)).Figure 2 The nuclear distribution patterns of coilin in adult mouse brain sections. (a) shows the chondroid plexus of the lateral ventricle. 
(b) Coilin distribution in the cortex periphery of an adult brain. ((c)(A), (c)(B)) Hippocampal regions visualized by hematoxylin-eosin staining, an image from the brain atlas (see [31]). (d–f) Accumulation of coilin in crescent-like foci in the hippocampal region of an adult brain. DAPI staining is used to visualize cell nuclei. Coilin (red) was labeled by a secondary antibody conjugated with Alexa 594. Nuclear distribution of coilin in cells 1 and 2 (e) is shown in graphs 1 and 2. Fluorescence intensity along white lines with arrows was measured using the Image J software (NIH freeware). (f) shows a high density of coilin in the hippocampus (hippocampal blade) of an adult mouse brain. (a) (b) (c) (d) (e) (f)Figure 3 Coilin expression in the olfactory bulbs (OBs) of the adult brain. ((a)(A)) The OB regions of an adult mouse brain visualized by DAPI staining (blue) and an antibody against acetylated histone H3 (red; an antibody raised against H3K9ac [#06-942, Merck Millipore] was used to visualize the granular layer of OB due to its high density). The morphology of the OB in ((a)(A)) is compared with the morphology of the OB according to ((a)(B)) the brain atlas (see [31]). ((b)(A)–(b)(D)) show coilin accumulation in the Cajal bodies. In adult OBs, CBs were surrounded by DAPI- and coilin-dense regions (red and blue) (see (b)(C)). ((b)(D)) shows a magnification of the cell nucleus from OB. ((b)(E)) indicates the density of coilin, visualized by Alexa 594 fluorescence, analyzed across the selected region delineated by a white arrow in ((b)(D)). (a) (b) ## 2.2. Levels and Nuclear Distribution Pattern of Coilin, Fibrillarin, and SC35 in Mouse Brain and Pluripotent or Differentiated mESCs In comparison to nondifferentiated and differentiated wt mESCs, pan-acetylation of lysines was very high in HDAC1 dn mESCs and their differentiated counterpart (Figure4(a)). In these experiments, we addressed a question if hyperacetylated surroundings of CBs in HDAC1 dn mESCs could change formation or maintenance of CBs, which is regulated by methylation-related processes [5].Figure 4 The levels of coilin, HDAC1, and fibrillarin in pluripotent and differentiated mouse ESCs and in the mouse brain. (a) In comparison to nondifferentiated and differentiated wt mESCs, a very high level of lysine pan-acetylation was found in HDAC1 dn cells and in their differentiated counterpart. Neural differentiation was induced in both wt and HDAC1 dn cells by identical differentiation protocol. (b) Western blot shows coilin andα-tubulin (reference protein) levels in nondifferentiated and differentiated (neuronal pathway) wt and HDAC1 dn mouse ESCs. HDAC1 depletion in these cells was first published by Lagger et al. [32]. (c) Western blot analysis of the coilin and fibrillarin levels in embryonic mouse brains at developmental stages e13.5, e15.5, and e18.5 and in the whole adult brain as well as in mESCs. Two expositions for fibrillarin were used in order to show the differences between the levels of fibrillarin in the adult mouse brain (ADL) and mESCs. (d) The levels of coilin, fibrillarin, HDAC1, and α-tubulin in the following regions of adult brain: the olfactory bulb (OB), the adult hippocampus (HIP), the brain cortex (CTX), and the whole adult mouse brain (ADL). (b–d) show the conclusions from three independent experiments, and the total loaded protein levels are also documented. (e)(A) Quantification of the results from (b); (B) quantification of (c); and (C) analysis of the HDAC1 level from (d). 
Asterisk (∗) denotes statistically significant results at p≤0.05 and (∗∗) at p≤0.01. (a) (b) (c) (d) (e)Here, western blot analysis revealed reduced levels of coilin (80 kDa) during neural differentiation of wt mESCs (Figures4(b) and 4(e)(A)). We also analyzed the levels of coilin in nondifferentiated and differentiated wt and HDAC1 dn mES cells. Our statistical analysis, using Student’s t-test, documented significant changes at p∗≤0.05 when we compared nondifferentiated and differentiated wt mESCs (Figures 4(b) and 4(e)(A)). In HDAC1-depleted cells, the difference was even more pronounced: a significantly different result (at p∗∗≤0.01) was found when we compared nondifferentiated and differentiated HDAC1 dn cells (Figures 4(b) and 4(e)(A)). We also examined the coilin levels in mouse brains at various developmental stages. We studied the whole brains of e13.5, e15.5, and e18.5 embryonic stages and adult mice (Figure 4(c)). Compared to embryonic brains, which are characterized by the 80 kDa coilin variant, we observed a different splice variant of coilin (~70 kDa) in adult brains. During mouse brain development, coilin levels were stable at the e13.5, e15.5, and e18.5 developmental stages. Interestingly, mouse ESCs were characterized by a very low level of 80 kDa coilin in comparison to embryonic brains (Figure 4(c)). In parallel with coilin, we analyzed fibrillarin levels in the mouse brains because individual CBs colocalize with fibrillarin in many cell types (Figures 4(c), 5(a), and 5(b)). By western blots, in mouse adult brains, we observed a very low level of fibrillarin (see two western blot expositions in Figure 4(c)), especially compared to mouse embryonic stem cells (mESCs). In our samples, shown in Figure 4(c), we found that when the level of coilin was high, the level of fibrillarin was low and vice versa.Figure 5 The spatial link between coilin and fibrillarin in HeLa cells and mouse pluripotent mESCs before and after neural differentiation. Arrows show fibrillarin and coilin occurrence in CBs in (a) HeLa cells; CB (blue) colocalizes with fibrillarin foci (red) (white arrows). (b) In (A) wt and (B) HDAC1 dn pluripotent ESCs, CBs (green) were located in a close proximity to the periphery of the fibrillarin-positive regions of the nucleoli (red). An example of CBs is shown by arrows. (c) The spatial link between CBs (green) and fibrillarin (red) in (A, B) wt mESCs and (C, D) HDAC1 dn mESCs undergoing neural differentiation (white arrows). Accumulated coilin (green) inside the nucleoli (red) was observed in HDAC1 dn cells (see (C) and (D)). DAPI staining (blue) was used to visualize the cell nuclei. (a) (b) (c)Using western blot, we also detected the levels of 70 kDa coilin variant and 39 kDa fibrillarin in the OBs of the adult brain, the adult hippocampus, the brain cortex, and the whole adult brain (Figures4(d) and 4(e)(B)). Compared to OBs, the hippocampus and the brain cortex were characterized by coilin depletion, which was accompanied by a decrease in HDAC1 level when it was normalized to total protein level and α-tubulin (Figures 4(d), 4(e)(B), and 4(e)(C)). The fibrillarin levels were not substantially different in the brain regions analyzed (Figure 4(d)).Here, we also compared the nuclear pattern of CBs in mESCs and the human cancer cells line HeLa, which has been used by many authors for CBs studies [17, 33]. In HeLa cells, the CBs were always positive for both coilin and fibrillarin (Figure 5(a)). 
We additionally studied the nuclear distribution pattern of CBs and fibrillarin in nondifferentiated mESCs and mESCs undergoing neural differentiation (Figures 5(b) and 5(c)(A)–5(c)(D)). Wild-type and HDAC1 dn mESCs were characterized by a very subtle occurrence of fibrillarin in CBs (see white arrows in Figure 5(b)). However, in differentiated HDAC1 dn mESCs, robust foci of accumulated coilin appeared on the periphery of the nucleoli (Figure 5(c)(C); ~40% of cells) or high coilin positivity was found inside the nucleoli (Figure 5(c)(D); ~60% of cells). This nuclear distribution pattern of coilin was not observed in differentiated wt mESCs (Figures 5(c)(A) and 5(c)(B)).Because CBs are nuclear regions associated with splicing processes, we additionally analyzed the spatial link between CBs and SC35-positive nuclear speckles (Figures6(a)–6(f)). In mouse pluripotent ESC colonies, we observed high levels of coilin in the nuclear interior, and these regions were surrounded by the SC35 protein (Figures 6(a)–6(c)). We found that most of the CBs and SC35-positive nuclear speckles were spatially distinct, but some of them partially overlapped. This nuclear distribution pattern was identical in both wt and HDAC1 dn (Figures 6(a)–6(f)).Figure 6 Spatial interactions between coilin and SC35-positive splicing speckles. (a) In wt mESCs, coilin (red) was distributed in the nuclear interior, and this coilin-positive region was surrounded by SC35 protein (green). (b, c) show that the mutual interaction between coilin and SC35 is changed during neural differentiation. Many cells were characterized by the formation of SC35-positive CBs (red). The colocalization tool in the Leica software showed ~30% colocalization between CBs (red) and SC35-positive nuclear speckles (green) in both (d) nondifferentiated HDAC1 dn mESCs and (e, f) differentiated HDAC1-depleted cells. A 3D projectionx-y-z of interphase nuclei is documented in all panels. (a) (b) (c) (d) (e) (f) ## 3. Discussion CBs, which were first described by Cajal [1], consist of several proteins, including p80 coilin. The functional properties of coilin in CBs were characterized by Andrade et al. [34] and Raška et al. [12]. CBs are also the sites for various factors that play roles during pre-mRNA splicing, pre-ribosomal RNA processing, and histone pre-mRNA maturation [7, 33, 35]. Moreover, CBs are highly mobile structures, as demonstrated by photobleaching experiments [16, 17].Here, we addressed the morphology of CBs in embryonic and adult brains and during the in vitro induction of mESC neural differentiation. Previously, in certain human and mouse ESCs (particularly at the periphery of mESC colonies), we observed the accumulation of coilin into visible CBs [10]. Conversely, human and mouse pluripotent ESCs, particularly those at the center of the colony, are highly positive for diffusely dispersed coilin protein (Figures 6(a) and 6(b); [10]). Thus, our results indicate that the peripheries of ESC colonies are more prone to spontaneous differentiation, which is characterized by an appearance of CBs [36]. Here, the formation of robust CBs or coilin-positive microfoci was more pronounced after induced neural differentiation, especially in HDAC1 dn ES cells (compare Figures 5(b) and 5(c)(A)–5(c)(D)). Our analyses confirmed that embryonic stem cells, characterized by an immense differentiation potential, are a good tool for the studies of nuclear architecture. For example, Butler et al. 
## 3. Discussion

CBs, which were first described by Cajal [1], consist of several proteins, including p80 coilin. The functional properties of coilin in CBs were characterized by Andrade et al. [34] and Raška et al. [12]. CBs are also the sites for various factors that play roles during pre-mRNA splicing, pre-ribosomal RNA processing, and histone pre-mRNA maturation [7, 33, 35]. Moreover, CBs are highly mobile structures, as demonstrated by photobleaching experiments [16, 17]. Here, we addressed the morphology of CBs in embryonic and adult brains and during the in vitro induction of mESC neural differentiation. Previously, in certain human and mouse ESCs (particularly at the periphery of mESC colonies), we observed the accumulation of coilin into visible CBs [10]. Conversely, human and mouse pluripotent ESCs, particularly those at the center of the colony, are highly positive for diffusely dispersed coilin protein (Figures 6(a) and 6(b); [10]). Thus, our results indicate that the peripheries of ESC colonies are more prone to spontaneous differentiation, which is characterized by an appearance of CBs [36]. Here, the formation of robust CBs or coilin-positive microfoci was more pronounced after induced neural differentiation, especially in HDAC1 dn ES cells (compare Figures 5(b) and 5(c)(A)–5(c)(D)). Our analyses confirmed that embryonic stem cells, characterized by an immense differentiation potential, are a good tool for studies of nuclear architecture. For example, Butler et al. [37] showed the formation of CBs as a consequence of the spontaneous differentiation that frequently appears at the periphery of human ESC colonies, which is also the case documented here. The cells of the embryonic brain (particularly the cells in the prominent neurogenic regions destined for pronounced differentiation) are another good experimental model for studying the formation of CBs. For our analysis, we selected the hippocampus and the OBs (Figures 2(c)–2(f) and 3(b)(A)–3(b)(E)). Our data fit well with the original observations of Cajal, who noted the appearance of CBs in primary cells, such as the pyramidal cells from the human cerebral cortex (see also [8]), and in cells undergoing terminal differentiation (Figures 2(d), 3(b)(C), and 5(c)(A)–5(c)(D)). Here, for the first time, we show the accumulation of coilin in a crescent-like structure that is specific to the hippocampal regions of the adult brain (Figures 2(d) and 2(e)). Furthermore, OBs were characterized by clearly visible CBs surrounded by high levels of coilin (Figures 3(b)(C)–3(b)(E)). These results support the conclusions of other authors who noted cell-type specificity regarding the size, morphology, and number of CBs [15, 38–44]. We additionally revealed a link among the focal accumulation of coilin in the nucleolus, decreased coilin levels, and HDAC1 depletion. This connection was particularly observed during the neural differentiation of mESCs and in the hippocampus (Figures 2(d), 2(e), 4(b), 4(d), 4(e)(A)–4(e)(C), 5(c)(C), and 5(c)(D)). In these cases, coilin was depleted and accumulated into robust CBs or microfoci inside the nucleoli of cells with an HDAC1 deficiency or a decreased HDAC1 level. Thus, changes in histone acetylation, mediated by HDAC1 function, likely affected the interaction between coilin and chromatin-related factors. Accumulation of coilin in the nucleoli was previously found to be linked to coilin hypomethylation [5]. Interestingly, both HDAC1 depletion and coilin hypomethylation likely caused the coilin transition to the fibrillarin-positive dense fibrillar component of the nucleoli (compare Figures 4(b) and 4(e)(A) with Figures 5(b), 5(c)(C), and 5(c)(D) and [5]). Moreover, it is possible that coilin is hypomethylated in the hyperacetylated genomic surroundings caused by HDAC1 depletion. This epigenetic nuclear event could also be a consequence of HDAC1-dependent changes in chromatin accessibility.

## 4. Conclusion

Nuclear bodies, including CBs, are functionally important nuclear compartments containing accumulated proteins that play roles in many nuclear processes, including transcription, splicing, and DNA repair. The morphology and nuclear distribution patterns of these nuclear bodies likely reflect their functional properties, which contribute to the molecular mechanisms that maintain the balance between cell physiology and pathophysiology. We showed here that coilin is highly expressed in brain tissue, especially in the embryonic brain. Cajal bodies, recognized by accumulated coilin, were found to be localized inside nucleoli, especially in HDAC1-depleted cells, which was accompanied by coilin downregulation. These results show that epigenetic events, such as histone acetylation (or lysine pan-acetylation) affecting the accessibility of regulatory elements in chromatin, can underlie changes in the nuclear distribution pattern of Cajal bodies.

## 5. Materials and Methods

### 5.1. Cell Cultivation
The nuclear distribution patterns of the coilin protein and its accumulation in CBs were analyzed in wt mESCs and HDAC1 dn mESCs (a generous gift from Dr. Christian Seiser, Max F. Perutz Laboratories, Vienna Biocenter, Austria) [32, 45]. Mouse ESCs were cultivated in DMEM (Thermo Fisher Scientific, USA) supplemented with 15% fetal bovine serum, 0.1 mM nonessential amino acids, 100 μM MTG, 1 ng/mL leukemia inhibitory factor (LIF), 10,000 IU/mL penicillin, and 10,000 μg/mL streptomycin. Culture dishes were coated with Matrigel (#354277, Corning, USA) according to the protocols described by Franek et al. [46]. Neural differentiation was induced in medium without LIF. After two days, the medium was replaced with serum-free commercial DMEM/F-12 (1 : 1) (GIBCO, UK) supplemented with insulin, transferrin, and selenium (ITS-100x, GIBCO, UK), 1 μg/mL fibronectin (Sigma-Aldrich, Czech Republic), and penicillin/streptomycin (according to Pacherník et al. [47], who described this DMEM/F-12/ITSF medium). Over the next two days, this medium was additionally supplemented with 0.5 μM all-trans retinoic acid (ATRA, Sigma-Aldrich, Czech Republic), which was replaced at day 4 by DMEM/F-12/ITSF medium. HeLa-Fucci cells were purchased and cultivated as previously described [48].

### 5.2. Tissue Sectioning and Immunostaining

Adult and embryonic mouse brains (at developmental stages e13.5, e15.5, and e18.5 after fertilization; mouse strain C57Bl6) were maintained in tissue freezing medium (OCT embedding matrix, Leica Microsystems, Germany) at −20°C. A Leica cryomicrotome (Leica CM 1800, Leica, Germany) was used for tissue sectioning. Tissue sections were washed in PBS and postfixed in 4% formaldehyde for 20 min for immunostaining. The tissues were permeabilized in 1% Triton X-100 and 0.1% saponin (Sigma-Aldrich, Czech Republic) dissolved in PBS. Immunohistochemistry was performed according to the protocols described by Bártová et al. [49]. In our studies, we used primary antibodies raised against coilin (H-300) (#sc-32860, Santa Cruz, USA) and fibrillarin (#ab4566, Abcam, UK), together with goat anti-rabbit Alexa Fluor 594 (#A11012, Invitrogen), goat anti-mouse Alexa Fluor 594 (#A11032, Invitrogen, USA), or anti-rabbit Alexa Fluor 488 (#ab150077, Abcam, UK) secondary antibodies. The primary antibodies were diluted 1 : 100, and the secondary antibodies were diluted 1 : 200 in PBS containing 1% BSA. The DNA was counterstained with DAPI (4′,6-diamidino-2-phenylindole) (Sigma-Aldrich, branch in the Czech Republic) dissolved in the mounting medium Vectashield (Vector Laboratories, USA). We additionally used an antibody raised against acetylated H3K9 (#06-942, Merck Millipore, Czech Republic) to visualize the granular layer of the OBs (Figure 3(a)(A)).

### 5.3. Western Blots

Western blotting was performed according to the protocols described by Krejčí et al. [50]. To analyze coilin levels by western blot, we used an antibody raised against coilin (#sc-32860, Santa Cruz, USA) at a dilution of 1 : 1000. Coilin levels were analyzed in nondifferentiated and differentiated mESCs as well as in embryonic and adult brains. In addition, we examined fibrillarin, histone deacetylase 1 (HDAC1), pan-acetylated lysine, and α-tubulin levels using the following antibodies: fibrillarin (#ab5821, Abcam, UK), HDAC1 (#sc7872, Santa Cruz Biotechnology, Inc., USA), anti-pan-acetylated lysine (#ab21623, Abcam, UK), and α-tubulin (#LF-PA0146, Thermo Fisher Scientific Inc., branch in the Czech Republic).
The secondary antibody was a peroxidase-conjugated anti-rabbit IgG (#A-4914; Sigma, Munich, Germany) diluted 1 : 2000. Equal amounts of protein were loaded in each gel lane. Protein levels were normalized to the total protein levels measured with a μQuant spectrophotometer and the KCjunior software (BioTek Instruments, Inc., Winooski, VT, USA) or to total histone H3 levels (#ab1791, Abcam, UK).

### 5.4. Confocal Microscopy and Image Analysis

We acquired images with a Leica TCS SP5 X confocal microscope (Leica Microsystems, Germany). Image acquisition was performed using a white light laser (WLL) with the following parameters: 1024 × 1024-pixel resolution, 400 Hz, bidirectional mode, and zoom 8–12. For 3D projections, we obtained 30–40 optical sections with axial steps of 0.3 μm. 3D projection reconstruction was conducted using the Leica Application Suite (LAS) software. The scanning of larger biological objects, such as embryonic brain sections, was conducted in tile scanning mode with the Leica software, as previously described [49].

### 5.5. Statistical Analysis

We used Excel software for data presentation. Fluorescence intensity and the density of western blot fragments were calculated with ImageJ software (NIH freeware). Statistically significant results at p ≤ 0.05 (p ≤ 0.01) are labeled with asterisks ∗ (∗∗). Statistical analysis was performed by Student’s t-test using Sigma Plot 8.0 software.
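For readers reproducing the densitometry comparison, a minimal Python sketch of this workflow (normalized band densities compared by Student’s t-test and labeled with the asterisk convention above) follows. The band densities and group sizes are purely illustrative, not the study’s measurements.

```python
# Minimal sketch, assuming band densities exported from ImageJ and already
# divided by the matching loading-control (e.g., alpha-tubulin) density.
# All numbers below are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

def significance_label(p):
    """Return the asterisk convention used in the figures."""
    if p <= 0.01:
        return "**"
    if p <= 0.05:
        return "*"
    return "n.s."

coilin_nondiff = np.array([1.00, 0.95, 1.08])  # nondifferentiated mESCs, n = 3
coilin_diff = np.array([0.62, 0.55, 0.70])     # differentiated mESCs, n = 3

t_stat, p_value = ttest_ind(coilin_nondiff, coilin_diff)
print(f"t = {t_stat:.2f}, p = {p_value:.4f} -> {significance_label(p_value)}")
```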
--- *Source: 1021240-2017-02-27.xml*
2017
# Determination of Polyphenols, Capsaicinoids, and Vitamin C in New Hybrids of Chili Peppers

**Authors:** Zsuzsa Nagy; Hussein Daood; Zsuzsanna Ambrózy; Lajos Helyes
**Journal:** Journal of Analytical Methods in Chemistry (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102125

---

## Abstract

Six hybrids were subjected to chromatographic analyses by HPLC for the determination of phytochemicals such as capsaicinoids, polyphenols, and vitamin C. The ripening dynamics of 4 of the hybrids were also characterised. Seven capsaicinoids could be separated and determined; the major compounds were nordihydrocapsaicin, capsaicin, and dihydrocapsaicin, while homocapsaicin and homodihydrocapsaicin derivatives were detected as minor constituents. Capsaicin content ranged between 95.5 ± 4.15 and 1610.2 ± 91.46 μg/g FW, and the highest value was found in Bandai (C. frutescens) at the green ripening stage. The major capsaicinoids had a decreasing tendency in the Bandai and Chili 3735 hybrids, while no change was observed in Beibeihong and Lolo during ripening. Nine polyphenol compounds were detected, including 8 flavonoids and a nonflavonoid compound, in the pods of all hybrids. The major components were naringenin-diglucoside, catechin, vanillic acid derivative, and luteolin-glucoside. Naringenin-diglucoside ranged from 93.5 ± 4.26 to 368.8 ± 30.77 μg/g FW. Except for the vanillic acid derivative, the dominant polyphenols increased or remained unchanged during ripening. As for vitamin C, its content tended to increase with the advance of ripening in all hybrids included in this study. The highest value of 3689.4 ± 39.50 μg/g FW was recorded in the Fire Flame hybrid.

---

## Body

## 1. Introduction

The compounds responsible for pungency in chili peppers have been established as a mixture of acid amides of vanillylamine and C8 to C13 fatty acids, also known as capsaicinoids [1]. Capsaicinoids are secondary metabolites and are synthesised by glands at the junction of the placenta and the pod wall of pungent peppers [2]. The effect of capsaicinoids on human health has been widely investigated. For instance, capsaicin is beneficial in low concentration against gastric injuries [3], stimulates cation channels (Na+, K+, and Ca2+) in sensory receptor membranes [4], evokes pain, and activates autonomic reflexes [5]. Environmental factors and the circumstances of cultivation influence the capsaicinoid content of the pods [6, 7], although the genotype probably has a greater impact on pungency [8, 9]. Besides, the amount and proportion of capsaicinoids change during the ripening process of the pods [10–13]. Flavonoids represent a significant subgroup of polyphenols [14] and naturally occur in high concentration in wild mint [15] and grape [16], while pungent peppers generally have a moderate polyphenol content. Their health-protective properties are mainly associated with preventing cancer through inhibiting certain enzymes and suppressing angiogenesis [17]. The polyphenol content in pungent peppers is influenced by genotype and the ripening process [18–20]. Ascorbic acid, the main component of vitamin C, is very abundant in fresh Capsicum species and has been found to be beneficial in maintaining collagen synthesis and a healthy immune system, and it also has antitumor properties [21–23].
The content of ascorbic acid varies considerably among cultivars and ripening stages [24, 25]; in addition, the agricultural techniques used play a significant role in the final amount of ascorbic acid in the pods [26]. Numerous cultivars of pungent pepper are nowadays available; however, many of them have not been analysed for their quality and nutritional components. The objective of the present work was to investigate the capsaicinoid, polyphenol, and vitamin C content of six hybrids of chili pepper (Bandai, Beibeihong, Lolo, Chili 3735, Fire Flame, and Star Flame) using recently developed liquid chromatographic methods. In addition, we aimed to characterise the ripening stages of four of the hybrids.

## 2. Material and Methods

### 2.1. Plant Material

The plants were cultivated with conventional horticultural practices in the experimental field of Szent István University, Gödöllő, Hungary. Bandai F1 (Bandai) and Beibeihong 695 F1 (Beibeihong), which belong to Capsicum frutescens, and Lolo 736 F1 (Lolo) and Chili 3735 F1 (C3735), which belong to Capsicum annuum, were all purchased from the East-West Seeds Company, Thailand, while Star Flame and Fire Flame (both Capsicum annuum) were purchased from Seminis, Hungary. The pods of Bandai, Beibeihong, Lolo, C3735, and Fire Flame are red when fully ripe, while Star Flame has vivid yellow pods. Peppers with an intermediate pungency level were selected for the investigation because they can be utilized in multiple ways. Little data on the peppers involved in the present study is available for breeders and growers, which makes them important for research work. Star Flame and Fire Flame are commercially available in certain European countries but not yet in Hungary.

### 2.2. Capsaicinoid Determination

The determination of capsaicinoid content followed the method of Daood et al. [27]. Three grams of well-blended pepper sample were crushed in a crucible mortar with quartz sand. To the macerate, 50 mL of methanol (analytical grade) was added, and the mixture was then transferred to a 100 mL Erlenmeyer flask. The mixture was subjected to 4 min of ultrasonication (Raypa, Turkey) and then filtered through filter paper (Munktell, Germany). The filtrate was further purified by passing it through a 0.45 μm PTFE syringe filter before injection onto the HPLC column. After suitable dilution, the extract was injected onto a Nucleodur C18, Cross-Linked column (ISIS, from Macherey-Nagel, Düren, Germany). The separation was performed with isocratic elution of 50 : 50 water-acetonitrile at a flow rate of 0.8 mL/min. Fluorometric detection of capsaicinoids was carried out at EX: 280 nm and EM: 320 nm. Peaks corresponding to different capsaicinoids were identified by comparing the retention times and mass data (Daood et al. [27]) of standard material (purified from pungent red pepper, with 99% purity, by Plantakem Ltd., Sándorfalva, Hungary) with those appearing on the chromatograms of the samples. The capsaicinoid compounds are referred to as follows: nordihydrocapsaicin (NDC), capsaicin (CAP), dihydrocapsaicin (DC), homocapsaicin 1-2 (HCAP1-2), and homodihydrocapsaicin 1-2 (HDC1-2). The Scoville heat unit (SHU) value was calculated by the following formula:

(1) SHU = CAP × 16.1 + DC × 16.1 + NDC × 9.3 + (HCAP1 + HCAP2) × 8.6.

All variables are expressed on a μg/g dry weight basis [28].
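A minimal sketch of how equation (1) is evaluated follows; the function name and the example concentrations are illustrative, and the inputs must be on a μg/g dry weight basis, as stated above.

```python
# Minimal sketch of the Scoville heat unit calculation in equation (1).
# Example concentrations are hypothetical, in ug/g dry weight.

def scoville_heat_units(cap, dc, ndc, hcap1, hcap2):
    """SHU = CAP*16.1 + DC*16.1 + NDC*9.3 + (HCAP1 + HCAP2)*8.6."""
    return cap * 16.1 + dc * 16.1 + ndc * 9.3 + (hcap1 + hcap2) * 8.6

print(scoville_heat_units(cap=1000.0, dc=500.0, ndc=100.0, hcap1=10.0, hcap2=5.0))
# 16.1*1500 + 9.3*100 + 8.6*15 = 25209.0 SHU
```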
### 2.3. Polyphenol Determination

Five grams of well-blended pepper sample were placed into an Erlenmeyer flask, 10 mL of distilled water was added, and the mixture was subjected to ultrasonication in an ultrasonic bath for 30 s. Then, 15 mL of 2% acetic acid in methanol was added to the mixture, which was shaken with a mechanical shaker for 15 min. The mixture was then kept overnight at 4°C. The next day, the mixtures were filtered, and the filtrates were further cleaned by passing them through a 0.45 μm PTFE HPLC syringe filter. This was followed by injection onto the HPLC column for the analysis of phenols. A Nucleosil C18, 100, Protect-1 column (Macherey-Nagel, Düren, Germany), 3 μm, 150 × 4.6 mm, was used. Gradient elution was performed using 1% formic acid in water (A) and acetonitrile (B) at a flow rate of 0.6 mL/min. The gradient started with 98% A and 2% B, changed in 10 min to 87% A and 13% B, in 5 min to 75% A and 25% B, and then in 15 min to 60% A and 40% B; finally, it returned in 7 min to 98% A and 2% B. The peaks that appeared on the chromatogram were identified by comparing their retention times and spectral characteristics with available standards such as catechin, quercetin-3-glucoside, kaempferol, luteolin-glucoside, and naringenin-glucoside (Sigma-Aldrich Ltd., Hungary). Phenol components with absorption maxima at 280 nm were quantified as catechin equivalents, and flavonoids were quantified as kaempferol equivalents at 350 nm [29, 30]. The standard materials were injected singly as external standards as well as cochromatographed (spiked) with the samples.

### 2.4. Ascorbic Acid Determination

Five grams of well-homogenised sample were disrupted in a crucible mortar with quartz sand. To the macerate, 50 mL of metaphosphoric acid (analytical grade) was gradually added, and the mixture was then transferred to a 100 mL Erlenmeyer flask closed with a stopper and then filtered. The filtrate was additionally purified by passing it through a 0.45 μm PTFE syringe filter before injection onto the HPLC column. The analytical determination of ascorbic acid was performed on a C18 Nautilus, 100-5, 150 × 4.6 mm column (Macherey-Nagel, Düren, Germany) with gradient elution of 0.01 M KH2PO4 (A) and acetonitrile (B). The gradient elution started with 1% B in A and changed to 30% B in A in 15 min; it then returned to 1% B in A in 5 min. The flow rate was 0.7 mL/min. The absorption maximum of ascorbic acid under these conditions was detected at 265 nm. For the quantitative determination of ascorbic acid, standard materials (Sigma-Aldrich, Budapest, Hungary) were used. Stock solutions and then working solutions were prepared for each compound to establish the calibration between concentration and peak area.

### 2.5. HPLC Apparatus

A Hitachi Chromaster HPLC instrument, which consists of a Model 5110 Pump, a Model 5210 Auto Sampler, a Model 5430 Diode Array detector, and a Model 5440 Fluorescence detector, was used for the determination of all compounds.

### 2.6. Validation of Applied Methods

Since the methods used in the different chromatographic determinations are derived from the literature (validated protocols), we only measured the limits of detection (LOD) and quantification (LOQ) and the linearity curves of the different compounds under the conditions of our laboratory. The LOD and LOQ were calculated from standard solutions and samples as the analyte concentrations giving signal-to-noise ratios of 3 and 10, respectively. Linearity curves were made by plotting concentration in μg/mL against peak area.
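The validation quantities of Section 2.6 reduce to simple computations once a calibration series and an estimate of the baseline noise are available. The Python sketch below illustrates them under the stated definitions (signal-to-noise of 3 for LOD and 10 for LOQ); all numbers in it are illustrative, not measured values.

```python
# Sketch of the Section 2.6 validation computations under the stated
# definitions. The calibration series and noise level are invented.
import numpy as np

# Linearity: fit peak area against standard concentration (ug/mL).
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # standard series
area = np.array([0.0, 3.1, 6.4, 9.6, 12.9, 16.3])       # measured peak areas
slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"y = {slope:.4f}x + {intercept:.4f}, R^2 = {r2:.3f}")

# LOD/LOQ: the concentration at which the signal reaches 3x (LOD) or
# 10x (LOQ) the baseline noise, converted through the calibration slope.
noise = 0.05                      # baseline noise, in peak-area units
lod = 3 * noise / slope
loq = 10 * noise / slope
print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```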
### 2.7. Dry Matter Determination

Three grams of fresh pepper sample were dried at 65°C until constant weight. The dry matter content was calculated as the ratio of dried to fresh fruit weight.

### 2.8. Statistical Analyses

For each dependent variable, a one-way linear model (LM) was fitted, with "ripening stage" set as the explanatory (factor) variable. Prior to model fitting, the assumptions were checked by plot diagnosis. In the analysis of the major compounds (SHU, CAP, naringenin-diglucoside, ascorbic acid, and dry matter) among the six hybrids, another LM was fitted, with "hybrid" set as the explanatory (factor) variable. Post hoc comparisons were made with the Tukey HSD test. All statistical analyses were performed in IBM SPSS 22 software (IBM Co., USA) and Microsoft Excel (Microsoft Co., USA). α was set at 0.05 in the entire study.
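The analysis of Section 2.8 was run in SPSS and Excel; as a point of reference, an equivalent one-way linear model with a Tukey HSD post hoc test can be sketched in Python with statsmodels, as below. The capsaicin-like values and group labels are invented for illustration only.

```python
# Sketch of the Section 2.8 workflow: one-way LM with "ripening stage"
# as the factor, then a Tukey HSD post hoc test. Data are illustrative.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "stage": ["green"] * 3 + ["breaker"] * 3 + ["red"] * 3,
    "cap": [259.6, 298.2, 221.0, 168.9, 202.7, 135.1, 126.3, 162.2, 90.4],
})

model = smf.ols("cap ~ C(stage)", data=df).fit()  # one-way linear model
print(sm.stats.anova_lm(model, typ=1))            # F-value and p value

# Pairwise post hoc comparison at alpha = 0.05
print(pairwise_tukeyhsd(df["cap"], df["stage"], alpha=0.05))
```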
## 3. Results and Discussion

To adapt the applied chromatographic protocols to the conditions of our laboratory, certain parameters such as the LOD, LOQ, and linearity curves were studied. The values depicted in Table 1 show that the methods used are accurate enough to carry out precise and sensitive determination of polyphenols, capsaicinoids, and ascorbic acid. This statement is based on the low LOQ and LOD levels found for all tested compounds; the concentrations of these compounds in our samples are much higher than the LOQ and LOD levels. Moreover, the values obtained for the regression coefficient indicated that the methods can be applied over a wide range of concentrations for the different compounds in chili samples.

Table 1 Some validation parameters for the HPLC determinations of the major polyphenols, ascorbic acid, and capsaicinoids.

| Compound | LOD (μg/mL) | LOQ (μg/mL) | Linearity range (μg/mL) | Linearity curve | R² |
| --- | --- | --- | --- | --- | --- |
| Catechin | 2.625 | 8.75 | 0–50 | y = 0.331x - 2.5895 | 0.899 |
| Naringenin-diglucoside | 0.0318 | 0.106 | 0–50 | y = 0.3906x - 3.0556 | 0.899 |
| Quercetin-3-glucoside | 1.083 | 3.61 | 0–50 | y = 0.2188x - 0.781 | 0.983 |
| Luteolin-glucoside | 1.018 | 3.39 | 0–50 | y = 0.1912x - 0.381 | 0.979 |
| Kaempferol-derivative | 0.0208 | 0.069 | 0–50 | y = 0.4402x - 3.444 | 0.899 |
| Ascorbic acid | 2.500 | 0.750 | 30–120 | y = 0.2736x - 2.4305 | 0.997 |
| Nordihydrocapsaicin∗ | 0.003 | 0.008 | 0.07–1.1 | y = 2×10⁷x + 3×10⁶ | 0.997 |
| Capsaicin∗ | 0.004 | 0.01 | 0.1–5 | y = 2×10⁷x + 3×10⁶ | 0.998 |
| Dihydrocapsaicin∗ | 0.002 | 0.007 | 0.3–6 | y = 2×10⁷x + 3×10⁶ | 0.998 |

∗From previously published research work on the HPLC determination of capsaicinoids by Daood et al. [27].

### 3.1. Pungency

The major compounds responsible for pungency in our hybrids are NDC, CAP, and DC. Besides, we could identify the homologues of CAP and DC, namely HCAP1, HCAP2 and HDC1, HDC2, respectively (Figure 1). All of them are branched-chain alkyl vanillylamides. Kozukue et al. [31] detected the same 7 compounds, in addition to nonivamide, which is a straight-chain nonanoyl vanillylamide analog of CAP [1]. In Beibeihong, the advance of ripening did not affect the major capsaicinoids (CAP, NDC, and DC; Table 2), while it influenced HCAP1 and HDC1 (both p ≤ 0.032), with a slight decrease from the green to the colour-break stage and then a small increase at the final stage. In Bandai, unlike Beibeihong, ripening affected the major and minor capsaicinoids alike (all p ≤ 0.027). CAP decreased notably between the initial stage and the colour-break stage, and a gradual decrease was measured for DC, NDC, and HDC2.
A steady increase of HDC1 was observed, while HCAP1 showed the same tendency as HCAP1 in Beibeihong.

Table 2 Change in the content of capsaicinoid compounds in chili hybrids as a function of ripening. The values represent means in μg/g fresh weight ± standard deviation (n = 3).

| Hybrid | Ripening stage | NDC (μg/g) | CAP (μg/g) | DC (μg/g) | HCAP1 (μg/g) | HCAP2 (μg/g) | HDC1 (μg/g) | HDC2 (μg/g) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Beibeihong | Green | 51.8 ± 3.90a | 294.5 ± 19.72a | 326.5 ± 51.20a | 3.5 ± 2.01ab | 20.0 ± 2.37a | 10.6 ± 3.01ab | 26.3 ± 2.60a |
| | Colour-breaker | 60.6 ± 15.01a | 254.6 ± 31.90a | 263.4 ± 25.92a | 1.8 ± 0.70a | 25.3 ± 5.13a | 9.0 ± 0.82a | 29.5 ± 6.42a |
| | Orange | 61.7 ± 6.65a | 261.9 ± 26.12a | 269.3 ± 14.06a | 3.6 ± 0.97ab | 28.5 ± 4.64a | 10.7 ± 0.50ab | 26.7 ± 2.09a |
| | Red | 63.2 ± 15.12a | 311.8 ± 63.25a | 272.7 ± 74.99a | 5.7 ± 0.51b | 23.7 ± 3.62a | 13.9 ± 0.5b | 23.9 ± 3.79a |
| | F-value | 0.61 | 1.44 | 1.12 | 5.41 | 2.26 | 4.89 | 0.94 |
| | p value | 0.626 | 0.302 | 0.394 | 0.025 | 0.158 | 0.032 | 0.464 |
| Bandai | Green | 102.9 ± 14.17ab | 1610.2 ± 91.46b | 780 ± 36.03b | 13.1 ± 5.61ab | 8.9 ± 0.99a | 12.8 ± 1.22a | 30.2 ± 2.29ab |
| | Colour-breaker | 102.2 ± 1.21ab | 1182.2 ± 82.56a | 725.2 ± 32.03ab | 6.3 ± 1.83a | 18.5 ± 3.69b | 12.7 ± 0.57a | 31 ± 3.41ab |
| | Orange | 115.6 ± 5.26b | 1104.9 ± 77.27a | 635.2 ± 32.36a | 11.5 ± 0.51ab | 27.2 ± 3.33c | 15 ± 0.46ab | 37.9 ± 3.41b |
| | Red | 81.5 ± 6.91a | 1176.1 ± 112.1a | 600.4 ± 87.11a | 20.1 ± 6.24b | 14.3 ± 2.39ab | 16.3 ± 2.06b | 27.3 ± 3.34a |
| | F-value | 8.59 | 18.93 | 7.40 | 5.27 | 22.70 | 5.95 | 6.10 |
| | p value | 0.007 | 0.001 | 0.011 | 0.027 | <0.001 | 0.020 | 0.018 |
| Lolo | Green | 18.7 ± 2.42a | 222.5 ± 69.33a | 139.2 ± 50.97a | 0.5 ± 0.23b | 0.4 ± 0.08a | 1.8 ± 0.15a | 9.2 ± 1.19a |
| | Colour-breaker | 26.2 ± 3.94a | 95.5 ± 4.15a | 96.7 ± 9.29a | 0.4 ± 0.04b | 2.4 ± 0.77b | 1.9 ± 0.08a | 12.6 ± 1.31a |
| | Red | 22.2 ± 5.14a | 197 ± 92.13a | 119.8 ± 53.03a | 0 ± 0a | 3.2 ± 0.15b | 1.9 ± 0.27a | 12.5 ± 3.27a |
| | F-value | 2.69 | 3.05 | 0.74 | 12.12 | 29.94 | 0.86 | 2.54 |
| | p value | 0.146 | 0.122 | 0.516 | 0.008 | 0.001 | 0.467 | 0.159 |
| C3735 | Green | 31.3 ± 1.46b | 259.6 ± 39.15b | 183.4 ± 23.27b | UDL | 7 ± 0.79a | 1.6 ± 0.21b | 8.2 ± 0.58ab |
| | Colour-breaker | 35.9 ± 1.64b | 168.9 ± 33.86ab | 148.2 ± 24.21b | UDL | 12.6 ± 4.16a | 1.9 ± 0.08b | 9.8 ± 1.01b |
| | Red | 18.2 ± 6.21a | 126.3 ± 35.95a | 88.9 ± 6.38a | UDL | 12.6 ± 3.7a | 1.3 ± 0.07a | 7.2 ± 1.31a |
| | F-value | 17.39 | 10.50 | 17.58 | — | 3.00 | 15.03 | 5.00 |
| | p value | 0.003 | 0.011 | 0.003 | — | 0.125 | 0.005 | 0.053 |
| Fire Flame | Red | 15.5 ± 3.28 | 234.3 ± 45.23 | 109.7 ± 19.9 | 1.2 ± 0.13 | 1 ± 0.23 | 0.9 ± 0.13 | 5.9 ± 1.24 |
| Star Flame | Yellow | 21.9 ± 5.36 | 440.8 ± 17.22 | 135.9 ± 20.28 | 2.5 ± 0.08 | 0.2 ± 0.08 | 1.4 ± 0.25 | 6.4 ± 1.10 |

The same letter indicates no significant difference in capsaicinoid content between ripening stages in the given hybrid according to the Tukey HSD post hoc test; UDL: under detection limit.

Figure 1 HPLC profile of capsaicinoid components separated from the red stage of the Bandai hybrid using a cross-linked C18 column with acetonitrile-water elution and fluorescence detection. 1: NDC, 2: CAP, 3: DC, 4: HCAP1, 5: HCAP2, 6: HDC1, and 7: HDC2. For more information, see the text.

Focusing on the major capsaicinoid compounds, the Bandai hybrid could be characterised by pungency loss, while in Beibeihong those compounds did not change during ripening. In the study by Gnayfeed et al. [12], CAP reached its highest value in the F-03 cultivar (C. annuum) at the initial green stage, similarly to what was found in Bandai, but its content in F-03 did not change significantly with ripening. The obtained results suggest that even within the same species (C. frutescens), the hybrids have different ripening characteristics regarding capsaicinoid contents. This is in accordance with the findings of Merken and Beecher [30], who measured the maximal capsaicinoid content in 3 different C. frutescens peppers at 3 different times after flower budding.
In Lolo, ripening affected CAP slightly but not significantly, while it increased HCAP2 (p = 0.001). After the colour-break stage, the amount of HCAP1 decreased (p = 0.008) to an undetectable level. In C3735, ripening decreased NDC, CAP, DC, and HDC1 (all p ≤ 0.011), and HDC2 decreased marginally (p = 0.053), while HCAP1 was absent or under the detection limit at all ripening stages. Therefore, most of the compounds showed a decreasing tendency during the ripening of C3735, so a remarkable pungency loss was observed. On the contrary, those compounds remained unchanged in Lolo. Iwai et al. [32] found that capsaicinoid content peaked 40 days after flowering and then gradually decreased in a C. annuum pepper. Because of the different scale used by Iwai et al. [32], a direct comparison with our data is difficult, but the 40 days after flowering probably corresponds roughly to the green stage we used. Gnayfeed et al. [12] observed in C. annuum cultivars that capsaicinoids reached their maximum level at the colour-break stage and then started declining in Hungarian spice pepper (C. annuum), a characteristic of pungency change that we did not observe. The change in capsaicin content during the ripening of pepper may relate to the activity of some enzymes that interfere with the ripening dynamics. The amount of capsaicinoids has been investigated in relation to several enzymes [10, 33, 34]. Contreras-Padilla and Yahia [10] showed that peroxidase activity started increasing when the amount of capsaicinoids started to decrease in Habanero and de Arbol, while in Piquin it began to increase before the decrease of capsaicinoids. They concluded that the peroxidase enzyme is involved in capsaicinoid degradation and that this attribute is a genotypic characteristic. Iwai et al. [32] found higher phenylalanine ammonia-lyase activity in the green stage than in the red stage. In addition, Bernal et al. [33] observed that the operation of the capsaicinoid synthetase enzyme is more influenced by the availability of precursors and the forming conditions than by its substrate specificity. The capsaicinoid composition and content are the result of the enzymes referred to above. A study concerning the maturation of Habanero (C. chinense) showed that green pods contain four times less capsaicin than ripe red ones [13], while we found a smaller difference and even more capsaicin in the green stage (e.g., Bandai); however, none of our investigated hybrids belong to C. chinense.
They also reported that the DC content is seven times lower in green pods compared to red ones, while we found only a slight decrease of DC between the green and red stages.

### 3.2. Polyphenols

Since no standards for myricetin and vanillic acid were available in our laboratory, these compounds were tentatively identified by comparing their spectral characteristics and retention behaviour on the HPLC column with those found in the literature. Due to their high content, the vanillic acid derivative, catechin, and naringenin-diglucoside were found to be the dominant polyphenols; these compounds have absorption maxima at 280 nm (Figure 2). The minor compounds were luteolin-rutinoside, quercetin-glucoside, quercetin-glycosides, myricetin, and the kaempferol derivative, all detected at their absorption maxima of 350 nm; luteolin-glucoside, which occurs in a higher concentration, is also detected at 350 nm (Figure 3).

Figure 2 HPLC profile of polyphenols separated on a Protect-1 C18 column and detected at 280 nm. 1: vanillic acid derivative, 2: catechin, and 3: naringenin-diglucoside.

Figure 3 HPLC profile of polyphenols separated on a Protect-1 C18 column and detected at 350 nm. 4: luteolin-rutinoside, 5: quercetin-glucoside, 6: quercetin-glycosides (the sum of these compounds is used in Table 3), 7: luteolin-glucoside, 8: myricetin, and 9: kaempferol derivative.

Table 3 Change in the content of polyphenol compounds in different chili hybrids as a function of ripening. The values represent means in μg/g fresh weight ± standard deviation (n = 3).

| Hybrid | Ripening stage | Vanillic acid derivative (μg/g) | Catechin (μg/g) | Naringenin-diglucoside (μg/g) | Luteolin-rutinoside (μg/g) | Quercetin-glucoside (μg/g) | Quercetin-glycosides (μg/g) | Luteolin-glucoside (μg/g) | Myricetin (μg/g) | Kaempferol derivative (μg/g) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Beibeihong | Green | 109.5 ± 9.84b | 50.4 ± 1.86a | 349.5 ± 13.09a | 11.3 ± 0.53a | 12.5 ± 2.07a | 3.7 ± 0.14a | 62.9 ± 2.78a | 10.3 ± 0.74a | 22.6 ± 1.13a |
| | Colour-breaker | 145.7 ± 9.71c | 135.3 ± 3.97b | 477.4 ± 52.69b | 8.5 ± 0.73a | 13.4 ± 1.19a | 4.7 ± 0.12ab | 79 ± 2.78a | 23.5 ± 1.13b | 67.3 ± 5.26c |
| | Orange | 114.9 ± 9.66b | 153.5 ± 5.56c | 431.3 ± 39.72ab | 9.7 ± 0.67a | 9.1 ± 0.07a | 4.9 ± 0.62b | 84.6 ± 13.18a | 24.9 ± 0.46b | 69.7 ± 3.86c |
| | Red | 79.2 ± 11.08a | 132.7 ± 3.85b | 368.8 ± 30.77a | 17.0 ± 3.37b | 21.1 ± 2.69b | 5.5 ± 0.68b | 90.1 ± 16.97a | 31.9 ± 6.77b | 51.1 ± 5.22b |
| | F-value | 21.88 | 390.61 | 7.54 | 13.47 | 23.64 | 7.62 | 3.45 | 20.45 | 79.18 |
| | p value | <0.001 | <0.001 | 0.01 | 0.02 | <0.001 | 0.01 | 0.071 | <0.001 | <0.001 |
| Bandai | Green | 96.3 ± 0.25a | 96.9 ± 7.67a | 130.3 ± 4.82a | 13.2 ± 1.06a | 4.6 ± 0.66a | 5.6 ± 0.24a | 84.6 ± 2.37a | 25.7 ± 1.58a | 62.8 ± 3.33a |
| | Colour-breaker | 89.7 ± 7.98a | 134.6 ± 17.02b | 202 ± 17.77b | 15.1 ± 3.96a | 14.2 ± 0.61ab | 7.8 ± 0.91ab | 91.4 ± 13.75a | 117.1 ± 8.21b | 289.5 ± 45.59bc |
| | Orange | 92 ± 14.56a | 166.4 ± 17.16b | 254.2 ± 38.95b | 14.8 ± 3.68a | 17.4 ± 2.99c | 10.2 ± 1.38bc | 107.3 ± 23.49a | 110.4 ± 19.41b | 307.8 ± 53.01c |
| | Red | 101.6 ± 5.67a | 175.2 ± 12.32c | 276.5 ± 16.65c | 21.6 ± 10.69a | 12.8 ± 0.97b | 11.9 ± 1.15c | 157.8 ± 15.26b | 52.9 ± 12.94a | 200.7 ± 20.83b |
| | F-value | 1.05 | 19.03 | 23.73 | 1.15 | 33.24 | 22.39 | 13.41 | 38.70 | 28.15 |
| | p value | 0.42 | <0.001 | <0.001 | 0.385 | <0.001 | <0.001 | 0.002 | <0.001 | <0.001 |
| Lolo | Green | 72.2 ± 0.85c | 45 ± 2.71a | 116.8 ± 7.28a | 2.1 ± 0.49a | 5.7 ± 0.83b | 5 ± 0.78b | 25.7 ± 5.05a | 2 ± 0.51a | UDL |
| | Colour-breaker | 50.3 ± 2.85a | 51.4 ± 3.09a | 160.8 ± 14.93b | 3.2 ± 0.36a | 2.8 ± 0.08a | 2.9 ± 1.03a | 53.4 ± 4.52b | 4.8 ± 0.62b | UDL |
| | Red | 64.2 ± 4.15b | 171.5 ± 6.14b | 117.8 ± 7.18a | 7.3 ± 1.22b | 6.1 ± 1.28b | 4.5 ± 0.28ab | 75.9 ± 2.48c | 8.9 ± 0.57c | 9.1 ± 3.77 |
| | F-value | 42.59 | 836.79 | 17.35 | 37.55 | 12.72 | 6.49 | 108.95 | 113.35 | — |
| | p value | <0.001 | <0.001 | 0.003 | <0.001 | 0.007 | 0.032 | <0.001 | <0.001 | — |
| C3735 | Green | 73.1 ± 5.46b | 22.6 ± 1.28a | 123.6 ± 6.23a | 2 ± 0.7a | 1.7 ± 0.39a | 16.8 ± 2.38a | 17 ± 2.57a | 3.8 ± 0.96a | UDL |
| | Colour-breaker | 51 ± 8.55a | 64.2 ± 10.49b | 148.3 ± 40.01ab | 1.5 ± 0.37a | 1.3 ± 0.24a | 17.3 ± 3.44a | 13.5 ± 2.66a | 10 ± 1.09b | 21 ± 1.71a |
| | Red | 56 ± 6.67a | 124.5 ± 6.18c | 217.2 ± 8.36b | UDL | 1.1 ± 0.23a | 19.8 ± 2.55a | 17.5 ± 2.78a | 11.3 ± 0.94b | 16.4 ± 2.85a |
| | F-value | 13.06 | 157.47 | 8.32 | 0.44 | 2.84 | 0.95 | 1.99 | 48.91 | 6.007 |
| | p value | 0.007 | <0.001 | 0.019 | 0.661 | 0.135 | 0.44 | 0.12 | <0.001 | 0.070 |
| Fire Flame | Red | 27.8 ± 2.07 | 26.6 ± 1.40 | 141.6 ± 4.17 | 2.8 ± 0.27 | 2.7 ± 0.16 | 4.2 ± 0.22 | 44.4 ± 2.76 | 18.2 ± 0.43 | UDL |
| Star Flame | Yellow | 24.2 ± 1.37 | 17.6 ± 0.24 | 93.5 ± 4.33 | 9.5 ± 0.47 | 2.0 ± 0.14 | 4.1 ± 0.62 | 60.6 ± 1.34 | 18.4 ± 1.29 | UDL |

The same letter indicates no significant difference in polyphenol content between ripening stages in the given hybrid according to the Tukey HSD post hoc test; UDL: under detection limit.
In Beibeihong, ripening increased catechin, luteolin-rutinoside, the quercetin compounds, myricetin, and the kaempferol derivative (all p ≤ 0.02; Table 3), while it decreased the vanillic acid content (p < 0.001). In Bandai, ripening increased all compounds (all p ≤ 0.002) except vanillic acid and luteolin-rutinoside, which statistically remained unchanged across the ripening stages. For quercetin-glucoside, myricetin, and the kaempferol derivative, the highest values were measured in the middle of ripening. Most of the studies regarding the polyphenol constitution of pungent pepper focus on the green (initial) and red (final) ripening stages but omit the intermediate or colour-break stage. Howard et al. [20] found that quercetin decreased, while luteolin did not change, with the ripening of Tabasco (C. frutescens). On the contrary, we found an increase of quercetin-related compounds in both C. frutescens hybrids and also an increase of luteolin-rutinoside in Beibeihong and of luteolin-glucoside in Bandai. In Lolo, ripening significantly decreased vanillic acid (p < 0.001) but increased catechin, luteolin-rutinoside, luteolin-glucoside, and myricetin (all p < 0.001). In C3735, vanillic acid decreased (p = 0.007), while catechin, naringenin-diglucoside, and myricetin increased (all p ≤ 0.019). Howard et al. stated that quercetin had either an increasing or a decreasing tendency depending on the cultivar, and that no change was observed during the maturity stages of certain C. annuum cultivars. We could only confirm the last statement, in that none of the quercetin-related compounds changed when the pods turned from green to red in the C. annuum peppers studied. According to Materska and Perucka [19], the most abundant flavonoid compounds in the green stage were quercetin-3-O-L-rhamnoside and luteolin-related compounds, and with ripening those phytochemicals decreased. In the present work, the red stage of Lolo in particular contained higher amounts of luteolin-related compounds, while no change in quercetin-glycoside content was detected in either C. annuum hybrid. The disappearance of flavonoids parallels capsaicinoid accumulation [35], because the synthesis of flavonoids may converge with the capsaicinoid pathways [36]. The only nonflavonoid phenolic acid detected in our peppers is vanillic acid, and it is the only polyphenol compound that decreased or stayed unchanged during ripening, while the flavonoids mostly increased with the advance of ripening. At the same time, the major capsaicinoids generally decreased or did not change with ripening. Kawada and Iwai [37] found a direct relation between DC and vanillic acid: they fed rats with DC and then detected vanillic acid in a notable amount in the urine of the rats. This experiment also supports our finding that vanillic acid is related to capsaicinoids and shows similar dynamics during the ripening of pungent pepper. According to Tsao [14], flavonols (kaempferol, quercetin, and myricetin) contain highly conjugated bonds and a 3-hydroxy group, features considered very important for high antioxidant activity. In our hybrids, the highest levels of the latter flavonoids were obtained at the orange or red stage, which gives the pepper a higher nutritive value.

### 3.3. Ascorbic Acid

With the applied HPLC method, only L-ascorbic acid was found in the extracts of all hybrids (Figure 4). Ascorbic acid increased during ripening in all hybrids (p ≤ 0.001; Table 4).
In Beibeihong and Bandai, a more notable increase was observed after the green stage than after the colour-break stage, where ascorbic acid increased gradually; in Bandai, the average ascorbic acid content at the red stage was lower than at the orange stage. In Lolo, the green and colour-break stages did not differ significantly, while the red stage contained the most. In C3735, a steady increase was observed. The increasing tendency in the investigated hybrids is in accordance with previous works [12, 20, 24, 25], which concluded that the more ripened the pods were, the more ascorbic acid could be measured from them. With ripening, the pepper pods store more reducing sugars [36], which are the precursors of L-ascorbic acid [38], and that explains the increasing vitamin C content with ripening in all hybrids included in our study. On the contrary, Shaha et al. [18] showed different dynamics of ascorbic acid accumulation: they found the highest level at the yellow (intermediate) stage and a declining level at the red mature stage. That agrees with our finding in Bandai, where the highest average values were observed at the orange (1005.2 ± 100.73 μg/g) or colour-break stage (937.9 ± 78.04 μg/g), although these are not significantly higher than that determined at the red stage (787.4 ± 131.21 μg/g), probably also due to the high standard deviation present at the red stage.

Table 4 Change in the content of ascorbic acid in different chili hybrids as a function of ripening. The values represent means in μg/g fresh weight ± standard deviation (n = 3).

| Hybrid | Ripening stage | Ascorbic acid (μg/g) |
| --- | --- | --- |
| Beibeihong | Green | 355 ± 64.85a |
| | Colour-breaker | 1503.4 ± 358.31b |
| | Orange | 2085.7 ± 252.2bc |
| | Red | 2483.8 ± 570.74c |
| | F-value | 19.74 |
| | p value | <0.001 |
| Bandai | Green | 329.5 ± 58.88a |
| | Colour-breaker | 937.9 ± 78.04b |
| | Orange | 1005.2 ± 100.73b |
| | Red | 787.4 ± 131.21b |
| | F-value | 30.09 |
| | p value | <0.001 |
| Lolo | Green | 111.3 ± 14.01a |
| | Colour-breaker | 451.5 ± 115.56a |
| | Red | 1940.9 ± 533.57b |
| | F-value | 28.57 |
| | p value | 0.001 |
| C3735 | Green | 315.1 ± 59.91a |
| | Colour-breaker | 1522.5 ± 127.47b |
| | Red | 2468.2 ± 58.93c |
| | F-value | 449.65 |
| | p value | <0.001 |
| Fire Flame | Red | 3689.4 ± 39.50 |
| Star Flame | Yellow | 3154.8 ± 160.61 |

The same letter indicates no significant difference in ascorbic acid content between ripening stages in the given hybrid according to the Tukey HSD post hoc test.

Figure 4 HPLC profile of the vitamin C determination. The separation was performed on a C18 Nautilus column with PDA detection at 244 nm. 1: L-ascorbic acid.

The recommended daily allowance (RDA) of vitamin C is 60 mg; according to Dias [39], 100 g of fresh chili provides about 143.7 mg of vitamin C. Of the hybrids in the present study, all failed to reach this value at the green stage, while Beibeihong and C3735 reached it at the colour-break stage, and finally, at the red stage, all of them achieved the RDA.
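As a rough worked example (using the 60 mg RDA above and the red-stage fresh-weight concentrations reported in Table 4), the mass of fresh pods needed to supply the RDA can be computed directly; the short Python sketch below is illustrative only.

```python
# Worked example: grams of fresh pod needed to meet a 60 mg vitamin C RDA,
# given ascorbic acid concentrations in ug/g fresh weight (red-stage values
# from Table 4 above).
RDA_UG = 60_000  # 60 mg expressed in micrograms

ripe_ascorbic_ug_per_g = {
    "Beibeihong (red)": 2483.8,
    "Bandai (red)": 787.4,
    "Fire Flame (red)": 3689.4,
}

for hybrid, conc in ripe_ascorbic_ug_per_g.items():
    print(f"{hybrid}: {RDA_UG / conc:.0f} g of fresh pods")
```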
### 3.4. Comparison of Major Compounds among the 6 Hybrids

The comparison among the hybrids was made on the main parameters, CAP, ascorbic acid, naringenin-diglucoside, Scoville heat units, and dry matter (Table 5), at the final ripening stage of the hybrids, which is generally considered the most valuable for nutrition and to have the most processing possibilities. A higher dry matter content signifies a better fruit quality and also a higher nutritional concentration when a fresh weight basis is used to express nutritional parameters. We measured 25–30% dry matter content in C. frutescens, which produces more seeds and smaller pods, while in the peppers belonging to C. annuum this value decreases to 14.1–15.8%.

Table 5 Capsaicin, ascorbic acid, and naringenin-diglucoside content (μg/g fresh weight), Scoville pungency units, and dry matter of different chili hybrids. The values represent means ± standard deviation (n = 3).

| Hybrid | Capsaicin (μg/g) | Scoville heat unit | Ascorbic acid (μg/g) | Naringenin-diglucoside (μg/g) | Dry matter (%) |
| --- | --- | --- | --- | --- | --- |
| Beibeihong | 311.8 ± 63.25ab | 37999.8 ± 5761.66a | 2483.8 ± 570.74bc | 368.8 ± 30.77f | 25.8 ± 0.82d |
| Bandai | 1176.1 ± 112.1c | 98090.8 ± 9920.74c | 787.4 ± 131.21a | 276.5 ± 16.65e | 30.2 ± 0.41c |
| Lolo | 197 ± 92.13a | 33188.2 ± 5229.83a | 1940.9 ± 533.57b | 117.8 ± 7.18ab | 14.0 ± 0.61a |
| C3735 | 126.3 ± 35.95a | 23730.9 ± 3174.95a | 2468.2 ± 58.93bc | 217.2 ± 8.36d | 15.8 ± 0.93b |
| Fire Flame | 234.3 ± 45.23a | 40417.3 ± 7830.33a | 3689.4 ± 160.61d | 141.6 ± 4.19c | 14.1 ± 0.34ab |
| Star Flame | 440.8 ± 17.22b | 66201.2 ± 7132.51b | 3154.8 ± 160.61cd | 93.5 ± 4.26a | 14.4 ± 0.58ab |
| F-value | 94.64 | 48.43 | 27.62 | 146.17 | 357.48 |
| p value | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |

The same letter indicates no significant difference in the major components between the fully ripe stages of the 6 hybrids according to the Tukey HSD post hoc test.

The CAP content was found to be statistically the same in all red-coloured C. annuum hybrids, while the yellow hybrid Star Flame (440.8 ± 17.22 μg/g) contained more, and Bandai (1176.1 ± 112.1 μg/g) the most (p < 0.001). Our findings roughly agree with the result of Sanatombi and Sharma [40], who showed that cultivars belonging to C. annuum contain less capsaicin than those of Capsicum frutescens. Beibeihong was an exception, because it statistically contained the same amount as the C. annuum hybrids. Focusing on the Scoville heat units, the highest CAP value in Bandai corresponds to the highest SHU (98090.8 ± 9920.74) observed among the hybrids investigated. Bernal et al. [33] measured 87300–276500 SHU in ripe C. frutescens peppers, and the value found in the Bandai hybrid was close to the lower end of that range. Among the C. annuum hybrids, Star Flame was found to be a prominent pepper regarding SHU (66201.2 ± 7132.51) compared to the measurements of Topuz and Ozdemir [41] (9720 ± 2061.8) and Giuffrida et al. [42] (21034 ± 3579). The pungency profiles of Beibeihong and Bandai had not been investigated before. Compared to Tabasco (also belonging to C. frutescens), the SHU measured by Giuffrida et al. [42] (21348 ± 867) is below our values for the latter hybrids, although the CAP content determined by Giuffrida et al. [42] (917 ± 34 μg/g) is between the values measured in Bandai (1176.1 ± 112.1 μg/g) and Beibeihong (311.8 ± 63.25 μg/g). Interestingly, while the Bandai hybrid had the highest CAP content, it also had the lowest ascorbic acid content. Topuz and Ozdemir [41] described a positive relation between the ascorbic acid and capsaicinoid content of pungent peppers, which we could not confirm in the case of Bandai. The highest ascorbic acid content was measured in ripe Fire Flame (3689.4 ± 160.61 μg/g); this value is well above that measured in Hungarian spice pepper (approximately 1800 μg/g converted to a fresh weight basis [12]) and higher than that detected in New Mexican-type chili peppers (2766 μg/g [25]). Naringenin-diglucoside content ranged from 93.5 ± 4.26 to 368.8 ± 30.77 μg/g and had higher values in the C. frutescens hybrids than in the C. annuum hybrids, probably because of the higher dry matter content of such peppers. Naringenin (a flavanone) is an initial compound in the chain of flavonoid synthesis [14], which explains the high content present in our samples. Other studies have also found naringenin-glucosides to be a dominant flavonoid in the peel of pungent pepper [43] and in sweet pepper alike [44].
ripening stages in the given hybrid according to Tukey HSD post hoc test; UDL: under detection limit.Figure 1 HPLC profile of capsaicinoid components separated from red stage of Bandai hybrid using cross-linked C18 column with acetonitrile-water elution and fluorescence detection. 1: NDC, 2: CAP, 3: DC, 4: HCAP1, 5: HCAP2, 6: HDC1, and 7: HDC2. For more information see text.Focusing on the major compounds of capsaicinoids, Bandai hybrid could be characterised with pungency loss, while in Beibeihong those compounds did not change during ripening. In the study by Gnayfeed et al. [12] CAP reached the highest value in F-03 cultivar (C. annuum) at the initial green stage, similarly found in Bandai, but its content in F-03 did not change significantly with ripening. The obtained results suggest even in the same species (C. frutescens) that the hybrids have a different characteristic in ripening regarding capsaicinoid contents. It is in accordance with findings of Merken and Beecher [30] who also measured the maximal capsaicinoid content in 3 differentC. frutescens peppers in 3 variant times after flower budding.In Lolo the ripening slightly affected but not significantly CAP, while it increased HCAP2 (p=0.001). After the colour-break stage the amount of HCAP1 decreased (p=0.008) to undetectable level. In C3735 ripening decreased NDC, CAP, DC, HDC1 (all p≤0.011), and nonmarginally HDC2, while HCAP1 was absent or under detection limit at all ripening stages. Therefore, most of the compounds showed a decreasing tendency during ripening of C3735, so a remarkable pungency loss was observed. On the contrary, those compounds remained unchanged in Lolo.Iwai found the peak 40 days after flowering and then a gradual decrease of capsaicinoid content in aC. annuum pepper. Because of the different scale used by Iwai et al. [32], it is difficult to compare to our data, but probably the 40 days after flowering is roughly equal to the green stage we used. Gnayfeed et al. [12] observed inC. annuum cultivars that capsaicinoids reached maximum level at the colour-break stage and then started declining in Hungarian spice pepper (C. annuum), which is a characteristic of pungency change that we did not observe. The change in capsaicin content during ripening of pepper may relate to activity of some enzymes that interfere in the ripening dynamics. The amount of capsaicinoids has been investigated in relation with several enzymes [10, 33, 34]. Contreras-Padilla and Yahia [10] showed that peroxidase activity started increasing, when the amount of capsaicinoid started to decrease in Habanero and de Arbol, while in Piquin it began to increase before the decrease of capsaicinoid. They concluded that peroxidase enzyme is involved in capsaicinoid degradation and that attribution is a genotypic characteristic. Iwai et al. [32] found higher phenylalanine ammonia-lyase activity in green stage than in red stage. In addition, Bernal et al. [33] observed that the operation of capsaicinoid synthetase enzyme is more influenced by the availability of precursors and the conditions of forming than its substrate specificity. The capsaicinoid composition and content are the result of the above referred enzymes.A study concerning the maturation of Habanero (C. chinense) proved that green pod contains four times less capsaicin than ripe red ones [13], while we found less difference and even more capsaicin in green stage (e.g., Bandai); however, none of our investigated hybrids belong toC. chinense. 
They also reported that DC content is seven times less in green pods as compared to red ones, while we found only a slight decrease of DC between the green and red stages. ## 3.2. Polyphenols Since there is no available standard for myricetin and vanillic acid in our laboratory, they were tentatively identified based on comparison of their spectral characteristics and retention behaviour on the HPLC column with those found in the literature.Due to the high content of vanillic acid-derivative, catechin, and naringenin-diglucoside, those compounds were found to be the dominant polyphenols, which have maxima absorption at 280 nm (Figure2). The minor compounds were luteolin-rutinoside, quercetin-glucoside, quercetin-glycosides, myricetin, and kaempferol-derivative; all were detected with maxima absorption at 350 nm and also luteolin-glucoside occurs in higher concentration and is detected at 350 nm (Figure 3).Figure 2 HPLC profile of polyphenols detected separated on Protect-1 C18 column and detected at 280 nm. 1: vanillic acid-derivative, 2: catechin, and 3: naringenin-diglucoside.Figure 3 HPLC profile of polyphenols detected separated on Protect-1 C18 column and detected at 350 nm. 4: luteolin-rutinoside, 5: quercetin-glucoside, 6: quercetin-glycosides (the sum of these compounds is used in Table3), 7: luteolin-glucoside, 8: myricetin, and 9: kaempferol-derivative.Table 3 Change in content of polyphenol compounds in different chili hybrids as a function of ripening. The values represent means inμg/g fresh weight base ± standard deviation (n=3). Hybrid Ripening stage Vanillic acid-derivative (μg/g) Catechin (μg/g) Naringenin-diglucoside (μg/g) Luteolin-rutinoside(μg/g) Quercetin-glucoside (μg/g) Quercetin-glycosides(μg/g) Luteolin-glucoside (μg/g) Myricetin (μg/g) Kaempferol-derivative(μg/g) Beibeihong Green 109.5 ± 9.84b 50.4 ± 1.86a 349.5 ± 13.09a 11.3 ± 0.53a 12.5 ± 2.07a 3.7 ± 0.14a 62.9 ± 2.78a 10.3 ± 0.74a 22.6 ± 1.13a Colour-breaker 145.7 ± 9.71c 135.3 ± 3.97b 477.4 ± 52.69b 8.5 ± 0.73a 13.4 ± 1.19a 4.7 ± 0.12ab 79 ± 2.78a 23.5 ± 1.13b 67.3 ± 5.26c Orange 114.9 ± 9.66b 153.5 ± 5.56c 431.3 ± 39.72ab 9.7 ± 0.67a 9.1 ± 0.07a 4.9 ± 0.62b 84.6 ± 13.18a 24.9 ± 0.46b 69.7 ± 3.86c Red 79.2 ± 11.08a 132.7 ± 3.85b 368.8 ± 30.77a 17.0 ± 3.37b 21.1 ± 2.69b 5.5 ± 0.68b 90.1 ± 16.97a 31.9 ± 6.77b 51.1 ± 5.22b F-value 21.88 390.61 7.54 13.47 23.64 7.62 3.45 20.45 79.18 p value <0.001 <0.001 0.01 0.02 <0.001 0.01 0.071 <0.001 <0.001 Bandai Green 96.3 ± 0.25a 96.9 ± 7.67a 130.3 ± 4.82a 13.2 ± 1.06a 4.6 ± 0.66a 5.6 ± 0.24a 84.6 ± 2.37a 25.7 ± 1.58a 62.8 ± 3.33a Colour-breaker 89.7 ± 7.98a 134.6 ± 17.02b 202 ± 17.77b 15.1 ± 3.96a 14.2 ± 0.61ab 7.8 ± 0.91ab 91.4 ± 13.75a 117.1 ± 8.21b 289.5 ± 45.59bc Orange 92 ± 14.56a 166.4 ± 17.16b 254.2 ± 38.95b 14.8 ± 3.68a 17.4 ± 2.99c 10.2 ± 1.38bc 107.3 ± 23.49a 110.4 ± 19.41b 307.8 ± 53.01c Red 101.6 ± 5.67a 175.2 ± 12.32c 276.5 ± 16.65c 21.6 ± 10.69a 12.8 ± 0.97b 11.9 ± 1.15c 157.8 ± 15.26b 52.9 ± 12.94a 200.7 ± 20.83b F-value 1.05 19.03 23.73 1.15 33.24 22.39 13.41 38.70 28.15 p value 0.42 <0.001 <0.001 0.385 <0.001 <0.001 0.002 <0.001 <0.001 Lolo Green 72.2 ± 0.85c 45 ± 2.71a 116.8 ± 7.28a 2.1 ± 0.49a 5.7 ± 0.83b 5 ± 0.78b 25.7 ± 5.05a 2 ± 0.51a UDL Colour-breaker 50.3 ± 2.85a 51.4 ± 3.09a 160.8 ± 14.93b 3.2 ± 0.36a 2.8 ± 0.08a 2.9 ± 1.03a 53.4 ± 4.52b 4.8 ± 0.62b UDL Red 64.2 ± 4.15b 171.5 ± 6.14b 117.8 ± 7.18a 7.3 ± 1.22b 6.1 ± 1.28b 4.5 ± 0.28ab 75.9 ± 2.48c 8.9 ± 0.57c 9.1 ± 3.77 F-value 42.59 836.79 17.35 37.55 12.72 6.49 108.95 113.35 — p value <0.001 
<0.001 0.003 <0.001 0.007 0.032 <0.001 <0.001 — C3753 Green 73.1 ± 5.46b 22.6 ± 1.28a 123.6 ± 6.23a 2 ± 0.7a 1.7 ± 0.39a 16.8 ± 2.38a 17 ± 2.57a 3.8 ± 0.96a UDL Colour-breaker 51 ± 8.55a 64.2 ± 10.49b 148.3 ± 40.01ab 1.5 ± 0.37a 1.3 ± 0.24a 17.3 ± 3.44a 13.5 ± 2.66a 10 ± 1.09b 21 ± 1.71a Red 56 ± 6.67a 124.5 ± 6.18c 217.2 ± 8.36b UDL 1.1 ± 0.23a 19.8 ± 2.55a 17.5 ± 2.78a 11.3 ± 0.94b 16.4 ± 2.85a F-value 13.06 157.47 8.32 0.44 2.84 0.95 1.99 48.91 6.007 p value 0.007 <0.001 0.019 0.661 0.135 0.44 0.12 <0.001 0.070 Fire Flame Red 27.8 ± 2.07 26.6 ± 1.40 141.6 ± 4.17 2.8 ± 0.27 2.7 ± 0.16 4.2 ± 0.22 44.4 ± 2.76 18.2 ± 0.43 UDL Star Flame Yellow 24.2 ± 1.37 17.6 ± 0.24 93.5 ± 4.33 9.5 ± 0.47 2.0 ± 0.14 4.1 ± 0.62 60.6 ± 1.34 18.4 ± 1.29 UDL The same letter indicates no significant difference in polyphenol content between ripening stages in the given hybrid according to Tukey HSD post hoc test; UDL: under detection limit.In Beibeihong, ripening increased catechin, luteolin-rutinoside, quercetin compounds, myricetin, and kaempferol-derivative (allp≤0.02 shown in Table 3), while it decreased vanillic acid content (p<0.001). In Bandai ripening increased all compounds (all p≤0.002) except vanillic acid and luteolin-rutinoside which statistically remained unchanged during ripening stages. In quercetin-glucoside, myricetin, and kaempferol-derivative the highest values were measured in the middle of the ripening. Most of the studies regarding polyphenol constitution of pungent pepper focus on the green (initial) and red (final) ripe stages but omit the intermediate or colour-break stage. Howard et al. [20] found that quercetin decreased, while luteolin did not change with ripening of Tabasco (C. frutescens). On the contrary, we found an increase of quercetin-related compounds in bothC. frutescens hybrids and also an increase of luteolin-rutinoside in Beibeihong and of luteolin-glucoside in Bandai.In Lolo the ripening significantly decreased vanillic acid (p<0.001) but increased catechin, luteolin-rutinoside, luteolin-glucoside, and myricetin (all p<0.001). In C3735 vanillic acid decreased (p=0.007) while catechin, naringenin-diglucoside, and myricetin increased (all p≤0.019). Howard et al. stated that quercetin had either increasing or decreasing tendency depending on cultivar; also no change was observed during maturity stages of certain cultivars onC. annuum peppers. We could only confirm the last statement that none of the quercetin-related compounds changed when the pods changed from green to red inC. annuum peppers studied.According to Materska and Perucka [19] the most abundant flavonoid compounds in the green stage were quertecin-3-O-L-rhamnoside and luteolin-related compounds, and with ripening those phytochemicals decreased. In the present work particularly in red stage contained higher amounts of luteolin-related in Lolo, while in quercetin-glycoside content no change was detected in bothC. annuum hybrids.The disappearances of flavonoids are parallel to capsaicinoids accumulation [35] because the synthesis of flavonoids may converge with the capsaicinoid pathways [36]. The only nonflavonoid phenolic acid detected in our peppers is vanillic acid, and it is the only polyphenol compound which decreased or stayed unchanged during ripening, while the flavonoids mostly increased with advance of ripening. At the same time the major capsaicinoids generally decreased or did not change even with ripening. 
Kawada and Iwai [37] found a direct relation between DC and vanillic acid; they fed rats with DC and then detected vanillic acid in a notable amount in the urine of the rats. This experiment may also support our findings that vanillic acid is certainly related to capsaicinoids and has similar dynamics during ripening in pungent pepper.According to Tsao [14], flavonols (kaempferol, quercetin, and myricetin) consist of highly conjugated bindings and a 3-hydroxy group, whose attributions are considered very important in evolving high antioxidant activity. In our hybrids the highest levels of the latter flavonoids were obtained at the orange or red stage that makes the pepper of higher nutritive value. ## 3.3. Ascorbic Acid By the applied HPLC method only L-ascorbic acid was found in the extract of all hybrids (Figure4). It was found that ascorbic acid increased during ripening in all hybrids (p≤0.001 shown in Table 4). In Beibeihong and Bandai after green stage a more notable increase was observed than after the colour-break stage where the ascorbic acid gradually increased, while in Bandai at the red stage the average of ascorbic acid was less than in orange stage. In Lolo the green and colour-break stage did not differ significantly, while the red stage contained the most. In C3735 a straight increase was observed. The increasing tendency in the investigated hybrids is in accordance with that found in previous works [12, 20, 24, 25] which concluded that the more ripened the pods were, the more ascorbic acid could be measured from them. With ripening the pepper pods store more reducing sugars [36], which are the precursors of L-ascorbic acid [38], and that explains the increasing vitamin C content with ripening in all hybrids included in our study. On the contrary, Shaha et al. [18] showed a different dynamics of the ascorbic acid accumulation, because they found the highest level in yellow (intermediate) stage and the declining level in the red mature stage. That agrees with our finding in Bandai, where the highest average values (1005.2±100.73 μg/g) were observed in the orange or colour-break stage (937.9±78.04 μg/g), although these are not significantly higher than that determined in red stage (787.4±131.21 μg/g). Probably it is also due to the high standard deviation present in the red stage.Table 4 Change in content of ascorbic acid in different chili hybrids as a function of ripening. The values represent means inμg/g fresh weight base ± standard deviation (n=3). Hybrid Ripening stage Ascorbic acid (μg/g) Beibeihong Green 355 ± 64.85a Colour-breaker 1503.4 ± 358.31b Orange 2085.7 ± 252.2bc Red 2483.8 ± 570.74c F-value 19.74 p value <0.001 Bandai Green 329.5 ± 58.88a Colour-breaker 937.9 ± 78.04b Orange 1005.2 ± 100.73b Red 787.4 ± 131.21b F-value 30.09 p value <0.001 Lolo Green 111.3 ± 14.01a Colour-breaker 451.5 ± 115.56a Red 1940.9 ± 533.57b F-value 28.57 p value 0.001 C3735 Green 315.1 ± 59.91a Colour-breaker 1522.5 ± 127.47b Red 2468.2 ± 58.93c F-value 449.65 p value <0.001 Fire Flame Red 3689.4 ± 39.50 Star Flame Yellow 3154.8 ± 160.61 The same letter indicates no significant difference in ascorbic acid content between ripening stages in the given hybrid according to Tukey HSD post hoc test.Figure 4 HPLC profile of vitamin C determination. The separation was performed on C18 Nautilus column with PDA detection at 244 nm. 1: L-ascorbic acid.The recommended daily allowance (RDA) is 60μg FW; according to Dias [39] 100 g fresh chili provides about 143.7 μg vitamin C. 
Focusing on the hybrids of the recent study at the green stage all of them failed to reach this value, while at colour-break stage Beibeihong and C3735 reached it and finally at the red stage all of them achieved the RDA. ## 3.4. Comparison of Major Compounds among the 6 Hybrids The comparison among the hybrids has been done on the main parameters: CAP, ascorbic acid, naringenin-diglucoside, Scoville heat unit, and dry matter (shown in Table5), at the final stage of the hybrids, which is generally considered as the most valuable in nutrition and having the most processing possibility. A higher dry matter signifies a better fruit quality and also a higher nutritional concentration when fresh weight basis is used to express nutritional parameters. We measured 25–30% dry matter content inC. frutescens, which produces more seeds and smaller pods, while in the peppers belonging toC. annuum this value lessens to 14.1–15.8%.Table 5 Capsaicin, ascorbic acid, and naringenin-diglucoside content (μg/g fresh weight base), pungency unit of Scoville, and dry matter of different chili hybrids. The values represent means ± standard deviation (n=3). Hybrid Capsaicin (μg/g) Scoville heat unit Ascorbic acid (μg/g) Naringenin-diglucoside (μg/g) Dry matter Beibeihong 311.8 ± 63.25ab 37999.8 ± 5761.66a 2483.8 ± 570.74bc 368.8 ± 30.77f 25.8 ± 0.82d Bandai 1176.1 ± 112.1c 98090.8 ± 9920.74c 787.4 ± 131.21a 276.5 ± 16.65e 30.2 ± 0.41c Lolo 197 ± 92.13a 33188.2 ± 5229.83a 1940.9 ± 533.57b 117.8 ± 7.18ab 14.0 ± 0.61a C3735 126.3 ± 35.95a 23730.9 ± 3174.95a 2468.2 ± 58.93bc 217.2 ± 8.36d 15.8 ± 0.93b Fire Flame 234.3 ± 45.23a 40417.3 ± 7830.33a 3689.4 ± 160.61d 141.6 ± 4.19c 14.1 ± 0.34ab Star Flame 440.8 ± 17.22b 66201.2 ± 7132.51b 3154.8 ± 160.61cd 93.5 ± 4.26a 14.4 ± 0.58ab F-value 94.64 48.43 27.62 146.17 357.48 p value <0.001 <0.001 <0.001 <0.001 <0.001 The same letter indicates no significant difference in the major components between the fully ripe stages of the 6 hybrids according to Tukey HSD post hoc test.The CAP content was found to be statistically the same in all red colouredC. annuum hybrids, while the yellow hybrid Star Flame (234.3±45.23 μg/g) contained more, and Bandai (1176.1±112.1 μg/g) the most (p<0.001). Our findings roughly agree with the result of Sanatombi and Sharma [40] who showed that the cultivars belonging toC. annuum contain less capsaicin than others ofCapsicum frutescens. Beibeihong was an exception, because it statistically contained the same amount asC. annuum hybrids. Focusing on the Scoville heat units, the highest CAP value in Bandai corresponds to the highest SHU (98090.8±9920.74) observed among the hybrids investigated. Bernal et al. [33] measured 87300–276500 SHU in ripeC. frutescens peppers, but in Bandai hybrid the value found was close to the lower level determined by the authors. AmongC. annuum hybrids, Star Flame was found to be a prominent pepper regarding SHU (66201.2±7132.51) comparing to the measurements of Topuz and Ozdemir [41] 9720±2061.8 and Giuffrida et al. [42] 21034±3579. Beibeihong and Bandai have not been investigated by pungency profiles before. Comparing to Tabasco (also belonging toC. frutescens) the SHU measured by Giuffrida et al. [42] (21348±867) is below our values of the latter hybrids, although CAP content determined by Giuffrida et al. [42] (917±34 μg/g) is between the values measured in Bandai (1176.1±112.1 μg/g) and Beibeihong (311.8±63.25 μg/g). 
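The compact letters attached to the means in the tables (e.g., 311.8 ± 63.25ab) encode the Tukey HSD post hoc groupings described in Section 2.8 (α = 0.05). The paper ran these tests in IBM SPSS 22; purely as an illustration of the procedure, the sketch below performs the equivalent pairwise comparisons with Python's statsmodels on hypothetical replicate values, since the article reports only means ± SD (n = 3) and not the raw data.

```python
# Illustration only: pairwise Tukey HSD at alpha = 0.05, as in Section 2.8.
# The three replicates per hybrid are hypothetical stand-ins chosen to match
# the published means +/- SDs; the raw measurements were not published.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

hybrids = np.repeat(["Beibeihong", "Bandai", "Lolo"], 3)  # n = 3 per hybrid
capsaicin = np.array([                                    # ug/g FW, invented
    311.8 - 63.3, 311.8, 311.8 + 63.3,    # mean 311.8, SD ~63.3 (Beibeihong)
    1176.1 - 112.1, 1176.1, 1176.1 + 112.1,
    197.0 - 92.1, 197.0, 197.0 + 92.1,
])

result = pairwise_tukeyhsd(endog=capsaicin, groups=hybrids, alpha=0.05)
print(result)  # pairs with reject=False would share a letter in the tables
```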
## 4. Conclusion

The investigated new hybrids can be regarded as good sources of phytochemicals for future applications. We recommend the red coloured hybrid Fire Flame for producing chili products with a high content of vitamin C. On the other hand, when pungent principles (capsaicinoids) are required for the food and pharmaceutical industries, Star Flame and Bandai can be suggested, as they contain capsaicin levels of around 440.8 ± 17.22 μg/g and 1610.2 ± 91.46 μg/g, respectively. To obtain the maximum level of bioactive phytochemicals such as vitamin C, capsaicinoids, and polyphenols, it is important to characterise the ripening dynamics of each of these new hybrids. For example, the highest level of capsaicin was found in the green ripening stage of the Bandai and C3735 hybrids, while in the other hybrids pungency was similar across all ripening stages.

---

*Source: 102125-2015-10-01.xml*
--- ## Abstract Six hybrids were subjected to chromatographic analyses by HPLC for the determination of phytochemicals such as capsaicinoid, polyphenol, and vitamin C. The dynamics of ripening of 4 of the hybrids were also characterised. Seven capsaicinoids could be separated and determined; the major compounds were nordihydrocapsaicin, capsaicin, and dihydrocapsaicin, while homocapsaicin and homodihydrocapsaicin derivatives were detected as minor constituents. Capsaicin content ranged between 95.5 ± 4.15 and 1610.2 ± 91.46μg/g FW, and the highest value was found in Bandai (C. frutescens) at the green ripening stage. The major capsaicinoids had a decreasing tendency in Bandai and Chili 3735 hybrids, while no change was observed in Beibeihong and Lolo during ripening. Nine polyphenol compounds were detected including 8 flavonoids and a nonflavonoid compound in the pods of all hybrids. The major components were naringenin-diglucoside, catechin, and vanillic acid-derivative and luteolin-glucoside. Naringenin-diglucoside ranged from 93.5 ± 4.26 to 368.8 ± 30.77 μg/g FW. Except vanillic acid-derivative, dominant polyphenols increased or remained unchanged during ripening. As for vitamin C, its content tended to increase with the advance in ripening in all hybrids included in this study. The highest value of 3689.4 ± 39.50 μg/g FW was recorded in Fire Flame hybrid. --- ## Body ## 1. Introduction The components evolving pungency in chili peppers have been established as a mixture of acid amides of vanillylamine and C8 to C13 fatty acids, also known as capsaicinoids [1]. Capsaicinoids are secondary metabolites and are synthesised by glands at the join of the placenta and the pod wall of pungent peppers [2]. The effect of capsaicinoids on human health has been widely investigated. For instance, it is beneficial in low concentration against gastric injuries [3], stimulates cation channels (Na+, K+, and Ca2+) in sensory receptor membrane [4], evokes pain, and activates autonomic reflexes [5]. Environmental factors and the circumstance of cultivation influence capsaicinoid content of the pods [6, 7], while probably a higher impact on pungency by the genotype is present [8, 9]. Besides, the amount and proportion of capsaicinoids are changing during the ripening process of the pods [10–13]. Flavonoids represent a significant subgroup of polyphenols [14] and naturally occur in high concentration in wild mint [15] and grape [16] while generally pungent peppers have moderate level of polyphenol content. The health protective attributions of them are mainly associated with preventing cancer through inhibiting certain enzymes and suppressing angiogenesis [17]. The polyphenol content in pungent peppers is found to be influenced by genotype and the ripening process [18–20]. Ascorbic acid, the main component of vitamin C, is very abundant in freshCapsicum species and has been found to be beneficial in maintaining collagen synthesis and healthy immune-system and also has antitumor properties [21–23]. The content of ascorbic acid is highly varying among cultivars and ripening stages [24, 25]; in addition, the utilised agricultural techniques play significant role in the final amount of ascorbic acid in the pods [26].Numerous cultivars of pungent pepper are nowadays available; however, many of them have not been analysed for their quality and nutritional components. 
The objective of the present work is to investigate capsaicinoid, polyphenol, and vitamin C content in six hybrids of chili pepper (Bandai, Beibeihong, Lolo, Chili 3735, Fire Flame, and Star Flame) using recently developed liquid chromatographic method in the determinations. In addition, characterisation of ripening stages of four hybrids was aimed. ## 2. Material and Methods ### 2.1. Plant Material The plants were cultivated with convention horticultural practices in the experimental field of Szent István University, Gödöllő, Hungary. Bandai F1 (Bandai) and Beibeihong 695 F1 (Beibeihong) which belong toCapsicum frutescens and Lolo 736 F1 (Lolo) and Chili 3735 F1 (C3735) which belong toCapsicum annuum were all purchased from East-West Seeds Company, from Thailand, while Star Flame and Fire Flame (bothCapsicum annuum) were purchased from Seminis, Hungary. The pods of Bandai, Beibeihong, Lolo, C3735, and Fire Flame are red when fully ripe, while Star Flame has vivid yellow pods. Peppers with intermediate pungency level were selected for the investigation because those have multiple utilization methods. Those peppers involved in the recent study have limited data available for breeders and growers; thus, it makes them important for research work. Star Flame and Fire Flame are commercially available in certain European countries but not yet in Hungary. ### 2.2. Capsaicinoid Determination The determination of capsaicinoid content was made following the method of Daood et al. [27]. Three grams of well-blended pepper sample were crushed in a crucible mortar with quartz sand. To the macerate 50 mL of methanol (analytical grade) was added and the mixture was then transferred to a 100 mL Erlenmeyer flask. The mixture was subjected to 4 min long ultrasonication (Raypa, Turkey) and then filtered through filter paper (Munktell, Germany). The filtrate was more purified by passing through a 0.45 mm PTFE syringe filter before injection on the HPLC column.After suitable dilution, the extract was injected to Nucleodur C18, Cross-Linked (ISIS, from Macherey-Nagel, Düren, Germany). The separation was performed with isocratic elution of 50 : 50 water-acetonitrile and a flow rate of 0.8 mL/min. Fluorometric detection of capsaicinoid was carried out at EX: 280 nm and EM: 320 nm.Peaks referring to different capsaicinoids were identified by comparing retention times and mass data (Daood et al. [27]) of standard material (purified from pungent red pepper, with 99% purity, by Plantakem Ltd., Sándorfalva, Hungary) with those appearing on chromatogram of samples. Capsaicinoid compounds are referred as follows: nordihydrocapsaicin (NDC), capsaicin (CAP), dihydrocapsaicin (DC), homocapsaicin 1-2 (HCAP1-2), and homodihydrocapsaicin 1-2 (HDC1-2). Scoville heat unit (SHU) was calculated by the following algorithm:(1)CAP×16,1+DC×16,1+NDC×9,3+HCAP1+HCAP2×8,6=Scoville  heat  unit.All variables are expressed in μg/g dry weight basis [28]. ### 2.3. Polyphenol Determination Five grams of well-blended pepper sample were replaced into an Erlenmeyer flask and then 10 mL distilled water was added to the sample and subjected to ultrasonication force using ultrasonic bath for 30 sec. Then, 15 mL of 2% acetic acid in methanol was added to the mixture which was shaken by a mechanical shaker for 15 min. The mixture then was kept overnight at 4°C. Next day after filtrating the mixtures, a further cleanup of the filtrates was made by passing through the mixture a 0.45μm PTFE HPLC syringe filter. 
That followed by injection on the HPLC column for the analysis of phenols. Nucleosil C18, 100, Protect-1 (Macherey-Nagel, Düren, Germany), 3 μm, 150 × 4.6 column was used. The gradient elution was done using 1% formic acid (A) in water, acetonitrile (B), and flow rate of 0.6 mL/min. Gradient elution started with 98% A and 2% B and changed in 10 min to 87% A and 13% B and in 5 min to 75% A and 25% B and then in 15 min to 60% A and 40% B; finally it turned in 7 min to 98% A and 2% B. The peaks that appeared on the chromatogram were identified by comparing their retention times and spectral characteristics with available standards such as catechin, quercetin-3-glucoside, kaempferol, luteolin-glucoside, and naringenin-glucoside (Sigma-Aldrich Ltd., Hungary). Quantitation of phenol components having maxima absorption at 280 nm were quantified as catechin equivalent and flavonoids were quantified as kaempferol-equivalent at 350 nm [29, 30]. The standard material was singly injected as external standard as well as being cochromatographed (spiking) with the samples. ### 2.4. Ascorbic Acid Determination Five grams of well-homogenised sample were disrupted in a crucible mortar with quartz sand. To the macerate 50 mL of metaphosphoric acid (analytical grade) was gradually added and the mixture was then transferred to a 100 mL Erlenmeyer flask closed with stopper and then filtered. The filtrate was purified in addition by passing through a 0.45 mm PTFE syringe filter before injection on HPLC column. The analytical determination of ascorbic acid was performed on C18 Nautilus, 100-5, 150 × 4.6 mm (Macherey-Nagel, Düren, Germany) column with gradient elution of 0.01 M KH2PO4 (A) and acetonitrile (B). The gradient elution started with 1% B in A and changed to 30% B in A in 15 min; then; it turned to 1% A in B in 5 min. The flow rate was 0.7 mL/min. The highest absorption maxima of ascorbic acid under these conditions were detected at 265 nm. For quantitative determination of ascorbic acid standard materials (Sigma-Aldrich, Budapest, Hungary) were used. Stock solutions and then working solutions were prepared for each compound to make the calibration between concentration and peak area. ### 2.5. HPLC Apparatus A Hitachi Chromaster HPLC instrument, which consists of a Model 5110 Pump, a Model 5210 Auto Sampler, a Model 5430 Diode Array detector, and a Model 5440 Fluorescence detector, was used for the determination of all compounds. ### 2.6. Validation of Applied Methods Since the methods used in the different chromatographic determinations are derived from the literature (validated protocols) we dealt with only measuring the limit of detection (LOD) and quantification (LOQ) and linearity curves of different compounds under the conditions of our laboratories. The LOD and LOQ were calculated from standard solutions and samples as the concentrations of analytes at peak/noise of 3 times and 10 times, respectively. Linearity curves were made plotting concentration inμg/mL against peak areas. ### 2.7. Dry Matter Determination Three grams of fresh pepper samples were dried at 65°C until constant weight. The dry matter content was measured as a proportion of fresh and dried fruit weight. ### 2.8. Statistical Analyses For each independent variable a one-way linear model (LM) was fitted, where “ripening stage” was set as explanatory (factor) variable. Prior to model fitting assumptions were checked by plot diagnosis. 
In the analysis of the major compounds (SHU, CAP, naringenin-diglucoside, ascorbic acid, and dry matter) among the six hybrids another LM was made, where “hybrid” was set as explanatory (factor) variable. Post hoc comparison was made by Tukey HSD test. All statistical analyses were performed in IBM SPSS 22 software (IBM Co., USA) and Microsoft Excel (Microsoft Co., USA).α was set at 0.05 in the entire study. ## 2.1. Plant Material The plants were cultivated with convention horticultural practices in the experimental field of Szent István University, Gödöllő, Hungary. Bandai F1 (Bandai) and Beibeihong 695 F1 (Beibeihong) which belong toCapsicum frutescens and Lolo 736 F1 (Lolo) and Chili 3735 F1 (C3735) which belong toCapsicum annuum were all purchased from East-West Seeds Company, from Thailand, while Star Flame and Fire Flame (bothCapsicum annuum) were purchased from Seminis, Hungary. The pods of Bandai, Beibeihong, Lolo, C3735, and Fire Flame are red when fully ripe, while Star Flame has vivid yellow pods. Peppers with intermediate pungency level were selected for the investigation because those have multiple utilization methods. Those peppers involved in the recent study have limited data available for breeders and growers; thus, it makes them important for research work. Star Flame and Fire Flame are commercially available in certain European countries but not yet in Hungary. ## 2.2. Capsaicinoid Determination The determination of capsaicinoid content was made following the method of Daood et al. [27]. Three grams of well-blended pepper sample were crushed in a crucible mortar with quartz sand. To the macerate 50 mL of methanol (analytical grade) was added and the mixture was then transferred to a 100 mL Erlenmeyer flask. The mixture was subjected to 4 min long ultrasonication (Raypa, Turkey) and then filtered through filter paper (Munktell, Germany). The filtrate was more purified by passing through a 0.45 mm PTFE syringe filter before injection on the HPLC column.After suitable dilution, the extract was injected to Nucleodur C18, Cross-Linked (ISIS, from Macherey-Nagel, Düren, Germany). The separation was performed with isocratic elution of 50 : 50 water-acetonitrile and a flow rate of 0.8 mL/min. Fluorometric detection of capsaicinoid was carried out at EX: 280 nm and EM: 320 nm.Peaks referring to different capsaicinoids were identified by comparing retention times and mass data (Daood et al. [27]) of standard material (purified from pungent red pepper, with 99% purity, by Plantakem Ltd., Sándorfalva, Hungary) with those appearing on chromatogram of samples. Capsaicinoid compounds are referred as follows: nordihydrocapsaicin (NDC), capsaicin (CAP), dihydrocapsaicin (DC), homocapsaicin 1-2 (HCAP1-2), and homodihydrocapsaicin 1-2 (HDC1-2). Scoville heat unit (SHU) was calculated by the following algorithm:(1)CAP×16,1+DC×16,1+NDC×9,3+HCAP1+HCAP2×8,6=Scoville  heat  unit.All variables are expressed in μg/g dry weight basis [28]. ## 2.3. Polyphenol Determination Five grams of well-blended pepper sample were replaced into an Erlenmeyer flask and then 10 mL distilled water was added to the sample and subjected to ultrasonication force using ultrasonic bath for 30 sec. Then, 15 mL of 2% acetic acid in methanol was added to the mixture which was shaken by a mechanical shaker for 15 min. The mixture then was kept overnight at 4°C. Next day after filtrating the mixtures, a further cleanup of the filtrates was made by passing through the mixture a 0.45μm PTFE HPLC syringe filter. 
That followed by injection on the HPLC column for the analysis of phenols. Nucleosil C18, 100, Protect-1 (Macherey-Nagel, Düren, Germany), 3 μm, 150 × 4.6 column was used. The gradient elution was done using 1% formic acid (A) in water, acetonitrile (B), and flow rate of 0.6 mL/min. Gradient elution started with 98% A and 2% B and changed in 10 min to 87% A and 13% B and in 5 min to 75% A and 25% B and then in 15 min to 60% A and 40% B; finally it turned in 7 min to 98% A and 2% B. The peaks that appeared on the chromatogram were identified by comparing their retention times and spectral characteristics with available standards such as catechin, quercetin-3-glucoside, kaempferol, luteolin-glucoside, and naringenin-glucoside (Sigma-Aldrich Ltd., Hungary). Quantitation of phenol components having maxima absorption at 280 nm were quantified as catechin equivalent and flavonoids were quantified as kaempferol-equivalent at 350 nm [29, 30]. The standard material was singly injected as external standard as well as being cochromatographed (spiking) with the samples. ## 2.4. Ascorbic Acid Determination Five grams of well-homogenised sample were disrupted in a crucible mortar with quartz sand. To the macerate 50 mL of metaphosphoric acid (analytical grade) was gradually added and the mixture was then transferred to a 100 mL Erlenmeyer flask closed with stopper and then filtered. The filtrate was purified in addition by passing through a 0.45 mm PTFE syringe filter before injection on HPLC column. The analytical determination of ascorbic acid was performed on C18 Nautilus, 100-5, 150 × 4.6 mm (Macherey-Nagel, Düren, Germany) column with gradient elution of 0.01 M KH2PO4 (A) and acetonitrile (B). The gradient elution started with 1% B in A and changed to 30% B in A in 15 min; then; it turned to 1% A in B in 5 min. The flow rate was 0.7 mL/min. The highest absorption maxima of ascorbic acid under these conditions were detected at 265 nm. For quantitative determination of ascorbic acid standard materials (Sigma-Aldrich, Budapest, Hungary) were used. Stock solutions and then working solutions were prepared for each compound to make the calibration between concentration and peak area. ## 2.5. HPLC Apparatus A Hitachi Chromaster HPLC instrument, which consists of a Model 5110 Pump, a Model 5210 Auto Sampler, a Model 5430 Diode Array detector, and a Model 5440 Fluorescence detector, was used for the determination of all compounds. ## 2.6. Validation of Applied Methods Since the methods used in the different chromatographic determinations are derived from the literature (validated protocols) we dealt with only measuring the limit of detection (LOD) and quantification (LOQ) and linearity curves of different compounds under the conditions of our laboratories. The LOD and LOQ were calculated from standard solutions and samples as the concentrations of analytes at peak/noise of 3 times and 10 times, respectively. Linearity curves were made plotting concentration inμg/mL against peak areas. ## 2.7. Dry Matter Determination Three grams of fresh pepper samples were dried at 65°C until constant weight. The dry matter content was measured as a proportion of fresh and dried fruit weight. ## 2.8. Statistical Analyses For each independent variable a one-way linear model (LM) was fitted, where “ripening stage” was set as explanatory (factor) variable. Prior to model fitting assumptions were checked by plot diagnosis. 
In the analysis of the major compounds (SHU, CAP, naringenin-diglucoside, ascorbic acid, and dry matter) among the six hybrids another LM was made, where “hybrid” was set as explanatory (factor) variable. Post hoc comparison was made by Tukey HSD test. All statistical analyses were performed in IBM SPSS 22 software (IBM Co., USA) and Microsoft Excel (Microsoft Co., USA).α was set at 0.05 in the entire study. ## 3. Results and Discussion To adapt the applied chromatographic protocols under the conditions of our laboratories, certain parameters such as LOD, LOQ, and linearity curve were studied. The values depicted in Table1 show that the used methods are accurate enough to carry on precise and sensitive determination of polyphenols, capsaicinoids, and ascorbic acid. This statement is based on the low levels of LOQ, LOD found for all tested compounds. The concentration of such compounds in our samples is much higher than the levels of LOQ and LOD. Moreover, values obtained for regression coefficient indicated that the methods can be applied at wide range of concentrations for different compounds in chili samples.Table 1 Some validation parameters for the HPLC determinations of the major polyphenols, ascorbic acid, and capsaicinoids. LODμg/mL LOQμg/mL Linearity rangeμg/mL Linearity curve R 2 Catechin 2.625 8.75 0–50 y = 0.331 x - 2.5895 0.899 Naringenin-diglucoside 0.0318 0.106 0–50 y = 0.3906 x - 3.0556 0.899 Quercetin-3-glucoside 1.083 3.61 0–50 y = 0.2188 x - 0.781 0.983 Luteolin-glucoside 1.018 3.39 0–50 y = 0.1912 x - 0.381 0.979 Kaempferol-derivative 0.0208 0.069 0–50 y = 0.4402 x - 3.444 0.899 Ascorbic acid 2.500 0.750 30–120 y = 0.2736 x - 2.4305 0.997 Nordihydrocapsaicin∗ 0.003∗ 0.008∗ 0–0.07–1.1∗ y = 2000 + 07 x + 3000 + 06 ∗ 0.997∗ Capsaicin∗ 0.004∗ 0.01∗ 0.1–5∗ y = 2000 + 07 x + 3000 + 06 ∗ 0.998∗ Dihydrocapsaicin∗ 0.002∗ 0.007∗ 0.3–6∗ y = 2000 + 07 x + 3000 + 06 ∗ 0.998∗ ∗From previously published research work on HPLC determination of capsaicinoids by Daood et al. [27]. ### 3.1. Pungency The major components evolving pungency in our hybrids are NDC, CAP, and DC. Besides, we could identify the homologues of CAP and DC which are HCAP1, HCAP2 and HDC1, HDC2, respectively (Figure1). All of them are branched-chain alkyl vanillylamides. Kozukue et al. [31] detected the 7 compounds, in addition to nonivamide which is a straight-chain nonoyl vanillylamide analog of CAP [1]. In Beibeihong advance in ripening did not affect the major capsaicinoids (CAP, NDC, and DC shown in Table 2), while it influenced HCAP1 and HDC1 (both p≤0.032) including a slight decrease from green to colour-break stage and then a low increase at the final stage. In Bandai, unlike Beibeihong the ripening affected the major and minor capsaicinoids as well (all p≤0.027). The changing of CAP included a notable decrease between the initial stage and the colour-break stage. On DC, NDC, and HDC2 a gradual decrease was measured. A straight increasing of HDC1 was observed, while on HPC1 the same tendency like that in HPC1 of Beibeihong was observed.Table 2 Change in content of capsaicinoid compounds in chili hybrids as a function of ripening. The values represent means inμg/g fresh base weight ± standard deviation (n=3). 
Hybrid Ripening stage NDC (μg/g) CAP (μg/g) DC (μg/g) HCAP1 (μg/g) HCAP2 (μg/g) HDC1 (μg/g) HDC2 (μg/g) Beibeihong Green 51.8 ± 3.90a 294.5 ± 19.72a 326.5 ± 51.20a 3.5 ± 2.01ab 20.0 ± 2.37a 10.6 ± 3.01ab 26.3 ± 2.60a Colour-breaker 60.6 ± 15.01a 254.6 ± 31.90a 263.4 ± 25.92a 1.8 ± 0.70a 25.3 ± 5.13a 9.0 ± 0.82a 29.5 ± 6.42a Orange 61.7 ± 6.65a 261.9 ± 26.12a 269.3 ± 14.06a 3.6 ± 0.97ab 28.5 ± 4.64a 10.7 ± 0.50ab 26.7 ± 2.09a Red 63.2 ± 15.12a 311.8 ± 63.25a 272.7 ± 74.99a 5.7 ± 0.51b 23.7 ± 3.62a 13.9 ± 0.5b 23.9 ± 3.79a F-value 0.61 1.44 1.12 5.41 2.26 4.89 0.94 p value 0.626 0.302 0.394 0.025 0.158 0.032 0.464 Bandai Green 102.9 ± 14.17ab 1610.2 ± 91.46b 780 ± 36.03b 13.1 ± 5.61ab 8.9 ± 0.99a 12.8 ± 1.22a 30.2 ± 2.29ab Colour-breaker 102.2 ± 1.21ab 1182.2 ± 82.56a 725.2 ± 32.03ab 6.3 ± 1.83a 18.5 ± 3.69b 12.7 ± 0.57a 31 ± 3.41ab Orange 115.6 ± 5.26b 1104.9 ± 77.27a 635.2 ± 32.36a 11.5 ± 0.51ab 27.2 ± 3.33c 15 ± 0.46ab 37.9 ± 3.41b Red 81.5 ± 6.91a 1176.1 ± 112.1a 600.4 ± 87.11a 20.1 ± 6.24b 14.3 ± 2.39ab 16.3 ± 2.06b 27.3 ± 3.34a F-value 8.59 18.93 7.40 5.27 22.70 5.95 6.10 p value 0.007 0.001 0.011 0.027 <0.001 0.020 0.018 Lolo Green 18.7 ± 2.42a 222.5 ± 69.33a 139.2 ± 50.97a 0.5 ± 0.23b 0.4 ± 0.08a 1.8 ± 0.15a 9.2 ± 1.19a Colour-breaker 26.2 ± 3.94a 95.5 ± 4.15a 96.7 ± 9.29a 0.4 ± 0.04b 2.4 ± 0.77b 1.9 ± 0.08a 12.6 ± 1.31a Red 22.2 ± 5.14a 197 ± 92.13a 119.8 ± 53.03a 0 ± 0a 3.2 ± 0.15b 1.9 ± 0.27a 12.5 ± 3.27a F-value 2.69 3.05 0.74 12.12 29.94 0.86 2.54 p value 0.146 0.122 0.516 0.008 0.001 0.467 0.159 C3735 Green 31.3 ± 1.46b 259.6 ± 39.15b 183.4 ± 23.27b UDL 7 ± 0.79a 1.6 ± 0.21b 8.2 ± 0.58ab Colour-breaker 35.9 ± 1.64b 168.9 ± 33.86ab 148.2 ± 24.21b UDL 12.6 ± 4.16a 1.9 ± 0.08b 9.8 ± 1.01b Red 18.2 ± 6.21a 126.3 ± 35.95a 88.9 ± 6.38a UDL 12.6 ± 3.7a 1.3 ± 0.07a 7.2 ± 1.31a F-value 17.39 10.50 17.58 — 3.00 15.03 5.00 p value 0.003 0.011 0.003 — 0.125 0.005 0.053 Fire Flame Red 15.5 ± 3.28 234.3 ± 45.23 109.7 ± 19.9 1.2 ± 0.13 1 ± 0.23 0.9 ± 0.13 5.9 ± 1.24 Star Flame Yellow 21.9 ± 5.36 440.8 ± 17.22 135.9 ± 20.28 2.5 ± 0.08 0.2 ± 0.08 1.4 ± 0.25 6.4 ± 1.10 The same letter indicates no significant difference in capsaicinoid content between ripening stages in the given hybrid according to Tukey HSD post hoc test; UDL: under detection limit.Figure 1 HPLC profile of capsaicinoid components separated from red stage of Bandai hybrid using cross-linked C18 column with acetonitrile-water elution and fluorescence detection. 1: NDC, 2: CAP, 3: DC, 4: HCAP1, 5: HCAP2, 6: HDC1, and 7: HDC2. For more information see text.Focusing on the major compounds of capsaicinoids, Bandai hybrid could be characterised with pungency loss, while in Beibeihong those compounds did not change during ripening. In the study by Gnayfeed et al. [12] CAP reached the highest value in F-03 cultivar (C. annuum) at the initial green stage, similarly found in Bandai, but its content in F-03 did not change significantly with ripening. The obtained results suggest even in the same species (C. frutescens) that the hybrids have a different characteristic in ripening regarding capsaicinoid contents. It is in accordance with findings of Merken and Beecher [30] who also measured the maximal capsaicinoid content in 3 differentC. frutescens peppers in 3 variant times after flower budding.In Lolo the ripening slightly affected but not significantly CAP, while it increased HCAP2 (p=0.001). After the colour-break stage the amount of HCAP1 decreased (p=0.008) to undetectable level. 
In C3735 ripening decreased NDC, CAP, DC, HDC1 (all p≤0.011), and nonmarginally HDC2, while HCAP1 was absent or under detection limit at all ripening stages. Therefore, most of the compounds showed a decreasing tendency during ripening of C3735, so a remarkable pungency loss was observed. On the contrary, those compounds remained unchanged in Lolo.Iwai found the peak 40 days after flowering and then a gradual decrease of capsaicinoid content in aC. annuum pepper. Because of the different scale used by Iwai et al. [32], it is difficult to compare to our data, but probably the 40 days after flowering is roughly equal to the green stage we used. Gnayfeed et al. [12] observed inC. annuum cultivars that capsaicinoids reached maximum level at the colour-break stage and then started declining in Hungarian spice pepper (C. annuum), which is a characteristic of pungency change that we did not observe. The change in capsaicin content during ripening of pepper may relate to activity of some enzymes that interfere in the ripening dynamics. The amount of capsaicinoids has been investigated in relation with several enzymes [10, 33, 34]. Contreras-Padilla and Yahia [10] showed that peroxidase activity started increasing, when the amount of capsaicinoid started to decrease in Habanero and de Arbol, while in Piquin it began to increase before the decrease of capsaicinoid. They concluded that peroxidase enzyme is involved in capsaicinoid degradation and that attribution is a genotypic characteristic. Iwai et al. [32] found higher phenylalanine ammonia-lyase activity in green stage than in red stage. In addition, Bernal et al. [33] observed that the operation of capsaicinoid synthetase enzyme is more influenced by the availability of precursors and the conditions of forming than its substrate specificity. The capsaicinoid composition and content are the result of the above referred enzymes.A study concerning the maturation of Habanero (C. chinense) proved that green pod contains four times less capsaicin than ripe red ones [13], while we found less difference and even more capsaicin in green stage (e.g., Bandai); however, none of our investigated hybrids belong toC. chinense. They also reported that DC content is seven times less in green pods as compared to red ones, while we found only a slight decrease of DC between the green and red stages. ### 3.2. Polyphenols Since there is no available standard for myricetin and vanillic acid in our laboratory, they were tentatively identified based on comparison of their spectral characteristics and retention behaviour on the HPLC column with those found in the literature.Due to the high content of vanillic acid-derivative, catechin, and naringenin-diglucoside, those compounds were found to be the dominant polyphenols, which have maxima absorption at 280 nm (Figure2). The minor compounds were luteolin-rutinoside, quercetin-glucoside, quercetin-glycosides, myricetin, and kaempferol-derivative; all were detected with maxima absorption at 350 nm and also luteolin-glucoside occurs in higher concentration and is detected at 350 nm (Figure 3).Figure 2 HPLC profile of polyphenols detected separated on Protect-1 C18 column and detected at 280 nm. 1: vanillic acid-derivative, 2: catechin, and 3: naringenin-diglucoside.Figure 3 HPLC profile of polyphenols detected separated on Protect-1 C18 column and detected at 350 nm. 
4: luteolin-rutinoside, 5: quercetin-glucoside, 6: quercetin-glycosides (the sum of these compounds is used in Table3), 7: luteolin-glucoside, 8: myricetin, and 9: kaempferol-derivative.Table 3 Change in content of polyphenol compounds in different chili hybrids as a function of ripening. The values represent means inμg/g fresh weight base ± standard deviation (n=3). Hybrid Ripening stage Vanillic acid-derivative (μg/g) Catechin (μg/g) Naringenin-diglucoside (μg/g) Luteolin-rutinoside(μg/g) Quercetin-glucoside (μg/g) Quercetin-glycosides(μg/g) Luteolin-glucoside (μg/g) Myricetin (μg/g) Kaempferol-derivative(μg/g) Beibeihong Green 109.5 ± 9.84b 50.4 ± 1.86a 349.5 ± 13.09a 11.3 ± 0.53a 12.5 ± 2.07a 3.7 ± 0.14a 62.9 ± 2.78a 10.3 ± 0.74a 22.6 ± 1.13a Colour-breaker 145.7 ± 9.71c 135.3 ± 3.97b 477.4 ± 52.69b 8.5 ± 0.73a 13.4 ± 1.19a 4.7 ± 0.12ab 79 ± 2.78a 23.5 ± 1.13b 67.3 ± 5.26c Orange 114.9 ± 9.66b 153.5 ± 5.56c 431.3 ± 39.72ab 9.7 ± 0.67a 9.1 ± 0.07a 4.9 ± 0.62b 84.6 ± 13.18a 24.9 ± 0.46b 69.7 ± 3.86c Red 79.2 ± 11.08a 132.7 ± 3.85b 368.8 ± 30.77a 17.0 ± 3.37b 21.1 ± 2.69b 5.5 ± 0.68b 90.1 ± 16.97a 31.9 ± 6.77b 51.1 ± 5.22b F-value 21.88 390.61 7.54 13.47 23.64 7.62 3.45 20.45 79.18 p value <0.001 <0.001 0.01 0.02 <0.001 0.01 0.071 <0.001 <0.001 Bandai Green 96.3 ± 0.25a 96.9 ± 7.67a 130.3 ± 4.82a 13.2 ± 1.06a 4.6 ± 0.66a 5.6 ± 0.24a 84.6 ± 2.37a 25.7 ± 1.58a 62.8 ± 3.33a Colour-breaker 89.7 ± 7.98a 134.6 ± 17.02b 202 ± 17.77b 15.1 ± 3.96a 14.2 ± 0.61ab 7.8 ± 0.91ab 91.4 ± 13.75a 117.1 ± 8.21b 289.5 ± 45.59bc Orange 92 ± 14.56a 166.4 ± 17.16b 254.2 ± 38.95b 14.8 ± 3.68a 17.4 ± 2.99c 10.2 ± 1.38bc 107.3 ± 23.49a 110.4 ± 19.41b 307.8 ± 53.01c Red 101.6 ± 5.67a 175.2 ± 12.32c 276.5 ± 16.65c 21.6 ± 10.69a 12.8 ± 0.97b 11.9 ± 1.15c 157.8 ± 15.26b 52.9 ± 12.94a 200.7 ± 20.83b F-value 1.05 19.03 23.73 1.15 33.24 22.39 13.41 38.70 28.15 p value 0.42 <0.001 <0.001 0.385 <0.001 <0.001 0.002 <0.001 <0.001 Lolo Green 72.2 ± 0.85c 45 ± 2.71a 116.8 ± 7.28a 2.1 ± 0.49a 5.7 ± 0.83b 5 ± 0.78b 25.7 ± 5.05a 2 ± 0.51a UDL Colour-breaker 50.3 ± 2.85a 51.4 ± 3.09a 160.8 ± 14.93b 3.2 ± 0.36a 2.8 ± 0.08a 2.9 ± 1.03a 53.4 ± 4.52b 4.8 ± 0.62b UDL Red 64.2 ± 4.15b 171.5 ± 6.14b 117.8 ± 7.18a 7.3 ± 1.22b 6.1 ± 1.28b 4.5 ± 0.28ab 75.9 ± 2.48c 8.9 ± 0.57c 9.1 ± 3.77 F-value 42.59 836.79 17.35 37.55 12.72 6.49 108.95 113.35 — p value <0.001 <0.001 0.003 <0.001 0.007 0.032 <0.001 <0.001 — C3753 Green 73.1 ± 5.46b 22.6 ± 1.28a 123.6 ± 6.23a 2 ± 0.7a 1.7 ± 0.39a 16.8 ± 2.38a 17 ± 2.57a 3.8 ± 0.96a UDL Colour-breaker 51 ± 8.55a 64.2 ± 10.49b 148.3 ± 40.01ab 1.5 ± 0.37a 1.3 ± 0.24a 17.3 ± 3.44a 13.5 ± 2.66a 10 ± 1.09b 21 ± 1.71a Red 56 ± 6.67a 124.5 ± 6.18c 217.2 ± 8.36b UDL 1.1 ± 0.23a 19.8 ± 2.55a 17.5 ± 2.78a 11.3 ± 0.94b 16.4 ± 2.85a F-value 13.06 157.47 8.32 0.44 2.84 0.95 1.99 48.91 6.007 p value 0.007 <0.001 0.019 0.661 0.135 0.44 0.12 <0.001 0.070 Fire Flame Red 27.8 ± 2.07 26.6 ± 1.40 141.6 ± 4.17 2.8 ± 0.27 2.7 ± 0.16 4.2 ± 0.22 44.4 ± 2.76 18.2 ± 0.43 UDL Star Flame Yellow 24.2 ± 1.37 17.6 ± 0.24 93.5 ± 4.33 9.5 ± 0.47 2.0 ± 0.14 4.1 ± 0.62 60.6 ± 1.34 18.4 ± 1.29 UDL The same letter indicates no significant difference in polyphenol content between ripening stages in the given hybrid according to Tukey HSD post hoc test; UDL: under detection limit.In Beibeihong, ripening increased catechin, luteolin-rutinoside, quercetin compounds, myricetin, and kaempferol-derivative (allp≤0.02 shown in Table 3), while it decreased vanillic acid content (p<0.001). 
In Bandai ripening increased all compounds (all p≤0.002) except vanillic acid and luteolin-rutinoside which statistically remained unchanged during ripening stages. In quercetin-glucoside, myricetin, and kaempferol-derivative the highest values were measured in the middle of the ripening. Most of the studies regarding polyphenol constitution of pungent pepper focus on the green (initial) and red (final) ripe stages but omit the intermediate or colour-break stage. Howard et al. [20] found that quercetin decreased, while luteolin did not change with ripening of Tabasco (C. frutescens). On the contrary, we found an increase of quercetin-related compounds in bothC. frutescens hybrids and also an increase of luteolin-rutinoside in Beibeihong and of luteolin-glucoside in Bandai.In Lolo the ripening significantly decreased vanillic acid (p<0.001) but increased catechin, luteolin-rutinoside, luteolin-glucoside, and myricetin (all p<0.001). In C3735 vanillic acid decreased (p=0.007) while catechin, naringenin-diglucoside, and myricetin increased (all p≤0.019). Howard et al. stated that quercetin had either increasing or decreasing tendency depending on cultivar; also no change was observed during maturity stages of certain cultivars onC. annuum peppers. We could only confirm the last statement that none of the quercetin-related compounds changed when the pods changed from green to red inC. annuum peppers studied.According to Materska and Perucka [19] the most abundant flavonoid compounds in the green stage were quertecin-3-O-L-rhamnoside and luteolin-related compounds, and with ripening those phytochemicals decreased. In the present work particularly in red stage contained higher amounts of luteolin-related in Lolo, while in quercetin-glycoside content no change was detected in bothC. annuum hybrids.The disappearances of flavonoids are parallel to capsaicinoids accumulation [35] because the synthesis of flavonoids may converge with the capsaicinoid pathways [36]. The only nonflavonoid phenolic acid detected in our peppers is vanillic acid, and it is the only polyphenol compound which decreased or stayed unchanged during ripening, while the flavonoids mostly increased with advance of ripening. At the same time the major capsaicinoids generally decreased or did not change even with ripening. Kawada and Iwai [37] found a direct relation between DC and vanillic acid; they fed rats with DC and then detected vanillic acid in a notable amount in the urine of the rats. This experiment may also support our findings that vanillic acid is certainly related to capsaicinoids and has similar dynamics during ripening in pungent pepper.According to Tsao [14], flavonols (kaempferol, quercetin, and myricetin) consist of highly conjugated bindings and a 3-hydroxy group, whose attributions are considered very important in evolving high antioxidant activity. In our hybrids the highest levels of the latter flavonoids were obtained at the orange or red stage that makes the pepper of higher nutritive value. ### 3.3. Ascorbic Acid By the applied HPLC method only L-ascorbic acid was found in the extract of all hybrids (Figure4). It was found that ascorbic acid increased during ripening in all hybrids (p≤0.001 shown in Table 4). In Beibeihong and Bandai after green stage a more notable increase was observed than after the colour-break stage where the ascorbic acid gradually increased, while in Bandai at the red stage the average of ascorbic acid was less than in orange stage. 
In Lolo the green and colour-break stage did not differ significantly, while the red stage contained the most. In C3735 a straight increase was observed. The increasing tendency in the investigated hybrids is in accordance with that found in previous works [12, 20, 24, 25] which concluded that the more ripened the pods were, the more ascorbic acid could be measured from them. With ripening the pepper pods store more reducing sugars [36], which are the precursors of L-ascorbic acid [38], and that explains the increasing vitamin C content with ripening in all hybrids included in our study. On the contrary, Shaha et al. [18] showed a different dynamics of the ascorbic acid accumulation, because they found the highest level in yellow (intermediate) stage and the declining level in the red mature stage. That agrees with our finding in Bandai, where the highest average values (1005.2±100.73 μg/g) were observed in the orange or colour-break stage (937.9±78.04 μg/g), although these are not significantly higher than that determined in red stage (787.4±131.21 μg/g). Probably it is also due to the high standard deviation present in the red stage.Table 4 Change in content of ascorbic acid in different chili hybrids as a function of ripening. The values represent means inμg/g fresh weight base ± standard deviation (n=3). Hybrid Ripening stage Ascorbic acid (μg/g) Beibeihong Green 355 ± 64.85a Colour-breaker 1503.4 ± 358.31b Orange 2085.7 ± 252.2bc Red 2483.8 ± 570.74c F-value 19.74 p value <0.001 Bandai Green 329.5 ± 58.88a Colour-breaker 937.9 ± 78.04b Orange 1005.2 ± 100.73b Red 787.4 ± 131.21b F-value 30.09 p value <0.001 Lolo Green 111.3 ± 14.01a Colour-breaker 451.5 ± 115.56a Red 1940.9 ± 533.57b F-value 28.57 p value 0.001 C3735 Green 315.1 ± 59.91a Colour-breaker 1522.5 ± 127.47b Red 2468.2 ± 58.93c F-value 449.65 p value <0.001 Fire Flame Red 3689.4 ± 39.50 Star Flame Yellow 3154.8 ± 160.61 The same letter indicates no significant difference in ascorbic acid content between ripening stages in the given hybrid according to Tukey HSD post hoc test.Figure 4 HPLC profile of vitamin C determination. The separation was performed on C18 Nautilus column with PDA detection at 244 nm. 1: L-ascorbic acid.The recommended daily allowance (RDA) is 60μg FW; according to Dias [39] 100 g fresh chili provides about 143.7 μg vitamin C. Focusing on the hybrids of the recent study at the green stage all of them failed to reach this value, while at colour-break stage Beibeihong and C3735 reached it and finally at the red stage all of them achieved the RDA. ### 3.4. Comparison of Major Compounds among the 6 Hybrids The comparison among the hybrids has been done on the main parameters: CAP, ascorbic acid, naringenin-diglucoside, Scoville heat unit, and dry matter (shown in Table5), at the final stage of the hybrids, which is generally considered as the most valuable in nutrition and having the most processing possibility. A higher dry matter signifies a better fruit quality and also a higher nutritional concentration when fresh weight basis is used to express nutritional parameters. We measured 25–30% dry matter content inC. frutescens, which produces more seeds and smaller pods, while in the peppers belonging toC. annuum this value lessens to 14.1–15.8%.Table 5 Capsaicin, ascorbic acid, and naringenin-diglucoside content (μg/g fresh weight base), pungency unit of Scoville, and dry matter of different chili hybrids. The values represent means ± standard deviation (n=3). 
### 3.4. Comparison of Major Compounds among the 6 Hybrids

The comparison among the hybrids was made on the main parameters: CAP, ascorbic acid, naringenin-diglucoside, Scoville heat unit, and dry matter (Table 5), at the final ripening stage of each hybrid, which is generally considered the most valuable in nutrition and the most suitable for processing. A higher dry matter content signifies a better fruit quality and also a higher nutritional concentration when a fresh weight basis is used to express nutritional parameters. We measured 25–30% dry matter content in C. frutescens, which produces more seeds and smaller pods, while in the peppers belonging to C. annuum this value decreases to 14.1–15.8%.

Table 5. Capsaicin, ascorbic acid, and naringenin-diglucoside content (μg/g fresh weight base), Scoville pungency units, and dry matter of different chili hybrids. The values represent means ± standard deviation (n=3).

| Hybrid | Capsaicin (μg/g) | Scoville heat unit | Ascorbic acid (μg/g) | Naringenin-diglucoside (μg/g) | Dry matter (%) |
| --- | --- | --- | --- | --- | --- |
| Beibeihong | 311.8 ± 63.25ab | 37999.8 ± 5761.66a | 2483.8 ± 570.74bc | 368.8 ± 30.77f | 25.8 ± 0.82d |
| Bandai | 1176.1 ± 112.1c | 98090.8 ± 9920.74c | 787.4 ± 131.21a | 276.5 ± 16.65e | 30.2 ± 0.41c |
| Lolo | 197 ± 92.13a | 33188.2 ± 5229.83a | 1940.9 ± 533.57b | 117.8 ± 7.18ab | 14.0 ± 0.61a |
| C3735 | 126.3 ± 35.95a | 23730.9 ± 3174.95a | 2468.2 ± 58.93bc | 217.2 ± 8.36d | 15.8 ± 0.93b |
| Fire Flame | 234.3 ± 45.23a | 40417.3 ± 7830.33a | 3689.4 ± 160.61d | 141.6 ± 4.19c | 14.1 ± 0.34ab |
| Star Flame | 440.8 ± 17.22b | 66201.2 ± 7132.51b | 3154.8 ± 160.61cd | 93.5 ± 4.26a | 14.4 ± 0.58ab |
| F-value | 94.64 | 48.43 | 27.62 | 146.17 | 357.48 |
| p value | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |

The same letter indicates no significant difference in the major components between the fully ripe stages of the 6 hybrids according to the Tukey HSD post hoc test.

The CAP content was found to be statistically the same in all red-coloured C. annuum hybrids, while the yellow hybrid Star Flame (440.8 ± 17.22 μg/g) contained more, and Bandai (1176.1 ± 112.1 μg/g) the most (p<0.001). Our findings roughly agree with the result of Sanatombi and Sharma [40], who showed that cultivars belonging to C. annuum contain less capsaicin than those of Capsicum frutescens; Beibeihong was an exception, because it statistically contained the same amount as the C. annuum hybrids. Focusing on the Scoville heat units, the highest CAP value in Bandai corresponds to the highest SHU (98090.8 ± 9920.74) observed among the hybrids investigated. Bernal et al. [33] measured 87300–276500 SHU in ripe C. frutescens peppers; the value found in the Bandai hybrid was close to the lower end of that range. Among the C. annuum hybrids, Star Flame was found to be a prominent pepper regarding SHU (66201.2 ± 7132.51) compared to the measurements of Topuz and Ozdemir [41] (9720 ± 2061.8) and Giuffrida et al. [42] (21034 ± 3579). The pungency profiles of Beibeihong and Bandai have not been investigated before. Compared to Tabasco (also belonging to C. frutescens), the SHU measured by Giuffrida et al. [42] (21348 ± 867) is below our values for the latter hybrids, although the CAP content determined by Giuffrida et al. [42] (917 ± 34 μg/g) lies between the values measured in Bandai (1176.1 ± 112.1 μg/g) and Beibeihong (311.8 ± 63.25 μg/g). Interestingly, while the Bandai hybrid had the highest CAP content, it also had the lowest ascorbic acid content. Topuz and Ozdemir [41] described a positive relation between ascorbic acid and capsaicinoid content in pungent peppers, which we could not confirm in the case of Bandai. The highest ascorbic acid was measured in ripe Fire Flame (3689.4 ± 160.61 μg/g); this value is well above the one measured in Hungarian spice pepper (approximately 1800 μg/g converted to a fresh weight basis) [12] and more than the one detected in New Mexican-type chili peppers (2766 μg/g) [25].

Naringenin-diglucoside content ranged from 93.5 ± 4.26 to 368.8 ± 30.77 μg/g and showed higher values in the C. frutescens hybrids compared to the C. annuum hybrids, probably because of the higher dry matter content of such peppers. Naringenin (belonging to the flavanones), being an initial compound in the chain of flavonoid synthesis [14], explains the high content present in our samples. Other studies also found naringenin-glucosides to be a dominant flavonoid in the peel of pungent pepper [43] and in sweet pepper alike [44].
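The article reports SHU alongside the HPLC capsaicinoid data without stating the conversion it used. A widely used approach, assumed here rather than taken from the paper, multiplies each major capsaicinoid's dry-weight concentration (ppm) by its pungency coefficient, about 16.1 SHU per ppm for CAP and DC and 9.3 for NDC. Under those assumptions, the sketch below reproduces Bandai's reported red-stage SHU within roughly one percent, using the Table 2 means and the 30.2% dry matter from Table 5.

```python
# Pungency coefficients (SHU per ppm of compound); an assumption, not stated in the paper.
SHU_PER_PPM = {"CAP": 16.1, "DC": 16.1, "NDC": 9.3}

def estimate_shu(fresh_ug_per_g, dry_matter_fraction):
    """Estimate SHU from fresh-weight capsaicinoid contents (ug/g)."""
    shu = 0.0
    for compound, ug_per_g in fresh_ug_per_g.items():
        ppm_dry = ug_per_g / dry_matter_fraction  # fresh- to dry-weight basis
        shu += ppm_dry * SHU_PER_PPM[compound]
    return shu

# Red-stage Bandai means from Table 2 (ug/g fresh weight); 30.2% dry matter.
bandai = {"CAP": 1176.1, "DC": 600.4, "NDC": 81.5}
print(f"Bandai estimate: ~{estimate_shu(bandai, 0.302):,.0f} SHU vs 98,090.8 reported")
```

Applying the same function to the other hybrids' Table 2 data lands within a few percent of most of the reported SHU values (Lolo is the main outlier), which is consistent with a dry-weight HPLC-based conversion of this kind.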
## 4. Conclusion

The investigated new hybrids can be regarded as good sources of phytochemicals for future applications. We recommend using the red-coloured hybrid Fire Flame to produce chili products with a high vitamin C content. On the other hand, when heat principles (capsaicinoids) are required for the food and pharmaceutical industries, the use of Star Flame and Bandai can be suggested, as they contain capsaicin levels of around 440.8 ± 17.22 μg/g and 1610.2 ± 91.46 μg/g, respectively. In order to obtain the maximum level of bioactive phytochemicals such as vitamin C, capsaicinoids, and polyphenols, it is important to characterize the ripening dynamics of each of these new hybrids. For example, the highest level of capsaicin was found at the green stage of ripening in the Bandai and C3735 hybrids, while in the other hybrids pungency was similar in all ripening stages.

---

*Source: 102125-2015-10-01.xml*
2015
# Use of Natural Products in Asthma Treatment

**Authors:** Lucas Amaral-Machado; Wógenes N. Oliveira; Susiane S. Moreira-Oliveira; Daniel T. Pereira; Éverton N. Alencar; Nicolas Tsapis; Eryvaldo Sócrates T. Egito

**Journal:** Evidence-Based Complementary and Alternative Medicine (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1021258

---

## Abstract

Asthma, a disease classified as a chronic inflammatory disorder induced by airway inflammation, is triggered by a genetic predisposition or antigen sensitization. Drugs currently used as therapies present disadvantages such as high cost and side effects, which compromise treatment compliance. Alternatively, traditional medicine has reported the use of natural products as alternative or complementary treatment. The aim of this review was to summarize the knowledge reported in the literature about the use of natural products for asthma treatment. The search strategy included scientific studies published between January 2006 and December 2017, using the keywords "asthma," "treatment," and "natural products." The inclusion criteria were as follows: (i) studies that aimed at elucidating the antiasthmatic activity of natural-based compounds or extracts using laboratory experiments (in vitro and/or in vivo); and (ii) studies that suggested the use of natural products in asthma treatment by elucidation of their chemical composition. Studies that did not report experimental data, as well as manuscripts in languages other than English, were excluded. Based on the findings from the literature search, aspects related to asthma physiopathology, epidemiology, and conventional treatment were discussed. Then, several studies reporting the effectiveness of natural products in asthma treatment were presented, highlighting plants as the main source. Moreover, natural products from animals and microorganisms were also discussed and their high potential in antiasthmatic therapy was emphasized. This review highlighted the importance of natural products as an alternative and/or complementary treatment for asthma, since they present reduced side effects and effectiveness comparable to the drugs currently used in treatment protocols.

---

## Body

## 1. Introduction

### 1.1. Physiopathology of Asthma

Asthma can be defined as a chronic inflammatory disorder that affects the lower airways, promoting an increase of bronchial reactivity and hypersensitivity and a decrease in airflow [1]. Furthermore, due to a complex interaction between genetic predisposition and environmental factors, besides multiple related phenotypes, this disease may be considered a heterogeneous disorder [2].

Sensitization to dust, pollen, and food represents the main environmental factor involved in asthma physiopathology [1]. These antigens are recognized by mast cells coated with IgE antibodies (Figure 1) and induce the release, by T lymphocytes and eosinophils, of proinflammatory cytokines such as tumor necrosis factor-α (TNF-α), the interleukins IL-2, IL-3, IL-4, and IL-5, GM-CSF, prostaglandins, histamine, and leukotrienes [3, 4]. This degranulation process promotes an increase in vascular permeability, leading to exudate and edema formation. This process is followed by leukocyte migration to the tissue affected by the inflammatory process through chemotaxis mediated by selectins and integrins [3, 6].
Subsequently, the neutrophil migration to the inflammatory site and the release of the leukotriene LTB4 induce the activation of type 2 cyclooxygenase (COX-2) and type 5 lipoxygenase (LOX-5), enhancing the expression of the C3b opsonin that produces reactive oxygen species (ROS) and thus promoting cell oxidative stress and pulmonary tissue injury [3, 7].

Figure 1. Scheme of the immune response induced by allergen or antigen stimulation in the early stages of asthma. GM-CSF: granulocyte-macrophage colony-stimulating factor; IL: interleukin; C3b: opsonin; LOX-5: lipoxygenase type 5; ROS: reactive oxygen species; COX-2: cyclooxygenase type 2; LTB4: leukotriene type B; PGD2: prostaglandin type D (adapted from Bradding et al. [6]).

Other mechanisms involved in asthma physiopathology are the inhalation of drugs, as well as respiratory viruses [8], which promote an immune response mediated by IgG antibodies. This process promotes an increase in the influx of inflammatory cells, which release the inflammatory mediators responsible for the damage process [9].

Based on the factors and mechanisms presented above, asthma symptoms can be observed at different levels according to the etiology and severity of clinical aspects, which define their classification [10]. The asthma severity is subdivided into (i) mild/low, also defined as intermittent/persistent, when the symptoms appear more than twice a week and their exacerbations can affect the daily activities of the patient; (ii) moderate, in which the daily occurrence of symptoms and their exacerbations affect the patient's activities, requiring the use of short-acting β2-adrenergic drugs; or (iii) severe asthma, in which the patient presents persistent symptoms, physical activity limitations, and frequent exacerbations [10]. Based on this classification, it is estimated that 60% of asthma cases are intermittent or persistent, 25% to 30% are moderate, and severe cases account for only 10% of the total. However, it is important to highlight that although severe asthmatics represent the minority of cases, they are responsible for high mortality and high hospitalization costs [11], evidencing the great need for efficient treatments for this disease.

### 1.2. Asthma Epidemiology

According to the World Health Organization, asthma affects about 300 million individuals across the world, regardless of a country's degree of development [12]. In the United Kingdom, asthma affects approximately 5.2 million individuals and is responsible for 60,000 hospital admissions per year [13], while in Brazil the annual incidence of hospital admissions due to asthma is around 173,442 patients, representing 12% of the total admissions for respiratory diseases in 2012 [14].

Furthermore, studies have demonstrated that asthma incidence and prevalence rates in different countries are not age related. In the United States of America, Albania, and Indonesia, the asthma prevalence is lower for children (around 8.4%, 2.1%, and 4.1%, respectively) when compared to adults [15]. On the other hand, in countries such as the United Kingdom and Costa Rica, children aged between 6 and 7 years represent approximately 32% of the asthma prevalence [16].
Additionally, incidence and prevalence can be directly influenced by the socioeconomic characteristics of specific areas, as demonstrated by analyses of the annual variation in asthma prevalence in Spain, where prevalence increased by 0.44% per year regardless of the age range studied. However, when the same data were analyzed individually for Spanish regions, the annual variation presented a different scenario, showing an increase or decrease according to the developmental degree of each region [17, 18]. Similar data were observed by Pearce et al. [19] and Schaneberg et al. [20], who demonstrated the influence of socioeconomic aspects on asthma. The studies showed a prevailing increase of asthma cases in metropolitan areas, a fact attributed to population growth, with consequent exposure to environmental factors, and to limited access to asthma therapy due to the high cost of the available medicines [19, 20]. Such phenomena directly interfere with treatment compliance [21–23], evidencing the importance of, and the need for, strategies that facilitate access to medicines for asthma therapy.

Studies that evaluate the importance of including antiasthmatic therapy in public health policy programs have demonstrated that asthma control can be achieved through a variety of approaches, promoting a 90% decrease in hospital admissions.

### 1.3. Asthma Treatment

The asthma treatment recommended by the Global Initiative for Asthma (GINA) consists, especially, in the reduction of symptoms in order to decrease the inflammatory process [26, 27]. However, since asthma presents a complex physiopathology associated with variable manifestations, the treatment can lead to different response levels. Thus, the evaluation of the clinical aspects associated with the treatment response is defined as the most adequate approach to achieve treatment success [28]. Asthma therapy strategies are based on the pulmonary (the main administration route in asthma therapy), oral, or intravenous administration of β2 agonist drugs (salbutamol, levalbuterol, terbutaline, and epinephrine), anticholinergics (ipratropium), corticosteroids (beclomethasone di- or monopropionate, ciclesonide, flunisolide, fluticasone propionate, mometasone furoate, triamcinolone acetonide, hydrocortisone, dexamethasone, budesonide, prednisone, prednisolone, and methylprednisolone), and xanthine drugs. Among these, the β2 agonists are often the drugs of first choice [13, 27].

To optimize the treatment for each patient, the drug dosage is determined by the patient's respiratory characteristics, mainly his/her respiratory rate. Patients with an increased respiratory rate, due to airway narrowing, present a low dispersion of the inhaled drug through the respiratory tract [29].
In these cases, or when there is an absence of response within the first two hours after treatment, hospitalization should be performed and adrenaline could be used, subcutaneously or intravenously, since the lack of response is indicative of mucosal edema formation, which can be decreased by the bronchodilator effect of adrenaline [30].

Overall, patients that present asthma exacerbation should be initially treated with the association of different dosages of corticosteroids and short-acting β2 agonists together with intranasal oxygen administration, allowing the stimulation of β2 receptors that results in bronchodilation due to the inhibition of cholinergic neurotransmission and, thus, the inhibition of mast cell degranulation [10]. Additionally, corticosteroids by the oral or inhaled route are used in uncontrolled persistent asthma patients due to their direct effect on the inflammation site [31]. Accordingly, they improve pulmonary function and decrease asthma episodes [32], reducing the hospitalizations and mortality of asthmatic patients [31]. Furthermore, because their systemic use can induce side effects, corticosteroids, mainly prednisone and prednisolone, are more commonly used in patients with severe persistent asthma who are not stabilized by other drugs [31].

In addition, xanthine drugs such as theophylline can also be used in asthma treatment, since they are able to promote the suppression of monocyte activation with consequent inhibition of TNF-α release. Further, they promote the inhibition of neutrophil activation and degranulation, inhibiting the catalytic activity of phosphodiesterase 4 (PDE4) and allowing a reduction in the inflammatory process [33].

Regardless of the wide variety and associations of antiasthmatic medicines and their ability to control asthma symptoms and to reduce asthma episodes and hospital admissions, the antiasthmatic drugs present several side effects, including nausea, headaches, and convulsions (xanthine class) [3, 30], cardiovascular effects (β-adrenergic receptor antagonists) [20], vomiting (PDE4 inhibitor drugs) [34–36], and osteoporosis, myopathies, adrenal suppression, and metabolic disturbances compromising the patients' growth (corticosteroids) [30, 35, 37, 38]. These side effects compromise the quality of life of the patients and significantly reduce treatment compliance.

Another important drawback of the conventional asthma treatment is its cost. In fact, the amount of money required for asthma treatments represents a significant expenditure for public health organizations. Such a situation has become a financial issue even for developed countries. In Sweden, for example, the cost of medicines for asthma treatment has increased since the 1990s and, in 2006, it was responsible for 11.6% of the total healthcare expenditure. Furthermore, according to projections, an annual increase of 4% in the costs of asthma management is expected [22].

Additionally, studies revealed that in Europe and in the United States of America, the sum of the direct and indirect estimated annual costs of asthma management is approximately €18 billion and US$13 billion, respectively. This high expenditure was associated with the high incidence of uncontrolled asthma patients, since they represent an expense up to 5-fold higher than controlled asthma patients [39] or than patients with other chronic diseases, as demonstrated in the study performed by O'Neil and colleagues [40].
These authors revealed that asthma costs up to £4,217 per person, while type II diabetes, chronic obstructive pulmonary disease, and chronic kidney disease together represented a cost of £3,630 [40].

Therefore, considering the therapies currently available, their side effects, and their high cost, the development of new therapeutic approaches or complementary treatments to the current asthma therapy becomes an important and essential strategy. In this context, the use of natural products allows easy access to treatment for all socioeconomic classes [41, 42] and shows advantages such as low cost, biocompatibility, and reduced side effects, besides wide biodiversity and renewability [43, 44]. In addition, natural products, whose complex matrices are supported by the literature as sources of bioactive compounds, represent one of the main forms of access to basic healthcare in traditional medicine [45]. Thus, the present review aimed at summarizing the main natural products reported in the literature that show antiasthma activity.
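As a toy illustration only, the verbal severity categories from Section 1.1 can be restated as a small decision function. The parameter names and cutoffs below paraphrase the review's wording and GINA-style categories; this is illustrative, not a clinical instrument.

```python
# Toy restatement of the severity categories described in Section 1.1.
# Illustrative only: parameter names and cutoffs paraphrase the review's wording.
def classify_asthma_severity(symptoms_per_week: int,
                             daily_symptoms: bool,
                             persistent_with_activity_limits: bool) -> str:
    if persistent_with_activity_limits:
        return "severe"      # persistent symptoms, activity limits, frequent exacerbations
    if daily_symptoms:
        return "moderate"    # daily symptoms; short-acting beta-2 agonists required
    if symptoms_per_week > 2:
        return "mild (intermittent/persistent)"
    return "below these classification thresholds"

# Example: symptoms three times a week, not daily, no persistent limitation -> mild.
print(classify_asthma_severity(3, daily_symptoms=False,
                               persistent_with_activity_limits=False))
```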
## 2. Natural Products as Alternative for Asthma Treatment

The use of natural products for the treatment of physiologic disorders, especially in association with other drugs, has been widely reported through ethnopharmacological studies, an important scientific tool for bioprospecting and the discovery of new bioactive compounds from natural sources [46].
Despite the wide scientific progress in chemical and pharmaceutical technology for synthesizing new molecules, drugs from natural sources still contribute tremendously to the discovery and development of new medicines [47]. These studies are based, initially, on the traditional use of natural products, which draws the attention of pharmaceutical companies due to their easy and economical use, allowing the companies to perform many studies that evaluate their therapeutic activities, toxicity, and safety [48].

Moreover, the use of natural products as complementary therapy represents an important alternative for the treatment of several diseases [49]. In the United States of America, the use of natural products, vitamins, and other dietary supplements as auxiliary treatments represents about 40% of the conventional therapies [50]. Among the diseases for which natural products are used, those of allergic and inflammatory character can be highlighted. In fact, according to the literature, alternative medicine associates the use of these products with biochemical mechanisms involved in immunomodulation, which could contribute to the management of these diseases [51].

The use of plant-based products for asthma treatment has been reported in traditional medicine for over 5000 years, since the Chinese use of infusions of Ephedra sinica, an immune system stimulant able to decrease asthma crises [20]. More recently, a study performed by Costa and colleagues described the main natural sources used for the treatment of asthma by Brazilian families from the Northeast Region of the country [49]. The study included beet, honey, onion, lemon, garlic, yarrow, and mint, demonstrating the wide variety of natural products used in the asthma treatment of children [49]. Additionally, other naturally derived products have been widely cited in asthma treatment, such as natural oils from plants and animals, which can be obtained by different extraction processes [52, 53].

Plant-derived natural oils represent the main natural products used in complementary asthma therapy due to the presence of phenylpropanoids and mono- and sesquiterpenes as the major bioactive compounds, which provide their anti-inflammatory, antifungal, antibacterial, and anesthetic properties [54–56]. Similarly, oils obtained from animal sources have been used. They are rich in a mixture of different saturated, mono-, and polyunsaturated fatty acids, as well as compounds from animal organs and secretions, which are responsible for their immunomodulatory action and the regulation of tissue oxidative capacity [57, 58]. The activity credited to the oils derived from plants and animals is related to the presence of those bioactive compounds, which can inhibit COX-2 and LOX-5. Additionally, these compounds are able to modulate immune cell function by reducing the levels of the IL-4, IL-5, and IL-13 cytokines, decreasing the activity and proliferation of NK cells, leading to an increase in the level of endogenous corticosteroids, contributing to the regulation of the NF-κB pathway, and reducing mucus production and inflammation in the lung tissues [59–61].

In this regard, Table 1 shows all products found in the studies included in this review after the inclusion criteria evaluation. Due to the wide variety of plant-derived products, only those with 3 or more citations were described in detail in this review.
On the other hand, due to the limited scientific investigation of the antiasthmatic activity of natural products from animal and microorganism sources, all studies that met the inclusion criteria are described in the next sections.

Table 1. List of natural compounds described in the literature reviewed.

| Product | Product form | Product source | Active compound | Compound class/type | Mechanism of action | Reference |
| --- | --- | --- | --- | --- | --- | --- |
| 1,8-Cineol | Isolated compound | Essential oil of Eucalyptus globulus leaves | 1,8-Cineol | Monoterpene | Reduces the expression of NF-κB target gene MUC2 | Greiner et al. [62] |
| 3-Methoxy-catalposide | Isolated compound | P. rotundum var. subintegrum extract | 3-Methoxy-catalposide | Iridoid glycoside | Inhibits the expression of cyclooxygenase (COX)-2, nitric oxide synthase (iNOS), and proinflammatory genes (IL-6, IL-1β, and TNF-α) | Ryu et al. [63] |
| Achyranthes aspera L. | Ethanolic extract | Roots | Not reported | Not reported | Bronchoprotective activity | Dey [64] |
| Ailanthus excelsa Roxb. | Aqueous extract | Barks | Not reported | Not reported | Bronchodilator and mast cell stabilizing activities | Kumar [65] |
| Allium cepa L. and quercetin | Extract and isolated compound | Methanolic extract and vegetable | Quercetin [2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-1-benzopyran-4-one; 3,3′,4′,5,6-pentahydroxyflavone] | Flavonoid | Reduce the production of proinflammatory cytokines (IL-4, IL-5, IL-13) and promote the relaxation of tracheal rings | Oliveira et al. [66] |
| Alstonia scholaris (L.) R. Br. | Extract | Leaves of Alstonia scholaris (L.) R. Br. | Scholaricine, 19-epi-scholaricine, vallesamine, picrinine | Alkaloid | Reduce the eosinophilia, the production of proinflammatory cytokine (IL-4), and the expression of serum IgE and eotaxin | Zhao et al. [67] |
| Amorphophallus konjac (konjac) | Gel extract | Not reported | Not reported | Plant | Not elucidated | Chua et al. [68] |
| Andropogon muricatus | Crude extract | Aerial parts | Vetivenes, vetivenol, vetivenic acid, and vetivenyl acetate | Sesquiterpenic compounds | Inhibit the Ca2+ channels and phosphodiesterase activity | Shah and Gilani [69] |
| Anoectochilus formosanus Hayata | Aqueous extract | Whole plant | Kinsenoside | Plant | Reduce the IL-4 production by Tregs and enhance the production of IL-12 and IFN-γ by Th1 differentiation | Hsieh et al. [70] |
| Artemisia maritima | Essential oil | Leaves | 1,8-Cineol, camphor, camphene, and β-caryophyllene | Terpenoid | Inhibit the Ca2+ channels and phosphodiesterase activity | Shah et al. [71] |
| Aster tataricus L. f. | Extract | Rhizomes | Kaempferol, aurantiamide, and astin C | Flavonoid | Inhibit the expression of NF-κB and promote the activation of the beta-2 adrenergic receptor | Chen and Zheng [72] |
| Aster yomena (Kitam.) Honda | Ethanolic extract | Leaves | Phenolic compounds not specified | Phenolic compounds | Attenuate the production of NO and IL-1β and suppress the expression of NF-κB; suppress the activation of TLR4 and reduce the intracellular ROS production | Kang et al. [73] |
| Baicalin | Isolated compound | Leaves and branch | 7-Glucuronic acid-5,6-dihydroxyflavone | Flavonoid | Suppresses the lipopolysaccharide-induced TNF-α expression and inhibits the cyclic adenosine monophosphate-specific phosphodiesterase 4 (PDE4) | Park et al. [74] |
| Baliospermum montanum Müll. Arg. (Euphorbiaceae) | Chloroformic and ethanolic extracts | Leaves | Alkaloids, triterpenoids, diterpenoids, and glycosides | Alkaloids, triterpenoids, diterpenoids, and glycosides | Stabilize the mast cell degranulation and decrease the histamine release | Venkatesh et al. [75] |
| Berry fruit | Polyphenolic extract | Not reported | Phenolic compounds not specified | Phenolic compounds not specified | Not reported | Power et al. [76] |
| Boswellia serrata, Boswellia carterii, and frankincense | Essential oil | Resinous part | β-Boswellic acid, acetyl-β-boswellic acid, 11-keto-β-boswellic acid, and acetyl-11-keto-β-boswellic acid | Boswellic acids | Inhibition of leukotriene biosynthesis | Hamidpour et al. [77]; Al-Yasiry and Kiczorowska [78] |
| Boswellia serrata, Glycyrrhiza glabra, and Curcuma longa | Essential oil extract and extract | Resinous part, licorice root, and turmeric root, respectively | Curcumin and β-boswellic acid | Polyphenol | Reduce the plasma levels of leukotriene C4, nitric oxide, and malondialdehyde | Houssen et al. [79] |
| Buffalo spleen lipid and a bacterial polypeptide | Extract | Animal-derived and microorganism-derived, respectively | Not reported | Not reported | Reduce the tracheal responsiveness and the amount of white blood cells | Neamati et al. [80] |
| Bullfrog oil (Rana catesbeiana Shaw) | Oil | Bullfrog adipose tissue | Oleic, linolenic, stearic, palmitic, and myristic acids; eicosapentaenoic and docosahexaenoic acids | Fatty acids | Not elucidated | Amaral-Machado et al. [81] |
| Bu-zhong-yi-qi-tang | Aqueous extract | Root of Astragalus mongholicus Bunge, Panax ginseng C.A. Mey., Angelica dahurica Fisch. ex Hoffm., and Bupleurum chinense DC.; rhizome of Zingiber officinale Rosc., Atractylodes macrocephala Koidz., and Cimicifuga foetida L.; fruit of Ziziphus jujuba Mill. var. inermis Rehd.; pericarp of Citrus reticulata Blanco; root and rhizome of Glycyrrhiza uralensis Fisch. | Not reported | Not reported | Reduce the levels of eotaxin, Th2-related cytokines (IL-4, IL-5, IL-13), IgE, and eosinophilia | Yang et al. [82] |
| Caenorhabditis elegans | Crude extract | Microorganism | Not reported | Not reported | Modulate the immunologic Th1/Th2 response | Huang et al. [83] |
| Camellia sinensis L. | Aqueous extract | Not reported | Polyphenols and flavonoids | Polyphenols and flavonoids | Not elucidated | Sharangi [84] |
| Carica papaya | Extract | Leaves | Tannins, alkaloids, steroids, and quinones | Tannins, alkaloids, steroids, and quinones | Reduce the expression of IL-4, IL-5, eotaxin, TNF-α, NF-κB, and iNOS | Elgadir et al. [85] |
| Carum roxburghianum | Crude extract | Seeds | Hydrocarbons, wax esters, sterol esters, triacylglycerols, free fatty acids, diacylglycerols, lysophosphatidylethanolamines, and phosphatidylinositols | Hydrocarbons, wax esters, sterol esters, triacylglycerols, free fatty acids, diacylglycerols, lysophosphatidylethanolamines, and phosphatidylinositols | Bronchodilator activity | Khan et al. [86] |
| Chitin | Isolated compound | Shrimp | Chitin | Polysaccharide | Not elucidated | Ozdemir et al. [87] |
| Chrysin | Isolated compound | Marketable synthetic compound | 5,7-Dihydroxy-2-phenyl-1-4H-chromen-4-one | Flavonoid | Reduces the histamine release and decreases the gene expression of proinflammatory cytokines (IL-1β, IL-4, IL-6, TNF-α, NF-κB) | Yao et al. [88]; Yao et al. [89]; Bae et al. [90] |
| Cissampelos sympodialis Eichl. | Extract | Leaves | Warifteine | Alkaloid | Reduce the expression of IL-3 and IL-5, increase the IL-10 level, and decrease the density of inflammatory cells | Cerqueira-Lima et al. [91] |
| Citrus tachibana | Ethanolic extract | Leaves | Coumarins, carotenoids, and flavonoids | Coumarins, carotenoids, and flavonoids | Modulate the Th1/Th2 imbalance by inhibition of NF-κB signaling and histamine secretion | Bui et al. [92] |
| Conjugated linoleic acid | Conjugated compound | Fatty tissue from ruminants | cis,cis-9,12-Octadecadienoic acid | Polyunsaturated fatty acid | Modulate the PPARγ-dependent and PPARγ-independent inflammation signaling, the eicosanoid production, and the humoral immune response | MacRedmond and Dorscheid [93] |
| Coumarins | Isolated compound | Synthetic compounds | 6,7-Dihydroxycoumarin, 7-hydroxycoumarin, and 4-methyl-7-hydroxycoumarin | Coumarin | Not elucidated | Sanchez-Recillas et al. [94] |
| Crocetin | Isolated compound | Marketable synthetic compound | Crocetin | Carotenoid | Activates the FOXP3 signaling through TIPE2 | Ding et al. [95] |
| Curcumin | Isolated compound | Curcuma longa | (1E,6E)-1,7-Bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione | Polyphenol | Inhibits the Notch1-GATA3 signaling pathway | Zheng et al. [96]; Chong et al. [97] |
| Cyclotheonamides | Isolated compound | Marine | Not reported | Cyclic pentapeptides | Inhibit the human β-tryptase | Schaschke and Sommerhoff [98] |
| Diallyl disulfide | Isolated compound | Garlic oil | Diallyl disulfide | Organosulfur | Activates the Nrf-2/HO-1 pathway and suppresses NF-κB | Shin et al. [99] |
| Dietary plant stanol esters | Not reported | Fatty acid | Not reported | Stanol ester | Reduce the total plasma IgE, IL-1β, IL-13, and TNF-α | Brüll et al. [100] |
| Dioscorea nipponica | Isolated compound | Not reported | Diosgenin | Steroidal saponin | Suppress the secretion of TNF-α, IL-1β, and IL-6 | Junchao et al. [101] |
| D-α-Tocopheryl acetate | Isolated compound | Natural source | D-α-Tocopheryl acetate | Vitamin | Inhibits the oxidative stress; modulates the allergic inflammation and the airway hyperresponsiveness | Hoskins et al. [102] |
| Echinodorus scaber | Hydroethanolic extract | Leaves | Vitexin, rutin, and gallic acid | Phenolic compounds | Decrease the migration of inflammatory cells and reduce the Th2 cytokine and IgE levels | Rosa et al. [103] |
| Eclipta prostrata (L.) L. | Methanolic extract | Whole plant | Wedelolactone and demethylwedelolactone | Coumestan | Reduce the bronchial hyperresponsiveness and the production of Th2 cytokines | De Freitas Morel et al. [104] |
| Ecklonia cava | Marine alga | Brown macroalgae | Fucodiphloroethol and phlorofucofuroeckol A | Phlorotannins | Downregulate the FcεRI expression and block the IgE-FcεRI binding | Vo et al. [105] |
| Ephedra intermedia | Crude extract | Aerial parts | Ephedrine and pseudoephedrine | Alkaloids | Not elucidated | Gul et al. [106] |
| Ellagic acid | Isolated compound | Marketable synthetic compound | Ellagic acid | Polyphenol | Inhibits the activation of NF-κB | Zhou et al. [107] |
| Emodin | Isolated compound | Roots and barks of Rheum palmatum and Polygonum multiflorum | 1,3,8-Trihydroxy-6-methylanthraquinone | Anthraquinone | Suppresses the characteristics of airway inflammation, mucin components, and chitinase protein expression; inhibits the NF-κB signaling pathway | Shrimali et al. [108] |
| Euphorbia hirta | Aqueous extract | Not reported | Galloylquinic acid, phorbol acid, leucocyanidol, quercitol, camphol, quercetin, chlorophenolic acid, shikimic acid | Tannins, leucoanthocyanidins, flavonoids, and phenolic compounds | Not elucidated | Kunwar et al. [109] |
| Sesame | Fixed oil | Seeds | 5,5′-(1S,3aR,4S,6aR)-Tetrahydro-1H,3H-furo[3,4-c]furan-1,4-diylbis-1,3-benzodioxole | Polyphenol | Decreases the levels of IL-4, IL-5, IL-13, and serum IgE; reduces the amount of inflammatory cells and the eosinophil infiltration | Lin et al. [41] |
| Farnesol | Isolated compound | Fruits, leaves, flowers | 3,7,11-Trimethyl-2,6,10-dodecatrien-1-ol | Sesquiterpene | Increases the level of IgG2a/IgE and reduces the total IgE, IgA, IgM, IgG | Ku and Lin [110] |
| Feverfew (Tanacetum parthenium L.) | Extract | Leaves and parts above the ground | Parthenolide | Sesquiterpene | Inhibit the IκB kinase complex and the histamine release | Pareek et al. [111] |
| Flavonoids | Isolated compound | Vegetables (capers, tomatoes, fennel, sweet potato leaves, etc.), fruits (apple, apricots, grapes, plums, and berries), cereals (green/yellow beans and buckwheat) | Not reported | Polyphenol | Prevent the IgE synthesis and the mast cell degranulation; reduce the airway hyperresponsiveness and inhibit the human phospholipase A2 | Castell et al. [112]; Lattig et al. [113] |
| Fumaria parviflora Linn. | Aqueous methanolic extract | Aerial parts | Fumarophycine, cryptopine, sanactine, stylopine, bicuculline, adlumine, perfumidine, and dihydrosanguirine | Alkaloids | Block the muscarinic receptors and the Ca2+ channels | Najeeb ur et al. [114] |
| Galangin | Synthetic compound | Alpinia officinarum | 3,5,7-Trihydroxy-2-phenylchromen-4-one | Flavonol | Inhibits the TGF-β1 signaling by ROS generation and MAPK/Akt phosphorylation | Liu et al. [115] |
| Geastrum saccatum | Solid extract | Fruiting bodies of Geastrum saccatum | β-Glucose | Polysaccharide | Inhibit the NOS and COX | Guerra Dore et al. [116] |
| Ginsenosides | Synthetic compound | Root of ginseng | Ginsenosides | Glycoside | Suppress the IL-4 level, increase the production of IFN-γ, and inhibit the mucus overproduction and recruitment of eosinophils | Chen et al. [117] |
| Grape seed | Extract | Seeds | Not reported | Not reported | Not elucidated | Mahmoud [118] |
| Gymnema sylvestre R. Br. | Extract | Leaves | Not reported | Tannins and saponins | Not elucidated | Tiwari et al. [119]; Di Fabio et al. [120] |
| Herba epimedii | Extract | Leaves | Icariin | Flavonoids, iridoid glycosides, and alkaloids | Inhibit the mRNA expression of TGF-β1 and TGF-β2; modulate the TGF-β signaling | Tang et al. [121] |
| Higenamine | Isolated compound | Tinospora crispa, Nandina domestica Thunberg, Gnetum parvifolium C.Y. Cheng, Asarum heterotropoides | 1-[(4-Hydroxyphenyl)methyl]-1,2,3,4-tetrahydroisoquinoline-6,7-diol | Alkaloid | Not elucidated | Zhang et al. [122] |
| Homoegonol | Isolated compound | Styrax japonica | 3-[2-(3,4-Dimethoxyphenyl)-7-methoxy-1-benzofuran-5-yl]propan-1-ol | Lignan | Reduces the inflammatory cell count and Th2 cytokines | Shin et al. [123] |
| Hypericum sampsonii | Isolated compound | Aerial parts | Not reported | Polycyclic polyprenylated acylphloroglucinols | Not elucidated | Zhang et al. [124] |
| Justicia pectoralis | Extract | Aerial parts | 7-Hydroxycoumarin | Coumarin | Decrease the tracheal hyperresponsiveness and the IL-1β and TNF-α levels | Moura et al. [125] |
| Juniperus excelsa | Crude extract | Aerial parts | (+)-Cedrol, (+)-sabinene, (+)-limonene, terpinolene, endo-fenchol, cis-pinene hydrate, α-campholena, camphor, borneol, triene cycloheptane 1,3,5-trimethylene, β-myrcene, o-allyl toluene | Anthraquinones, flavonoids, saponins, sterols, terpenoids, and tannins | Inhibit the Ca2+ influx and the phosphodiesterase activity | Khan et al. [126] |
| Kaempferol | Isolated compound | Biotransformation of synthetic kaempferol by genetically engineered E. coli | Kaempferol-3-O-rhamnoside | Flavonoid | Reduces the inflammatory cell number; suppresses the production of Th2 cytokines and TNF-α | Chung et al. [127] |
| Kefir | Isolated compound | Kefir grains | Kefiran | Microorganism derived | Reduces the inflammatory cell number and decreases the levels of IL-4, IL-13, IL-5, and IgE | Kwon et al. [128]; Lee et al. [129] |
| Laurus nobilis L. | Isolated compound | Leaves of Laurus nobilis L. | Magnolialide | Sesquiterpene | Inhibit the mast cell degranulation and reduce the IL-4 and IL-5 production | Lee et al. [130] |
| Lepidium sativum | Crude extract | Seeds | Ascorbic acid, linoleic acid, oleic acid, palmitic acid, stearic acid | Vitamin and fatty acids | Promote an anticholinergic effect, inhibit the Ca2+ influx, and inhibit the phosphodiesterase activity | Rehman et al. [131] |
| L-Theanine | Isolated compound | Green tea of Camellia sinensis | L-Theanine (N-ethyl-L-glutamine) | Amino acid | Reduces the ROS production and decreases the levels of NF-κB and MMP-9 | Hwang et al. [132] |
| Luteolin | Isolated compound | Perilla frutescens | 2-(3,4-Dihydroxyphenyl)-5,7-dihydroxy-4-chromenone | Flavonoid | Inhibits the mucus overproduction and the GABAergic system | Shen et al. [133] |
| Bacterial lysate (OM-85 Broncho-Vaxom) | Extract | H. influenzae, S. pneumoniae, Klebsiella pneumoniae, smelly nose Klebsiella, S. aureus, Streptococcus pyogenes, Streptococcus viridans, Neisseria catarrhalis | Not reported | Not reported | Increase the levels of IL-4, IL-10, and IFN-γ | Lu et al. [134] |
| Mangifera indica L. extract (Vimang®) | Extract | Stem bark | Mangiferin (1,3,6,7-tetrahydroxyxanthone-C2-β-D-glucoside) | Xanthone | Inhibit the IgE production, the histamine release, and the mast cell degranulation; decrease the MMP-9 activity | Rivera et al. [135] |
| Mangifera indica L. | Aqueous extract | Barks | Mangiferin (1,3,6,7-tetrahydroxyxanthone-C2-β-D-glucoside) | Xanthone | Reduce the inflammatory cell recruitment and the airway hyperresponsiveness; increase the Th2 cytokines and attenuate the increase of the PI3K activity | Alvarez et al. [136] |
| Mangosteen | Isolated compound | Garcinia mangostana Linn. | α- and γ-Mangostin | Xanthone | Inhibits the histamine release and modulates the cytokine production | Jang et al. [137] |
| Marine bioactives | Isolated compound | Marine sponges Petrosia contignata and Xestospongia bergquisita | Contignasterol and xestobergsterol | Steroids | Upregulation of TNF-β and IL-10 expression | D'Orazio et al. [138] |
| Marshallagia marshalli | Isolated compound | Marshallagia marshalli | Secretory/excretory antigen | Microorganism derived | Prevent the release of TNF-α and IL-1β; suppress the neutrophil migration | Jabbari et al. [139] |
| Mikania laevigata and M. glomerata | Extract | Leaves | Dihydrocoumarin, coumarin, spathulenol, hexadecanoic acid, 9,12-octadecadienoic acid, 9,12,15-octadecatrienoic acid, cupressenic acid, kaurenol, kaurenoic acid, isopropyloxigrandifloric acid, isobutyloxy-grandifloric acid | Coumarins, terpenoids, steroids, and flavonoids | Not elucidated | Napimoga and Yatsuda [140] |
| Milk and colostrum | Conjugated compound | Bovine milk | Conjugated linoleic acid | Fatty acid | Modulate the cytokine and antibody (IgE, IgM) production, interferon and NO synthesis, and iNOS activity; modulate the mast cell degranulation | Kanwar et al. [141] |
| Monoterpenes | Isolated compound | Essential oil of several medicinal plants (Matricaria recutita, Boswellia carterii, Pelargonium graveolens, Lavandula angustifolia, Citrus limon, Melaleuca alternifolia, Melaleuca viridiflora, Santalum spicatum, Cedrus atlantica, and Thymus vulgaris) | Hydroxydihydrocarvone, fenchone, α-pinene, (S)-cis-verbenol, piperitenone oxide, α-terpinene, α-terpineol, terpinen-4-ol, α-carveol, menthone, pulegone, geraniol, citral, citronellol, perillyl alcohol, perillic acid, β-myrcene, carvone, limonene, thymol, carvacrol, linalool, linalyl acetate, borneol, l-borneol, bornyl acetate, terpineol, thymoquinone, thymohydroquinone, 1,8-cineol, l-menthol, menthone, and neomenthol | Terpenoids | Reduce the expression of NF-κB target gene MUC2 | Cassia et al. [142] |
| Mandevilla longiflora | Hydroethanolic extract | Plant xylopodium | Ellagic acid, hesperidin, luteolin, naringin, naringenin, and rutin | Polyphenol and flavonoids | Decrease the eosinophil, neutrophil, and mononuclear cell migration in BALF and by histopathological analysis; decrease the IL-4, IL-5, IL-13, IgE, and LTB4 levels | Almeida et al. [143] |
| Morus alba L. | Isolated compound | Root bark | Moracin M (5-(6-hydroxy-1-benzofuran-2-yl)benzene-1,3-diol) | Not reported | Inhibit the PDE4 | Chen et al. [144] |
| Haemanthus coccineus | Extract | Dried bulbs | Narciclasine | Alkaloid | Inhibit the edema formation, the leucocyte infiltration, and the cytokine synthesis in vivo; block the interaction between leucocytes and endothelial cells, the activation of isolated leucocytes (cytokine synthesis and proliferation) and of primary endothelial cells (adhesion molecule expression) in vitro; suppress the NF-κB-dependent gene transcription | Fuchs et al. [145] |
| Naringin | Isolated compound | Common grapefruit | Naringin | Flavone | Attenuates the bronchoconstriction by reduction of calcium influx | Wang et al. [146] |
| Nelumbo nucifera | Extract | Leaves | Nuciferine and aporphine | Alkaloids | Attenuate the bronchoconstriction by reduction of calcium influx | Yang et al. [147] |
| Nigella sativa | Oil | Seeds | Thymoquinone (2-isopropyl-5-methyl-1,4-benzoquinone) | Quinone | Decrease the NO and IgE levels; increase the IFN-γ | Salem et al. [148]; Koshak et al. [149] |
| Nujiangexanthone A | Isolated compound | Leaves of Garcinia nujiangensis | 1,2,5,6-Tetrahydroxy-3-methoxy-4,7,8-tri(3-methylbut-2-enyl)xanthone | Xanthone | Suppresses the IgE/Ag-induced activation and degranulation of mast cells; suppresses the production of cytokines and eicosanoids through inhibiting Src kinase activity and Syk-dependent pathways; inhibits the release of histamine and the generation of PGD2 and leukotriene C4; inhibits the increase of the IL-4, IL-5, IL-13, and IgE levels; inhibits the cell infiltration and the increased mucus production | Lu et al. [150] |
| Oleanolic acid | Synthetic compound | Forsythia viridissima | Oleanolic acid | Triterpenoid | Modulates the transcription factors T-bet, GATA-3, RORγt, and Foxp3 | Kim et al. [151] |
| Omega-3 | Isolated compound | Fish oil | n-3 Polyunsaturated fatty acid | Fatty acid | Decreases the IL-17 and TNF-α levels | Hansen et al. [152]; Farjadian et al. [153] |
| Organic acids | Isolated compound | Berberis integerrima and B. vulgaris fruits | Malic, citric, tartaric, oxalic, and fumaric acids | Organic acids | Inhibit the Th2 cytokines | Ardestani et al. [154]; Shaik et al. [155] |
| Oroxylin A | Isolated compound | Scutellariae radix | 5,7-Dihydroxy-6-methoxy-2-phenyl-4H-1-benzopyran-4-one | Flavonoid | Reduces the airway hyperactivity; decreases the levels of IL-4, IL-5, IL-13, and IgE in BALF | Lu et al. [156]; Zhou et al. [157] |
| Oxymatrine | Isolated compound | Root of Sophora flavescens Aiton (Fabaceae) | Oxymatrine | Alkaloid | Inhibits the eosinophil migration and the IL-4, IL-5, IgE, and IL-13 levels; inhibits the expression of CD40 protein | Zhang et al. [158] |
| P. integerrima gall and Pistacia integerrima Stew. ex Brand | Methanolic and crude extract | Galls and whole plant | Not reported | Carotenoids, terpenoids, catechins, and flavonoids | Attenuate the TNF-α, IL-4, and IL-5 expression levels and the pulmonary edema by elevation of the AQP1 and AQP5 expression levels | Rana et al. [159]; Bibi et al. [160] |
| Paeonia emodi Royle | Extract | Rhizomes | 1β,3β,5α,23,24-Pentahydroxy-30-nor-olean-12,20(29)-dien-28-oic acid; 6α,7α-epoxy-1α,3β,4β,13β-tetrahydroxy-24,30-dinor-olean-20-ene-28,13β-olide; paeonin B; paeonin C; methyl grevillate; 4-hydroxybenzoic acid; and gallic acid | Terpenoids and phenolic compounds | Inhibits the lipoxygenase activity | Zargar et al. [161] |
| Petasites japonicus | Extract | Leaves | Petatewalide B | Not reported | Inhibit the degranulation of β-hexosaminidase in mast cells, the iNOS induction, and the NO production; inhibit the accumulation of eosinophils, macrophages, and lymphocytes in BALF | Choi et al. [162] |
| Peucedanum praeruptorum Dunn | Extract | Roots | Dihydropyranocoumarin, linear furocoumarins, and simple coumarin | Coumarins | Attenuate the airway hyperreactivity and Th2 responses | Xiong et al. [163] |
| Peucedani Radix | Extract | Roots | Nodakenin, nodakenetin, pteryxin, praeruptorin A, and praeruptorin B | Not reported | Inhibit the Th2 cell activation | Lee et al. [164] |
| Eryngium | Extract | Leaves, fruits, and roots | A1-Barrigenol, R1-barrigenol, tiliroside, kaempferol 3-O-β-D-glucoside-7-O-α-L-rhamnoside, rutin, agasyllin, grandivittin, aegelinol benzoate, aegelinol, R-(+)-rosmarinic acid, and R-(+)-3′-O-β-D-glucopyranosyl rosmarinic acid | Phenol, flavonoids, tannins, and saponins | Not elucidated | Erdem et al. [165] |
| Pericampylus glaucus | Extract | Stems, leaves, roots, and fruits | Periglaucines A-D and mangiferonic acid | Alkaloids, terpenoids, isoflavones, and sterols | Inhibit the COX enzyme activity | Shipton et al. [166] |
| Aquilaria malaccensis | Ethanolic extract | Seeds | Aquimavitalin | Phorbol ester | Inhibit the mast cell degranulation | Korinek et al. [167] |
| Phytochemicals | Isolated compound | Several medicinal plants | Luteolin, kaempferol, quercetin, eudesmin, magnolin, woorenoside, zerumbone, aucubin, triptolide, nitocine, berberine, and piperine | Flavonoids, lignans, terpenoids, and alkaloids | Suppress the TNF-α expression | Iqbal et al. [168] |
| Picrasma quassioides (D.Don) Benn. | Alcoholic extract | Not reported | 4-Methoxy-5-hydroxycanthin-6-one | Alkaloid | Decreases the inflammatory cell count in BALF; reduces the IL-4, IL-5, IL-13, and IgE levels; reduces the airway hyperresponsiveness; attenuates the recruitment of inflammatory cells and the mucus production in the airways; reduces the overexpression of inducible nitric oxide synthase (iNOS) | Shin et al. [169] |
| Pinus maritima (Pycnogenol®) | Extract | Barks | Procyanidin | Flavonoid | Decrease the NO production, the inflammatory cell count, and the levels of IL-4, IL-5, IL-13, and IgE in BALF or serum; reduce the IL-1β and IL-6 levels and the expression of iNOS and MMP-9; enhance the expression of heme oxygenase (HO)-1; attenuate the airway inflammation and mucus hypersecretion | Shin et al. [170] |
| Ping chuan ke li | | | | | Not elucidated | Wang et al. [171] |
| Piperine | Isolated compound | Piper nigrum (black pepper) and Piper longum (long pepper) | Piperine | Alkaloid | Inhibits the eosinophil infiltration and airway hyperresponsiveness by suppressing T cell activity and Th2 cytokine production | Chinta et al. [172] |
| Piperlongumine | Isolated compound | Piper longum | Piperlongumine (5,6-dihydro-1-[(2E)-1-oxo-3-(3,4,5-trimethoxyphenyl)-2-propenyl]-2(1H)-pyridinone) | Alkaloid | Inhibits the activity of the inflammatory transcription factors NF-κB and signal transducer and activator of transcription (STAT)-3, as well as the expression of IL-6, IL-8, IL-17, IL-23, matrix metallopeptidase (MMP)-9, and intercellular adhesion molecule (ICAM)-1; suppresses the permeability and leukocyte migration and the production of TNF-α, IL-6, and extracellular-regulated kinases (ERK) 1/2, along with the activation of NF-κB | Prasad and Tyagi [173] |
| Piper nigrum | Ethanolic extract | Not reported | Piperine | Alkaloid | Inhibit the Th2/Th17 responses and the mast cell activation | Bui et al. [174]; Khawas et al. [175] |
| Plectranthus amboinicus (Lour.) Spreng. | Ethanol, methanol, and hexane extracts | Aerial parts | Rosmarinic acid, shimobashiric acid, salvianolic acid L, rutin, thymoquinone, and quercetin | Flavonoids | Not elucidated | Arumugam et al. [176] |
| Podocarpus sensu latissimo | Extract | Barks | 3-Methoxyflavones and 3-O-glycosides | Flavonoids | | Abdillahi et al. [177] |
| Polyphenols and their compounds | Isolated compound | Provinol and flavin-7 | Quercetin and resveratrol | Polyphenol | Decrease the IL-4 and IL-5 levels, the airway hyperresponsiveness, and the mucus overproduction | Joskova et al. [178] |
| Propolis | Isolated compound | Honey bees from several plants | Pinocembrin and caffeic acid phenethyl ester | Polyphenol and terpenoids | Inhibits TGF-β1 | Kao et al. [179] |
| Psoralea corylifolia | Extract | Fruits | 7-O-Methylcorylifol A, 7-O-isoprenylcorylifol A, and 7-O-isoprenylneobavaisoflavone | Flavonoids | Inhibit the N-formyl-L-methionyl-L-leucyl-L-phenylalanine (fMLP)-induced O2− generation and/or elastase release | Chen et al. [180] |
| Quercetin | Isolated compound | Tea, fruits, and vegetables | 2-(3,4-Dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one | Flavonoid | Inhibits LOX and PDE4; reduces the leukotriene and histamine release with a decrease in the IL-4 level; inhibits the prostaglandin release and the human mast cell activation by Ca2+ influx | Townsend et al. [181]; Mlcek et al. [182] |
| Radix Rehmanniae Preparata | Extract | Not reported | Catalpol | Glycoside | Inhibit the IgE secretion; decrease IL-4 and IL-5; inhibit the eosinophil infiltration and suppress eotaxin and its receptor CCR3; reduce the IL-5Rα levels | Chen et al. [183] |
| Resveratrol | Isolated compound | Skin and barks of red fruits | Resveratrol (3,4,5-trihydroxystilbene) | Polyphenol | Decreases the eosinophilia; reduces the neutrophil migration and inhibits the PGD2 release; decreases IL-4 and IL-5 as well as the hyperresponsiveness and mucus production | Lee et al. [184]; Hu et al. [185]; Chen et al. [186] |
| Schisandra chinensis | Extract | Dried fruits | α-Cubebenoate | Not reported | Suppress the bronchiolar structural changes; inhibit the accumulation of lymphocytes, eosinophils, and macrophages in BALF; suppress IL-4, IL-13, and TGF-β1; increase the intracellular Ca2+ | Lee et al. [187] |
| Sea cucumber (Holothurians) | Tonic | Marine animal (sea cucumber) | Holothurin A3, pervicoside A, and fuscocinerosides A | Toxins | Reduce the COX enzymatic activity | Guo et al. [188] |
| Selaginella uncinata (Desv.) | Extract | Dried herbs | Amentoflavone, hinokiflavone, and isocryptomerin | Flavonoids | Attenuate the hyperresponsiveness and goblet cell hyperplasia; decrease the IL-4, IL-5, IL-13, and IgE levels in serum; upregulate the T2R10 gene expression and downregulate the IP3R1 and Orai1 gene expression; suppress the eotaxin, NFAT1, and c-Myc protein expression | Yu et al. [189] |
| Selaginella pulvinata | Isolated compound | Air-dried powder of the whole plant of S. pulvinata | Selaginpulvilin A, selaginpulvilin B, and selaginpulvilin C | Phenol | Inhibit the PDE4 | Liu et al. [190] |
| Sideritis scardica | Extract | Leaves | Echinacoside, verbascoside, luteolin, apigenin, caffeic acid, vanillic acid | Glycosides, flavonoids, and phenolic acids | Not elucidated | Todorova and Trendafilova [191] |
| Siegesbeckia glabrescens | Extract | Aerial roots | 3,4′-O-Dimethylquercetin, 3,7-O-dimethylquercetin, 3-O-methylquercetin, and 3,7,4′-O-trimethylquercetin | Flavonoids | Reduce the inflammatory cell infiltration in BALF; decrease IL-4, IL-5, IL-13, eotaxin, and IgE; reduce the airway inflammation and mucus overproduction; decrease the iNOS and COX-2 expression and reduce the NO levels | Jeon et al. [192] |
| Sitostanol | Isolated compound | Marketable synthetic compound | Sitostanol | Steroid | Suppresses the IL-4 and IL-13 release | Brüll et al. [193] |
| Soft coral | Isolated compound | Sarcophyton ehrenbergi | Not reported | Prostaglandins | Inhibits PDE4 | Cheng et al. [194] |
| Solanum paniculatum L. | Extract | Fruits | Stigmasterol and β-sitosterol | Steroid | Reduce the IL-4 and NO levels; decrease IFN-γ without changes in the IL-10 levels; reduce the NF-κB, TBET, and GATA3 gene expression | Rios et al. [195] |
| Squill (Drimia maritima (L.) Stearn) oxymel | Crude extract | Not reported | Scillaren A, scillirubroside, scilliroside, scillarenin, and proscillaridin A | Glycosides | Not elucidated | Nejatbakhsh et al. [196] |
| Sorbus commixta Hedl. (Rosaceae) | Methanolic extract | Fruits | Neosakuranin | Glycosides | Not elucidated | Bhatt et al. [197] |
| Thuja orientalis | Extract | Fruits | Cupressuflavone, amentoflavone, robustaflavone, afzelin, (+)-catechin, quercetin, hypolaetin 7-O-β-xylopyranoside, isoquercitrin, and myricitrin | Flavonoids | Reduce the nitric oxide production and the relative mRNA expression levels of inducible nitric oxide synthase (iNOS), IL-6, cyclooxygenase-2, MMP-9, and TNF-α in vitro; decrease the inflammatory cell counts in BALF; reduce the IL-4, IL-5, IL-13, eotaxin, and IgE levels and the airway hyperresponsiveness in vivo; attenuate the mucus hypersecretion | Shin et al. [198] |
| Tonggyu-tang | Extract | Ledebouriella divaricata Hiroe, Angelica koreanum Kitagawa, Angelica tenuissima Nakai, Cimicifuga heracleifolia Kom., Pueraria thunbergiana Benth., Ligusticum wallichii var. officinale Yook., Atractylodes lancea DC., Thuja orientalis L., Ephedra sinica Stapf., Zanthoxylum schinifolium S.Z., Asarum sieboldii var. seoulense Nakai, Glycyrrhiza glabra, Astragalus membranaceus var. mongholicus Bung, Xanthium strumarium L., Magnolia denudata Desr., Mentha arvensis var. piperascens Makino | Not reported | Plant | Inhibit inflammatory cytokines (IL-4, IL-6, IL-8, and TNF-α); suppress mitogen-activated protein kinase (MAPK) and NF-κB in mast cells and keratinocytes | Kim et al. [199] |
| Trigonella foenum-graecum | Extract | Seeds | Not reported | Flavonoids | Reduce IL-5, IL-6, IL-1β, and TNF-α; reduce the collagen deposition in goblet cells; suppress inflammatory cells | Piao et al. [200] |
| Tropidurus hispidus | Oil | Fat of Tropidurus hispidus | Croton oil, arachidonic acid, phenol, and capsaicin | Fatty acids and their derivatives | Affect the arachidonic acid and its metabolites and reduce proinflammatory mediators | Santos et al. [201] |
| Urtica dioica L. | Extract | Leaves | Caffeic acid, gallic acid, quercetin, scopoletin, carotenoids, secoisolariciresinol, and anthocyanidins | Polyphenols, flavonoids, coumarin, and lignan | Reduce the leucocyte and lymphocyte levels in serum; inhibit the eosinophilia increase in BALF; suppress the inflammatory cell recruitment and attenuate the lipid peroxidation of lung tissues | Zemmouri et al. [202] |
| Verproside | Isolated compound | Pseudolysimachion | Verproside | Glycoside | Suppress the NF-κB and TNF-α expression | Lee et al. [203] |
| Vitamin D | Isolated compound | Not reported | Calcitriol | Vitamin | Inhibits lymphocytes (Th1 and Th2) and reduces the cytokine production | Szekely and Pataki [204] |
| Vitamin E | Isolated compound | Plant lipids | α-, β-, γ-, and δ-Tocopherols and the α-, β-, γ-, and δ-tocotrienols | Vitamin | Reduce the airway hyperresponsiveness, IL-4, IL-5, IL-13, OVA-specific IgE, eotaxin, TGF-β, 12/15-LOX, lipid peroxidation, and lung nitric oxide metabolites | Cook-Mills and McCary [205]; Abdala-Valencia et al. [206] |
| Vitex rotundifolia Linn. (Verbenaceae) | Methanolic extract | Fruits | 1H,8H-Pyrano[3,4-c]pyran-1,8-dione | Not reported | Inhibit the eotaxin, IL-8, IL-16, and VCAM-1 mRNA | Lee et al. [207] |
| Viticis fructus | Extract | Dried fruit | Pyranopyran-1,8-dione | Not reported | Inhibit the eosinophil and lymphocyte infiltration into the BAL fluid; reduce IL-4, IL-5, IL-13, and eotaxin to normal levels; suppress the IgE levels | Park et al. [208] |
| Yu ping feng san | Extract | Radix Saposhnikoviae (Fangfeng), Radix Astragali (Huangqi), and Rhizoma Atractylodis Macrocephalae (Baizhu) | Calycosin-7-O-β-D-glucoside, calycosin, formononetin, atractylenolides III, II, and I, 5-O-methylvisammioside, 8-methoxypsoralen, and bergapten | Flavonoids, terpenoids, saponins, and furocoumarins | Inhibit TNF-α, IFN-γ, and IL-1β | Stefanie et al. [209] |
| Zygophyllum simplex L. | Extract | Aerial parts | Isorhamnetin-3-O-β-D-rutinoside, myricitrin, luteolin-7-O-β-D-glucoside, isorhamnetin-3-O-β-D-glucoside, and isorhamnetin | Phenol | Inhibit NF-κB, TNF-α, IL-1β, and IL-6 | Abdallah and Esmat [210] |
| Ziziphus amole | Extract | Leaves, stems, barks, and roots | Alphitolic acid, sitosterol, ziziphus-lanostan-18-oic acid | Terpenoid and steroid | Inhibit the myeloperoxidase activity | Romero-Castillo et al. [211] |

### 2.1. Natural Products from Plants

The use of natural products obtained from plants in traditional medicine has been reported for centuries, especially in countries such as China, Japan, and India [212]. Thus, the topics below concern these products or bioactive compounds originating from the most studied plants used in asthma therapy.

#### 2.1.1. Flavonoids

Flavonoids are natural compounds from plants, nuts, and fruits that are chemically characterized by the presence of two benzene rings (A and B) linked through a heterocyclic pyran ring (C). They represent a large group of polyphenolic secondary metabolites [213], with more than 8,000 different compounds already identified [214]. Considering their chemical structure, they can be classified as flavans, flavanones, isoflavanones, flavones, isoflavones, anthocyanidins, and flavonolignans [214]. Flavans and isoflavans possess a heterocyclic hydrocarbon skeleton, chromane, with a substitution on the C ring, at carbon 2 or 3, by a phenyl group (B ring). Flavanones and isoflavanones show an oxo group at position 4. The presence of a double bond between C2 and C3 indicates flavones and isoflavones, and the addition of a C1 to C2 double bond represents anthocyanidins [214].

The diversity in their chemical structure contributes to their broad range of physiological and biological activities, among which the antioxidant, anti-inflammatory, antiallergic, antiviral, hepatoprotective, antithrombotic, and anticarcinogenic activities can be highlighted [213]. In this review, 14 studies reported flavonoids as a group of compounds suitable for use in asthma treatment. The following subsections present the main flavonoids with antiasthmatic activity reported in the literature and used in traditional medicine.
These studies attributed the antiasthmatic activity of the plant extracts, in part, to the presence of these compounds in the phytocomplex.

(1) Flavone Compounds: Chrysin, Baicalin, Luteolin, and Oroxylin A. Defined as 5,7-dihydroxy-2-phenyl-1-4H-chromen-4-one, chrysin is classified as a flavone that can be found in Passiflora caerulea and Passiflora incarnata flowers, as well as in Matricaria chamomilla, popularly known as chamomile, besides being present in propolis and other plants [90, 100]. Chrysin is able to suppress the proliferation of airway smooth muscle cells and to promote a reduction in the IL-4, IL-13, IgE, and interferon-γ levels, leading to an attenuation of the asthma inflammatory process [89]. Bae et al. [90] used an in vitro cell culture model to describe how chrysin promotes its inhibitory effect on proinflammatory cytokines. They suggested that this effect was caused by the reduction of intracellular calcium in mast cells, since calcium is responsible for proinflammatory cytokine gene transcription [90]. In addition, a study performed by Yao and colleagues [88] investigated the activity of chrysin against asthma in mice sensitized with ovalbumin (OVA). Their results revealed that chrysin is a promising compound for controlling airway remodeling and the clinical manifestations of asthma [88].

Baicalin, a 7-glucuronic acid-5,6-dihydroxyflavone, is a natural metabolite easily found in the leaves and barks of several species of the Scutellaria genus [215]. Studies performed by Park and colleagues [208] investigated the anti-inflammatory activity of baicalin in an asthma-induced animal model. The results showed that this compound decreased the inflammatory cell infiltration and the levels of TNF-α in the bronchoalveolar lavage fluid (BALF). The activity of baicalin was attributed to the fact that this metabolite selectively inhibits the enzymatic activity of PDE4 and suppresses the lipopolysaccharide-induced TNF-α expression in macrophages, indicating a potential use of this metabolite in asthma treatment [74].

Additionally, luteolin (2-(3,4-dihydroxyphenyl)-5,7-dihydroxy-4-chromenone), another compound with demonstrated antiasthma activity, is widely found in aromatic flowering plants, such as Salvia tomentosa and other Lamiaceae, as well as in broccoli, green pepper, parsley, and thyme [216]. Shen and colleagues [133] studied its pharmacological activity through inhibition of the GABAergic system, which is responsible for the overproduction of mucus during the asthmatic crisis through overstimulation of the epithelial cells. The study indicated that this compound was able to attenuate the goblet cell hyperplasia through partial inhibition of GABA activities [133].

Another antiasthmatic flavonoid is oroxylin A, a flavone found in the extract of Scutellaria baicalensis Georgi and the Oroxylum indicum tree [156]. According to Zhou [157], oroxylin A, or 5,7-dihydroxy-6-methoxy-2-phenylchromen-4-one, was able not only to reduce the airway hyperactivity in an OVA-induced asthma murine model but also to decrease the levels of IL-4, IL-5, IL-13, and OVA-specific IgE in BALF [157].
This study also showed the ability of oroxylin A to inhibit alveolar wall thickening and to prevent inflammatory cell infiltration in the perivascular and peribronchial areas, as assessed by histopathological evaluation [157].

(2) Flavonol Compounds: Quercetin, Galangin, and Kaempferol. Quercetin (2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one), a flavonol widely found in onions, apples, broccoli, cereals, grapes, tea, and wine, has been known as the main active compound of these plants and is, therefore, responsible for their widespread use in traditional medicine for the treatment of inflammatory, allergic, and viral diseases [213]. The studies using this compound against asthma were performed in cell cultures and rats, as in vitro and in vivo models, respectively, and showed its high capacity to reduce inflammatory processes. According to these studies, the anti-inflammatory mechanism of quercetin is attributed to lipoxygenase and PDE4 inhibition and to the reduction of histamine and leukotriene release, which promote a decrease in proinflammatory cytokine formation and in IL-4 production, respectively. In addition, quercetin also promoted the inhibition of human mast cell activation by Ca2+ influx and the inhibition of prostaglandin release [182], favoring the therapeutic relief of asthma symptoms and decreasing the dependence on short-acting β-agonists [181, 182].

Galangin, a compound chemically defined as 3,5,7-trihydroxy-2-phenylchromen-4-one and easily found in Alpinia officinarum [217], had its pharmacological activity evaluated in a specific-pathogen-free mouse model [115]. The study, performed by Liu [115], showed an effective response against OVA-induced inflammation in vivo as well as a reduction in ROS levels in vitro. Furthermore, galangin acted as an antiremodeling agent in asthma, since this compound inhibited the goblet cell hyperplasia, lowered the TGF-β1 levels, and suppressed the expression of vascular endothelial growth factor (VEGF) and matrix metalloproteinase-9 (MMP-9) in BALF or lung tissue. This result highlighted its antiremodeling activity on the TGF-β1-ROS-MAPK pathway, supporting its potential use in asthma treatment [115].

Another flavonol, kaempferol, chemically defined as 3,5,7-trihydroxy-2-(4-hydroxyphenyl)-4H-chromen-4-one, is widely found in citrus fruits, broccoli, apples, and other plant sources [213]. This compound has been studied due to its pharmacological potential, especially against inflammation. In the study performed by Chung et al. [127], an OVA-induced airway inflammation mouse model of asthma demonstrated that kaempferol can significantly reduce the inflammatory process through the decrease of inflammatory cell infiltration and of the production of inflammatory cytokines and IgE antibodies. In addition, this compound was also able to reduce the intracellular ROS production in the airway inflammation reaction [127].

Furthermore, Mahat et al. [218] demonstrated that the anti-inflammatory activity of kaempferol occurs through the inhibition of nitric oxide and of nitric oxide-induced COX-2 enzyme activation, further inhibiting the cytotoxic effects of nitric oxide and reducing the prostaglandin-E2 production [218]. To improve the possibility of using kaempferol as a bioactive in the development of new drugs or medicines, the previously mentioned study by Chung [127] also describes the antiasthma activity of a glycosylated derivative of kaempferol, kaempferol-3-O-rhamnoside.
The glycosylation of kaempferol improved its solubility and stability, besides reducing its toxicity [127], allowing the production of a compound with great potential to expand the asthma therapeutic arsenal. According to this rationale, this compound may be responsible for the anti-inflammatory properties of the plant extracts that contain it and that have been used for asthma treatment.

#### 2.1.2. Resveratrol

Resveratrol is a natural stilbenoid, a class of polyphenol, obtained from the bark of red fruits, with known antioxidant and promising anti-inflammatory and antiasthma activities [186]. In studies using eosinophils obtained from asthmatic individuals, Hu et al. [185] demonstrated that resveratrol induces not only cell cycle arrest in the G1/S phase but also apoptosis, allowing a decrease in the eosinophil number [185], thus reducing the neutrophil migration and, consequently, preventing the histamine and PGD2 release and avoiding vasodilatation, mucus production, and bronchoconstriction (Figure 1). Additionally, Lee and colleagues [129] demonstrated that resveratrol was effective in an asthmatic mouse model, since this polyphenol induced a significant decrease in the plasma level of T-helper-2-type cytokines, such as IL-4 and IL-5. It also decreased the airway hyperresponsiveness, eosinophilia, and mucus hypersecretion [184]. Although performed by different methods, the studies agree on the scientific evidence that supports the oral use of resveratrol as an effective natural compound to treat asthma patients.

#### 2.1.3. Boswellia

Boswellia is a tree genus that produces an oil known as frankincense, which is obtained through incisions in the trunks of these trees. This oil is composed of 30–60% resin, 5–10% essential oils, and polysaccharides [219]. Studies evaluating the pharmacological activities of this product revealed that the Boswellia bioactives are boswellic acids and AKBA (3-O-acetyl-11-keto-β-boswellic acid), both responsible for preventing NF-κB activation and, consequently, inhibiting IL-1, IL-2, IL-4, IL-6, and IFN-γ release [52]. They also inhibit LOX-5, thus preventing leukotriene release [78]. Thus, based on the physiopathology of asthma, it is possible to infer that these compounds may act as antiasthma molecules, since these enzymes and mediators are involved in asthma-related inflammation. Moreover, another study that aimed at evaluating the antiasthma activity of these compounds showed that the association of Boswellia serrata, Curcuma longa, and Glycyrrhiza had a pronounced effect on the management of bronchial asthma [79], suggesting its potential in asthma therapy.

### 2.2. Natural Products from Animal Source

Animal-derived natural products still represent the minority of the natural sources of products intended for asthma treatment. Nonetheless, many studies describe the use of animal-based products, such as oils, milk, and spleen, as complementary therapy for several diseases, including asthma. Traditional medicine reports the benefits of consuming some animal parts and animal products, since they can be rich in compounds such as lipids, prostaglandins, unsaturated fatty acids, enzymes, and polysaccharides, which are responsible for their pharmacological activities [220, 221]. In addition, animal sources are also widely cited as biocompatible and biodegradable, suggesting their safe use.
The animal products and compounds cited in this section can be obtained from several sources, such as mammals, amphibians, and crustaceans, demonstrating a wide range of possibilities.

#### 2.2.1. Animal Sea Sources: Holothuroidea, Penaeus, and Sarcophyton ehrenbergi

Marine ecosystems represent an important source of natural compounds due to their wide biodiversity, which includes animals and plants that are unique to this environment. Therefore, many studies have been performed to evaluate the antimicrobial, anti-inflammatory, antiviral, and antiasthmatic potential of algae and sea animals.

In this regard, the sea cucumber, a marine invertebrate of the class Holothuroidea usually found in benthic areas and deep seas, has been used as an elixir by Asian and Middle Eastern communities in traditional medicine, due to its pharmacological activity in the treatment of hypertension, asthma, rheumatism, cuts, burns, and constipation [188]. These pharmacological activities are attributed to the presence of saponins, cerebrosides, polysaccharides, and peptides in its composition [188, 220]. Bordbar et al. [220], in a literature review, mentioned an experimental study by Herencia et al. [222] in which sea cucumber extract reduced the enzymatic activity of cyclooxygenase in inflamed mouse tissues without promoting any modification of the cyclooxygenase enzyme, showing that sea cucumber extract is a potent natural product able to be used against several inflammatory diseases [220].

Ozdemir and colleagues [87] investigated the pharmacological activity of chitin, a polysaccharide formed by repeated units of N-acetylglucosamine linked into long chains through β-(1-4) bonds [221] and the major compound of the shrimp (Penaeus) exoskeleton. In this study, the authors performed the intranasal administration of chitin microparticles in an asthma-induced mouse model, which promoted the reduction of serum IgE and peripheral blood eosinophilia, besides the decrease of airway hypersensitivity [87]. Additionally, another study identified and isolated ten new prostaglandin derivatives from the extract of Sarcophyton ehrenbergi, a soft coral species found in the Red Sea [194], five of which showed inhibitory activity against PDE4 (44.3%) at 10 μg/mL, suggesting their utilization in the treatment of asthma and chronic obstructive pulmonary disease, since PDE4 is a drug target in the treatment of both diseases [194].

Finally, these studies demonstrated that marine sources need to be further investigated, since a wide variety of bioproducts and/or bioactives with potential anti-inflammatory activity and antiasthmatic properties can be found in this environment.

#### 2.2.2. Bullfrog (Rana catesbeiana Shaw) Oil

The bullfrog oil is a natural oil extracted from the adipose tissue of the amphibian Rana catesbeiana Shaw, which originates from North America and has its meat widely commercialized around the world [223]. This oil has been used in traditional medicine to treat inflammatory disorders, especially asthma [223]. It is composed of a mixture of mono- and polyunsaturated fatty acids and a bile-derived steroid compound (ethyl iso-allocholate) [81, 224], which are responsible for its therapeutic properties [81]. According to Yaqoob [57], the presence of oleic, linolenic, stearic, palmitic, and myristic fatty acids can promote the suppression of immune cell functions [58].
Based on such evidence, it is possible to infer that bullfrog oil, due to its chemical composition, can be used in the treatment of inflammation-related disorders such as asthma. However, further studies are needed to confirm this hypothesis.

#### 2.2.3. Other Products Derived from Animals

Although the majority of the animal products currently used in traditional medicine for asthma treatment come from animal tissues, there is evidence that mammal fluids, for example, buffalo spleen liquid, milk, and colostrum, can act on the immune system, promoting the decrease of asthma symptoms [80].

The buffalo spleen liquid was investigated in a study performed by Neamati and colleagues [80], in which pigs were sensitized with ovalbumin, followed by the administration of a buffalo spleen liquid-based adjuvant. A decrease in the tracheal response as well as a reduction in the white blood cell number in lung lavage was observed in sensitized animals when compared to healthy animals [80], showing the potential of this fluid in promoting asthma control. In addition, another study evaluated the antiasthma activity of milk and colostrum, which contain linolenic acid and proteins such as lactoferrin [141], as natural products. This study showed a modulation of the plasma lipid concentration in human and animal models and a decrease in the allergic airway inflammation induced by ragweed pollen grain extract.

### 2.3. Bioactives Obtained from Microorganisms

The use of bacterial and fungal metabolites in the treatment of several diseases has been widely reported since the discovery of penicillin. However, more recent studies have further investigated the antiasthmatic potential of these metabolites [225]. In this regard, a study performed by Lu and colleagues [156] evaluated the antiasthma activity of the bacterial lysate OM-85 Broncho-Vaxom (BV), a patented pharmaceutical product [134]. The study observed that the bacterial lysate coupled with the conventional treatment was able to increase the rate of natural killer T cells in the peripheral blood, decreasing the cytokine level (cytokine type not described) and, thus, promoting the reduction of asthma symptoms.

Furthermore, kefir, a fermented milk drink produced by lactic and acetic acid bacteria, which presents kefiran, an insoluble polysaccharide, as its main component [128, 129], had its in vivo anti-inflammatory activity evaluated. This product was able to reduce to normal levels the release of IL-4, IL-6, and IL-10, along with the production of IFN-γ and TNF-α [128]. In addition, the intragastric administration of kefiran promoted the reduction of OVA-induced cytokine production in a murine asthma model, decreasing the pulmonary eosinophilia and mucus hypersecretion [128, 129].

Therefore, based on these reports and on the historical use of microorganisms as sources for the isolation of new bioactives and the development of medicines, it is important to highlight that these new agents may contribute to the current asthma treatment.
Flavonoids Flavonoids are natural compounds from plants, nuts, and fruits that are chemically characterized by the presence of two benzene rings (A and B) linked through a heterocyclic pyrene ring (C). They represent a large group of polyphenolic secondary metabolites [213] with more than 8,000 different compounds already identified [214]. Considering their chemical structure, they can be classified as flavans, flavanones, isoflavanones, flavones, isoflavones, anthocyanidins, and flavonolignans [214]. Flavans or isoflavans possess a heterocyclic hydrocarbon skeleton, chromane, and a substitution on its C ring, in carbons 2 or 3, by a phenyl group (B ring). Flavanones and isoflavanones show an oxo-group in position 4. The presence of a double bond between C2 and C3 indicates flavones and isoflavones, and the addition of a C1 to C2 double bond represents anthocyanidins [214].The diversity in their chemical structure contributes to their broad range of physiological and biological activities, from which it can be highlighted the antioxidant, anti-inflammatory, antiallergic, antiviral, hepatoprotective, antithrombotic, and anticarcinogenic activities [213]. In this review, 14 studies reported flavonoids as a group of compounds able to be used on asthma treatment. The following subsections show the main flavonoids with antiasthmatic activity reported in the literature and used by the traditional medicine. These studies attributed the antiasthmatic activity of plant extracts containing these compounds, in part, due to their presence in the phytocomplex.(1) Flavone Compounds: Chrysin, Baicalin, Luteolin, and Oroxylin A. Defined as 5,7-dihydroxy-2-phenyl-1-4H-chromen-4-one, chrysin is classified as a flavone that can be found in Passiflora caerulea and Passiflora incarnate flowers, as well as in Matricaria chamomilla, popularly known as chamomile, besides being present in propolis and other plants [90, 100]. Chrysin is a compound able to suppress the proliferation of airway smooth muscle cells as well as to promote a reduction in the IL-4, IL-13, IgE, and interferon-γ levels that lead to an attenuation in the asthma inflammatory process [89]. Bae et al. [90] performed their studies through an in vitro cell culture model with the purpose to describe how the chrysin was able to promote the inhibitory effect in the proinflammatory cytokines. They suggested that this effect was caused by the intracellular calcium reduction in mast cells, since calcium is responsible for proinflammatory cytokine gene transcription [90]. In addition, a study performed by Yao and colleagues [88] investigated the activity of chrysin against asthma in mice sensitized with ovalbumin (OVA). Their results revealed that chrysin would be a promising compound able to be used for controlling airway remodeling and clinical manifestations of asthma [88].Baicalin, a 7-glucuronic acid-5,6-dihydroxyflavone, is a natural metabolite easily found in leaves and barks from several species of theScutellaria genus [215]. Studies performed by Park and colleagues [208] investigated the anti-inflammatory activity of baicalin using an asthma-induced animal model. The results showed that this compound decreased the inflammatory cell infiltration and the levels of TNF-α in the bronchoalveolar lavage fluids (BALF). 
The activity of the baicalin was attributed to the fact that this metabolite selectively inhibits the enzyme activity of PDE4 and suppresses the TNF-α expression induced by the lipopolysaccharides on macrophages, indicating a potential use of this metabolite in asthma treatment [74].Additionally, luteolin (2-(3,4-dihydroxyphenyl)-5,7-dihydroxy-4-chromenone), another compound that had also demonstrated antiasthma activity, is widely found in aromatic flowering plants, such asSalvia tomentosa and Lamiaceae, as well as in broccoli, green pepper, parsley, and thyme [216]. Shen and colleagues [133] studied its pharmacological activity through inhibition of the GABAergic system, which is responsible for the overproduction of mucus during the asthmatic crisis by overstimulation of the epithelial cells. The study indicated that this compound was able to promote the attenuation of the goblet cell hyperplasia by the partial inhibition of GABA activities [133].Another antiasthmatic flavonoid compound is oroxylin A, a flavone found in the extract ofScutellaria baicalensis Georgi and Oroxylum indicum tree [156]. According to Zhou [157], oroxylin A, or 5-7-dihydroxy-6-methoxy-2-phenylchromen-4-one, was able not only to reduce the airway hyperactivity in an OVA-induced asthma murine model, but also to decrease the levels of IL-4, IL-5, IL-13, and OVA-specific IgE in BALF [157]. This study also showed the ability of oroxylin A in inhibiting the alveolar wall thickening in addition to avoid the inflammatory cell infiltration in the perivascular and peribronchial areas assessed by histopathological evaluation [157].(2) Flavonol Compounds: Quercetin, Galangin, and Kaempferol. Quercetin (2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one), a flavonol compound widely found in onions, apples, broccoli, cereals, grapes, tea, and wine, has been known as the main active compound of these plants and, therefore, responsible for their widespread use in traditional medicine for the treatment of inflammatory, allergic, and viral diseases [213]. The studies using this compound as antiasthma were performed in cell cultures and rats, as in vitro and in vivo models, respectively, showing its high capacity to reduce inflammatory processes. According to these studies, the anti-inflammatory mechanism of quercetin is attributed to the lipoxygenase and PDE4 inhibition and reduction on histamine and leukotriene release, which promote a decrease in the proinflammatory cytokine formation and production of IL-4, respectively. In addition, quercetin also promoted the inhibition of human mast cell activation by Ca2+ influx and prostaglandin release inhibition [182], favoring the therapeutic relief of the asthma symptoms and decreasing the short-acting β-agonist dependence [181, 182].Galangin, a compound chemically defined as 3,5,7-trihydroxy-2-phenylchromen-4-one, easily found onAlpinia officinarum [217], had its pharmacological activity evaluated using a specific-pathogen-free mice model [115]. The study, performed by Liu [115], showed an effective response against the in vivo OVA-induced inflammation as well as a reduction on the ROS levels in vitro. Furthermore, galangin acted as an antiremodeling agent in asthma, since this compound inhibited the goblet cell hyperplasia, lowering the TGF-β1 levels and suppressing the expression of vascular endothelial grown factor (VEGF) and matrix metalloproteinase-9 (MMP-9) in BALF or lung tissue. 
This result highlighted its antiremodeling activity on the TGF-β1-ROS-MAPK pathway, supporting its potential use in asthma treatment [115].

Another flavonol, kaempferol, chemically defined as 3,5,7-trihydroxy-2-(4-hydroxyphenyl)-4H-chromen-4-one, is widely found in citrus fruits, broccoli, apples, and other plant sources [213]. This compound has been studied for its pharmacological potential, especially against inflammation. In a study by Chung et al. [127], an OVA-induced airway inflammation mouse model of asthma demonstrated that kaempferol significantly reduces the inflammatory process by decreasing inflammatory cell infiltration and the production of inflammatory cytokines and IgE antibodies. This compound was also able to reduce intracellular ROS production in the airway inflammation reaction [127]. Furthermore, Mahat et al. [218] demonstrated that the anti-inflammatory activity of kaempferol occurs through inhibition of nitric oxide and of nitric oxide-induced COX-2 enzyme activation, thereby countering the cytotoxic effects of nitric oxide and reducing prostaglandin-E2 production [218]. To improve the prospects of kaempferol as a bioactive for the development of new drugs or medicines, the previously mentioned study by Chung [127] also described the antiasthma activity of a glycosylated derivative, kaempferol-3-O-rhamnoside. Glycosylation improved the solubility and stability of kaempferol and reduced its toxicity [127], yielding a compound with great potential to expand the asthma therapeutic arsenal. By this rationale, the compound may account for the anti-inflammatory properties of the plant extracts that contain it and that have been used for asthma treatment.

### 2.1.2. Resveratrol

Resveratrol is a natural stilbenoid, a class of polyphenols obtained from the skin of red fruits, with known antioxidant and promising anti-inflammatory and antiasthma activities [186]. In studies using eosinophils obtained from asthmatic individuals, Hu et al. [185] demonstrated that resveratrol induces not only cell cycle arrest in the G1/S phase but also apoptosis, decreasing the eosinophil number [185], thus reducing neutrophil migration and, consequently, preventing histamine and PGD-2 release and avoiding vasodilatation, mucus production, and bronchoconstriction (Figure 1). Additionally, Lee and colleagues [129] demonstrated that resveratrol was effective in an asthmatic mouse model, since this polyphenol induced a significant decrease in the plasma levels of T-helper-2-type cytokines, such as IL-4 and IL-5. It also decreased airway hyperresponsiveness, eosinophilia, and mucus hypersecretion [184]. Although performed with different methods, these studies agree on the scientific evidence supporting the oral use of resveratrol as an effective natural compound to treat asthma patients.

### 2.1.3. Boswellia

Boswellia is a tree genus that produces an oil known as frankincense, obtained through incisions in the trunks of these trees. This oil is composed of 30–60% resin, 5–10% essential oils, and polysaccharides [219].
Studies of this product evaluated its pharmacological activities, revealing that the Boswellia bioactives are boswellic acids and AKBA (3-O-acetyl-11-keto-β-boswellic acid), both responsible for preventing NF-κB activation and, consequently, inhibiting the release of IL-1, IL-2, IL-4, IL-6, and IFN-γ [52]. They also inhibit LOX-5, thus preventing leukotriene release [78]. Based on the physiopathology of asthma, it is therefore possible to infer that these compounds may act as the antiasthmatic molecules of this tree genus, since these enzymes and mediators are involved in asthma-related inflammation. Moreover, another study aimed at evaluating the antiasthma activity of these compounds showed that the association of Boswellia serrata, Curcuma longa, and Glycyrrhiza had a pronounced effect on the management of bronchial asthma [79], suggesting its potential in asthma therapy.
## 2.2. Natural Products from Animal Source

Animal-derived natural products still represent a minority of the natural sources for products intended for asthma treatment. Nonetheless, many studies describe the use of animal-based products, such as oils, milk, and spleen, as complementary therapies for several diseases, including asthma. Traditional medicine reports benefits from consuming certain animal parts and animal products, since they can be rich in compounds such as lipids, prostaglandins, unsaturated fatty acids, enzymes, and polysaccharides, which are responsible for their pharmacological activities [220, 221]. In addition, animal sources are widely cited as biocompatible and biodegradable, suggesting their safe use. The animal products and compounds cited in this section can be obtained from several sources, such as mammals, amphibians, and crustaceans, demonstrating the wide range of possibilities.

### 2.2.1. Animal Sea Source: Holothuroidea, Penaeus, and Sarcophyton ehrenbergi

Marine ecosystems represent an important source of natural compounds due to their wide biodiversity, which includes animals and plants unique to this environment. Accordingly, many studies have evaluated the antimicrobial, anti-inflammatory, antiviral, and antiasthmatic potential of algae and sea animals.

In this regard, the sea cucumber, a marine invertebrate belonging to the class Holothuroidea and usually found in benthic areas and deep seas, has been used as an elixir in the traditional medicine of Asian and Middle Eastern communities, owing to its pharmacological activity in the treatment of hypertension, asthma, rheumatism, cuts, burns, and constipation [188].
These pharmacological activities are attributed to the saponins, cerebrosides, polysaccharides, and peptides in its composition [188, 220]. Bordbar et al. [220], in a literature review, mentioned an experimental study by Herencia et al. [222] in which sea cucumber extract reduced the enzymatic activity of cyclooxygenase in inflamed mouse tissues without modifying the cyclooxygenase enzyme itself, showing that sea cucumber extract is a potent natural product for use against several inflammatory diseases [220].

Ozdemir and colleagues [87] investigated the pharmacological activity of chitin, a polysaccharide formed by repeating N-acetylglucosamine units joined into long chains through β-(1→4) linkages [221] and the major compound of the shrimp (Penaeus) exoskeleton. In this study, the authors administered chitin microparticles intranasally in an asthma-induced mouse model, which reduced serum IgE and peripheral blood eosinophilia, besides decreasing airway hypersensitivity [87]. Additionally, another study identified and isolated ten new prostaglandin derivatives from the extract of Sarcophyton ehrenbergi, a soft coral species found in the Red Sea [194]; five of them showed inhibitory activity against PDE4 (44.3%) at 10 μg·mL−1, suggesting their utilization in the treatment of asthma and chronic obstructive pulmonary disease, since PDE4 is a drug target in both diseases [194].

Finally, these studies demonstrate that marine sources need to be further investigated, since a wide variety of bioproducts and/or bioactives with potential anti-inflammatory activity and antiasthmatic properties can be found in this environment.

### 2.2.2. Bullfrog (Rana catesbeiana Shaw) Oil

Bullfrog oil is a natural oil extracted from the adipose tissue of the amphibian Rana catesbeiana Shaw, which originates from North America and whose meat is widely commercialized around the world [223]. This oil has been used in traditional medicine to treat inflammatory disorders, especially asthma [223]. It is composed of a mixture of mono- and polyunsaturated fatty acids and a bile-derived steroid compound (ethyl iso-allocholate) [81, 224], which are responsible for its therapeutic properties [81]. According to Yaqoob [57], the presence of oleic, linolenic, stearic, palmitic, and myristic fatty acids can promote the suppression of immune cell functions [58]. Based on such evidence, it is possible to infer that bullfrog oil, owing to its chemical composition, could be used in the treatment of inflammation-related disorders such as asthma. However, further studies are needed to confirm this hypothesis.

### 2.2.3. Other Products Derived from Animals

Although most animal products currently used in traditional medicine for asthma treatment come from animal tissues, there is evidence that mammalian fluids, for example, buffalo spleen liquid, milk, and colostrum, can act on the immune system and decrease asthma symptoms [80]. Buffalo spleen liquid was investigated in a study by Neamati and colleagues [80], in which pigs were sensitized to asthma using ovalbumin and then given a buffalo spleen liquid-based adjuvant. A decrease in the tracheal response as well as a reduction in the white blood cell count in lung lavage was observed in sensitized animals compared to healthy animals [80], showing the potential of this fluid for promoting asthma control.
In addition, another study evaluated the antiasthma activity of milk and colostrum, which contain linolenic acid and proteins such as lactoferrin [141], as natural products. This study showed a modulation of the plasma lipid concentration in human and animal models and a decrease in the allergic airway inflammation induced by ragweed pollen grain extract.
## 2.3. Bioactives Obtained from Microorganisms

The use of bacterial and fungal metabolites in the treatment of several diseases has been widely reported since the discovery of penicillin. More recent studies, however, have further investigated the antiasthmatic potential of these metabolites [225]. In this regard, a study by Lu and colleagues [156] evaluated the antiasthma activity of the bacterial lysate OM-85 Broncho-Vaxom (BV), a patented pharmaceutical product [134]. The study observed that the bacterial lysate, coupled with the conventional treatment, increased the proportion of natural killer T cells in peripheral blood, decreasing cytokine levels (cytokine types not described) and thereby reducing asthma symptoms. Furthermore, kefir, a fermented milk drink produced by lactic and acetic acid bacteria and whose main component is kefiran, an insoluble polysaccharide [128, 129], had its in vivo anti-inflammatory activity evaluated. This product reduced to normal levels the release of IL-4, IL-6, and IL-10, along with the production of IFN-γ and TNF-α [128]. In addition, the intragastric administration of kefiran reduced OVA-induced cytokine production in a murine asthma model, decreasing pulmonary eosinophilia and mucus hypersecretion [128, 129].

Therefore, based on these reports and on the historical record of microorganisms as a source for the isolation of new bioactives and the development of medicines, it is worth highlighting that these new agents may contribute to the current asthma treatment.
## 3. Conclusion: Widely Used Active Pharmaceutical Ingredients from Natural Source

As previously demonstrated, natural products have been extensively used as a complementary treatment in asthma therapy. Some studies of these products have aimed at investigating their activity as a matrix of compounds to complement or replace the current asthma treatment, while others have aimed at isolating compounds to generate new medicines based on synthetic drugs of natural origin [226].

Historically, natural products have contributed tremendously to the development of marketable medicines for the treatment of several diseases [226]. The evaluation of their therapeutic activities and the identification and isolation of their bioactive molecules allowed not only their clinical use but also the discovery of the pharmacophore groups and of the radicals responsible for their toxicity or their biopharmaceutical behavior. In fact, based on such studies, it is possible to make structural or delivery changes to these compounds that increase their safety or modulate their half-life, allowing them to be targeted to specific action sites [227].

This review presents the experimental studies of the last decade that identified the antiasthma activity of different natural sources, along with the molecules responsible for it. Altogether, these studies provide preliminary data that require further investigation before these compounds can, in the near future, be used in the design and production of medicines. Currently, a few natural-based active compounds are already available on the market, such as ipratropium bromide, theophylline, epinephrine, and sodium cromoglycate [226, 228–231].

Ipratropium bromide, an anticholinergic drug able to promote bronchodilation, has been widely used for the treatment of asthma. This compound was synthesized from atropine, a compound first extracted in 1809 from Atropa belladonna L., although it can also be found in other plants of the Solanaceae family [228, 229]. Its chemical structure, however, was only elucidated in 1833, and in 1850 it entered clinical use, allowing a proper understanding of its in vivo biopharmaceutical and therapeutic characteristics [232].

Theophylline is an antiasthmatic drug widely used in the management of severe persistent asthma, promoting bronchodilation and attenuating asthmatic inflammation. Also known as 1,3-dimethylxanthine, this molecule was extracted in 1888 from Theobroma cacao L. and Camellia sinensis L., plants present in several countries. Later, in 1922, the drug was introduced into asthma therapy [233]. Years later, ephedrine was extracted from Ephedra sinica, a plant widely used in Chinese traditional medicine, enabling the synthesis of β-agonist antiasthmatic drugs, such as salbutamol and salmeterol, currently used in asthma treatment [226].

Furthermore, sodium cromoglycate, a drug obtained from the bioactive khellin extracted from Ammi visnaga (L.) Lamk, has been used for its ability to inhibit mast cell degranulation, which enabled its use in asthma treatment [226, 230]. Overall, these reports highlight the relevance of investigating and isolating new bioactive compounds with antiasthmatic potential.
As the current asthma treatment involves drugs that have been extensively studied over the past decades, experimental studies evaluating the activity of compounds obtained from diverse natural sources might allow the development of new antiasthmatic drugs in the near future.

## 4. Final Considerations

The current asthma treatment is costly and has many side effects, which compromise treatment compliance. Literature reports show that asthma treatment can be improved by using natural products to complement the traditional drugs, since those products are low cost and biocompatible and show reduced side effects. The literature search using the keywords asthma, natural products, and treatment individually returned 14,296,762 records, including scientific articles, reviews, editorial reference works, and abstracts. The keyword combination "Asthma + Natural Products" found 18,111 studies; "Asthma + Treatment," 209,423 studies; "Natural Products + Treatment," 459,685 studies; and "Asthma + Treatment + Natural Products," 1,986 studies. After screening out duplicate studies, 1,934 abstracts were evaluated. Finally, based on the inclusion criteria, 172 studies reporting the use of natural products in asthma treatment were included in this review: 160 studies reported plants as the natural source, 9 used animal sources, and 3 described bacteria and fungi as bioactive sources, totaling 134 compounds that can be used as complementary or alternative medicine in asthma treatment (the screening funnel is summarized in the sketch at the end of this document). Plants were found to be the major source of products used by folk medicine to treat asthma, since they are a renewable source of easy access. Also, owing to their variety of secondary metabolites, plants are able to promote antiasthma activity mainly through their anti-inflammatory and bronchodilator properties. This study revealed that flavonoids, phenolic acids, and terpenoids are the main elucidated compounds able to attenuate asthma symptoms. On the other hand, a lack of scientific reports on the pharmaceutical activity of natural products from animal and microorganism sources has limited their use, although these products still represent an important source of bioactive compounds for asthma treatment. In addition, despite the relevant antiasthmatic activity reported, the literature search showed a lack of investigations concerning pharmacokinetic properties, as well as of more accurate information regarding efficacy, safety, and the dosage required to induce in vivo antiasthma activity. In conclusion, given that the current asthma treatment involves drugs obtained from natural products widely explored in the past, the experimental studies reported in this review may lead to the development of new drugs able to improve antiasthmatic treatment in the future.
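For readers who wish to double-check the screening arithmetic reported above, the short Python sketch below tallies the funnel. It is purely illustrative: the counts are the ones stated in this review, while the subtraction used to infer the number of duplicates removed (1,986 − 1,934 = 52), the per-source breakdown check, and all variable names are our own assumptions rather than part of the original methodology.

```python
# Illustrative tally of the literature-screening funnel described in the
# Final Considerations. The counts come from the review itself; the
# deduplication arithmetic is an assumed reconstruction, not the authors'
# actual workflow.

keyword_hits = {
    "Asthma + Natural Products": 18_111,
    "Asthma + Treatment": 209_423,
    "Natural Products + Treatment": 459_685,
    "Asthma + Treatment + Natural Products": 1_986,
}

abstracts_screened = 1_934  # abstracts evaluated after duplicate removal
duplicates_removed = (
    keyword_hits["Asthma + Treatment + Natural Products"] - abstracts_screened
)

# Studies retained after applying the inclusion criteria, broken down by source.
included_by_source = {"plants": 160, "animals": 9, "microorganisms": 3}
included_total = sum(included_by_source.values())
compounds_catalogued = 134

# Sanity check: the per-source counts should add up to the reported 172 studies.
assert included_total == 172

print(f"Triple-keyword records : {keyword_hits['Asthma + Treatment + Natural Products']:,}")
print(f"Duplicates removed     : {duplicates_removed:,}")
print(f"Abstracts screened     : {abstracts_screened:,}")
print(f"Studies included       : {included_total} {included_by_source}")
print(f"Compounds catalogued   : {compounds_catalogued}")
```

Running the sketch confirms that the reported numbers are internally consistent (52 duplicates removed; 160 + 9 + 3 = 172 included studies).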
---

*Source: 1021258-2020-02-13.xml*
--- ## Abstract Asthma, a disease classified as a chronic inflammatory disorder induced by airway inflammation, is triggered by a genetic predisposition or antigen sensitization. Drugs currently used as therapies present disadvantages such as high cost and side effects, which compromise the treatment compliance. Alternatively, traditional medicine has reported the use of natural products as alternative or complementary treatment. The aim of this review was to summarize the knowledge reported in the literature about the use of natural products for asthma treatment. The search strategy included scientific studies published between January 2006 and December 2017, using the keywords “asthma,” “treatment,” and “natural products.” The inclusion criteria were as follows: (i) studies that aimed at elucidating the antiasthmatic activity of natural-based compounds or extracts using laboratory experiments (in vitro and/or in vivo); and (ii) studies that suggested the use of natural products in asthma treatment by elucidation of its chemical composition. Studies that (i) did not report experimental data and (ii) manuscripts in languages other than English were excluded. Based on the findings from the literature search, aspects related to asthma physiopathology, epidemiology, and conventional treatment were discussed. Then, several studies reporting the effectiveness of natural products in the asthma treatment were presented, highlighting plants as the main source. Moreover, natural products from animals and microorganisms were also discussed and their high potential in the antiasthmatic therapy was emphasized. This review highlighted the importance of natural products as an alternative and/or complementary treatment source for asthma treatment, since they present reduced side effects and comparable effectiveness as the drugs currently used on treatment protocols. --- ## Body ## 1. Introduction ### 1.1. Physiopathology of Asthma Asthma can be defined as a chronic inflammatory disorder that affects the lower airways, promoting an increase of bronchial reactivity, hypersensitivity, and a decrease in the airflow [1]. Furthermore, due to a complex interaction between the genetic predisposition and environmental factors, besides multiple related phenotypes, this disease may be considered as a heterogeneous disorder [2].Sensitization by dust, pollen, and food represents the main environmental factors involved in the asthma physiopathology [1]. These antigens are recognized by the mast cells coated by IgE antibodies (Figure 1) and induce the release of proinflammatory cytokines, such as tumor necrosis factor-α (TNF-α), interleukins IL-2, IL-3, IL-4, IL-5, GM-CSF, prostaglandins, histamine, and leukotrienes [3, 4], by T lymphocytes and eosinophils. This degranulation process promotes an increase in the vascular permeability, leading to exudate and edema formation. This process is followed by leukocyte migration to the tissue affected by the inflammatory process through chemotaxis mediated by selectins and integrins [3, 6]. Subsequently, the neutrophil migration to the inflammatory site and the release of leukotrienes LTB4 induce the activation of type 2 cyclooxygenase (COX-2) and type 5 lipoxygenase (LOX-5), enhancing the expression of the C3b opsonin that produces reactive oxygen species (ROS) and thus promoting cell oxidative stress and pulmonary tissue injury [3, 7].Figure 1 Scheme of the immune response induced by allergen or antigen stimulation or the early stages of asthma. 
GM-CSF: granulocyte-macrophages colony-stimulating factor; IL: interleukin; C3b: opsonin; LOX-5: lipoxygenase type 5; ROS: reactive oxygen species; COX-2: cyclooxygenase type 2; LTB4: leukotriene type B; PGD2: prostaglandin type D (adapted from Bradding et al. [6].Other mechanisms involved in asthma physiopathology are the inhalation of drugs, as well as respiratory viruses [8], which promote an immune response mediated by IgG antibodies. This process promotes an increase of the inflammatory cells influx, releasing inflammatory mediators responsible for the damage process [9].Based on the factors and mechanisms presented above, asthma symptoms can be observed at different levels according to etiology and severity of clinical aspects, which define their classification [10]. The asthma severity is subdivided into (i) mild/low, also defined as intermittent/persistent, when the symptoms appear more than twice a week and their exacerbations can affect the daily activities of the patient; (ii) moderate, in which the daily symptom occurrence and their exacerbations affect the patient activities, requiring the use of short-acting β2-adrenergic drugs; or (iii) severe asthma, in which the patient presents persistent symptoms, physical activity limitations, and frequent exacerbations [10]. Based on this classification, it is estimated that 60% of the asthma cases are intermittent or persistent, 25% to 30% are moderate, and the severe cases account for only 10% of the total. However, it is important to highlight that although the proportion of severe asthmatics represents the minority of the cases, they are responsible for high mortality and high hospitalization costs [11], evidencing the high need of efficient treatments for this disease. ### 1.2. Asthma Epidemiology According to the World Health Organization, asthma affects about 300 million of individuals across the world, regardless of the country development degrees [12]. In the United Kingdom, asthma affects approximately 5.2 million of individuals and is responsible for 60.000 hospital admissions per year [13], while in Brazil the annual incidence of hospital admissions due to asthma is around 173.442 patients, representing 12% of the total admissions for respiratory diseases in 2012 [14].Furthermore, studies have demonstrated that asthma incidence and prevalence rates in different countries are not age related. In the United States of America, Albania, and Indonesia, the asthma prevalence is lower for children (around 8.4%, 2.1%, and 4.1%, respectively) when compared to adults [15]. On the other hand, in countries such as the United Kingdom and Costa Rica, children aged between 6 and 7 years represent approximately 32% of the asthma prevalence [16]. Additionally, the incidence or prevalence can be directly influenced by the socioeconomic characteristics of specific areas, as demonstrated by the analyses of the annual variation in the prevalence of asthma in which it was possible to observe, in Spain, that the asthma prevalence had an annual increase of 0.44% regardless of the age range studied. However, when these same data analyses were observed individually in Spain regions, the annual variation presented a different scenario, showing an increase or decrease according to the developmental degree of each region [17, 18]. Similar data were observed by Pearce et al. [19] and Schaneberg et al. [20], who demonstrated the influence of socioeconomic aspects on asthma. 
The studies showed a prevalent increase of asthma cases in metropolitan areas, fact attributed to the population growth with consequent exposure to the environmental factors and shortened access to asthma therapy, due to the high cost of the available medicines [19, 20]. Such phenomena directly interfere on the treatment compliance [21–23], evidencing the importance and the need of strategies that facilitate the access to the medicines for asthma therapy.Studies that evaluate the importance of inclusion of antiasthmatic therapy on public health policy programs have demonstrated that asthma control can be achieved through a variety of approaches, promoting a decrease in hospital admissions of 90%. Indeed, this was demonstrated by two studies performed in Brazilian cities, in which public health programs offered free medicines and psychological and pharmaceutical care to treat chronic diseases [24, 25]. Furthermore, Ponte et al. [24] and Holanda [25] also showed that the hospital admissions of children decreased from 44.7% to 6.4% one year after the inclusion of these patients in the same project. Thus, these data corroborate the importance of public health policies that contribute to the reduction of hospital outlay, increasing the population’s life quality. ### 1.3. Asthma Treatment The asthma treatment recommended by the Global Initiative for Asthma (GINA) consists, especially, on the reduction of symptoms in order to decrease the inflammatory process [26, 27]. However, since asthma presents a complex physiopathology associated with variable manifestations, the treatment can lead to different response levels. Thus, the evaluation of the clinical aspects associated with the treatment response is defined as the most adequate approach to achieve treatment success [28]. Asthma therapy strategies are based on pulmonary (the main administration route on asthma therapy), oral or intravenous administration of class β2 agonist drugs (salbutamol, levalbuterol, terbutaline, and epinephrine), anticholinergics (ipratropium), corticosteroids (beclomethasone di- or monopropionate, ciclesonide, flunisolide, fluticasone propionate, mometasone furoate, triamcinolone acetonide, hydrocortisone, dexamethasone, budesonide, prednisone, prednisolone, and methylprednisolone), and xanthine drugs. Among these, the β2 agonists are often the drugs of first choice [13, 27].To optimize the treatment for each patient, the drug dosage is determined by the patient’s respiratory characteristics, mainly his/her respiratory rate. Patients with increased respiratory rate, due to the airways narrowing, present a low dispersion of the inhaled drug through the respiratory tract [29]. In these cases or when there is an absence of response on the first two hours after treatment, hospitalization should be performed, and adrenaline could be used, subcutaneously or intravenously, since this is an indicative of mucosal edema formation, which can be decreased by the adrenaline bronchodilator effect [30].Overall, patients that present asthma exacerbation should be initially treated with the association of different dosage of corticosteroids and short-actingβ2 agonists by intranasal oxygen administration, allowing the stimulation of β2 receptors that result in bronchodilation due to the inhibition of cholinergic neurotransmission and, thus, inhibition of mast cells degranulation [10]. 
Additionally, corticosteroids by oral or inhaled route are used on uncontrolled persistent asthma patients due to their direct effect on the inflammation site [31]. Accordingly, they improve the pulmonary function and decrease the asthma episodes [32], reducing hospitalizations and mortality of asthmatic patients [31]. Furthermore, because their systemic use can induce side effects, corticosteroids, mainly prednisone and prednisolone, are more commonly used in patients with severe persistent asthma who are not stabilized by other drugs [31].In addition, xanthine drugs such as theophylline can be also used on asthma treatment, since they are able to promote the suppression of monocyte activation with consequent inhibition on the TNF-α release. Further, they promote the inhibition of neutrophil activation and its degranulation, inhibiting the catalytic activity of phosphodiesterase 4 (PDE4), allowing a reduction in the inflammatory process [33].Regardless of the wide variety and associations of antiasthmatic medicines and their ability to promote the asthma symptoms control and to reduce the asthma episodes and hospital admissions, the antiasthmatic drugs present several side effects, including nausea, headaches, and convulsions (xanthine class) [3, 30], cardiovascular effects (β-adrenergic receptors antagonists) [20], vomiting (PDE4 inhibitors drugs) [34–36], osteoporosis, myopathies, adrenal suppression, and metabolic disturbs, compromising the patients’ growth (corticosteroids) [30, 35, 37, 38]. These side effects compromise the life quality of the patients and reduce significantly the treatment compliance.Another important drawback from the conventional asthma treatment is its cost. In fact, the required amount of money for asthma treatments represents a significant expenditure for health public organizations. Such situation has become a financial issue even for developed countries. In Sweden, for example, the cost of medicines for asthma treatment has increased since the 1990s and, in 2006, and it was responsible for 11.6% of the total healthcare expenditure. Furthermore, according to projections, an annual increase of 4% on the costs of asthma management is expected [22].Additionally, studies revealed that in Europe and in the United States of America, the sum of the direct and indirect estimated annual costs with asthma management is approximately €18 billion and US$13 billion, respectively. This high expenditure was associated with the high incidence of uncontrolled asthma patients, since they represent an expense up to 5-fold higher than the controlled asthma ones [39] or than patients with other chronic diseases, as demonstrated in the study performed by O’Neil and colleagues [40]. These authors revealed that asthma costs up to £4,217 per person, while type II diabetes, chronic obstructive pulmonary disease, and chronic kidney disease represented, together, a cost of £3,630 [40].Therefore, considering the therapies currently available, their side effects, and their high cost, the development of new therapeutic approaches or complementary treatments to the current asthma therapy become an important and essential strategy. In this context, the use of natural products allows easy access to treatment to all socioeconomic classes [41, 42] and shows advantages such as low cost, biocompatibility, and reduced side effects, besides their wide biodiversity and renewability [43, 44]. 
In addition, natural products, supported by the literature findings on their complex matrix as a source of bioactive compounds, represent one of the main access forms to the basic healthcare in the traditional medicine [45]. Thus, the present review aimed at summarizing the main natural products reported in the literature that show antiasthma activity. ## 1.1. Physiopathology of Asthma Asthma can be defined as a chronic inflammatory disorder that affects the lower airways, promoting an increase of bronchial reactivity, hypersensitivity, and a decrease in the airflow [1]. Furthermore, due to a complex interaction between the genetic predisposition and environmental factors, besides multiple related phenotypes, this disease may be considered as a heterogeneous disorder [2].Sensitization by dust, pollen, and food represents the main environmental factors involved in the asthma physiopathology [1]. These antigens are recognized by the mast cells coated by IgE antibodies (Figure 1) and induce the release of proinflammatory cytokines, such as tumor necrosis factor-α (TNF-α), interleukins IL-2, IL-3, IL-4, IL-5, GM-CSF, prostaglandins, histamine, and leukotrienes [3, 4], by T lymphocytes and eosinophils. This degranulation process promotes an increase in the vascular permeability, leading to exudate and edema formation. This process is followed by leukocyte migration to the tissue affected by the inflammatory process through chemotaxis mediated by selectins and integrins [3, 6]. Subsequently, the neutrophil migration to the inflammatory site and the release of leukotrienes LTB4 induce the activation of type 2 cyclooxygenase (COX-2) and type 5 lipoxygenase (LOX-5), enhancing the expression of the C3b opsonin that produces reactive oxygen species (ROS) and thus promoting cell oxidative stress and pulmonary tissue injury [3, 7].Figure 1 Scheme of the immune response induced by allergen or antigen stimulation or the early stages of asthma. GM-CSF: granulocyte-macrophages colony-stimulating factor; IL: interleukin; C3b: opsonin; LOX-5: lipoxygenase type 5; ROS: reactive oxygen species; COX-2: cyclooxygenase type 2; LTB4: leukotriene type B; PGD2: prostaglandin type D (adapted from Bradding et al. [6].Other mechanisms involved in asthma physiopathology are the inhalation of drugs, as well as respiratory viruses [8], which promote an immune response mediated by IgG antibodies. This process promotes an increase of the inflammatory cells influx, releasing inflammatory mediators responsible for the damage process [9].Based on the factors and mechanisms presented above, asthma symptoms can be observed at different levels according to etiology and severity of clinical aspects, which define their classification [10]. The asthma severity is subdivided into (i) mild/low, also defined as intermittent/persistent, when the symptoms appear more than twice a week and their exacerbations can affect the daily activities of the patient; (ii) moderate, in which the daily symptom occurrence and their exacerbations affect the patient activities, requiring the use of short-acting β2-adrenergic drugs; or (iii) severe asthma, in which the patient presents persistent symptoms, physical activity limitations, and frequent exacerbations [10]. Based on this classification, it is estimated that 60% of the asthma cases are intermittent or persistent, 25% to 30% are moderate, and the severe cases account for only 10% of the total. 
However, it is important to highlight that although the proportion of severe asthmatics represents the minority of the cases, they are responsible for high mortality and high hospitalization costs [11], evidencing the high need of efficient treatments for this disease. ## 1.2. Asthma Epidemiology According to the World Health Organization, asthma affects about 300 million of individuals across the world, regardless of the country development degrees [12]. In the United Kingdom, asthma affects approximately 5.2 million of individuals and is responsible for 60.000 hospital admissions per year [13], while in Brazil the annual incidence of hospital admissions due to asthma is around 173.442 patients, representing 12% of the total admissions for respiratory diseases in 2012 [14].Furthermore, studies have demonstrated that asthma incidence and prevalence rates in different countries are not age related. In the United States of America, Albania, and Indonesia, the asthma prevalence is lower for children (around 8.4%, 2.1%, and 4.1%, respectively) when compared to adults [15]. On the other hand, in countries such as the United Kingdom and Costa Rica, children aged between 6 and 7 years represent approximately 32% of the asthma prevalence [16]. Additionally, the incidence or prevalence can be directly influenced by the socioeconomic characteristics of specific areas, as demonstrated by the analyses of the annual variation in the prevalence of asthma in which it was possible to observe, in Spain, that the asthma prevalence had an annual increase of 0.44% regardless of the age range studied. However, when these same data analyses were observed individually in Spain regions, the annual variation presented a different scenario, showing an increase or decrease according to the developmental degree of each region [17, 18]. Similar data were observed by Pearce et al. [19] and Schaneberg et al. [20], who demonstrated the influence of socioeconomic aspects on asthma. The studies showed a prevalent increase of asthma cases in metropolitan areas, fact attributed to the population growth with consequent exposure to the environmental factors and shortened access to asthma therapy, due to the high cost of the available medicines [19, 20]. Such phenomena directly interfere on the treatment compliance [21–23], evidencing the importance and the need of strategies that facilitate the access to the medicines for asthma therapy.Studies that evaluate the importance of inclusion of antiasthmatic therapy on public health policy programs have demonstrated that asthma control can be achieved through a variety of approaches, promoting a decrease in hospital admissions of 90%. Indeed, this was demonstrated by two studies performed in Brazilian cities, in which public health programs offered free medicines and psychological and pharmaceutical care to treat chronic diseases [24, 25]. Furthermore, Ponte et al. [24] and Holanda [25] also showed that the hospital admissions of children decreased from 44.7% to 6.4% one year after the inclusion of these patients in the same project. Thus, these data corroborate the importance of public health policies that contribute to the reduction of hospital outlay, increasing the population’s life quality. ## 1.3. Asthma Treatment The asthma treatment recommended by the Global Initiative for Asthma (GINA) consists, especially, on the reduction of symptoms in order to decrease the inflammatory process [26, 27]. 
However, since asthma presents a complex physiopathology associated with variable manifestations, the treatment can lead to different response levels. Thus, the evaluation of the clinical aspects associated with the treatment response is defined as the most adequate approach to achieve treatment success [28]. Asthma therapy strategies are based on pulmonary (the main administration route on asthma therapy), oral or intravenous administration of class β2 agonist drugs (salbutamol, levalbuterol, terbutaline, and epinephrine), anticholinergics (ipratropium), corticosteroids (beclomethasone di- or monopropionate, ciclesonide, flunisolide, fluticasone propionate, mometasone furoate, triamcinolone acetonide, hydrocortisone, dexamethasone, budesonide, prednisone, prednisolone, and methylprednisolone), and xanthine drugs. Among these, the β2 agonists are often the drugs of first choice [13, 27].To optimize the treatment for each patient, the drug dosage is determined by the patient’s respiratory characteristics, mainly his/her respiratory rate. Patients with increased respiratory rate, due to the airways narrowing, present a low dispersion of the inhaled drug through the respiratory tract [29]. In these cases or when there is an absence of response on the first two hours after treatment, hospitalization should be performed, and adrenaline could be used, subcutaneously or intravenously, since this is an indicative of mucosal edema formation, which can be decreased by the adrenaline bronchodilator effect [30].Overall, patients that present asthma exacerbation should be initially treated with the association of different dosage of corticosteroids and short-actingβ2 agonists by intranasal oxygen administration, allowing the stimulation of β2 receptors that result in bronchodilation due to the inhibition of cholinergic neurotransmission and, thus, inhibition of mast cells degranulation [10]. Additionally, corticosteroids by oral or inhaled route are used on uncontrolled persistent asthma patients due to their direct effect on the inflammation site [31]. Accordingly, they improve the pulmonary function and decrease the asthma episodes [32], reducing hospitalizations and mortality of asthmatic patients [31]. Furthermore, because their systemic use can induce side effects, corticosteroids, mainly prednisone and prednisolone, are more commonly used in patients with severe persistent asthma who are not stabilized by other drugs [31].In addition, xanthine drugs such as theophylline can be also used on asthma treatment, since they are able to promote the suppression of monocyte activation with consequent inhibition on the TNF-α release. Further, they promote the inhibition of neutrophil activation and its degranulation, inhibiting the catalytic activity of phosphodiesterase 4 (PDE4), allowing a reduction in the inflammatory process [33].Regardless of the wide variety and associations of antiasthmatic medicines and their ability to promote the asthma symptoms control and to reduce the asthma episodes and hospital admissions, the antiasthmatic drugs present several side effects, including nausea, headaches, and convulsions (xanthine class) [3, 30], cardiovascular effects (β-adrenergic receptors antagonists) [20], vomiting (PDE4 inhibitors drugs) [34–36], osteoporosis, myopathies, adrenal suppression, and metabolic disturbs, compromising the patients’ growth (corticosteroids) [30, 35, 37, 38]. 
These side effects compromise the patients' quality of life and significantly reduce treatment compliance.

Another important drawback of conventional asthma treatment is its cost. In fact, the amount of money required for asthma treatment represents a significant expenditure for public health organizations, and this situation has become a financial issue even for developed countries. In Sweden, for example, the cost of medicines for asthma treatment has increased since the 1990s, and in 2006 it was responsible for 11.6% of the total healthcare expenditure. Furthermore, according to projections, an annual increase of 4% in the costs of asthma management is expected [22].

Additionally, studies revealed that in Europe and in the United States of America the sum of the direct and indirect estimated annual costs of asthma management is approximately €18 billion and US$13 billion, respectively. This high expenditure was associated with the high incidence of uncontrolled asthma patients, who represent an expense up to 5-fold higher than controlled asthma patients [39] or than patients with other chronic diseases, as demonstrated in the study performed by O'Neil and colleagues [40]. These authors revealed that asthma costs up to £4,217 per person, while type II diabetes, chronic obstructive pulmonary disease, and chronic kidney disease together represented a cost of £3,630 [40].

Therefore, considering the currently available therapies, their side effects, and their high cost, the development of new therapeutic approaches or treatments complementary to the current asthma therapy becomes an important and essential strategy. In this context, the use of natural products allows easy access to treatment for all socioeconomic classes [41, 42] and offers advantages such as low cost, biocompatibility, and reduced side effects, besides wide biodiversity and renewability [43, 44]. In addition, natural products, supported by literature findings on their complex matrices as sources of bioactive compounds, represent one of the main forms of access to basic healthcare in traditional medicine [45]. Thus, the present review aimed at summarizing the main natural products reported in the literature that show antiasthma activity.

## 2. Natural Products as Alternative for Asthma Treatment

The use of natural products for the treatment of physiologic disorders, especially in association with other drugs, has been widely reported through ethnopharmacological studies as an important scientific tool for bioprospecting and the discovery of new bioactive compounds from natural sources [46]. Despite the wide scientific progress of chemical and pharmaceutical technology in synthesizing new molecules, drugs from natural sources still contribute tremendously to the discovery and development of new medicines [47]. These studies are based, initially, on the traditional use of the natural products, which draws the attention of pharmaceutical companies because of their easy and economical use, leading the companies to perform many studies that evaluate their therapeutic activities, toxicity, and safety [48].

Moreover, the use of natural products as complementary therapy represents an important alternative for the treatment of several diseases [49]. In the United States of America, the use of natural products, vitamins, and other dietary supplements as auxiliary treatments accounts for about 40% of the conventional therapies [50].
Among the diseases for which natural products are used, those of an allergic and inflammatory character can be highlighted. In fact, according to the literature, alternative medicine associates the use of these products with biochemical mechanisms involved in immunomodulation, which could contribute to the management of these diseases [51].

The use of plant-based products for asthma treatment has been reported in traditional medicine for over 5,000 years, dating back to the Chinese use of the infusion of Ephedra sinica, an immune system stimulant able to decrease asthma crises [20]. More recently, a study performed by Costa and colleagues [49] described the main natural sources used for the treatment of asthma by Brazilian families from the Northeast Region of the country [49]. The study included beet, honey, onion, lemon, garlic, yarrow, and mint, demonstrating the wide variety of natural products used in the asthma treatment of children [49]. Additionally, other nature-derived products have been widely cited in asthma treatment, such as natural oils from plants and animals, which can be obtained by different extraction processes [52, 53].

Plant-derived natural oils represent the main natural products used in complementary asthma therapy due to the presence of phenylpropanoids and mono- and sesquiterpenes as their major bioactive compounds, which provide anti-inflammatory, antifungal, antibacterial, and anesthetic properties [54–56]. Similarly, oils obtained from animal sources have been used. They are rich in mixtures of different saturated, mono-, and polyunsaturated fatty acids, as well as in compounds from animal organs and secretions, which are responsible for their immunomodulatory action and for the regulation of tissue oxidative capacity [57, 58]. The activity credited to oils derived from plants and animals is related to the presence of these bioactive compounds, which can inhibit COX-2 and LOX-5. Additionally, these compounds are able to modulate immune cell function by reducing the levels of the IL-4, IL-5, and IL-13 cytokines, decreasing the activity and proliferation of NK cells, increasing the level of endogenous corticosteroids, contributing to the regulation of the NF-κB pathway, and reducing mucus production and inflammation in lung tissues [59–61].

In this regard, Table 1 lists all products found in the studies included in this review after evaluation against the inclusion criteria. Due to the wide variety of plant-derived products, only those with 3 or more citations were described in detail in this review. On the other hand, due to the limited scientific investigation of the antiasthmatic activity of natural products from animal and microorganism sources, all studies that fit the inclusion criteria are described in the next sections.

Table 1. List of natural compounds described in the literature reviewed. Each entry gives, in order: product; product form; product source; active compound; compound class/type; mechanism of action; reference.

- 1,8-Cineol Isolated compound Essential oil of Eucalyptus globulus leaves 1,8-Cineol Monoterpene Reduces the expression of NF-κB target gene MUC2 Greiner et al. [62]
- 3-Methoxy-catalposide Isolated compound P. rotundum var. subintegrum extract 3-Methoxy-catalposide Iridoid glycoside Inhibits the expression of cyclooxygenase (COX)-2, nitric oxide synthase (iNOS), and proinflammatory genes (IL-6, IL-1β, and TNF-α) Ryu et al. [63]
- Achyranthes aspera L. Ethanolic extract Roots Not reported Not reported Bronchoprotective activity Dey [64]
- Ailanthus excelsa Roxb. Aqueous extract Barks Not reported Not reported Bronchodilator and mast cell stabilizing activities Kumar [65]
- Allium cepa L. and quercetin Extract and isolated compound Methanolic extract and vegetable Quercetin [2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-1-benzopyran-4-one, 3,3′,4′,5,6-pentahydroxyflavone] Flavonoid Reduce the production of proinflammatory cytokines (IL-4, IL-5, IL-13) and promote the relaxation of tracheal rings Oliveira et al. [66]
- Alstonia scholaris (L.) R. Br. Extract Leaves of Alstonia scholaris (L.) R. Br. Scholaricine, 19-epi-scholaricine, vallesamine, picrinine Alkaloid Reduce the eosinophilia, the production of proinflammatory cytokine (IL-4), and the expression of serum IgE and eotaxin Zhao et al. [67]
- Amorphophallus konjac (konjac) Gel extract Not reported Not reported Plant Not elucidated Chua et al. [68]
- Andropogon muricatus Crude extract Aerial parts Vetivenes, vetivenol, vetivenic acid, and vetivenyl acetate Sesquiterpenic compounds Inhibit the Ca2+ channels and phosphodiesterase activity Shah and Gilani [69]
- Anoectochilus formosanus Hayata Aqueous extract Whole plant Kinsenoside Plant Reduce the IL-4 production by Tregs and enhance the production of IL-12 and IFN-γ by Th1 differentiation Hsieh et al. [70]
- Artemisia maritima Essential oil Leaves 1,8-Cineol, camphor, camphene, and β-caryophyllene Terpenoid Inhibit the Ca2+ channels and phosphodiesterase activity Shah et al. [71]
- Aster tataricus L. f. Extract Rhizomes Kaempferol, aurantiamide, and astin C Flavonoid Inhibit the expression of NF-κB and promote the activation of the beta-2 adrenergic receptor Chen and Zheng [72]
- Aster yomena (Kitam.) Honda Ethanolic extract Leaves Phenolic compounds not specified Phenolic compounds Attenuate the production of NO and IL-1β and suppress the expression of NF-κB. In addition, suppress the activation of TLR4 and promote a reduction of intracellular ROS production Kang et al. [73]
- Baicalin Isolated compound Leaves and branch 7-Glucuronic acid-5,6-dihydroxyflavone Flavonoid Suppress the lipopolysaccharide-induced TNF-α expression and inhibit the cyclic adenosine monophosphate-specific phosphodiesterase 4 (PDE4) Park et al. [74]
- Baliospermum montanum Müll. Arg. (Euphorbiaceae) Chloroformic and ethanolic extracts Leaves Alkaloids, triterpenoids, diterpenoids, and glycosides Alkaloids, triterpenoids, diterpenoids, and glycosides Stabilize the mast cell degranulation and decrease the histamine release Venkatesh et al. [75]
- Berry fruit Polyphenolic extract Not reported Phenolic compounds not specified Phenolic compounds not specified Not reported Power et al. [76]
- Boswellia serrata, Boswellia carterii, and frankincense Essential oil Resinous part β-Boswellic acid, acetyl-β-boswellic acid, 11-keto-β-boswellic acid, and acetyl-11-keto-β-boswellic acid Boswellic acids Inhibition of leukotriene biosynthesis Hamidpour et al. [77] and Al-Yasiry and Kiczorowska [78]
- Boswellia serrata, Glycyrrhiza glabra, and Curcuma longa Essential oil extract and extract Resinous part, licorice root, and turmeric root, respectively Curcumin and β-boswellic acid Polyphenol Reduce the plasma levels of leukotriene C4, nitric oxide, and malondialdehyde Houssen et al. [79]
- Buffalo spleen lipid and a bacterial polypeptide Extract Animal-derived and microorganism-derived, respectively Not reported Not reported Reduce the tracheal responsiveness and the amount of white blood cells Neamati et al. [80]
- Bullfrog oil (Rana catesbeiana Shaw) Oil Bullfrog adipose tissue Oleic, linolenic, stearic, palmitic, and myristic acids; eicosapentaenoic and docosahexaenoic acids Fatty acids Not elucidated Amaral-Machado et al. [81]
- Bu-zhong-yi-qi-tang Aqueous extract Root of Astragalus mongholicus Bunge, Panax ginseng C.A.Mey, Angelica dahurica Fisch. ex Hoffm., and Bupleurum chinense DC.; rhizome of Zingiber officinale Rosc., Atractylodes macrocephala Koidz., and Cimicifuga foetida L.; fruit of Ziziphus jujuba Mill. var. inermis Rehd.; pericarp of Citrus reticulata Blanco; root and rhizome of Glycyrrhiza uralensis Fisch. Not reported Not reported Reduce the levels of eotaxin, Th2-related cytokines (IL-4, IL-5, IL-13), IgE, and eosinophilia Yang et al. [82]
- Caenorhabditis elegans Crude extract Microorganism Not reported Not reported Modulate the immunologic Th1/Th2 response Huang et al. [83]
- Camellia sinensis L. Aqueous extract Not reported Polyphenols and flavonoids Polyphenols and flavonoids Not elucidated Sharangi [84]
- Carica papaya Extract Leaves Tannins, alkaloids, steroids, and quinones Tannins, alkaloids, steroids, and quinones Reduce the expression of IL-4, IL-5, eotaxin, TNF-α, NF-κB, and iNOS Elgadir et al. [85]
- Carum roxburghianum Crude extract Seeds Hydrocarbons, wax esters, sterol esters, triacylglycerols, free fatty acids, diacylglycerols, lysophosphatidylethanolamines, and phosphatidylinositols Hydrocarbons, wax esters, sterol esters, triacylglycerols, free fatty acids, diacylglycerols, lysophosphatidylethanolamines, and phosphatidylinositols Bronchodilator activity Khan et al. [86]
- Chitin Isolated compound Shrimp Chitin Polysaccharide Not elucidated Ozdemir et al. [87]
- Chrysin Isolated compound Marketable synthetic compound 5,7-Dihydroxy-2-phenyl-4H-chromen-4-one Flavonoid Reduces the histamine release and decreases the gene expression of proinflammatory cytokines (IL-1β, IL-4, IL-6, TNF-α, NF-κB) Yao et al. [88]; Yao et al. [89]; Bae et al. [90]
- Cissampelos sympodialis Eichl. Extract Leaves Warifteine Alkaloid Reduce the expression of IL-3 and IL-5, increase the IL-10 level, and decrease the density of inflammatory cells Cerqueira-Lima et al. [91]
- Citrus tachibana Ethanolic extract Leaves Coumarins, carotenoids, and flavonoids Coumarins, carotenoids, and flavonoids Modulate the Th1/Th2 imbalance by inhibition of NF-κB signaling and histamine secretion Bui et al. [92]
- Conjugated linoleic acid Conjugated compound Fatty tissue from ruminants cis,cis-9,12-Octadecadienoic acid Polyunsaturated fatty acid Modulate the PPARγ-dependent and PPARγ-independent inflammation signaling, the eicosanoid production, and the humoral immune response MacRedmond and Dorscheid [93]
- Coumarins Isolated compound Synthetic compounds 6,7-Dihydroxycoumarin, 7-hydroxycoumarin, and 4-methyl-7-hydroxycoumarin Coumarin Not elucidated Sanchez-Recillas et al. [94]
- Crocetin Isolated compound Marketable synthetic compound Crocetin Carotenoid Activates the FOXP3 signaling through TIPE2 Ding et al. [95]
- Curcumin Isolated compound Curcuma longa (1E,6E)-1,7-Bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione Polyphenol Inhibits the Notch1-GATA3 signaling pathway Zheng et al. [96]; Chong et al. [97]
- Cyclotheonamides Isolated compound Marine Not reported Cyclic pentapeptides Inhibit the human β-tryptase Schaschke and Sommerhoff [98]
- Diallyl-disulfide Isolated compound Garlic oil Diallyl-disulfide Organosulfur Activates the Nrf-2/HO-1 pathway and suppresses NF-κB Shin et al. [99]
- Dietary plant stanol esters Not reported Fatty acid Not reported Stanol ester Reduce the total plasma IgE, IL-1β, IL-13, and TNF-α Brull et al. [100]
- Dioscorea nipponica Isolated compound Not reported Diosgenin Steroidal saponin Suppress the secretion of TNF-α, IL-1β, and IL-6 Junchao et al. [101]
- D-α-Tocopheryl acetate Isolated compound Natural source D-α-tocopheryl acetate Vitamin Inhibits the oxidative stress. Modulates the allergic inflammation and the airway hyperresponsiveness Hoskins et al. [102]
- Echinodorus scaber Hydroethanolic extract Leaves Vitexin, rutin, and gallic acid Phenolic compounds Decrease the migration of inflammatory cells and reduce the Th2 cytokines and IgE levels Rosa et al. [103]
- Eclipta prostrata (L.) L. Methanolic extract Whole plant Wedelolactone and demethylwedelolactone Coumestan Reduce the bronchial hyperresponsiveness and the production of Th2 cytokines De Freitas Morel et al. [104]
- Ecklonia cava Marine alga Brown macroalgae Fucodiphloroethol and phlorofucofuroeckol A Phlorotannins Downregulate the FcεRI expression and block the IgE-FcεRI binding Vo et al. [105]
- Ephedra intermedia Crude extract Aerial parts Ephedrine and pseudoephedrine Alkaloids Not elucidated Gul et al. [106]
- Ellagic acid Isolated compound Marketable synthetic compound Ellagic acid Polyphenol Inhibits the activation of NF-κB Zhou et al. [107]
- Emodin Isolated compound Roots and barks of Rheum palmatum and Polygonum multiflorum 1,3,8-Trihydroxy-6-methylanthraquinone Anthraquinone Suppresses the characteristics of airway inflammation, mucin components, and chitinase protein expression. Inhibits the NF-κB signaling pathway Shrimali et al. [108]
- Euphorbia hirta Aqueous extract Not reported Galloylquinic acid, phorbol acid, leucocyanidol, quercitol, camphol, quercetin, chlorophenolic acid, shikimic acid Tannins, leucoanthocyanidins, flavonoids, and phenolic compounds Not elucidated Kunwar et al. [109]
- Sesame Fixed oil Seeds 5,5′-(1S,3aR,4S,6aR)-Tetrahydro-1H,3H-furo[3,4-c]furan-1,4-diylbis-1,3-benzodioxole Polyphenol Decreases the levels of IL-4, IL-5, IL-13, and serum IgE. Reduces the amount of inflammatory cells and the eosinophil infiltration Lin et al. [41]
- Farnesol Isolated compound Fruits, leaves, flowers 3,7,11-Trimethyl-2,6,10-dodecatrien-1-ol Sesquiterpene Increases the level of IgG2a/IgE and reduces the total IgE, IgA, IgM, IgG Ku and Lin [110]
- Feverfew (Tanacetum parthenium L.) Extract Leaves and parts above the ground Parthenolide Sesquiterpene Inhibit the IκB kinase complex and the histamine release Pareek et al. [111]
- Flavonoids Isolated compound Vegetables (capers, tomatoes, fennel, sweet potato leaves, etc.), fruits (apple, apricots, grapes, plums, and berries), cereals (green/yellow beans and buckwheat) Not reported Polyphenol Prevent the IgE synthesis and the mast cell degranulation. Reduce the airway hyperresponsiveness and inhibit the human phospholipase A2 Castell et al. [112]; Lattig et al. [113]
- Fumaria parviflora Linn. Aqueous methanolic extract Aerial parts Fumarophycine, cryptopine, sanactine, stylopine, bicuculline, adlumine, perfumidine, and dihydrosanguirine Alkaloids Block the muscarinic receptors and the Ca2+ channels Najeeb ur et al. [114]
- Galangin Synthetic compound Alpinia officinarum 3,5,7-Trihydroxy-2-phenylchromen-4-one Flavonol Inhibits the TGF-β1 signaling by ROS generation and MAPK/Akt phosphorylation Liu et al. [115]
- Geastrum saccatum Solid extract Fruiting bodies of Geastrum saccatum β-Glucose Polysaccharide Inhibit the NOS and COX Guerra Dore et al. [116]
- Ginsenosides Synthetic compound Root of ginseng Ginsenosides Glycoside Suppress the IL-4 level, increase the production of IFN-γ, and inhibit the mucus overproduction and recruitment of eosinophils Chen et al. [117]
- Grape seed Extract Seeds Not reported Not reported Not elucidated Mahmoud [118]
- Gymnema sylvestre R. Br. Extract Leaves Not reported Tannins and saponins Not elucidated Tiwari et al. [119]; Di Fabio et al. [120]
- Herba epimedii Extract Leaves Icariin Flavonoids, iridoid glycosides, and alkaloids Inhibit the mRNA expression of TGF-β1 and TGF-β2. Modulate the TGF-β signaling Tang et al. [121]
- Higenamine Isolated compound Tinospora crispa, Nandina domestica Thunberg, Gnetum parvifolium C.Y. Cheng, Asarum heterotropoides 1-[(4-Hydroxyphenyl)methyl]-1,2,3,4-tetrahydroisoquinoline-6,7-diol Alkaloid Not elucidated Zhang et al. [122]
- Homoegonol Isolated compound Styrax japonica 3-[2-(3,4-Dimethoxyphenyl)-7-methoxy-1-benzofuran-5-yl]propan-1-ol Lignan Reduces the inflammatory cell count and Th2 cytokines Shin et al. [123]
- Hypericum sampsonii Isolated compound Aerial parts Not reported Polycyclic polyprenylated acylphloroglucinols Not elucidated Zhang et al. [124]
- Justicia pectoralis Extract Aerial parts 7-Hydroxycoumarin Coumarin Decrease the tracheal hyperresponsiveness and the IL-1β and TNF-α levels Moura et al. [125]
- Juniperus excelsa Crude extract Aerial parts (+)-Cedrol, (+)-sabinene, (+)-limonene, terpinolene, endo-fenchol, cis-pinene hydrate, α-campholena, camphor, borneol, triene cycloheptane 1,3,5-trimethylene, β-myrcene, o-allyl toluene Anthraquinones, flavonoids, saponins, sterol, terpenoids, and tannins Inhibit the Ca2+ influx and the phosphodiesterase activity Khan et al. [126]
- Kaempferol Isolated compound Biotransformation of synthetic kaempferol by genetically engineered E. coli Kaempferol-3-O-rhamnoside Flavonoid Reduces the inflammatory cell number, suppresses the production of Th2 cytokines and TNF-α Chung et al. [127]
- Kefir Isolated compound Kefir grains Kefiran Microorganism derived Reduces the inflammatory cell number and decreases the levels of IL-4, IL-13, IL-5, and IgE Kwon et al. [128]; Lee et al. [129]
- Laurus nobilis L. Isolated compound Leaves of Laurus nobilis L. Magnolialide Sesquiterpene Inhibit the mast cell degranulation and reduce the IL-4 and IL-5 production Lee et al. [130]
- Lepidium sativum Crude extract Seeds Ascorbic acid, linoleic acid, oleic acid, palmitic acid, stearic acid Vitamin and fatty acids Promote an anticholinergic effect, inhibit the Ca2+ influx, and inhibit the phosphodiesterase activity Rehman et al. [131]
- L-Theanine Isolated compound Green tea of Camellia sinensis L-Theanine (N-ethyl-L-glutamine) Amino acid Reduces the ROS production and decreases the levels of NF-κB and MMP-9 Hwang et al. [132]
- Luteolin Isolated compound Perilla frutescens 2-(3,4-Dihydroxyphenyl)-5,7-dihydroxy-4-chromenone Flavonoid Inhibits the mucus overproduction and the GABAergic system Shen et al. [133]
- Bacterial lysate (OM-85 Broncho-Vaxom) Extract H. influenzae, S. pneumoniae, Klebsiella pneumoniae, Klebsiella ozaenae, S. aureus, Streptococcus pyogenes, Streptococcus viridans, Neisseria catarrhalis Not reported Not reported Increase the levels of IL-4, IL-10, and IFN-γ Lu et al. [134]
- Mangifera indica L. extract (Vimang®) Extract Stem bark Mangiferin (1,3,6,7-tetrahydroxyxanthone-C2-β-D-glucoside) Xanthone Inhibit the IgE production, the histamine release, and mast cell degranulation. Decrease the MMP-9 activity Rivera et al. [135]
- Mangifera indica L. Aqueous extract Barks Mangiferin (1,3,6,7-tetrahydroxyxanthone-C2-β-D-glucoside) Xanthone Reduce the inflammatory cell recruitment and the airway hyperresponsiveness. Increase the Th2 cytokines and attenuate the increase of the PI3K activity Alvarez et al. [136]
- Mangosteen Isolated compound Garcinia mangostana Linn. α- and γ-Mangostin Xanthone Inhibits the histamine release and modulates the cytokine production Jang et al. [137]
- Marine bioactives Isolated compound Marine sponges Petrosia contignata and Xestospongia bergquisita Contignasterol and xestobergsterol Steroids Upregulation of TNF-β and IL-10 expression D'Orazio et al. [138]
- Marshallagia marshalli Isolated compound Marshallagia marshalli Secretory/excretory antigen Microorganism derived Prevent the release of TNF-α and IL-1β. Suppress the neutrophil migration Jabbari et al. [139]
- Mikania laevigata and M. glomerata Extract Leaves Dihydrocoumarin, coumarin, spathulenol, hexadecanoic acid, 9,12-octadecadienoic acid, 9,12,15-octadecatrienoic acid, cupressenic acid, kaurenol, kaurenoic acid, isopropyloxigrandifloric acid, isobutyloxy-grandifloric acid Coumarins, terpenoids, steroids, and flavonoids Not elucidated Napimoga and Yatsuda [140]
- Milk and colostrum Conjugated compound Bovine milk Conjugated linoleic acid Fatty acid Modulate the production of cytokines and antibodies (IgE, IgM), interferon, NO synthesis, and iNOS activity. Modulate the mast cell degranulation Kanwar et al. [141]
- Monoterpenes Isolated compound Essential oil of several medicinal plants (Matricaria recutita, Boswellia carterii, Pelargonium graveolens, Lavandula angustifolia, Citrus limon, Melaleuca alternifolia, Melaleuca viridiflora, Santalum spicatum, Cedrus atlantica, and Thymus vulgaris) Hydroxydihydrocarvone, fenchone, α-pinene, (S)-cis-verbenol, piperitenone oxide, α-terpinene, α-terpineol, terpinen-4-ol, α-carveol, menthone, pulegone, geraniol, citral, citronellol, perillyl alcohol, perillic acid, β-myrcene, carvone, limonene, thymol, carvacrol, linalool, linalyl acetate, borneol, l-borneol, bornyl acetate, terpineol, thymoquinone, thymohydroquinone, 1,8-cineol, l-menthol, menthone, and neomenthol Terpenoids Reduce the expression of NF-κB target gene MUC2 Cassia et al. [142]
- Mandevilla longiflora Hydroethanolic extract Plant xylopodium Ellagic acid, hesperidin, luteolin, naringin, naringenin, and rutin Polyphenol and flavonoids Decrease the eosinophil, neutrophil, and mononuclear cell migration in BALF and by histopathological analysis. Decrease the IL-4, IL-5, IL-13, IgE, and LTB4 levels Almeida et al. [143]
- Morus alba L. Isolated compound Root bark Moracin M (5-(6-hydroxy-1-benzofuran-2-yl)benzene-1,3-diol) Not reported Inhibit the PDE4 Chen et al. [144]
- Haemanthus coccineus Extract Dried bulbs Narciclasine Alkaloid Inhibit the edema formation, the leucocyte infiltration, and cytokine synthesis in vivo. Block the interaction between leucocytes and endothelial cells, the activation of isolated leucocytes (cytokine synthesis and proliferation), and of primary endothelial cells (adhesion molecule expression) in vitro. Suppress the NF-κB-dependent gene transcription Fuchs et al. [145]
- Naringin Isolated compound Common grapefruit Naringin Flavone Attenuates the bronchoconstriction by reduction of calcium influx Wang et al. [146]
- Nelumbo nucifera Extract Leaves Nuciferine and aporphine Alkaloids Attenuate the bronchoconstriction by reduction of calcium influx Yang et al. [147]
- Nigella sativa Oil Seeds Thymoquinone (2-isopropyl-5-methyl-1,4-benzoquinone) Quinone Decrease the NO and IgE levels. Increase the IFN-γ Salem et al. [148]; Koshak et al. [149]
- Nujiangexanthone A Isolated compound Leaves of Garcinia nujiangensis 1,2,5,6-Tetrahydroxy-3-methoxy-4,7,8-tri(3-methylbut-2-enyl)-xanthone Xanthone Suppresses the IgE/Ag activation and degranulation of mast cells. Suppresses the production of cytokines and eicosanoids through inhibiting Src kinase activity and Syk-dependent pathways. Inhibits the release of histamine, PGD2, and leukotriene C4 generation. Inhibits the increase of IL-4, IL-5, IL-13, and IgE levels. Inhibits the cell infiltration and the increase in mucus production Lu et al. [150]
- Oleanolic acid Synthetic compound Forsythia viridissima Oleanolic acid Triterpenoid Modulates the transcription factors T-bet, GATA-3, RORγt, and Foxp3 Kim et al. [151]
- Omega 3 Isolated compound Fish oil n−3 Polyunsaturated fatty acid Fatty acid Decreases the IL-17 and TNF-α levels Hansen et al. [152]; Farjadian et al. [153]
- Organic acids Isolated compound Berberis integerrima and B. vulgaris fruits Malic, citric, tartaric, oxalic, and fumaric acids Organic acids Inhibit the Th2 cytokines Ardestani et al. [154]; Shaik et al. [155]
- Oroxylin A Isolated compound Scutellariae radix 5,7-Dihydroxy-6-methoxy-2-phenyl-4H-1-benzopyran-4-one Flavonoid Reduces the airway hyperactivity. Decreases the levels of IL-4, IL-5, IL-13, and IgE in BALF Lu et al. [156]; Zhou et al. [157]
- Oxymatrine Isolated compound Root of Sophora flavescens Aiton (Fabaceae) Oxymatrine Alkaloid Inhibits the eosinophil migration and the IL-4, IL-5, IgE, and IL-13 levels. Inhibits the expression of CD40 protein Zhang et al. [158]
- P. integerrima gall and Pistacia integerrima Stew. ex Brand. Methanolic and crude extract Galls and whole plant Not reported Carotenoids, terpenoids, catechins, and flavonoids Attenuate the TNF-α, IL-4, and IL-5 expression levels and pulmonary edema by elevation of AQP1 and AQP5 expression levels Rana et al. [159]; Bibi et al. [160]
- Paeonia emodi Royle Extract Rhizomes 1β,3β,5α,23,24-Pentahydroxy-30-nor-olean-12,20(29)-dien-28-oic acid; 6α,7α-epoxy-1α,3β,4β,13β-tetrahydroxy-24,30-dinor-olean-20-ene-28,13β-olide; paeonin B; paeonin C; methyl grevillate; 4-hydroxybenzoic acid; and gallic acid Terpenoids and phenolic compounds Inhibits the lipoxygenase activity Zargar et al. [161]
- Petasites japonicus Extract Leaves Petatewalide B Not reported Inhibit the degranulation of β-hexosaminidase in mast cells, the iNOS induction, and the NO production. Inhibits the accumulation of eosinophils, macrophages, and lymphocytes in BALF Choi et al. [162]
- Peucedanum praeruptorum Dunn Extract Roots Dihydropyranocoumarin, linear furocoumarins, and simple coumarin Coumarins Attenuate the airway hyperreactivity and Th2 responses Xiong et al. [163]
- Peucedani Radix Extract Roots Nodakenin, nodakenetin, pteryxin, praeruptorin A, and praeruptorin B Not reported Inhibit the Th2 cell activation Lee et al. [164]
- Eryngium Extract Leaves, fruits, and roots A1-Barrigenol, R1-barrigenol, tiliroside, kaempferol 3-O-β-D-glucoside-7-O-α-L-rhamnoside, rutin, agasyllin, grandivittin, aegelinol benzoate, aegelinol, R-(+)-rosmarinic acid, and R-(+)-3′-O-β-D-glucopyranosyl rosmarinic acid Phenol, flavonoids, tannins, and saponins Not elucidated Erdem et al. [165]
- Pericampylus glaucus Extract Stems, leaves, roots, and fruits Periglaucine A-D and mangiferonic acid Alkaloids, terpenoids, isoflavones, and sterols Inhibit the COX enzyme activity Shipton et al. [166]
- Aquilaria malaccensis Ethanolic extract Seeds Aquimavitalin Phorbol ester Inhibit the mast cell degranulation Korinek et al. [167]
- Phytochemicals Isolated compound Several medicinal plants Luteolin, kaempferol, quercetin, eudesmin, magnolin, woorenoside, zerumbone, aucubin, triptolide, nitocine, berberine, and piperine Flavonoids, lignans, terpenoids, and alkaloids Suppress the TNF-α expression Iqbal et al. [168]
- Picrasma quassioides (D.Don) Benn. Alcoholic extract Not reported 4-Methoxy-5-hydroxycanthin-6-one Alkaloid Decreases the inflammatory cell count in BALF. Reduces the IL-4, IL-5, IL-13, and IgE levels. Reduces the airway hyperresponsiveness. Attenuates the recruitment of inflammatory cells and the mucus production in the airways. Reduces the overexpression of inducible nitric oxide synthase (iNOS) Shin et al. [169]
- Pinus maritima (Pycnogenol®) Extract Barks Procyanidin Flavonoid Decreases the NO production, the inflammatory cell count, and the levels of IL-4, IL-5, IL-13, and IgE in BALF or serum. Reduces the IL-1β and IL-6 levels and the expression of iNOS and MMP-9. Enhances the expression of heme oxygenase (HO)-1. Attenuates the airway inflammation and mucus hypersecretion Shin et al. [170]
- Ping chuan ke li Not elucidated Wang et al. [171]
- Piperine Isolated compound Piper nigrum (black pepper) and Piper longum (long pepper) Piperine Alkaloid Inhibits eosinophil infiltration and airway hyperresponsiveness by suppressing T cell activity and Th2 cytokine production Chinta et al. [172]
- Piperlongumine Isolated compound Piper longum Piperlongumine (5,6-dihydro-1-[(2E)-1-oxo-3-(3,4,5-trimethoxyphenyl)-2-propenyl]-2(1H)-pyridinone) Alkaloid Inhibits the activity of the inflammatory transcription factors NF-κB and signal transducer and activator of transcription (STAT)-3 as well as the expression of IL-6, IL-8, IL-17, IL-23, matrix metallopeptidase (MMP)-9, and intercellular adhesion molecule (ICAM)-1. Suppresses the permeability and leukocyte migration and the production of TNF-α, IL-6, and extracellular regulated kinases (ERK) 1/2 along with the activation of NF-κB Prasad and Tyagi [173]
- Piper nigrum Ethanolic extract Not reported Piperine Alkaloid Inhibit the Th2/Th17 responses and mast cell activation Bui et al. [174]; Khawas et al. [175]
- Plectranthus amboinicus (Lour.) Spreng. Ethanol, methanol, and hexane extracts Aerial parts Rosmarinic acid, shimobashiric acid, salvianolic acid L, rutin, thymoquinone, and quercetin Flavonoids Not elucidated Arumugam et al. [176]
- Podocarpus sensu latissimo Extract Barks 3-Methoxyflavones and 3-O-glycosides Flavonoids Provinol and flavin-7 Abdillahi et al. [177]
- Polyphenols and their compounds Isolated compound Provinol and flavin-7 Quercetin and resveratrol Polyphenol Decrease IL-4 and IL-5 levels, the airway hyperresponsiveness, and mucus overproduction Joskova et al. [178]
- Propolis Isolated compound Honey bees from several plants Pinocembrin and caffeic acid phenethyl ester Polyphenol and terpenoids Inhibits TGF-β1 Kao et al. [179]
- Psoralea corylifolia Extract Fruits 7-O-Methylcorylifol A, 7-O-isoprenylcorylifol A, and 7-O-isoprenylneobavaisoflavone Flavonoids Inhibit the N-formyl-L-methionyl-L-leucyl-L-phenylalanine (fMLP)-induced O2− generation and/or elastase release Chen et al. [180]
- Quercetin Isolated compound Tea, fruits, and vegetables 2-(3,4-Dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one Flavonoid Inhibits LOX and PDE4. Reduces leukotriene and histamine release with a decrease in the IL-4 level. Inhibits prostaglandin release and the human mast cell activation by Ca2+ influx Townsend et al. [181]; Mlcek et al. [182]
- Radix Rehmanniae Preparata Extract Not reported Catalpol Glycoside Inhibit IgE secretion. Decrease IL-4 and IL-5. Inhibit eosinophil infiltration and suppress eotaxin and its receptor CCR3. Reduce IL-5Rα levels Chen et al. [183]
- Resveratrol Isolated compound Skin and barks of red fruits Resveratrol (3,4,5-trihydroxystilbene) Polyphenol Decreases eosinophilia. Reduces neutrophil migration and inhibits PGD-2 release. Decreases IL-4 and IL-5 and also the hyperresponsiveness and mucus production Lee et al. [184]; Hu et al. [185]; Chen et al. [186]
- Schisandra chinensis Extract Dried fruits α-Cubebenoate Not reported Suppress bronchiolar structural changes. Inhibit the accumulation of lymphocytes, eosinophils, and macrophages in BALF. Suppress IL-4, IL-13, and TGF-β1. Increase the intracellular Ca2+ Lee et al. [187]
- Sea cucumber (Holothurians) Tonic Marine animal (sea cucumber) Holothurin A3, pervicoside A, and fuscocinerosides A Toxins Reduce COX enzymatic activity Guo et al. [188]
- Selaginella uncinata (Desv.) Extract Dried herbs Amentoflavone, hinokiflavone, and isocryptomerin Flavonoids Attenuate hyperresponsiveness and goblet cell hyperplasia. Decrease IL-4, IL-5, IL-13, and IgE levels in serum. Upregulation of T2R10 gene expression and downregulation of IP3R1 and Orai1 gene expression. Suppression of eotaxin, NFAT1, and c-Myc protein expression Yu et al. [189]
- Selaginella pulvinata Isolated compound Air-dried powder of the whole plant of S. pulvinata Selaginpulvilin A, selaginpulvilin B, and selaginpulvilin C Phenol Inhibit the PDE4 Liu et al. [190]
- Sideritis scardica Extract Leaves Echinacoside, verbascoside, luteolin, apigenin, caffeic acid, vanillic acid Glycosides, flavonoids, and phenolic acids Not elucidated Todorova and Trendafilova [191]
- Siegesbeckia glabrescens Extract Aerial roots 3,4′-O-Dimethylquercetin, 3,7-O-dimethylquercetin, 3-O-methylquercetin, and 3,7,4′-O-trimethylquercetin Flavonoids Reduce inflammatory cell infiltration in BALF. Decrease IL-4, IL-5, IL-13, eotaxin, and IgE. Reduce airway inflammation and mucus overproduction. Decrease iNOS and COX-2 expression and reduce NO levels Jeon et al. [192]
- Sitostanol Isolated compound Marketable synthetic compound Sitostanol Steroid Suppresses IL-4 and IL-13 release Brüll et al. [193]
- Soft coral Isolated compound Sarcophyton ehrenbergi Not reported Prostaglandins Inhibits PDE4 Cheng et al. [194]
- Solanum paniculatum L. Extract Fruits Stigmasterol and β-sitosterol Steroid Reduce IL-4 and NO levels. Decrease IFN-γ without changes in IL-10 levels. Reduce NF-κB, TBET, and GATA3 gene expression Rios et al. [195]
- Squill (Drimia maritima (L.) Stearn) oxymel Crude extract Not reported Scillaren A, scillirubroside, scilliroside, scillarenin, and proscillaridin A Glycosides Not elucidated Nejatbakhsh et al. [196]
- Sorbus commixta Hedl. (Rosaceae) Methanolic extract Fruits Neosakuranin Glycosides Not elucidated Bhatt et al. [197]
- Thuja orientalis Extract Fruits Cupressuflavone, amentoflavone, robustaflavone, afzelin, (+)-catechin, quercetin, hypolaetin 7-O-β-xylopyranoside, isoquercitrin, and myricitrin Flavonoids Reduce nitric oxide production and reduce the relative mRNA expression levels of inducible nitric oxide synthase (iNOS), IL-6, cyclooxygenase-2, MMP-9, and TNF-α in vitro. Decrease the inflammatory cell counts in BALF. Reduce IL-4, IL-5, IL-13, eotaxin, and IgE levels and reduce the airway hyperresponsiveness in vivo. Attenuate mucus hypersecretion Shin et al. [198]
- Tonggyu-tang Extract Ledebouriella divaricata Hiroe, Angelica koreanum Kitagawa, Angelica tenuissima Nakai, Cimicifuga heracleifolia Kom., Pueraria thunbergiana Benth., Ligusticum wallichii var. officinale Yook., Atractylodes lancea DC., Thuja orientalis L., Ephedra sinica Stapf., Zanthoxylum schinifolium S.Z., Asarum sieboldii var. seoulense Nakai, Glycyrrhiza glabra, Astragalus membranaceus var. mongholicus Bung, Xanthium strumarium L., Magnolia denudata Desr., Mentha arvensis var. piperascens Makino Not reported Plant Inhibit inflammatory cytokines (IL-4, IL-6, IL-8, and TNF-α). Suppress mitogen-activated protein kinase (MAPK) and NF-κB in mast cells and keratinocytes Kim et al. [199]
- Trigonella foenum-graecum Extract Seeds Not reported Flavonoids Reduce IL-5, IL-6, IL-1β, and TNF-α. Reduce collagen deposition in goblet cells. Suppress inflammatory cells Piao et al. [200]
- Tropidurus hispidus Oil Fat of Tropidurus hispidus Croton oil, arachidonic acid, phenol, and capsaicin Fatty acids and their derivatives Affect arachidonic acid and its metabolites and reduce proinflammatory mediators Santos et al. [201]
- Urtica dioica L. Extract Leaves Caffeic acid, gallic acid, quercetin, scopoletin, carotenoids, secoisolariciresinol, and anthocyanidins Polyphenols, flavonoids, coumarin, and lignan Reduce leucocyte and lymphocyte levels in serum. Inhibit the eosinophilia increase in BALF. Suppress inflammatory cell recruitment and attenuate lipid peroxidation of lung tissues Zemmouri et al. [202]
- Verproside Isolated compound Pseudolysimachion Verproside Glycoside Suppress the NF-κB and TNF-α expression Lee et al. [203]
- Vitamin D Isolated compound Not reported Calcitriol Vitamin Inhibits lymphocytes (Th1 and Th2) and reduces cytokine production Szekely and Pataki [204]
- Vitamin E Isolated compound Plant lipids α-, β-, γ-, and δ-Tocopherols and the α-, β-, γ-, and δ-tocotrienols Vitamin Reduce airway hyperresponsiveness, IL-4, IL-5, IL-13, OVA-specific IgE, eotaxin, TGF-β, 12/15-LOX, lipid peroxidation, and lung nitric oxide metabolites Cook-Mills and McCary [205]; Abdala-Valencia et al. [206]
- Vitex rotundifolia Linn. fil. (Verbenaceae) Methanolic extract Fruits 1H,8H-Pyrano[3,4-c]pyran-1,8-dione Not reported Inhibit eotaxin, IL-8, IL-16, and VCAM-1 mRNA Lee et al. [207]
- Viticis fructus Extract Dried fruit Pyranopyran-1,8-dione Not reported Inhibit eosinophil and lymphocyte cell infiltration into the BAL fluid. Reduce IL-4, IL-5, IL-13, and eotaxin to normal levels. Suppress IgE levels Park et al. [208]
- Yu ping feng san Extract Radix Saposhnikoviae (Fangfeng), Radix Astragali (Huangqi), and Rhizoma Atractylodis macrocephalae (Baizhu) Calycosin-7-O-β-D-glucoside, calycosin, formononetin, atractylenolide III, II, and I; 5-O-methylvisammioside, 8-methoxypsoralen, and bergapten Flavonoids, terpenoids, saponins, and furocoumarins Inhibit TNF-α, IFN-γ, and IL-1β Stefanie et al. [209]
- Zygophyllum simplex L. Extract Aerial parts Isorhamnetin-3-O-β-D-rutinoside, myricitrin, luteolin-7-O-β-D-glucoside, isorhamnetin-3-O-β-D-glucoside, and isorhamnetin Phenol Inhibit NF-κB, TNF-α, IL-1β, and IL-6 Abdallah and Esmat [210]
- Ziziphus amole Extract Leaves, stems, barks, and roots Alphitolic acid, sitosterol, ziziphus-lanostan-18-oic acid Terpenoid and steroid Inhibit myeloperoxidase activity Romero-Castillo et al. [211]

### 2.1. Natural Products from Plants

The use of natural products obtained from plants in traditional medicine has been reported for centuries, especially in countries such as China, Japan, and India [212]. Thus, the topics below concern these products or bioactive compounds originating from the most studied plants used in asthma therapy.

#### 2.1.1. Flavonoids

Flavonoids are natural compounds from plants, nuts, and fruits that are chemically characterized by the presence of two benzene rings (A and B) linked through a heterocyclic pyran ring (C). They represent a large group of polyphenolic secondary metabolites [213], with more than 8,000 different compounds already identified [214]. Considering their chemical structure, they can be classified as flavans, flavanones, isoflavanones, flavones, isoflavones, anthocyanidins, and flavonolignans [214]. Flavans or isoflavans possess a heterocyclic hydrocarbon skeleton, chromane, with a substitution on the C ring, at carbon 2 or 3, by a phenyl group (B ring). Flavanones and isoflavanones show an oxo group at position 4. The presence of a double bond between C2 and C3 indicates flavones and isoflavones, and the addition of a C1 to C2 double bond characterizes anthocyanidins [214].

The diversity of their chemical structures contributes to their broad range of physiological and biological activities, among which the antioxidant, anti-inflammatory, antiallergic, antiviral, hepatoprotective, antithrombotic, and anticarcinogenic activities can be highlighted [213]. In this review, 14 studies reported flavonoids as a group of compounds with potential for use in asthma treatment. The following subsections present the main flavonoids with antiasthmatic activity reported in the literature and used in traditional medicine. These studies attributed the antiasthmatic activity of plant extracts containing these compounds, in part, to their presence in the phytocomplex.

(1) Flavone Compounds: Chrysin, Baicalin, Luteolin, and Oroxylin A. Defined as 5,7-dihydroxy-2-phenyl-4H-chromen-4-one, chrysin is classified as a flavone that can be found in Passiflora caerulea and Passiflora incarnata flowers, as well as in Matricaria chamomilla, popularly known as chamomile, besides being present in propolis and other plants [90, 100]. Chrysin is able to suppress the proliferation of airway smooth muscle cells and to promote a reduction in IL-4, IL-13, IgE, and interferon-γ levels, leading to an attenuation of the asthma inflammatory process [89]. Bae et al. [90] performed their studies in an in vitro cell culture model with the purpose of describing how chrysin promotes its inhibitory effect on proinflammatory cytokines.
They suggested that this effect was caused by the reduction of intracellular calcium in mast cells, since calcium is responsible for proinflammatory cytokine gene transcription [90]. In addition, a study performed by Yao and colleagues [88] investigated the activity of chrysin against asthma in mice sensitized with ovalbumin (OVA). Their results revealed that chrysin is a promising compound for controlling airway remodeling and the clinical manifestations of asthma [88].

Baicalin, a 7-glucuronic acid-5,6-dihydroxyflavone, is a natural metabolite easily found in the leaves and barks of several species of the Scutellaria genus [215]. Studies performed by Park and colleagues [74] investigated the anti-inflammatory activity of baicalin using an asthma-induced animal model. The results showed that this compound decreased the inflammatory cell infiltration and the levels of TNF-α in the bronchoalveolar lavage fluid (BALF). The activity of baicalin was attributed to the fact that this metabolite selectively inhibits the enzymatic activity of PDE4 and suppresses the lipopolysaccharide-induced TNF-α expression in macrophages, indicating a potential use of this metabolite in asthma treatment [74].

Additionally, luteolin (2-(3,4-dihydroxyphenyl)-5,7-dihydroxy-4-chromenone), another compound with demonstrated antiasthma activity, is widely found in aromatic flowering plants, such as Salvia tomentosa and other Lamiaceae, as well as in broccoli, green pepper, parsley, and thyme [216]. Shen and colleagues [133] studied its pharmacological activity through inhibition of the GABAergic system, which is responsible for the overproduction of mucus during the asthmatic crisis through overstimulation of the epithelial cells. The study indicated that this compound was able to attenuate goblet cell hyperplasia through the partial inhibition of GABA activities [133].

Another antiasthmatic flavonoid is oroxylin A, a flavone found in the extract of Scutellaria baicalensis Georgi and in the Oroxylum indicum tree [156]. According to Zhou [157], oroxylin A, or 5,7-dihydroxy-6-methoxy-2-phenylchromen-4-one, was able not only to reduce the airway hyperactivity in an OVA-induced asthma murine model, but also to decrease the levels of IL-4, IL-5, IL-13, and OVA-specific IgE in BALF [157]. This study also showed the ability of oroxylin A to inhibit alveolar wall thickening and to prevent inflammatory cell infiltration in the perivascular and peribronchial areas, as assessed by histopathological evaluation [157].

(2) Flavonol Compounds: Quercetin, Galangin, and Kaempferol. Quercetin (2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one), a flavonol widely found in onions, apples, broccoli, cereals, grapes, tea, and wine, has been recognized as the main active compound of these plants and, therefore, responsible for their widespread use in traditional medicine for the treatment of inflammatory, allergic, and viral diseases [213]. The studies using this compound as an antiasthmatic were performed in cell cultures and rats, as in vitro and in vivo models, respectively, showing its high capacity to reduce inflammatory processes. According to these studies, the anti-inflammatory mechanism of quercetin is attributed to lipoxygenase and PDE4 inhibition and to the reduction of histamine and leukotriene release, which promote a decrease in proinflammatory cytokine formation and in IL-4 production, respectively.
In addition, quercetin also promoted the inhibition of human mast cell activation by Ca2+ influx and the inhibition of prostaglandin release [182], favoring the therapeutic relief of asthma symptoms and decreasing the dependence on short-acting β-agonists [181, 182].

Galangin, a compound chemically defined as 3,5,7-trihydroxy-2-phenylchromen-4-one and easily found in Alpinia officinarum [217], had its pharmacological activity evaluated in a specific-pathogen-free mouse model [115]. The study, performed by Liu [115], showed an effective response against OVA-induced inflammation in vivo as well as a reduction of ROS levels in vitro. Furthermore, galangin acted as an antiremodeling agent in asthma, since this compound inhibited goblet cell hyperplasia, lowered TGF-β1 levels, and suppressed the expression of vascular endothelial growth factor (VEGF) and matrix metalloproteinase-9 (MMP-9) in BALF and lung tissue. This result highlighted its antiremodeling activity through the TGF-β1-ROS-MAPK pathway, supporting its potential use in asthma treatment [115].

Another flavonol, kaempferol, chemically defined as 3,5,7-trihydroxy-2-(4-hydroxyphenyl)-4H-chromen-4-one, is widely found in citrus fruits, broccoli, apples, and other plant sources [213]. This compound has been studied for its pharmacological potential, especially against inflammation. In the study performed by Chung et al. [127], an OVA-induced airway inflammation mouse model of asthma demonstrated that kaempferol can significantly reduce the inflammatory process by decreasing inflammatory cell infiltration and the production of inflammatory cytokines and IgE antibodies. In addition, this compound was also able to reduce intracellular ROS production in the airway inflammation reaction [127].

Furthermore, Mahat et al. [218] demonstrated that the anti-inflammatory activity of kaempferol occurs through the inhibition of nitric oxide production and of nitric oxide-induced COX-2 activation, further preventing the cytotoxic effects of nitric oxide and reducing prostaglandin-E2 production [218]. To improve the prospects of kaempferol as a bioactive for the development of new drugs or medicines, the previously mentioned study by Chung [127] also describes the antiasthma activity of a glycosylated derivative of kaempferol, kaempferol-3-O-rhamnoside. The glycosylation of kaempferol improved its solubility and stability, besides reducing its toxicity [127], yielding a compound with great potential to expand the asthma therapeutic arsenal. Accordingly, this compound may be responsible for the anti-inflammatory properties of the plant extracts that contain it and that have been used for asthma treatment.

#### 2.1.2. Resveratrol

Resveratrol is a natural stilbenoid, a class of polyphenol obtained from the skin and bark of red fruits, with known antioxidant and promising anti-inflammatory and antiasthma activities [186]. In studies using eosinophils obtained from asthmatic individuals, Hu et al. [185] demonstrated that resveratrol induces not only cell cycle arrest in the G1/S phase but also apoptosis, decreasing the eosinophil number [185], thus reducing neutrophil migration and, consequently, preventing histamine and PGD-2 release and thereby avoiding vasodilatation, mucus production, and bronchoconstriction (Figure 1).
Additionally, Lee and colleagues [184] demonstrated that resveratrol was effective in an asthmatic mouse model, as this polyphenol induced a significant decrease in the plasma levels of T-helper-2-type cytokines, such as IL-4 and IL-5. It also decreased airway hyperresponsiveness, eosinophilia, and mucus hypersecretion [184]. Although performed with different methods, the studies agree on the scientific evidence supporting the oral use of resveratrol as an effective natural compound to treat asthma patients.

#### 2.1.3. Boswellia

Boswellia is a tree genus that produces an oil known as frankincense, which is obtained through incisions in the trunks of these trees. This oil is composed of 30–60% resin, 5–10% essential oils, and polysaccharides [219]. Studies evaluating the pharmacological activities of this product revealed that the Boswellia bioactives are boswellic acids and AKBA (3-O-acetyl-11-keto-β-boswellic acid), both responsible for preventing NF-κB activation and, consequently, inhibiting IL-1, IL-2, IL-4, IL-6, and IFN-γ release [52]. They also inhibit LOX-5, thus preventing leukotriene release [78]. Based on the physiopathology of asthma, it is therefore possible to infer that these compounds may act as antiasthma molecules, since these enzymes and mediators are involved in asthma-related inflammation. Moreover, another study evaluating the antiasthma activity of these compounds showed that the association of Boswellia serrata, Curcuma longa, and Glycyrrhiza had a pronounced effect on the management of bronchial asthma [79], suggesting its potential in asthma therapy.

### 2.2. Natural Products from Animal Source

Animal-derived natural products still represent a minority of the natural sources of products intended for asthma treatment. Nonetheless, many studies describe the use of animal-based products, such as oils, milk, and spleen, as complementary therapy for several diseases, including asthma. Traditional medicine reports benefits from consuming some animal parts and animal products, since they can be rich in compounds such as lipids, prostaglandins, unsaturated fatty acids, enzymes, and polysaccharides, which are responsible for their pharmacological activities [220, 221]. In addition, animal sources are also widely cited as biocompatible and biodegradable, suggesting their safe use. The animal products and compounds cited in this section can be obtained from several sources, such as mammals, amphibians, and crustaceans, demonstrating a wide range of possibilities.

#### 2.2.1. Animal Sea Source: Holothuroidea, Penaeus, and Sarcophyton ehrenbergi

Marine ecosystems represent an important source of natural compounds due to their wide biodiversity, which includes animals and plants that are unique to this environment. Therefore, many studies have been performed to evaluate the antimicrobial, anti-inflammatory, antiviral, and antiasthmatic potential of algae and sea animals.

In this regard, the sea cucumber, a marine invertebrate of the class Holothuroidea usually found in benthic areas and deep seas, has been used in traditional medicine by Asian and Middle Eastern communities as an elixir, due to its pharmacological activity in the treatment of hypertension, asthma, rheumatism, cuts, burns, and constipation [188].
These pharmacological activities are attributed to the presence of saponins, cerebrosides, polysaccharides, and peptides in its composition [188, 220]. Bordbar et al. [220], in a literature review, mentioned an experimental study by Herencia et al. [222] in which sea cucumber extract reduced the enzymatic activity of cyclooxygenase in inflamed mouse tissues without promoting any modification of the cyclooxygenase enzyme, showing that sea cucumber extract is a potent natural product for use against several inflammatory diseases [220].

Ozdemir and colleagues [87] investigated the pharmacological activity of chitin, a polysaccharide formed by repeated units of N-acetylglucosamine linked into long chains through β-(1→4) bonds [221] and the major compound of the shrimp (Penaeus) exoskeleton. In this study, the authors performed the intranasal administration of chitin microparticles in an asthma-induced mouse model, which promoted the reduction of serum IgE and peripheral blood eosinophilia, besides decreasing airway hypersensitivity [87]. Additionally, another study identified and isolated ten new prostaglandin derivatives from the extract of Sarcophyton ehrenbergi, a soft coral species found in the Red Sea [194], five of which showed inhibitory activity against PDE4 (44.3%) at 10 μg/mL, suggesting their use in the treatment of asthma and chronic obstructive pulmonary disease, since PDE4 is a drug target in the treatment of both diseases [194].

Finally, these studies demonstrate that marine sources need to be further investigated, since a wide variety of bioproducts and/or bioactives with potential anti-inflammatory and antiasthmatic properties can be found in this environment.

#### 2.2.2. Bullfrog (Rana catesbeiana Shaw) Oil

Bullfrog oil is a natural oil extracted from the adipose tissue of the amphibian Rana catesbeiana Shaw, which originates from North America and has its meat widely commercialized around the world [223]. This oil has been used in traditional medicine to treat inflammatory disorders, especially asthma [223]. It is composed of a mixture of mono- and polyunsaturated fatty acids and a bile-derived steroid compound (ethyl iso-allocholate) [81, 224], which are responsible for its therapeutic properties [81].

According to Yaqoob [57], the presence of oleic, linolenic, stearic, palmitic, and myristic fatty acids can promote the suppression of immune cell functions [58]. Based on such evidence, it is possible to infer that bullfrog oil, owing to its chemical composition, can be used in the treatment of inflammation-related disorders such as asthma. However, further studies are needed to confirm this hypothesis.

#### 2.2.3. Other Products Derived from Animals

Although the majority of the animal products currently used in traditional medicine for asthma treatment come from animal tissues, there is evidence that mammalian fluids, for example, buffalo spleen liquid, milk, and colostrum, can act on the immune system, promoting a decrease in asthma symptoms [80].

Buffalo spleen liquid was investigated in a study performed by Neamati and colleagues [80], in which pigs were sensitized to asthma using ovalbumin, followed by administration of a buffalo spleen liquid-based adjuvant. A decrease in the tracheal response as well as a reduction in the white blood cell number in lung lavage was observed in sensitized animals when compared to healthy ones [80], showing the potential of this fluid in promoting asthma control.
In addition, another study evaluated the antiasthma activity of milk and colostrum, which contain linolenic acid and proteins such as lactoferrin [141], as natural products. This study showed a modulation of the plasma lipid concentration in human and animal models and a decrease in the allergic airway inflammation induced by ragweed pollen grain extract.

### 2.3. Bioactives Obtained from Microorganisms

The use of bacterial and fungal metabolites in the treatment of several diseases has been widely reported since the discovery of penicillin. However, more recent studies have further investigated the antiasthmatic potential of these metabolites [225]. In this regard, a study performed by Lu and colleagues [134] evaluated the antiasthma activity of the bacterial lysate OM-85 Broncho-Vaxom (BV), a patented pharmaceutical product [134]. The study observed that the bacterial lysate coupled with the conventional treatment was able to increase the rate of natural killer T cells in the peripheral blood, decreasing the cytokine level (cytokine types not described) and thus promoting the reduction of asthma symptoms. Furthermore, kefir, a fermented milk drink produced by lactic and acetic acid bacteria, which also contains kefiran, an insoluble polysaccharide, as its main component [128, 129], had its in vivo anti-inflammatory activity evaluated. Kefir was able to reduce to normal levels the release of IL-4, IL-6, and IL-10 along with the production of IFN-γ and TNF-α [128]. In addition, the intragastric administration of kefiran promoted the reduction of OVA-induced cytokine production in a murine asthma model, decreasing the pulmonary eosinophilia and mucus hypersecretion [128, 129].

Therefore, based on these reports and on the historical use of microorganisms as sources for the isolation of new bioactives and the development of medicines, it is important to highlight that these new agents may contribute to the current asthma treatment.
## 3. Conclusion: Widely Used Active Pharmaceutical Ingredients from Natural Sources

As previously demonstrated, natural products have been extensively used as a complementary treatment in asthma therapy. Some studies concerning these products have aimed at investigating their activity as a matrix of compounds to complement or replace the current asthma treatment, while others have aimed at isolating compounds to generate new medicines based on synthetic drugs of natural origin [226].

Historically, natural products have contributed tremendously to the development of marketable medicines for the treatment of several diseases [226]. The evaluation of their therapeutic activities and the identification and isolation of their bioactive molecules allowed not only their clinical use but also the discovery of the pharmacophore groups and the radicals responsible for their toxicity or their biopharmaceutical aspects. In fact, based on such studies, it is possible to perform structural or delivery changes on these compounds that would increase their safety or modulate their half-life, allowing them to be targeted to specific action sites [227].

This review presents the experimental studies of the last decade that identified the antiasthma activity of different natural sources, along with the molecules responsible for it. Altogether, these studies presented preliminary data that require further investigation so that, in the near future, these compounds can be used in the design and production of medicines.
Currently, a few natural-based active compounds are already available in the market, such as ipratropium bromide, theophylline, epinephrine, and sodium cromoglycate [226, 228–231].

Ipratropium bromide, an anticholinergic drug able to promote bronchodilation, has been widely used for the treatment of asthma. This compound was synthesized from atropine, a compound first extracted in 1809 from Atropa belladonna L., although it can also be found in other plants of the Solanaceae family [228, 229]. However, its chemical structure was only elucidated in 1833, and it entered clinical use in 1850, allowing a proper understanding of its in vivo biopharmaceutics and therapeutic characteristics [232].

Theophylline is an antiasthmatic drug widely used in the management of severe persistent asthma, promoting bronchodilation and attenuating asthma inflammation. Also known as 1,3-dimethylxanthine, this molecule was extracted in 1888 from Theobroma cacao L. and Camellia sinensis L., plants present in several countries. Later, in 1922, this drug was introduced into asthma therapy [233]. Years later, ephedrine was extracted from Ephedra sinica, a plant widely used in Chinese traditional medicine, allowing the synthesis of beta-agonist antiasthmatic drugs, such as salbutamol and salmeterol, currently used in asthma treatment [226].

Furthermore, sodium cromoglycate, a drug obtained from khellin, a bioactive extracted from Ammi visnaga (L.) Lamk, has been used based on its ability to inhibit mast cell degranulation, which enabled its use in asthma treatment [226, 230].

Overall, these reports highlight the relevance of the investigation and isolation of new bioactive compounds with antiasthmatic potential. As the current asthma treatment involves drugs that have been extensively studied in past decades, experimental studies evaluating the activity of compounds obtained from diverse natural sources might allow the development of new antiasthmatic drugs in the near future.

## 4. Final Considerations

The current asthma treatment is costly and has many side effects, which compromises patient compliance. Literature reports show that asthma treatment can be improved using natural products to complement the traditional drugs, since those products are of low cost, are biocompatible, and show reduced side effects. The literature search using the keywords asthma, natural products, and treatment individually resulted in 14,296,762 records, including scientific articles, reviews, editorial reference works, and abstracts. Additionally, the keyword combination "Asthma + Natural Products" found 18,111 studies; "Asthma + Treatment," 209,423 studies; "Natural Products + Treatment," 459,685 studies; and "Asthma + Treatment + Natural Products," 1,986 studies. Thus, after screening for duplicate studies, 1,934 abstracts were evaluated. Finally, based on the inclusion criteria, 172 studies reporting the use of natural products for asthma treatment were included in this review: 160 studies reported plants as the natural source, 9 reported animal sources, and 3 described bacteria and fungi as bioactive sources, totaling 134 compounds that can be used as complementary or alternative medicine in asthma treatment. Plants were found to be the major source of products used by folk medicine to treat asthma, since they are a renewable source of easy access.
Also, due to their variety of secondary metabolites, plants are able to promote antiasthma activity mainly through their anti-inflammatory and bronchodilator properties. This review revealed that flavonoids, phenolic acids, and terpenoids are the main elucidated compounds able to promote the attenuation of asthma symptoms. On the other hand, a lack of scientific reports regarding the pharmacological activity of natural products from animal and microorganism sources has limited their use. However, these products still represent an important source of bioactive compounds for asthma treatment. In addition, despite the relevant antiasthmatic activity, the literature search showed a lack of investigations concerning pharmacokinetic properties as well as more accurate information regarding efficacy, safety, and the dosage required to induce in vivo antiasthma activity. In conclusion, because the current asthma treatment involves drugs obtained from natural products widely explored in the past, the experimental studies reported in this review may lead to the development of new drugs able to improve antiasthmatic treatment in the future.

---
*Source: 1021258-2020-02-13.xml*
2020
# Multiple Attribute Group Decision-Making Models Using Single-Valued Neutrosophic and Linguistic Neutrosophic Hybrid Element Aggregation Algorithms

**Authors:** Sumin Zhang; Jun Ye
**Journal:** Journal of Mathematics (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1021280

---

## Abstract

Multiple attribute group decision-making (MAGDM) issues may involve quantitative and qualitative attributes. In inconsistent and indeterminate decision-making issues, current assessment information of quantitative and qualitative attributes with respect to alternatives contains only either numerical neutrosophic values or linguistic neutrosophic values as a single information expression. However, existing neutrosophic techniques cannot perform the mixed information denotation and aggregation operations of numerical neutrosophic values and linguistic neutrosophic values in neutrosophic decision-making issues. To solve these problems, this article presents the information denotation, aggregation operations, and MAGDM models of single-valued neutrosophic and linguistic neutrosophic hybrid sets/elements (SVNLNHSs/SVNLNHEs) as new techniques to handle MAGDM issues with quantitative and qualitative attributes in the environment of SVNLNHEs. In this study, we first propose a SVNLNHS/SVNLNHE notion that consists of a single-valued neutrosophic element (SVNE) for the quantitative argument and a linguistic neutrosophic element (LNE) for the qualitative argument. According to a linguistic and neutrosophic conversion function and its inverse conversion function, we present some basic operations of SVNEs and LNEs, the SVNLNHE weighted arithmetic mean (SVNLNHEWAMN) and SVNLNHE weighted geometric mean (SVNLNHEWGMN) operators (forming SVNEs), and the SVNLNHEWAML and SVNLNHEWGML operators (forming LNEs). Next, MAGDM models are established based on the SVNLNHEWAMN and SVNLNHEWGMN operators or the SVNLNHEWAML and SVNLNHEWGML operators to realize MAGDM issues with single-valued neutrosophic and linguistic neutrosophic hybrid information, and their applicability and availability are then indicated through an illustrative example in the SVNLNHE circumstance. By comparison with the existing techniques, our new techniques reveal obvious advantages in the mixed information denotation, aggregation algorithms, and decision-making methods when handling MAGDM issues with quantitative and qualitative attributes in the setting of SVNLNHSs.

---

## Body

## 1. Introduction

In general, there exist both quantitative attributes and qualitative attributes in multiple attribute (group) decision-making (MADM/MAGDM) issues. In the assessment process, the assessment information of quantitative attributes is usually represented by numerical values because numerical values are more suitable for the denotation of quantitative arguments, while the assessment information of qualitative attributes is usually assigned linguistic term values because linguistic values better match human judgment and thinking/expression habits. Generally speaking, it is difficult to represent qualitative arguments by numeric values, but they are easily represented by linguistic values.
In inconsistent and indeterminate situations, a simplified neutrosophic set (SNS) [1], including an interval-valued neutrosophic set/element (IVNS/IVNE) [2] and a single-valued neutrosophic set/element (SVNS/SVNE) [3], is depicted by the truth, falsity, and indeterminacy membership degrees, while a linguistic neutrosophic set/element (LNS/LNE) [4] is depicted by the truth, falsity, and indeterminacy linguistic values. Since the neutrosophic set theories [5], including SNS, SVNS, IVNS, and LNS, are vital mathematical tools for denoting and handling indeterminate and inconsistent issues in the real world, they have been widely applied in decision-making issues [6–14]. In the setting of SNSs, some researchers presented various aggregation operators and their MADM/MAGDM models to solve neutrosophic MADM/MAGDM problems [10, 15–20]. Then, other researchers introduced various extended versions of SNSs, including single-valued neutrosophic rough sets [21], normal neutrosophic sets [22], bipolar neutrosophic sets [23], simplified neutrosophic indeterminate sets [24], and neutrosophic Z-numbers [25], and used them in MADM/MAGDM issues. In the setting of LNEs, some researchers proposed several aggregation operators of LNEs and their MAGDM models to carry out linguistic neutrosophic MAGDM problems [26, 27]. Then, some extended linguistic sets, such as linguistic neutrosophic uncertain sets and linguistic neutrosophic cubic sets, were also presented to handle some linguistic neutrosophic MAGDM problems [28]. Unfortunately, the existing neutrosophic theories and MADM models [28, 29] cannot yet resolve the denotation, operations, and MADM issues of the mixed information of SVNEs and LNEs, because the existing assessment information of quantitative or qualitative attributes with respect to alternatives gives only either numerical neutrosophic information or linguistic neutrosophic information as a single information expression. In the case of single-valued neutrosophic and linguistic neutrosophic mixed information, existing neutrosophic technologies can neither represent the mixed information of SVNEs and LNEs nor perform mixed operations on the two. Therefore, the mixed information representation, aggregation operations, and decision-making problems pose challenges, which motivates this research to address them.
To solve these problems, the aims of this article are as follows: (1) to propose a single-valued neutrosophic and linguistic neutrosophic hybrid set/element (SVNLNHS/SVNLNHE) for the mixed information representation of both SVNE and LNE, (2) to present basic operations of SVNEs and LNEs according to a linguistic and neutrosophic conversion function and its inverse conversion function, (3) to propose the single-valued neutrosophic and linguistic neutrosophic hybrid element weighted arithmetic mean (SVNLNHEWAMN) and single-valued neutrosophic and linguistic neutrosophic hybrid element weighted geometric mean (SVNLNHEWGMN) operators for aggregation into SVNEs and the SVNLNHEWAML and SVNLNHEWGML operators for aggregation into LNEs, (4) to establish MAGDM models based on the SVNLNHEWAMN and SVNLNHEWGMN operators or the SVNLNHEWAML and SVNLNHEWGML operators in the setting of SVNLNHSs, and (5) to apply the established MAGDM models to an illustrative example on the selection problem of industrial robots that contains both quantitative and qualitative attributes in a SVNLNHS circumstance.

Generally, the main contributions of this article are summarized as follows:

(i) The proposed SVNLNHS/SVNLNHE solves the representation problem of single-valued neutrosophic and linguistic neutrosophic mixed information.
(ii) The proposed weighted aggregation operators of SVNLNHEs based on the linguistic and neutrosophic conversion function and its inverse conversion function provide effective aggregation algorithms for SVNLNHEs.
(iii) The established MAGDM models can solve MAGDM issues with quantitative and qualitative attributes in a SVNLNHS circumstance.
(iv) The established MAGDM models can solve the selection problem of industrial robots that contains both quantitative and qualitative attributes and show the availability and rationality of the new techniques in a SVNLNHS circumstance.

The remaining structure of this article consists of the following sections. Section 2 reviews the basic concepts and operations of SVNEs and LNEs as the preliminaries of this study. The notions of SVNLNHS and SVNLNHE and some basic operations of SVNEs and LNEs based on the linguistic and neutrosophic conversion function and its inverse conversion function are proposed in Section 3. In Section 4, the SVNLNHEWAMN, SVNLNHEWGMN, SVNLNHEWAML, and SVNLNHEWGML operators are presented in terms of the basic operations of SVNEs and LNEs. In Section 5, two new MAGDM models are established using the SVNLNHEWAMN and SVNLNHEWGMN operators or the SVNLNHEWAML and SVNLNHEWGML operators. Section 6 presents an illustrative example on the selection problem of industrial robots that contains both quantitative and qualitative attributes and then gives a comparative analysis with the existing techniques to show the availability and rationality of the new techniques. Finally, conclusions and future research are summarized in Section 7.

## 2. Preliminaries of SVNEs and LNEs

This part reviews the basic notions and operations of SVNEs and LNEs.

### 2.1. Basic Notions and Operations of SVNEs

Set $U = \{u_1, u_2, \ldots, u_m\}$ as a universal set.
Then, a SVNS $ZN$ in $U$ can be represented as [1, 3]

$$ZN = \{\langle u_i, x_{ZN}(u_i), y_{ZN}(u_i), z_{ZN}(u_i)\rangle \mid u_i \in U\}, \tag{1}$$

where $\langle u_i, x_{ZN}(u_i), y_{ZN}(u_i), z_{ZN}(u_i)\rangle$ ($i = 1, 2, \ldots, m$) is a SVNE in $ZN$ for $u_i \in U$ and $x_{ZN}(u_i), y_{ZN}(u_i), z_{ZN}(u_i) \in [0, 1]$; it is simply denoted as $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$.

For two SVNEs $zn_1 = \langle x_{ZN1}, y_{ZN1}, z_{ZN1}\rangle$ and $zn_2 = \langle x_{ZN2}, y_{ZN2}, z_{ZN2}\rangle$ and $\beta > 0$, their operational relations are as follows [17]:

(1) $zn_1 \oplus zn_2 = \langle x_{ZN1} + x_{ZN2} - x_{ZN1}x_{ZN2},\ y_{ZN1}y_{ZN2},\ z_{ZN1}z_{ZN2}\rangle$
(2) $zn_1 \otimes zn_2 = \langle x_{ZN1}x_{ZN2},\ y_{ZN1} + y_{ZN2} - y_{ZN1}y_{ZN2},\ z_{ZN1} + z_{ZN2} - z_{ZN1}z_{ZN2}\rangle$
(3) $\beta \cdot zn_1 = \langle 1 - (1 - x_{ZN1})^{\beta},\ y_{ZN1}^{\beta},\ z_{ZN1}^{\beta}\rangle$
(4) $zn_1^{\beta} = \langle x_{ZN1}^{\beta},\ 1 - (1 - y_{ZN1})^{\beta},\ 1 - (1 - z_{ZN1})^{\beta}\rangle$

Suppose that there is a group of SVNEs $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ ($i = 1, 2, \ldots, m$) with related weights $\beta_i \in [0, 1]$ satisfying $\sum_{i=1}^{m}\beta_i = 1$. Then the SVNE weighted arithmetic mean (SVNEWAM) operator and the SVNE weighted geometric mean (SVNEWGM) operator are introduced, respectively, as follows [17]:

$$\mathrm{SVNEWAM}(zn_1, zn_2, \ldots, zn_m) = \sum_{i=1}^{m}\beta_i \cdot zn_i = \left\langle 1 - \prod_{i=1}^{m}(1 - x_{ZNi})^{\beta_i},\ \prod_{i=1}^{m}y_{ZNi}^{\beta_i},\ \prod_{i=1}^{m}z_{ZNi}^{\beta_i}\right\rangle, \tag{2}$$

$$\mathrm{SVNEWGM}(zn_1, zn_2, \ldots, zn_m) = \prod_{i=1}^{m}zn_i^{\beta_i} = \left\langle \prod_{i=1}^{m}x_{ZNi}^{\beta_i},\ 1 - \prod_{i=1}^{m}(1 - y_{ZNi})^{\beta_i},\ 1 - \prod_{i=1}^{m}(1 - z_{ZNi})^{\beta_i}\right\rangle. \tag{3}$$

To compare SVNEs, the score and accuracy functions of SVNEs and their ranking laws are introduced below [17]. Set $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ as any SVNE. The score and accuracy functions of $zn_i$ are presented, respectively, as follows:

$$F(zn_i) = \frac{2 + x_{ZNi} - y_{ZNi} - z_{ZNi}}{3} \quad \text{for } F(zn_i) \in [0, 1], \tag{4}$$

$$G(zn_i) = x_{ZNi} - z_{ZNi} \quad \text{for } G(zn_i) \in [-1, 1]. \tag{5}$$

Then, the sorting laws based on the score values $F(zn_i)$ and the accuracy values $G(zn_i)$ ($i = 1, 2$) are as follows [17]:

(a) $zn_1 > zn_2$ for $F(zn_1) > F(zn_2)$
(b) $zn_1 > zn_2$ for $F(zn_1) = F(zn_2)$ and $G(zn_1) > G(zn_2)$
(c) $zn_1 = zn_2$ for $F(zn_1) = F(zn_2)$ and $G(zn_1) = G(zn_2)$

### 2.2. Basic Notions and Operations of LNEs

Let $U = \{u_1, u_2, \ldots, u_m\}$ be a universal set and $S = \{s_p \mid p = 0, 1, \ldots, r\}$ be a linguistic term set (LTS) with an odd cardinality $r + 1$. Thus, a LNS $LH$ is defined as follows [4]:

$$LH = \{\langle u_i, s_{a(u_i)}, s_{b(u_i)}, s_{c(u_i)}\rangle \mid u_i \in U\}, \tag{6}$$

where $\langle u_i, s_{a(u_i)}, s_{b(u_i)}, s_{c(u_i)}\rangle$ for $u_i \in U$ is a LNE in $LH$ and $s_{a(u_i)}, s_{b(u_i)}, s_{c(u_i)} \in S$ are the truth, indeterminacy, and falsity linguistic variables, respectively. For convenience, a LNE is simply denoted as $lh_i = \langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$.

For two LNEs $lh_1 = \langle s_{a_1}, s_{b_1}, s_{c_1}\rangle$ and $lh_2 = \langle s_{a_2}, s_{b_2}, s_{c_2}\rangle$ and $\beta > 0$, their operational relations are as follows [4]:

(1) $lh_1 \oplus lh_2 = \langle s_{a_1 + a_2 - a_1a_2/r},\ s_{b_1b_2/r},\ s_{c_1c_2/r}\rangle$
(2) $lh_1 \otimes lh_2 = \langle s_{a_1a_2/r},\ s_{b_1 + b_2 - b_1b_2/r},\ s_{c_1 + c_2 - c_1c_2/r}\rangle$
(3) $\beta \cdot lh_1 = \langle s_{r - r(1 - a_1/r)^{\beta}},\ s_{r(b_1/r)^{\beta}},\ s_{r(c_1/r)^{\beta}}\rangle$
(4) $lh_1^{\beta} = \langle s_{r(a_1/r)^{\beta}},\ s_{r - r(1 - b_1/r)^{\beta}},\ s_{r - r(1 - c_1/r)^{\beta}}\rangle$

Suppose that there is a group of LNEs $lh_i = \langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ ($i = 1, 2, \ldots, m$) with related weights $\beta_i \in [0, 1]$ satisfying $\sum_{i=1}^{m}\beta_i = 1$. Then the LNE weighted arithmetic mean (LNEWAM) and LNE weighted geometric mean (LNEWGM) operators are introduced as follows [4]:

$$\mathrm{LNEWAM}(lh_1, lh_2, \ldots, lh_m) = \sum_{i=1}^{m}\beta_i \cdot lh_i = \left\langle s_{r - r\prod_{i=1}^{m}(1 - a_i/r)^{\beta_i}},\ s_{r\prod_{i=1}^{m}(b_i/r)^{\beta_i}},\ s_{r\prod_{i=1}^{m}(c_i/r)^{\beta_i}}\right\rangle, \tag{7}$$

$$\mathrm{LNEWGM}(lh_1, lh_2, \ldots, lh_m) = \prod_{i=1}^{m}lh_i^{\beta_i} = \left\langle s_{r\prod_{i=1}^{m}(a_i/r)^{\beta_i}},\ s_{r - r\prod_{i=1}^{m}(1 - b_i/r)^{\beta_i}},\ s_{r - r\prod_{i=1}^{m}(1 - c_i/r)^{\beta_i}}\right\rangle. \tag{8}$$

Set $lh_i = \langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ as any LNE. The score and accuracy functions of $lh_i$ are defined, respectively, as follows [4]:

$$P(lh_i) = \frac{2r + a_i - b_i - c_i}{3r} \quad \text{for } P(lh_i) \in [0, 1], \tag{9}$$

$$Q(lh_i) = \frac{a_i - c_i}{r} \quad \text{for } Q(lh_i) \in [-1, 1]. \tag{10}$$

Then, the sorting laws based on the score values $P(lh_i)$ and the accuracy values $Q(lh_i)$ ($i = 1, 2$) are given as follows [4]:

(a) $lh_1 > lh_2$ for $P(lh_1) > P(lh_2)$
(b) $lh_1 > lh_2$ for $P(lh_1) = P(lh_2)$ and $Q(lh_1) > Q(lh_2)$
(c) $lh_1 = lh_2$ for $P(lh_1) = P(lh_2)$ and $Q(lh_1) = Q(lh_2)$
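To make the preliminaries concrete, the following Python sketch (ours, not part of the original paper) implements the SVNE aggregation operators of Eqs. (2) and (3) and the score functions of Eqs. (4) and (9) under the tuple conventions assumed here; all function names and example values are illustrative assumptions.

```python
from math import prod

# Minimal sketch (ours): an SVNE is a tuple (x, y, z) with entries in
# [0, 1]; an LNE is a tuple of linguistic subscripts (a, b, c) in [0, r].

def svne_wam(zns, betas):
    """SVNEWAM of Eq. (2): weighted arithmetic mean of SVNEs."""
    x = 1 - prod((1 - xi) ** b for (xi, _, _), b in zip(zns, betas))
    y = prod(yi ** b for (_, yi, _), b in zip(zns, betas))
    z = prod(zi ** b for (_, _, zi), b in zip(zns, betas))
    return (x, y, z)

def svne_wgm(zns, betas):
    """SVNEWGM of Eq. (3): weighted geometric mean of SVNEs."""
    x = prod(xi ** b for (xi, _, _), b in zip(zns, betas))
    y = 1 - prod((1 - yi) ** b for (_, yi, _), b in zip(zns, betas))
    z = 1 - prod((1 - zi) ** b for (_, _, zi), b in zip(zns, betas))
    return (x, y, z)

def svne_score(zn):
    """Score function F of Eq. (4), with values in [0, 1]."""
    x, y, z = zn
    return (2 + x - y - z) / 3

def lne_score(lh, r):
    """Score function P of Eq. (9), with values in [0, 1]."""
    a, b, c = lh
    return (2 * r + a - b - c) / (3 * r)

# Illustrative values: two SVNEs with weights 0.6 and 0.4.
zns, betas = [(0.7, 0.2, 0.1), (0.5, 0.3, 0.3)], [0.6, 0.4]
agg = svne_wam(zns, betas)
print(agg, svne_score(agg))
```

With equal weights $\beta_i = 1/m$, both operators reduce to the corresponding unweighted means, which is a quick sanity check for any implementation.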
## 3. SVNLNHSs and SVNLNHEs

This section proposes the SVNLNHS/SVNLNHE for the mixed information representation of both SVNEs and LNEs and then presents some basic operations of SVNEs and LNEs according to a linguistic and neutrosophic conversion function and its inverse conversion function.

Definition 1.
Let $U = \{u_1, u_2, \ldots, u_m\}$ be a universal set and $S = \{s_p \mid p = 0, 1, \ldots, r\}$ be a LTS with an odd cardinality $r + 1$. Then, a SVNLNHS $ML$ is defined by

$$ML = \{\langle u_i, TL_{ML}(u_i), IL_{ML}(u_i), FL_{ML}(u_i)\rangle \mid u_i \in U\}, \tag{11}$$

where $TL_{ML}(u_i)$, $IL_{ML}(u_i)$, and $FL_{ML}(u_i)$ are the truth, indeterminacy, and falsity membership functions, whose values are either fuzzy values with $TL_{ML}(u_i), IL_{ML}(u_i), FL_{ML}(u_i) \in [0, 1]$ or linguistic values with $TL_{ML}(u_i), IL_{ML}(u_i), FL_{ML}(u_i) \in S$ for $u_i \in U$. Moreover, the SVNLNHS $ML$ is composed of $q$ SVNEs $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ with $x_{ZNi}, y_{ZNi}, z_{ZNi} \in [0, 1]$ ($i = 1, 2, \ldots, q$) and $m - q$ LNEs $lh_i = \langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ with $s_{a_i}, s_{b_i}, s_{c_i} \in S$ and $a_i, b_i, c_i \in [0, r]$ ($i = q + 1, q + 2, \ldots, m$).

Definition 2. Suppose that $ML_1$ and $ML_2$ are two SVNLNHSs, which contain the $q$ SVNEs $zn_{1i} = \langle x_{ZN1i}, y_{ZN1i}, z_{ZN1i}\rangle$ and $zn_{2i} = \langle x_{ZN2i}, y_{ZN2i}, z_{ZN2i}\rangle$ ($i = 1, 2, \ldots, q$) and the $m - q$ LNEs $lh_{1i} = \langle s_{a_{1i}}, s_{b_{1i}}, s_{c_{1i}}\rangle$ and $lh_{2i} = \langle s_{a_{2i}}, s_{b_{2i}}, s_{c_{2i}}\rangle$ with $s_{a_{1i}}, s_{b_{1i}}, s_{c_{1i}}, s_{a_{2i}}, s_{b_{2i}}, s_{c_{2i}} \in S$ ($i = q + 1, q + 2, \ldots, m$). Then, $ML_1$ and $ML_2$ imply the following relations:

(1) $ML_1 \subseteq ML_2 \Leftrightarrow zn_{1i} \subseteq zn_{2i}$ ($i = 1, 2, \ldots, q$) and $lh_{1i} \subseteq lh_{2i}$ ($i = q + 1, q + 2, \ldots, m$), i.e., $x_{ZN1i} \le x_{ZN2i}$, $y_{ZN2i} \le y_{ZN1i}$, and $z_{ZN2i} \le z_{ZN1i}$ for $i = 1, 2, \ldots, q$ and $s_{a_{1i}} \le s_{a_{2i}}$, $s_{b_{1i}} \ge s_{b_{2i}}$, and $s_{c_{1i}} \ge s_{c_{2i}}$ for $i = q + 1, q + 2, \ldots, m$;
(2) $ML_1 = ML_2 \Leftrightarrow ML_1 \subseteq ML_2$ and $ML_2 \subseteq ML_1$, i.e., $x_{ZN1i} = x_{ZN2i}$, $y_{ZN1i} = y_{ZN2i}$, and $z_{ZN1i} = z_{ZN2i}$ for $i = 1, 2, \ldots, q$ and $s_{a_{1i}} = s_{a_{2i}}$, $s_{b_{1i}} = s_{b_{2i}}$, and $s_{c_{1i}} = s_{c_{2i}}$ for $i = q + 1, q + 2, \ldots, m$.

Definition 3. Set $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ and $lh_i = \langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ as any SVNE and any LNE, respectively. Then, let the linguistic and neutrosophic conversion function be $f(lh_i) = \langle a_i/r, b_i/r, c_i/r\rangle$ for $a_i, b_i, c_i \in [0, r]$; its inverse conversion function is $f^{-1}(zn_i) = \langle s_{x_{ZNi}r}, s_{y_{ZNi}r}, s_{z_{ZNi}r}\rangle$ for $x_{ZNi}, y_{ZNi}, z_{ZNi} \in [0, 1]$. Thus, some basic operations of SVNEs and LNEs are given as follows:

(1) $f^{-1}(zn_i) \oplus lh_i = \langle s_{x_{ZNi}r + a_i - x_{ZNi}a_i},\ s_{y_{ZNi}b_i},\ s_{z_{ZNi}c_i}\rangle$
(2) $zn_i \oplus f(lh_i) = \langle x_{ZNi} + a_i/r - x_{ZNi}a_i/r,\ y_{ZNi}b_i/r,\ z_{ZNi}c_i/r\rangle$
(3) $f^{-1}(zn_i) \otimes lh_i = \langle s_{x_{ZNi}a_i},\ s_{y_{ZNi}r + b_i - y_{ZNi}b_i},\ s_{z_{ZNi}r + c_i - z_{ZNi}c_i}\rangle$
(4) $\beta \cdot f^{-1}(zn_i) = \langle s_{r - r(1 - x_{ZNi})^{\beta}},\ s_{ry_{ZNi}^{\beta}},\ s_{rz_{ZNi}^{\beta}}\rangle$ for $\beta > 0$
(5) $\beta \cdot f(lh_i) = \langle 1 - (1 - a_i/r)^{\beta},\ (b_i/r)^{\beta},\ (c_i/r)^{\beta}\rangle$ for $\beta > 0$
(6) $f^{-1}(zn_i)^{\beta} = \langle s_{rx_{ZNi}^{\beta}},\ s_{r - r(1 - y_{ZNi})^{\beta}},\ s_{r - r(1 - z_{ZNi})^{\beta}}\rangle$ for $\beta > 0$
(7) $f(lh_i)^{\beta} = \langle (a_i/r)^{\beta},\ 1 - (1 - b_i/r)^{\beta},\ 1 - (1 - c_i/r)^{\beta}\rangle$ for $\beta > 0$

It is obvious that the operational results of (1), (3), (4), and (6) are LNEs and that the operational results of (2), (5), and (7) are SVNEs.
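As a concrete reading of Definition 3, the short Python sketch below (ours; function names are hypothetical) renders the conversion function $f$, its inverse $f^{-1}$, and hybrid operations (1) and (2), reusing the tuple conventions of the earlier sketch.

```python
# Sketch (ours) of the conversion functions of Definition 3. An SVNE is a
# tuple (x, y, z) in [0, 1]^3; an LNE is a tuple of linguistic subscripts
# (a, b, c) in [0, r]^3 on the LTS S = {s_0, ..., s_r}.

def f(lh, r):
    """Linguistic-to-neutrosophic conversion: f(lh) = <a/r, b/r, c/r>."""
    a, b, c = lh
    return (a / r, b / r, c / r)

def f_inv(zn, r):
    """Inverse conversion: f_inv(zn) = <s_{xr}, s_{yr}, s_{zr}> (subscripts)."""
    x, y, z = zn
    return (x * r, y * r, z * r)

def hybrid_oplus_lne(zn, lh, r):
    """Operation (1) of Definition 3: f_inv(zn) + lh, yielding LNE subscripts."""
    x, y, z = zn
    a, b, c = lh
    return (x * r + a - x * a, y * b, z * c)

def hybrid_oplus_svne(zn, lh, r):
    """Operation (2) of Definition 3: zn + f(lh), yielding an SVNE."""
    x, y, z = zn
    a, b, c = f(lh, r)
    return (x + a - x * a, y * b, z * c)

# Illustrative mixed pair with r = 8.
zn, lh, r = (0.6, 0.2, 0.3), (6, 2, 1), 8
print(hybrid_oplus_svne(zn, lh, r))  # SVNE result in [0, 1]^3
print(hybrid_oplus_lne(zn, lh, r))   # LNE subscripts in [0, r]^3
```

Since $f(f^{-1}(zn_i)) = zn_i$, the two routes differ only in whether the mixed result is expressed on the neutrosophic or the linguistic side.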
## 4. Weighted Arithmetic and Geometric Mean Operators of SVNLNHEs

This section proposes some weighted aggregation operators of SVNLNHEs corresponding to the linguistic and neutrosophic conversion function and its inverse conversion function and then indicates their properties.

### 4.1. Aggregation Operators of SVNLNHEs Corresponding to the Linguistic and Neutrosophic Conversion Function

Let $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ ($i = 1, 2, \ldots, q$) and $lh_i=\langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ ($i = q + 1, q + 2, \ldots, m$) be $q$ SVNEs and $m - q$ LNEs, respectively. Then, based on Definition 3 and the SVNEWAM and SVNEWGM operators of Eqs. (2) and (3) [17], the weighted arithmetic and geometric mean operators of SVNLNHEs corresponding to the linguistic and neutrosophic conversion function are proposed as the SVNLNHEWAMN and SVNLNHEWGMN operators:

$$\mathrm{SVNLNHEWAMN}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=\sum_{i=1}^{q}\beta_i\cdot zn_i+\sum_{i=q+1}^{m}\beta_i f(lh_i)=\Big\langle 1-\prod_{i=1}^{q}(1-x_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-a_i/r)^{\beta_i},\ \prod_{i=1}^{q}y_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(b_i/r)^{\beta_i},\ \prod_{i=1}^{q}z_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(c_i/r)^{\beta_i}\Big\rangle,\tag{12}$$

$$\mathrm{SVNLNHEWGMN}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=\prod_{i=1}^{q}zn_i^{\beta_i}\prod_{i=q+1}^{m}f(lh_i^{\beta_i})=\Big\langle \prod_{i=1}^{q}x_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(a_i/r)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-y_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-b_i/r)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-z_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-c_i/r)^{\beta_i}\Big\rangle,\tag{13}$$

where $\beta_i \in [0, 1]$ is the weight of $zn_i$ ($i = 1, 2, \ldots, q$) and $lh_i$ ($i = q + 1, q + 2, \ldots, m$) with $\sum_{i=1}^{m}\beta_i=1$. The aggregated results of the SVNLNHEWAMN and SVNLNHEWGMN operators are SVNEs.

In particular, when $q = m$ (without LNEs), the SVNLNHEWAMN and SVNLNHEWGMN operators reduce to the SVNEWAM and SVNEWGM operators [17] of Eqs. (2) and (3).

Based on the properties of the SVNEWAM and SVNEWGM operators [17], it is obvious that the SVNLNHEWAMN and SVNLNHEWGMN operators also satisfy the following properties:

(1) Idempotency: if $zn_i = f(lh_i) = zn$ for $i = 1, 2, \ldots, m$, then $\mathrm{SVNLNHEWAMN}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=zn$ and $\mathrm{SVNLNHEWGMN}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=zn$.
(2) Boundedness: let $zn^-=\langle\min_i\{x_{ZNi},a_i/r\},\max_i\{y_{ZNi},b_i/r\},\max_i\{z_{ZNi},c_i/r\}\rangle$ and $zn^+=\langle\max_i\{x_{ZNi},a_i/r\},\min_i\{y_{ZNi},b_i/r\},\min_i\{z_{ZNi},c_i/r\}\rangle$ be the minimum and maximum SVNEs for $i = 1, 2, \ldots, m$; then $zn^-\le \mathrm{SVNLNHEWAMN}(zn_1,\ldots,lh_m)\le zn^+$ and $zn^-\le \mathrm{SVNLNHEWGMN}(zn_1,\ldots,lh_m)\le zn^+$.
(3) Monotonicity: if $zn_i\le zn_i^{*}$ ($i = 1, 2, \ldots, q$) and $lh_i\le lh_i^{*}$ ($i = q + 1, q + 2, \ldots, m$), then $\mathrm{SVNLNHEWAMN}(zn_1,\ldots,lh_m)\le \mathrm{SVNLNHEWAMN}(zn_1^{*},\ldots,lh_m^{*})$ and $\mathrm{SVNLNHEWGMN}(zn_1,\ldots,lh_m)\le \mathrm{SVNLNHEWGMN}(zn_1^{*},\ldots,lh_m^{*})$.
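Since Eqs. (12) and (13) amount to converting every LNE into an SVNE by $f$ and then applying the SVNEWAM or SVNEWGM operator over all $m$ elements, a minimal Python sketch is short; function names and encodings are illustrative assumptions.

```python
# Minimal sketch of Eqs. (12) and (13); names are illustrative.
from math import prod

def svnlnhewamn(zns, lhs, w, r):
    """Eq. (12): zns is a list of q SVNE triples, lhs a list of m - q LNE
    subscript triples, w the m weights (SVNE weights first)."""
    mixed = zns + [(a / r, b / r, c / r) for (a, b, c) in lhs]  # all SVNEs now
    return (1 - prod((1 - x) ** wi for (x, _, _), wi in zip(mixed, w)),
            prod(y ** wi for (_, y, _), wi in zip(mixed, w)),
            prod(z ** wi for (_, _, z), wi in zip(mixed, w)))

def svnlnhewgmn(zns, lhs, w, r):
    """Eq. (13), the geometric-mean counterpart."""
    mixed = zns + [(a / r, b / r, c / r) for (a, b, c) in lhs]
    return (prod(x ** wi for (x, _, _), wi in zip(mixed, w)),
            1 - prod((1 - y) ** wi for (_, y, _), wi in zip(mixed, w)),
            1 - prod((1 - z) ** wi for (_, _, z), wi in zip(mixed, w)))
```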
### 4.2. Aggregation Operators of SVNLNHEs According to the Inverse Conversion Function

Let $zn_i = \langle x_{ZNi}, y_{ZNi}, z_{ZNi}\rangle$ ($i = 1, 2, \ldots, q$) and $lh_i=\langle s_{a_i}, s_{b_i}, s_{c_i}\rangle$ ($i = q + 1, q + 2, \ldots, m$) be $q$ SVNEs and $m - q$ LNEs, respectively. Then, based on Definition 3 and the LNEWAM and LNEWGM operators of Eqs. (7) and (8) [4], the weighted arithmetic and geometric mean operators of SVNLNHEs corresponding to the inverse conversion function are proposed as the SVNLNHEWAML and SVNLNHEWGML operators:

$$\mathrm{SVNLNHEWAML}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=\sum_{i=1}^{q}\beta_i f^{-1}(zn_i)+\sum_{i=q+1}^{m}\beta_i\cdot lh_i=\Big\langle s_{r-r\prod_{i=1}^{q}(1-x_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-a_i/r)^{\beta_i}},\ s_{r\prod_{i=1}^{q}y_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(b_i/r)^{\beta_i}},\ s_{r\prod_{i=1}^{q}z_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(c_i/r)^{\beta_i}}\Big\rangle,\tag{14}$$

$$\mathrm{SVNLNHEWGML}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=\prod_{i=1}^{q}(f^{-1}(zn_i))^{\beta_i}\prod_{i=q+1}^{m}lh_i^{\beta_i}=\Big\langle s_{r\prod_{i=1}^{q}x_{ZNi}^{\beta_i}\prod_{i=q+1}^{m}(a_i/r)^{\beta_i}},\ s_{r-r\prod_{i=1}^{q}(1-y_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-b_i/r)^{\beta_i}},\ s_{r-r\prod_{i=1}^{q}(1-z_{ZNi})^{\beta_i}\prod_{i=q+1}^{m}(1-c_i/r)^{\beta_i}}\Big\rangle,\tag{15}$$

where $\beta_i\in [0, 1]$ is the weight of $zn_i$ ($i = 1, 2, \ldots, q$) and $lh_i$ ($i = q + 1, q + 2, \ldots, m$) with $\sum_{i=1}^{m}\beta_i=1$. The aggregated results of the SVNLNHEWAML and SVNLNHEWGML operators are LNEs.

In particular, when $q = 0$ (without SVNEs), the SVNLNHEWAML and SVNLNHEWGML operators reduce to the LNEWAM and LNEWGM operators [4] of Eqs. (7) and (8).

Based on the characteristics of the LNEWAM and LNEWGM operators [4], it is obvious that the SVNLNHEWAML and SVNLNHEWGML operators also satisfy the following properties:

(1) Idempotency: if $f^{-1}(zn_i) = lh_i = lh$ for $i = 1, 2, \ldots, m$, then $\mathrm{SVNLNHEWAML}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=lh$ and $\mathrm{SVNLNHEWGML}(zn_1,\ldots,zn_q,lh_{q+1},\ldots,lh_m)=lh$.
(2) Boundedness: let $lh^-=\langle s_{\min_i\{rx_{ZNi},a_i\}}, s_{\max_i\{ry_{ZNi},b_i\}}, s_{\max_i\{rz_{ZNi},c_i\}}\rangle$ and $lh^+=\langle s_{\max_i\{rx_{ZNi},a_i\}}, s_{\min_i\{ry_{ZNi},b_i\}}, s_{\min_i\{rz_{ZNi},c_i\}}\rangle$ be the minimum and maximum LNEs for $i = 1, 2, \ldots, m$; then $lh^-\le \mathrm{SVNLNHEWAML}(zn_1,\ldots,lh_m)\le lh^+$ and $lh^-\le \mathrm{SVNLNHEWGML}(zn_1,\ldots,lh_m)\le lh^+$.
(3) Monotonicity: if $zn_i\le zn_i^{*}$ for $i = 1, 2, \ldots, q$ and $lh_i\le lh_i^{*}$ for $i = q + 1, q + 2, \ldots, m$, then $\mathrm{SVNLNHEWAML}(zn_1,\ldots,lh_m)\le \mathrm{SVNLNHEWAML}(zn_1^{*},\ldots,lh_m^{*})$ and $\mathrm{SVNLNHEWGML}(zn_1,\ldots,lh_m)\le \mathrm{SVNLNHEWGML}(zn_1^{*},\ldots,lh_m^{*})$.
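Dually to the previous sketch, Eqs. (14) and (15) convert every SVNE into an LNE by $f^{-1}$ and then apply the LNEWAM or LNEWGM operator over all $m$ elements. A minimal Python sketch under the same illustrative encodings:

```python
# Minimal sketch of Eqs. (14) and (15); names are illustrative. The returned
# triple holds the subscripts of the resulting LNE, each in [0, r].
from math import prod

def svnlnhewaml(zns, lhs, w, r):
    """Eq. (14): zns is a list of q SVNE triples, lhs a list of m - q LNE
    subscript triples, w the m weights (SVNE weights first)."""
    mixed = [(x * r, y * r, z * r) for (x, y, z) in zns] + lhs  # all LNEs now
    return (r - r * prod((1 - a / r) ** wi for (a, _, _), wi in zip(mixed, w)),
            r * prod((b / r) ** wi for (_, b, _), wi in zip(mixed, w)),
            r * prod((c / r) ** wi for (_, _, c), wi in zip(mixed, w)))

def svnlnhewgml(zns, lhs, w, r):
    """Eq. (15), the geometric-mean counterpart."""
    mixed = [(x * r, y * r, z * r) for (x, y, z) in zns] + lhs
    return (r * prod((a / r) ** wi for (a, _, _), wi in zip(mixed, w)),
            r - r * prod((1 - b / r) ** wi for (_, b, _), wi in zip(mixed, w)),
            r - r * prod((1 - c / r) ** wi for (_, _, c), wi in zip(mixed, w)))
```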
## 5. MAGDM Models in the Environment of SVNLNHSs

In this section, novel MAGDM models are developed in terms of the SVNLNHEWAMN and SVNLNHEWGMN operators and the SVNLNHEWAML and SVNLNHEWGML operators to handle MAGDM issues with quantitative and qualitative attributes in the mixed information environment of SVNEs and LNEs.

Regarding a mixed-information MAGDM issue in the circumstance of SVNLNHSs, there exist $t$ alternatives, denoted by the set $E=\{E_1, E_2, \ldots, E_t\}$, which are assessed over $m$ attributes, denoted by the set $V = \{v_1, v_2, \ldots, v_q, v_{q+1}, v_{q+2}, \ldots, v_m\}$, containing $q$ quantitative attributes and $m - q$ qualitative attributes. Then, there is a group of decision makers $G=\{g_1, g_2, \ldots, g_e\}$ with the weight vector $\alpha=(\alpha_1, \alpha_2, \ldots, \alpha_e)$ for $\alpha_k\in [0, 1]$ and $\sum_{k=1}^{e}\alpha_k=1$. The assessment values of each alternative over the $q$ quantitative attributes are given by the decision makers $g_k$ ($k = 1, 2, \ldots, e$) and represented by the SVNEs $zn_{ji}^{k}=\langle x_{ZNji}^{k}, y_{ZNji}^{k}, z_{ZNji}^{k}\rangle$ for $x_{ZNji}^{k}, y_{ZNji}^{k}, z_{ZNji}^{k}\in[0,1]$ ($k = 1, 2, \ldots, e$; $j = 1, 2, \ldots, t$; $i = 1, 2, \ldots, q$), while the assessment values of each alternative over the $m - q$ qualitative attributes are represented by the LNEs $lh_{ji}^{k}=\langle s_{a_{ji}^{k}}, s_{b_{ji}^{k}}, s_{c_{ji}^{k}}\rangle$ for $s_{a_{ji}^{k}}, s_{b_{ji}^{k}}, s_{c_{ji}^{k}}\in S$ ($k = 1, 2, \ldots, e$; $j = 1, 2, \ldots, t$; $i = q + 1, q + 2, \ldots, m$) from the LTS $S = \{s_p \mid p = 0, 1, 2, \ldots, r\}$. Thus, all assessed values can be arranged as the $e$ decision matrices of SVNLNHEs $M^{k} = (zn_{ji}^{k}, lh_{ji}^{k})_{t\times m}$ ($k = 1, 2, \ldots, e$).
Then, a weight vector $\beta=(\beta_1, \beta_2, \ldots, \beta_m)$ is specified to give the weights $\beta_i$ of the attributes $v_i$ ($i = 1, 2, \ldots, m$) with $\beta_i\in [0, 1]$ and $\sum_{i=1}^{m}\beta_i=1$.

Thus, two MAGDM models are developed in terms of the SVNLNHEWAMN and SVNLNHEWGMN operators or the SVNLNHEWAML and SVNLNHEWGML operators to handle MAGDM issues with the mixed evaluation information of SVNEs and LNEs.

Model 1. A MAGDM model using the SVNLNHEWAMN and SVNLNHEWGMN operators is developed to handle the MAGDM issue with SVNLNHEs. Its detailed steps are presented as follows:

Step 1: using the SVNEWAM operator of Eq. (2) and the LNEWAM operator of Eq. (7), the decision matrices $M^{k}=(zn_{ji}^{k}, lh_{ji}^{k})_{t\times m}$ ($k = 1, 2, \ldots, e$) are aggregated into the overall decision matrix $M=(zn_{ji}, lh_{ji})_{t\times m}$.

Step 2: using the SVNLNHEWAMN operator of Eq. (12) or the SVNLNHEWGMN operator of Eq. (13), the aggregated result for $E_j$ ($j = 1, 2, \ldots, t$) is obtained by

$$zn_j=\mathrm{SVNLNHEWAMN}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm})=\Big\langle 1-\prod_{i=1}^{q}(1-x_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-a_{ji}/r)^{\beta_i},\ \prod_{i=1}^{q}y_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(b_{ji}/r)^{\beta_i},\ \prod_{i=1}^{q}z_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(c_{ji}/r)^{\beta_i}\Big\rangle\tag{16}$$

or

$$zn_j=\mathrm{SVNLNHEWGMN}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm})=\Big\langle \prod_{i=1}^{q}x_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(a_{ji}/r)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-y_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-b_{ji}/r)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-z_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-c_{ji}/r)^{\beta_i}\Big\rangle.\tag{17}$$

Step 3: the score values $F(zn_j)$ ($j = 1, 2, \ldots, t$) are given by Eq. (4), and the accuracy values $G(zn_j)$ are given by Eq. (5) if necessary.

Step 4: the alternatives are sorted in descending order based on the sorting laws of SVNEs, and the first one is the best choice.

Step 5: end.

Model 2. A MAGDM model using the SVNLNHEWAML and SVNLNHEWGML operators is developed to handle the MAGDM issue with SVNLNHEs. Its detailed steps are presented as follows:

Step 1′: the same as Step 1.

Step 2′: using the SVNLNHEWAML operator of Eq. (14) or the SVNLNHEWGML operator of Eq. (15), the aggregated result for $E_j$ ($j = 1, 2, \ldots, t$) is given by

$$lh_j=\mathrm{SVNLNHEWAML}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm})=\Big\langle s_{r-r\prod_{i=1}^{q}(1-x_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-a_{ji}/r)^{\beta_i}},\ s_{r\prod_{i=1}^{q}y_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(b_{ji}/r)^{\beta_i}},\ s_{r\prod_{i=1}^{q}z_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(c_{ji}/r)^{\beta_i}}\Big\rangle\tag{18}$$

or

$$lh_j=\mathrm{SVNLNHEWGML}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm})=\Big\langle s_{r\prod_{i=1}^{q}x_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(a_{ji}/r)^{\beta_i}},\ s_{r-r\prod_{i=1}^{q}(1-y_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-b_{ji}/r)^{\beta_i}},\ s_{r-r\prod_{i=1}^{q}(1-z_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-c_{ji}/r)^{\beta_i}}\Big\rangle.\tag{19}$$

Step 3′: the score values $P(lh_j)$ ($j = 1, 2, \ldots, t$) are given by Eq. (9), and the accuracy values $Q(lh_j)$ are given by Eq. (10) if necessary.

Step 4′: the alternatives are sorted in descending order based on the sorting laws of LNEs, and the first one is the best choice.

Step 5′: end.
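The following self-contained Python sketch strings Model 1's Steps 1-4 together end to end, under the illustrative assumption that each matrix cell is either an SVNE triple or an LNE subscript triple; all function and variable names are assumptions, not notation from the paper.

```python
# Minimal end-to-end sketch of Model 1 (Steps 1-4); names are illustrative.
from math import prod

def svnewam(zns, w):                      # Eq. (2), used cell-wise in Step 1
    return (1 - prod((1 - x) ** wi for (x, _, _), wi in zip(zns, w)),
            prod(y ** wi for (_, y, _), wi in zip(zns, w)),
            prod(z ** wi for (_, _, z), wi in zip(zns, w)))

def lnewam(lhs, w, r):                    # Eq. (7), used cell-wise in Step 1
    return (r - r * prod((1 - a / r) ** wi for (a, _, _), wi in zip(lhs, w)),
            r * prod((b / r) ** wi for (_, b, _), wi in zip(lhs, w)),
            r * prod((c / r) ** wi for (_, _, c), wi in zip(lhs, w)))

def model1(matrices, alpha, beta, q, r):
    """matrices[k][j][i]: expert k's value for alternative j, attribute i;
    the first q attributes are SVNEs, the rest are LNE subscript triples."""
    t, m = len(matrices[0]), len(matrices[0][0])
    scores = []
    for j in range(t):
        # Step 1: aggregate the e expert opinions cell by cell.
        row = []
        for i in range(m):
            cells = [mk[j][i] for mk in matrices]
            row.append(svnewam(cells, alpha) if i < q else lnewam(cells, alpha, r))
        # Step 2: SVNLNHEWAMN of Eq. (16); LNE cells enter via f(lh) = (a/r, b/r, c/r).
        mixed = [c if i < q else (c[0] / r, c[1] / r, c[2] / r)
                 for i, c in enumerate(row)]
        x = 1 - prod((1 - xc) ** bi for (xc, _, _), bi in zip(mixed, beta))
        y = prod(yc ** bi for (_, yc, _), bi in zip(mixed, beta))
        z = prod(zc ** bi for (_, _, zc), bi in zip(mixed, beta))
        # Step 3: score function F of Eq. (4).
        scores.append((2 + x - y - z) / 3)
    # Step 4: rank the alternatives by descending score (0-based indices).
    return sorted(range(t), key=lambda j: -scores[j]), scores
```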
## 6. Illustrative Example on the Selection Problem of Industrial Robots Containing Both Quantitative and Qualitative Attributes

This section applies the proposed MAGDM models to an illustrative example on the selection problem of industrial robots containing both quantitative and qualitative attributes in the circumstance of SVNLNHSs to prove their usefulness and then gives a comparison with the existing techniques to show the availability and rationality of the new techniques.

### 6.1. Illustrative Example

This subsection applies the proposed MAGDM models to the selection problem of industrial robots containing both quantitative and qualitative attributes to illustrate their application and availability in the circumstance of SVNLNHSs.

An industrial company wants to buy a type of industrial robot for a manufacturing system. The technical department preliminarily provides four types of industrial robots/alternatives, denoted by the set $E = \{E_1, E_2, E_3, E_4\}$. They must satisfy four requirements/attributes: operating accuracy ($v_1$), carrying capacity ($v_2$), control performance ($v_3$), and operating space and dexterity ($v_4$). The weight vector of the four attributes is given as $\beta = (0.25, 0.3, 0.25, 0.2)$. Three experts/decision makers are invited to assess each alternative over the four attributes through their truth, indeterminacy, and falsity judgments, where the assessment values are specified in the mixed forms of the SVNEs $zn_{ji}^{k}=\langle x_{ZNji}^{k}, y_{ZNji}^{k}, z_{ZNji}^{k}\rangle$ for $x_{ZNji}^{k}, y_{ZNji}^{k}, z_{ZNji}^{k}\in[0,1]$ ($k = 1, 2, 3$; $i = 1, 2$; $j = 1, 2, 3, 4$) regarding the quantitative attributes $v_1$ and $v_2$ and the LNEs $lh_{ji}^{k}=\langle s_{a_{ji}^{k}}, s_{b_{ji}^{k}}, s_{c_{ji}^{k}}\rangle$ for $s_{a_{ji}^{k}}, s_{b_{ji}^{k}}, s_{c_{ji}^{k}}\in S$ ($k = 1, 2, 3$; $i = 3, 4$; $j = 1, 2, 3, 4$) regarding the qualitative attributes $v_3$ and $v_4$, taken from the LTS $S$ = {very unsatisfactory, unsatisfactory, slightly unsatisfactory, medium, slightly satisfactory, satisfactory, very satisfactory} = $\{s_0, s_1, s_2, s_3, s_4, s_5, s_6\}$ with $r = 6$. The weight vector of the three decision makers is given as $\alpha = (0.4, 0.35, 0.25)$. Thus, the three decision matrices (20) are constructed as follows:

$M^1$:

| | $v_1$ | $v_2$ | $v_3$ | $v_4$ |
|---|---|---|---|---|
| $E_1$ | $\langle 0.8, 0.1, 0.2\rangle$ | $\langle 0.7, 0.1, 0.1\rangle$ | $\langle s_5, s_2, s_2\rangle$ | $\langle s_5, s_2, s_3\rangle$ |
| $E_2$ | $\langle 0.8, 0.2, 0.1\rangle$ | $\langle 0.8, 0.1, 0.3\rangle$ | $\langle s_5, s_1, s_2\rangle$ | $\langle s_4, s_3, s_2\rangle$ |
| $E_3$ | $\langle 0.7, 0.1, 0.1\rangle$ | $\langle 0.8, 0.2, 0.2\rangle$ | $\langle s_5, s_3, s_2\rangle$ | $\langle s_4, s_2, s_3\rangle$ |
| $E_4$ | $\langle 0.8, 0.2, 0.2\rangle$ | $\langle 0.9, 0.2, 0.3\rangle$ | $\langle s_4, s_1, s_2\rangle$ | $\langle s_5, s_2, s_2\rangle$ |

$M^2$:

| | $v_1$ | $v_2$ | $v_3$ | $v_4$ |
|---|---|---|---|---|
| $E_1$ | $\langle 0.7, 0.2, 0.2\rangle$ | $\langle 0.8, 0.1, 0.2\rangle$ | $\langle s_4, s_3, s_1\rangle$ | $\langle s_5, s_2, s_1\rangle$ |
| $E_2$ | $\langle 0.8, 0.2, 0.3\rangle$ | $\langle 0.8, 0.2, 0.3\rangle$ | $\langle s_5, s_2, s_1\rangle$ | $\langle s_4, s_1, s_2\rangle$ |
| $E_3$ | $\langle 0.8, 0.2, 0.3\rangle$ | $\langle 0.7, 0.1, 0.1\rangle$ | $\langle s_4, s_1, s_2\rangle$ | $\langle s_5, s_2, s_3\rangle$ |
| $E_4$ | $\langle 0.9, 0.1, 0.1\rangle$ | $\langle 0.8, 0.2, 0.1\rangle$ | $\langle s_5, s_1, s_3\rangle$ | $\langle s_5, s_1, s_2\rangle$ |

$M^3$:

| | $v_1$ | $v_2$ | $v_3$ | $v_4$ |
|---|---|---|---|---|
| $E_1$ | $\langle 0.8, 0.3, 0.1\rangle$ | $\langle 0.8, 0.1, 0.1\rangle$ | $\langle s_5, s_2, s_1\rangle$ | $\langle s_4, s_1, s_1\rangle$ |
| $E_2$ | $\langle 0.7, 0.1, 0.2\rangle$ | $\langle 0.9, 0.2, 0.3\rangle$ | $\langle s_4, s_1, s_1\rangle$ | $\langle s_5, s_1, s_1\rangle$ |
| $E_3$ | $\langle 0.8, 0.1, 0.1\rangle$ | $\langle 0.8, 0.2, 0.1\rangle$ | $\langle s_5, s_2, s_2\rangle$ | $\langle s_5, s_2, s_3\rangle$ |
| $E_4$ | $\langle 0.8, 0.1, 0.1\rangle$ | $\langle 0.8, 0.1, 0.1\rangle$ | $\langle s_5, s_1, s_2\rangle$ | $\langle s_5, s_3, s_2\rangle$ |

Thus, the two developed MAGDM models can be utilized in the example to perform the MAGDM issue with SVNLNHEs.

Model 1. The MAGDM model using the SVNLNHEWAMN and SVNLNHEWGMN operators can be applied in the example, and its detailed steps are depicted as follows:

Step 1: using the SVNEWAM operator of Eq. (2) and the LNEWAM operator of Eq. (7), the above three decision matrices are aggregated into the following overall decision matrix (21):

| | $v_1$ | $v_2$ | $v_3$ | $v_4$ |
|---|---|---|---|---|
| $E_1$ | $\langle 0.7695, 0.1677, 0.1682\rangle$ | $\langle 0.7648, 0.1000, 0.1275\rangle$ | $\langle s_{4.7254}, s_{2.3050}, s_{1.3195}\rangle$ | $\langle s_{4.8108}, s_{1.6818}, s_{1.5518}\rangle$ |
| $E_2$ | $\langle 0.7787, 0.1682, 0.1747\rangle$ | $\langle 0.8318, 0.1516, 0.3000\rangle$ | $\langle s_{4.8108}, s_{1.2746}, s_{1.3195}\rangle$ | $\langle s_{4.3182}, s_{1.5518}, s_{1.6818}\rangle$ |
| $E_3$ | $\langle 0.7648, 0.1275, 0.1469\rangle$ | $\langle 0.7695, 0.1569, 0.1320\rangle$ | $\langle s_{4.7254}, s_{1.8455}, s_{2.0000}\rangle$ | $\langle s_{4.6805}, s_{2.0000}, s_{3.0000}\rangle$ |
| $E_4$ | $\langle 0.8431, 0.1320, 0.1320\rangle$ | $\langle 0.8484, 0.1682, 0.1552\rangle$ | $\langle s_{4.6805}, s_{1.0000}, s_{2.3050}\rangle$ | $\langle s_{5.0000}, s_{1.7366}, s_{2.0000}\rangle$ |

Step 2: by Eq. (16), the aggregated values are $zn_1 = \langle 0.7795, 0.1958, 0.1804\rangle$, $zn_2 = \langle 0.7921, 0.1884, 0.2392\rangle$, $zn_3 = \langle 0.7751, 0.2049, 0.2231\rangle$, and $zn_4 = \langle 0.8290, 0.1761, 0.2178\rangle$; or, by Eq. (17), $zn_1 = \langle 0.7789, 0.2324, 0.1885\rangle$, $zn_2 = \langle 0.7876, 0.1934, 0.2464\rangle$, $zn_3 = \langle 0.7749, 0.2276, 0.2754\rangle$, and $zn_4 = \langle 0.8265, 0.1850, 0.2504\rangle$.

Step 3: by Eq. (4), the score values $F(zn_j)$ for $E_j$ ($j = 1, 2, 3, 4$) are $F(zn_1) = 0.8011$, $F(zn_2) = 0.7882$, $F(zn_3) = 0.7824$, and $F(zn_4) = 0.8117$; or $F(zn_1) = 0.7860$, $F(zn_2) = 0.7826$, $F(zn_3) = 0.7573$, and $F(zn_4) = 0.7970$.

Step 4: the sorting order of the four alternatives is $E_4 > E_1 > E_2 > E_3$.

Clearly, the sorting orders obtained by the SVNLNHEWAMN operator of Eq. (16) and the SVNLNHEWGMN operator of Eq. (17) are identical in this example.
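As a quick sanity check on Step 1, the following self-contained snippet aggregates the three experts' SVNEs for alternative $E_1$ under attribute $v_1$ with $\alpha = (0.4, 0.35, 0.25)$ by Eq. (2); it reproduces the first entry of the overall matrix up to rounding.

```python
# Self-contained check of one Step-1 cell (E1, v1) of the overall matrix M.
from math import prod

cells = [(0.8, 0.1, 0.2), (0.7, 0.2, 0.2), (0.8, 0.3, 0.1)]  # from M1, M2, M3
alpha = (0.4, 0.35, 0.25)
x = 1 - prod((1 - c[0]) ** a for c, a in zip(cells, alpha))
y = prod(c[1] ** a for c, a in zip(cells, alpha))
z = prod(c[2] ** a for c, a in zip(cells, alpha))
print(round(x, 4), round(y, 4), round(z, 4))  # ~ 0.7695 0.1677 0.1682
```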
Model 2. The MAGDM model using the SVNLNHEWAML and SVNLNHEWGML operators can also be applied in the example, and its detailed steps are depicted as follows:

Step 1′: the same as Step 1.

Step 2′: by Eq. (18), the aggregated values are $lh_1 = \langle s_{4.6773}, s_{1.1747}, s_{1.0823}\rangle$, $lh_2 = \langle s_{4.7528}, s_{1.1302}, s_{1.4353}\rangle$, $lh_3 = \langle s_{4.6508}, s_{1.2297}, s_{1.3384}\rangle$, and $lh_4 = \langle s_{4.9739}, s_{1.0564}, s_{1.3070}\rangle$; or, by Eq. (19), $lh_1 = \langle s_{4.6736}, s_{1.3944}, s_{1.1311}\rangle$, $lh_2 = \langle s_{4.7255}, s_{1.1603}, s_{1.4783}\rangle$, $lh_3 = \langle s_{4.6494}, s_{1.3657}, s_{1.6526}\rangle$, and $lh_4 = \langle s_{4.9591}, s_{1.1100}, s_{1.5027}\rangle$.

Step 3′: by Eq. (9), the score values $P(lh_j)$ for $E_j$ ($j = 1, 2, 3, 4$) are $P(lh_1) = 0.8011$, $P(lh_2) = 0.7882$, $P(lh_3) = 0.7824$, and $P(lh_4) = 0.8117$; or $P(lh_1) = 0.7860$, $P(lh_2) = 0.7826$, $P(lh_3) = 0.7573$, and $P(lh_4) = 0.7970$.

Step 4′: the sorting order of the four alternatives is $E_4 > E_1 > E_2 > E_3$.

Hence, the sorting orders obtained by the SVNLNHEWAML operator of Eq. (18) and the SVNLNHEWGML operator of Eq. (19) are identical in this example.

Obviously, the score values and sorting orders of Model 1 and Model 2 give the same results. Moreover, whether SVNEs are converted to LNEs or LNEs to SVNEs in the aggregation operations, the final decision results are identical. Thus, decision makers can choose either Model 1 or Model 2 in MAGDM applications. Therefore, it is obvious that the new techniques are valid and reasonable.
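Analogously to the Model 1 check, the score of Eq. (9) applied to the reported aggregated LNE $lh_1$ from Eq. (18) reproduces $P(lh_1)$; a minimal sketch:

```python
# Self-contained check of the score function P of Eq. (9) on the reported
# lh1 = <s_4.6773, s_1.1747, s_1.0823> with r = 6.
a, b, c, r = 4.6773, 1.1747, 1.0823, 6
print(round((2 * r + a - b - c) / (3 * r), 4))  # ~ 0.8011, matching P(lh1)
```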
### 6.2. Comparative Analysis with the Existing Neutrosophic MAGDM Models

Since the assessed values in this illustrative example are given as SVNLNHEs, the existing neutrosophic MAGDM models [4, 17] cannot deal with it. In contrast, the new techniques can handle neutrosophic MAGDM issues with SVNEs and/or LNEs and show the following highlights and advantages:

(1) The proposed SVNLNHEs can conveniently denote the mixed information of SVNEs and LNEs regarding the assessment objects of quantitative and qualitative attributes, which suits human judgment and thinking/expression habits, while the existing neutrosophic expressions cannot represent SVNLNHE information.
(2) The proposed SVNLNHEWAMN and SVNLNHEWGMN operators and the proposed SVNLNHEWAML and SVNLNHEWGML operators provide the necessary aggregation tools for handling MAGDM issues in the SVNLNHE circumstance; the existing SVNEWAM and SVNEWGM operators [17] are only special cases of the SVNLNHEWAMN and SVNLNHEWGMN operators, and the existing LNEWAM and LNEWGM operators [4] are only special cases of the SVNLNHEWAML and SVNLNHEWGML operators. Furthermore, the various existing aggregation operators cannot aggregate SVNLNHEs.
(3) Since the existing MAGDM models with the single evaluation information of SVNEs or LNEs [4, 17] are special cases of the new MAGDM models, the new MAGDM models are broader and more versatile than the existing ones [4, 17]. Furthermore, the various existing MAGDM models cannot carry out MAGDM problems with SVNLNHE information.

Generally, the new techniques solve the SVNLNHE denotation, aggregation operations, and MAGDM issues in the mixed information situation of SVNEs and LNEs. They are very suitable for decision-making issues with quantitative and qualitative attributes and overcome the defects of the existing decision-making techniques restricted to the single evaluation information of SVNEs or LNEs. Therefore, the new techniques reveal obvious superiorities over the existing techniques in neutrosophic information denotation, aggregation operations, and decision-making methods.
## 7. Conclusion

Owing to the lack of the SVNLNHE denotation, operations, and decision-making models in existing neutrosophic theory and applications, the proposed notion of the SVNLNHS/SVNLNHE and the defined linguistic and neutrosophic conversion function solved the hybrid neutrosophic information denotation and operational problems of SVNEs and LNEs. Then, the proposed SVNLNHEWAMN, SVNLNHEWGMN, SVNLNHEWAML, and SVNLNHEWGML operators provided the necessary aggregation algorithms for handling MAGDM issues with SVNLNHEs. The established MAGDM models solved decision-making issues with quantitative and qualitative attributes in the SVNLNHE circumstance. Since the evaluation values of quantitative and qualitative attributes in the decision-making process are easily represented by SVNEs and LNEs given in view of decision makers' preferences and thinking habits, the managerial implications of this original research will be reinforced in neutrosophic decision-making methods and applications. Finally, an illustrative example was given and compared with the existing techniques to show the availability and rationality of the new techniques. Moreover, the new techniques not only overcome the insufficiencies of the existing techniques but also are broader and more versatile than the existing techniques when dealing with MAGDM issues in the setting of SVNLNHEs. In summary, the new techniques of the SVNLNHE denotation, aggregation algorithms, and MAGDM models demonstrated their superiority over the existing techniques.

Regarding future research, these new techniques will be further extended to other areas, such as medical diagnosis, slope risk/instability evaluation, fault diagnosis, and mechanical concept design, in the mixed information situation of SVNEs and LNEs. Then, we shall also develop more aggregation algorithms, such as Hamacher, Dombi, and Bonferroni aggregation operators, and their applications in clustering analysis, information fusion, image processing, and mine risk/safety evaluation in the mixed information situation of both SVNEs and LNEs or both IVNEs and uncertain LNEs.

---
*Source: 1021280-2022-09-20.xml*
Then, a SVNS ZN in U can be represented as [1, 3](1)ZN=ui,xZNui,yZNui,zZNui|ui∈U,where < ui, xZN(ui), yZN(ui), zZN(ui)> (i = 1, 2, …, m) is SVNE in ZN for uj∈ U and xZN(ui), yZN(ui), zZN(ui)∈ [0, 1], and then it is simply denoted as zni = < xZNi, yZNi, zZNi > .For two SVNEs, zn1 = < xZN1, yZN1, zZN1>, ZN2= < xZN2, yZN2, zZN2>, and β >  0, and their relations are contained as follows [17]:(1) zn1⊕zn2=xZN1+xZN2−xZN1xZN2,yZN1yZN2,zZN1zZN2(2) zn1⊗zn2=xZN1xZN2,yZN1+yZN2−yZN1yZN2,zZN1+zZN2−zZN1zZN2(3) β⋅zn1=1−1−xZN1β,yZN1β,zZN1β(4) zn1β=xZN1β,1−1−yZN1β,1−1−zZN1βSuppose that there is a group of SVNEs zni = < xZNi, yZNi, zZNi> (i = 1, 2, …, m) with their related weights βi∈ [0, 1] for ∑i=1mβi=1. Then, the SVNE weighted arithmetic mean (SVNEWAM) operator and the SVNE weighted geometric mean (SVNEWGM) operator are introduced, respectively, as follows [17]:(2)SVNEWAMzn1,zn2,…,znm=∑i=1mβi⋅zni=1−∏i=1m1−xZNiβi,∏i=1myZNiβi,∏i=1mzZNiβi,(3)SVNEWGMzn1,zn2,…,znm=∏i=1mzniβi=∏i=1mxZNiβi,1−∏i=1m1−yZNiβi,1−∏i=1m1−zZNiβi.To compare SVNEs, the score and accuracy functions of SVNEs and their ranking laws are introduced below [17].Set zni = < xZNi, yZNi, zZNi >  as any SVNE. The score and accuracy functions of zni are presented, respectively, as follows:(4)Fzni=2+xZNi−yZNi−zZNi3forFzni∈0,1,(5)Gzni=xZNi−zZNiforGzni∈−1,1.Then, the sorting laws based on the score values ofF(zni) and the accuracy values of G(zni) (i = 1, 2) are as follows [17]:(a) zn1 >  zn2 for F(zn1) > F(zn2)(b) zn1 >  zn2 for F(zn1) = F(zn2) and G(zn1) > G(zn2)(c) zn1 = zn2 for F(zn1) = F(zn2) and G(zn1) = G(zn2) ## 2.2. Basic Notions and Operations of LNEs LetU = {u1, u2, …, um} be a universal set and S=sp|p=0,1,…,r be a linguistic term set (LTS) with an odd cardinality r + 1. Thus, a LNS LH is defined as follows [4]:(6)LH=ui,saui,sbui,scui|ui∈U,where ui,saui,sbui,scui for ui∈U is LNE in LH and saui,sbui,scui∈S are the truth, indeterminacy, and falsity linguistic variables, respectively. For convenience, LNE is simply denoted as lhi=sai,sbi,sci.For two LNEs,lh1=sa1,sb1,sc1, lh2=sa2,sb2,sc2, , and β >  0, and their operational relations are as follows [4]:(1) lh1⊕lh2=sa1+a2−a1a2/r,sb1b2/r,sc1c2/r(2) lh1⊗lh2=sa1a2/r,sb1+b2−b1b2/r,sc1+c2−c1c2/r(3) β⋅lh1=sr−r1−a1/rβ,srb1/rβ,src1/rβ(4) lh1β=sra1/rβ,sr−r1−b1/rβ,sr−r1−c1/rβSuppose that there is a group of LNEslhi=sai,sbi,sci (i = 1, 2, …, m) with their related weights βi ∈ [0, 1] for ∑i=1mβi=1. Then, the LNE weighted arithmetic mean (LNEWAM) and LNE weighted geometric mean (LNEWGM) operators are introduced as follows [4]:(7)LNEWAMlh1,lh2,…,lhm=∑i=1mβi⋅lhi=sr−r∏i=1m1−ai/rβi,sr∏i=1mbi/rβi,sr∏i=1mci/rβi,(8)LNEWGMlh1,lh2,...,lhm=∏i=1mlhiβi=sr∏i=1mai/rβi,sr−r∏i=1m1−bi/rβi,sr−r∏i=1m1−ci/rβi.Setlhi=sai,sbi,sci as any LNE. The score and accuracy functions of lhi are defined, respectively, as follows [4]:(9)Plhi=2r+ai−bi−ci3rforPlhi∈0,1,(10)Qlhi=ai−cirforQlhi∈−1,1.Then, the sorting laws based on the score values ofP(lhi) and the accuracy values of Q(lhi) (i = 1, 2) are given as follows [4]:(a) lh1 >  lh2 for P(lh1) > P(lh2)(b) lh1 >  lh2 for P(lh1) = P(lh2) and Q(lh1) > Q(lh2)(c) lh1 = lh2 for P(lh1) = P(lh2) and Q(lh1) = Q(lh2) ## 3. SVNLNHSs and SVNLNHEs This section proposes SVNLNHS/SVNLNHE for the mixed information representation of both SVNE and LNE and then presents some basic operations of SVNEs and LNEs according to a linguistic and neutrosophic conversion function and its inverse conversion function.Definition 1. 
LetU = {u1, u2, …, um} be a universe set and S = {sp|p = 0, 1, …, r} be LTS with an odd cardinality r + 1. Then, a SVNLNHS ML is defined by(11)ML=ui,TLMLui,ILMLui,FLMLui|ui∈U,where TLML(ui), ILML(ui), and FLML(ui) are the truth, indeterminacy, and falsity membership functions, and their values are either the fuzzy values for TLML(ui), ILML(ui), FLML(ui) ∈ [0, 1] or the linguistic values for TLML(ui) ILML(ui), FLML(ui) ∈ S and ui ∈ U. Moreover, the SVNLNHS ML is composed of the q SVNEs zni=xZNi,yZNi,zZNi for xZNi, yZNi, zZNi ∈ [0, 1] (i = 1, 2, …, q) and the m − q LNEs lhi=sai,sbi,sci for sai,sbi,sci∈S and ai, bi, ci ∈ [0, r] (i = q + 1, q + 2, …, m).Definition 2. Suppose thatML1 and ML2 are two SVNLNHSs, which contain q SVNEs zn1i = < xZN1i, yZN1i, zZN1i> (i = 1, 2, …, q) and m − q LNEs lh1i=sa1i,sb1i,sc1i for sa1i,sb1i,sc1i∈S (i = q + 1, q + 2, …, m) and q SVNEs zn2i = < xZN2i, yZN2i, zZN2i> (i = 1, 2, …, q) and m − q LNEs lh2i=sa2i,sb2i,sc2i for sa2i,sb2i,sc2i∈S (i = q + 1, q + 2, …, m). Thus, ML1 and ML2 imply the following relations:(1) ML1 ⊆ ML2 ⇔ zn1i ⊆ zn2i (i = 1, 2, …, q) and lh1i ⊆ lh2i (i = q + 1, q + 2, …, m), i.e., xZN1i ≤ xZN2i, yZN2i ≤ yZN1i, and zZN2i ≤ zZN1i for i = 1, 2, …, q and sa1i≤sa2i, sb1i≥sb2i, and sc1i≥sc2i for i = q + 1, q + 2, …, m;(2) ML1 = ML2 ⇔ zn1i ⊆ zn2i, zn1i ⊇ zn2i, lh1 ⊆ lh2, and lh2 ⊆ lh1, i.e., xZN1i = xZN2i, yZN2i = yZN1i, and zZN2i = zZN1i for i = 1, 2, …, q and sa1i=sa2i, sb1i=sb2i, and sc1i=sc2i for i = q + 1, q + 2, …, m.Definition 3. Set zni = < xZNi, yZNi, zZNi > and lhi=sai,sbi,sci as any SVNE and any LNE, respectively. Then, let a linguistic and neutrosophic conversion function be flhi=ai/r,bi/r,ci/r for ai, bi, ci ∈ [0, r], and then its inverse conversion function is f−1zni=xZNir,yZNir,zZNir for xZNi, yZNi, zZNi ∈ [0, 1]. Thus, some basic operations of SVNEs and LNEs are given as follows:(1) f−1zni⊕lhi=xZNir+ai−xZNiai,yZNibi,zZNici(2) zni⊕flhi=xZNi+ai/r−xZNiai/r,yZNibi/r,zZNici/r(3) f−1zni⊗lhi=xZNiai,yZNir+bi−yZNibi,zZNir+ci−zZNici(4) βf−1zni=r−r1−xZNiβ,ryZNiβ,rzZNiβ for β >  0(5) βflhi=1−1−ai/rβ,bi/rβ,ci/rβ for β >  0(6) f−1zniβ=rxZNiβ,r−r1−yZNiβ,r−r1−zZNiβ for β >  0(7) fβlhi=ai/rβ,1−1−bi/rβ,1−1−ci/rβ for β >  0 It is obvious that the operational results of (2), (4), (5), and (8) are LNEs and the operational results of (3) and (7), and (9) are SVNEs. ## 4. Weighted Arithmetic and Geomatic Mean Operators of SVNLNHEs This section proposes some weighted aggregation operators of SVNLNHEs corresponding to the linguistic and neutrosophic conversion function and its inverse conversion function, and then indicates their properties. ### 4.1. Aggregation Operators of SVNLNHEs Corresponding to the Linguistic and Neutrosophic Conversion Function Let zni = < xZNi, yZNi, zZNi> (i = 1, 2, …, q) and lhi=sai,sbi,sci (i = q + 1, q + 2, …, m) be q SVNEs and m − q LNEs, respectively. Then, based on Definition 3 and the SVNEWAM and SVNEWGM operators of Eqs. 
(2) and (3) [17], the weighted arithmetic and geomatic mean operators of SVNLNHEs corresponding to the linguistic and neutrosophic conversion function are proposed by the SVNLNHEWAMN and SVNLNHEWGMN operators,(12)SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∑i=1qβi⋅zni+∑i=q+1mβiflhi=1−∏i=1q1−xZNiβi∏i=q+1m1−ai/rβi,∏i=1qyZNiβi∏i=q+1mbi/rβi,∏i=1qzZNiβi∏i=q+1mci/rβi(13)SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∏i=1qzniβi∏i=q+1mfβilhi=∏i=1qxZNiβi∏i=q+1mai/rβi,1−∏i=1q1−yZNiβi∏i=q+1m1−bi/rβi,1−∏i=1q1−zZNiβi∏i=q+1m1−ci/rβi,where βi ∈ [0, 1] is the weight of zni (i = 1, 2, …, q) and lhi (i = q + 1, q + 2, …, m) with ∑i=1mβi=1. Then, the aggregated results of the SVNLNHEWAMN and SVNLNHEWGMN operators are SVNEs.Especially, whenq = m (without LNEs), the SVNLNHEWAMN and SVNLNHEWGMN operators are reduced to the SVNEWAM and SVNEWGM operators [17], i.e., Eq. (2) and Eq. (3).Based on the properties of the SVNEWAM and SVNEWGM operators [17], it is obvious that the SVNLNHEWAMN and SVNLNHEWGMN operators also contain the following properties:(1) Idempotency: if zni = f(lhi) = zn for i = 1, 2, …, q, q + 1, q + 2, …, m, there are SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=zn and SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=zn.(2) Boundedness: letzn−=minixZNi,ai/r,maxiyZNi,bi/r,maxizZNi,ci/r and zn+=maxixZNi,ai/r,miniyZNi,bi/r,minizZNi,ci/r be the minimum and maximum SVNEs for i = 1, 2, …, m, and then there are the inequalities zn−≤SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤zn+ and zn−≤SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤zn+.(3) Monotonicity: if the condition ofzni≤zni∗ (i = 1, 2, …, q) and lhi≤lhi∗ (i = q + 1, q + 2, …, m) exists, SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWAMNzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ and SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWGMNzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ also exist. ### 4.2. Aggregation Operators of SVNLNHEs According to the Inverse Conversion Function Let zni = < xZNi, yZNi, zZNi> (i = 1, 2, …, q) and lhi=sai,sbi,sci (i = q + 1, q + 2, …, m) be q SVNEs and m − q LNEs, respectively. Then, based on Definition 3 and the LNEWAM and LNEWGM operators of Eqs. (7) and (8) [4], the weighted arithmetic and geomatic mean operators of SVNLNHEs corresponding to the inverse conversion function are proposed by the SVNLNHEWAML and SVNLNHEWGML operators:(14)SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∑i=1qβif−1zni+∑i=q+1mβi⋅lhi=r−r∏i=1q1−xZNiβi∏i=q+1m1−ai/rβi,r∏i=1qyZNiβi∏i=q+1mbi/rβi,r∏i=1qzZNiβi∏i=q+1mci/rβi,(15)SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∏i=1qf−1zniβi∏i=q+1mlhiβi=r∏i=1qxZNiβi∏i=q+1mai/rβi,r−r∏i=1q1−yZNiβi∏i=q+1m1−bi/rβi,r−r∏i=1q1−zZNiβi∏i=q+1m1−ci/rβi,where βi∈ [0, 1] is the weight of zni (i = 1, 2, …, q) and lhi (i = q + 1, q + 2, …, m) with ∑i=1mβi=1. Then, the aggregated results of the SVNLNHEWAML and SVNLNHEWGML operators are LNEs.Especially, whenq = 0 (without SVNEs), the SVNLNHEWAML and SVNLNHEWGML operators are reduced to the LNEWAM and LNEWGM operators [4], i.e., Eq. (7) and Eq. 
(8).Based on the characteristics of the LNEWAM and LNEWGM operators [4], it is obvious that the SVNLNHEWAML and SVNLNHEWGML operators also contain the following characteristics:(1) Idempotency: iff− 1(zni) = lhi = lh for i = 1, 2, …, q, q + 1, q + 2, …, m, there are SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=lh and SVNLNHEWGMLzn1,zn2,...,znq,lhq+1,lhq+2,...,lhm=lh.(2) Boundedness: letlh−=minirxZNi,ai,maxiryZNi,bi,maxirzZNi,ci and lh+=maxirxZNi,ai,miniryZNi,bi,minirzZNi,ci be the minimum and maximum LNEs for i = 1, 2, …, m, and then there are the inequalities lh−≤SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤lh+ and lh−≤SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤lh+.(3) Monotonicity: if the condition ofzni≤zni∗ for i = 1, 2, …, q and lhi≤lhi∗ for i = q + 1, q + 2, …, m exists, SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWAMLzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ and SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWGMLzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ also exist. ## 4.1. Aggregation Operators of SVNLNHEs Corresponding to the Linguistic and Neutrosophic Conversion Function Let zni = < xZNi, yZNi, zZNi> (i = 1, 2, …, q) and lhi=sai,sbi,sci (i = q + 1, q + 2, …, m) be q SVNEs and m − q LNEs, respectively. Then, based on Definition 3 and the SVNEWAM and SVNEWGM operators of Eqs. (2) and (3) [17], the weighted arithmetic and geomatic mean operators of SVNLNHEs corresponding to the linguistic and neutrosophic conversion function are proposed by the SVNLNHEWAMN and SVNLNHEWGMN operators,(12)SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∑i=1qβi⋅zni+∑i=q+1mβiflhi=1−∏i=1q1−xZNiβi∏i=q+1m1−ai/rβi,∏i=1qyZNiβi∏i=q+1mbi/rβi,∏i=1qzZNiβi∏i=q+1mci/rβi(13)SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∏i=1qzniβi∏i=q+1mfβilhi=∏i=1qxZNiβi∏i=q+1mai/rβi,1−∏i=1q1−yZNiβi∏i=q+1m1−bi/rβi,1−∏i=1q1−zZNiβi∏i=q+1m1−ci/rβi,where βi ∈ [0, 1] is the weight of zni (i = 1, 2, …, q) and lhi (i = q + 1, q + 2, …, m) with ∑i=1mβi=1. Then, the aggregated results of the SVNLNHEWAMN and SVNLNHEWGMN operators are SVNEs.Especially, whenq = m (without LNEs), the SVNLNHEWAMN and SVNLNHEWGMN operators are reduced to the SVNEWAM and SVNEWGM operators [17], i.e., Eq. (2) and Eq. (3).Based on the properties of the SVNEWAM and SVNEWGM operators [17], it is obvious that the SVNLNHEWAMN and SVNLNHEWGMN operators also contain the following properties:(1) Idempotency: if zni = f(lhi) = zn for i = 1, 2, …, q, q + 1, q + 2, …, m, there are SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=zn and SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=zn.(2) Boundedness: letzn−=minixZNi,ai/r,maxiyZNi,bi/r,maxizZNi,ci/r and zn+=maxixZNi,ai/r,miniyZNi,bi/r,minizZNi,ci/r be the minimum and maximum SVNEs for i = 1, 2, …, m, and then there are the inequalities zn−≤SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤zn+ and zn−≤SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤zn+.(3) Monotonicity: if the condition ofzni≤zni∗ (i = 1, 2, …, q) and lhi≤lhi∗ (i = q + 1, q + 2, …, m) exists, SVNLNHEWAMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWAMNzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ and SVNLNHEWGMNzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWGMNzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ also exist. ## 4.2. Aggregation Operators of SVNLNHEs According to the Inverse Conversion Function Let zni = < xZNi, yZNi, zZNi> (i = 1, 2, …, q) and lhi=sai,sbi,sci (i = q + 1, q + 2, …, m) be q SVNEs and m − q LNEs, respectively. Then, based on Definition 3 and the LNEWAM and LNEWGM operators of Eqs. 
(7) and (8) [4], the weighted arithmetic and geomatic mean operators of SVNLNHEs corresponding to the inverse conversion function are proposed by the SVNLNHEWAML and SVNLNHEWGML operators:(14)SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∑i=1qβif−1zni+∑i=q+1mβi⋅lhi=r−r∏i=1q1−xZNiβi∏i=q+1m1−ai/rβi,r∏i=1qyZNiβi∏i=q+1mbi/rβi,r∏i=1qzZNiβi∏i=q+1mci/rβi,(15)SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=∏i=1qf−1zniβi∏i=q+1mlhiβi=r∏i=1qxZNiβi∏i=q+1mai/rβi,r−r∏i=1q1−yZNiβi∏i=q+1m1−bi/rβi,r−r∏i=1q1−zZNiβi∏i=q+1m1−ci/rβi,where βi∈ [0, 1] is the weight of zni (i = 1, 2, …, q) and lhi (i = q + 1, q + 2, …, m) with ∑i=1mβi=1. Then, the aggregated results of the SVNLNHEWAML and SVNLNHEWGML operators are LNEs.Especially, whenq = 0 (without SVNEs), the SVNLNHEWAML and SVNLNHEWGML operators are reduced to the LNEWAM and LNEWGM operators [4], i.e., Eq. (7) and Eq. (8).Based on the characteristics of the LNEWAM and LNEWGM operators [4], it is obvious that the SVNLNHEWAML and SVNLNHEWGML operators also contain the following characteristics:(1) Idempotency: iff− 1(zni) = lhi = lh for i = 1, 2, …, q, q + 1, q + 2, …, m, there are SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm=lh and SVNLNHEWGMLzn1,zn2,...,znq,lhq+1,lhq+2,...,lhm=lh.(2) Boundedness: letlh−=minirxZNi,ai,maxiryZNi,bi,maxirzZNi,ci and lh+=maxirxZNi,ai,miniryZNi,bi,minirzZNi,ci be the minimum and maximum LNEs for i = 1, 2, …, m, and then there are the inequalities lh−≤SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤lh+ and lh−≤SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤lh+.(3) Monotonicity: if the condition ofzni≤zni∗ for i = 1, 2, …, q and lhi≤lhi∗ for i = q + 1, q + 2, …, m exists, SVNLNHEWAMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWAMLzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ and SVNLNHEWGMLzn1,zn2,…,znq,lhq+1,lhq+2,…,lhm≤SVNLNHEWGMLzn1∗,zn2∗,…,znq∗,lhq+1∗,lhq+2∗,…,lhm∗ also exist. ## 5. MAGDM Models in the Environment of SVNLNHSs In this section, novel MAGDM models are developed in terms of the SVNLNHEWAMN and SVNLNHEWGMN operators and the SVNLNHEWAML and SVNLNHEWGML operators to perform MAGDM issues with quantitative and qualitative attributes in the mixed information environment of SVNEs and LNEs.Regarding a mixed information MAGDM issue in the circumstance of SVNLNHSs, there existt alternatives, denoted by a set of them E=E1,E2,…Et, and then they are satisfactorily assessed over m attributes, denoted by a set of them V = {v1, v2, …, vq, vq + 1, vq + 2, …, vm}, which contains q quantitative attributes and m − q qualitative attributes. Then, there is a group of decision makers G=g1,g2,…ge with their weight vector α=α1,α2,…αe for αk∈ [0, 1] and ∑k=1eαk=1. The assessment values of each alternative over the q quantitative attributes are given by the decision makers gk (k = 1, 2, …, e) and represented by SVNEs znjik=xZNjik,yZNjik,zZNjik for xZNjik,yZNjik,zZNjik∈0,1 (k = 1, 2, …, e; j = 1, 2, …, t; i = 1, 2, …, q), and then the assessment values of each alternative over the m − q qualitative attributes are represented by LNEs lhjik=sajik,sbjik,scjik for sajik,sbjik,scjik∈S (k = 1, 2, …, e; j = 1, 2, …, t; i = q + 1, q + 2, …, m) from the LTS S = {sp|p = 0, 1, 2, …, r}. Thus, all assessed values can be constructed as the e decision matrices of SVNLNHEs Mk = (znjik,lhjik)t× m (k = 1, 2, …, e). 
Then, a weight vector β = (β1, β2, …, βm) is specified for the attributes vi (i = 1, 2, …, m), with βi ∈ [0, 1] and $\sum_{i=1}^{m} \beta_i = 1$. Thus, two MAGDM models are developed in terms of the SVNLNHEWAMN and SVNLNHEWGMN operators or the SVNLNHEWAML and SVNLNHEWGML operators to handle MAGDM issues with the mixed evaluation information of SVNEs and LNEs. (A runnable sketch of the aggregation and scoring steps of both models is given after Section 6.2.)

Model 1. A MAGDM model using the SVNLNHEWAMN and SVNLNHEWGMN operators is developed to handle the MAGDM issue with SVNLNHEs. Its detailed steps are as follows:

Step 1: using the SVNEWAM operator of Eq. (2) and the LNEWAM operator of Eq. (7), the decision matrices Mk = (znjik, lhjik)t×m (k = 1, 2, …, e) are aggregated into the overall decision matrix M = (znji, lhji)t×m.

Step 2: using the SVNLNHEWAMN operator of Eq. (12) or the SVNLNHEWGMN operator of Eq. (13), the aggregated result for Ej (j = 1, 2, …, t) is obtained by

$$zn_j = \mathrm{SVNLNHEWAMN}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm}) = \sum_{i=1}^{q}\beta_i\,zn_{ji} + \sum_{i=q+1}^{m}\beta_i\,f(lh_{ji}) = \left\langle 1-\prod_{i=1}^{q}(1-x_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}\left(1-\frac{a_{ji}}{r}\right)^{\beta_i},\ \prod_{i=1}^{q}y_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}\left(\frac{b_{ji}}{r}\right)^{\beta_i},\ \prod_{i=1}^{q}z_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}\left(\frac{c_{ji}}{r}\right)^{\beta_i}\right\rangle \tag{16}$$

or

$$zn_j = \mathrm{SVNLNHEWGMN}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm}) = \prod_{i=1}^{q}zn_{ji}^{\beta_i}\prod_{i=q+1}^{m}f(lh_{ji})^{\beta_i} = \left\langle \prod_{i=1}^{q}x_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}\left(\frac{a_{ji}}{r}\right)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-y_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}\left(1-\frac{b_{ji}}{r}\right)^{\beta_i},\ 1-\prod_{i=1}^{q}(1-z_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}\left(1-\frac{c_{ji}}{r}\right)^{\beta_i}\right\rangle \tag{17}$$

Step 3: the score values F(znj) (j = 1, 2, …, t) are computed by Eq. (4), and the accuracy values G(znj) (j = 1, 2, …, t) by Eq. (5) if necessary.

Step 4: the alternatives are sorted in descending order based on the sorting laws of SVNEs; the first one is the best choice.

Step 5: end.

Model 2. A MAGDM model using the SVNLNHEWAML and SVNLNHEWGML operators is developed to handle the MAGDM issue with SVNLNHEs. Its detailed steps are as follows:

Step 1′: the same as Step 1.

Step 2′: using the SVNLNHEWAML operator of Eq. (14) or the SVNLNHEWGML operator of Eq. (15), the aggregated result for Ej (j = 1, 2, …, t) is given by

$$lh_j = \mathrm{SVNLNHEWAML}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm}) = \sum_{i=1}^{q}\beta_i\,f^{-1}(zn_{ji}) + \sum_{i=q+1}^{m}\beta_i\,lh_{ji} = \left\langle s_{\,r-r\prod_{i=1}^{q}(1-x_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-a_{ji}/r)^{\beta_i}},\ s_{\,r\prod_{i=1}^{q}y_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(b_{ji}/r)^{\beta_i}},\ s_{\,r\prod_{i=1}^{q}z_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(c_{ji}/r)^{\beta_i}}\right\rangle \tag{18}$$

or

$$lh_j = \mathrm{SVNLNHEWGML}(zn_{j1},\ldots,zn_{jq},lh_{j,q+1},\ldots,lh_{jm}) = \prod_{i=1}^{q}f^{-1}(zn_{ji})^{\beta_i}\prod_{i=q+1}^{m}lh_{ji}^{\beta_i} = \left\langle s_{\,r\prod_{i=1}^{q}x_{ZNji}^{\beta_i}\prod_{i=q+1}^{m}(a_{ji}/r)^{\beta_i}},\ s_{\,r-r\prod_{i=1}^{q}(1-y_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-b_{ji}/r)^{\beta_i}},\ s_{\,r-r\prod_{i=1}^{q}(1-z_{ZNji})^{\beta_i}\prod_{i=q+1}^{m}(1-c_{ji}/r)^{\beta_i}}\right\rangle \tag{19}$$

Step 3′: the score values P(lhj) (j = 1, 2, …, t) are computed by Eq. (9), and the accuracy values Q(lhj) (j = 1, 2, …, t) by Eq. (10) if necessary.

Step 4′: the alternatives are sorted in descending order based on the sorting laws of LNEs; the first one is the best choice.

Step 5′: end.

## 6. Illustrative Example on the Selection Problem of Industrial Robots Containing Both Quantitative and Qualitative Attributes

This section applies the proposed MAGDM models to an illustrative example on the selection of industrial robots involving both quantitative and qualitative attributes in the circumstance of SVNLNHSs to prove their usefulness, and then gives a comparison with existing techniques to show the availability and rationality of the new techniques.

### 6.1. Illustrative Example

This subsection applies the proposed MAGDM models to the selection problem of industrial robots containing both quantitative and qualitative attributes to illustrate their application and availability in the circumstance of SVNLNHSs. An industrial company wants to buy a type of industrial robot for a manufacturing system.
The technical department preliminarily provides four types of industrial robots/alternatives, denoted by the set E = {E1, E2, E3, E4}. They must satisfy four requirements/attributes: operating accuracy (v1), carrying capacity (v2), control performance (v3), and operating space and dexterity (v4). The weight vector of the four attributes is given by β = (0.25, 0.3, 0.25, 0.2). Three experts/decision makers are invited to assess each alternative over the four attributes in terms of truth, falsity, and indeterminacy judgments, where the assessment values are specified in the mixed forms of the SVNEs znjik = ⟨xZNjik, yZNjik, zZNjik⟩ with xZNjik, yZNjik, zZNjik ∈ [0, 1] (k = 1, 2, 3; i = 1, 2; j = 1, 2, 3, 4) for the quantitative attributes v1 and v2, and the LNEs lhjik = ⟨sajik, sbjik, scjik⟩ with sajik, sbjik, scjik ∈ S (k = 1, 2, 3; i = 3, 4; j = 1, 2, 3, 4) for the qualitative attributes v3 and v4, from the LTS S = {very unsatisfactory, unsatisfactory, slightly unsatisfactory, medium, slightly satisfactory, satisfactory, very satisfactory} = {s0, s1, s2, s3, s4, s5, s6} with r = 6. The weight vector of the three decision makers is given by α = (0.4, 0.35, 0.25). Thus, the three decision matrices are constructed as follows:

$$M^1=\begin{pmatrix}
\langle0.8,0.1,0.2\rangle & \langle0.7,0.1,0.1\rangle & \langle s_5,s_2,s_2\rangle & \langle s_5,s_2,s_3\rangle\\
\langle0.8,0.2,0.1\rangle & \langle0.8,0.1,0.3\rangle & \langle s_5,s_1,s_2\rangle & \langle s_4,s_3,s_2\rangle\\
\langle0.7,0.1,0.1\rangle & \langle0.8,0.2,0.2\rangle & \langle s_5,s_3,s_2\rangle & \langle s_4,s_2,s_3\rangle\\
\langle0.8,0.2,0.2\rangle & \langle0.9,0.2,0.3\rangle & \langle s_4,s_1,s_2\rangle & \langle s_5,s_2,s_2\rangle
\end{pmatrix},\quad
M^2=\begin{pmatrix}
\langle0.7,0.2,0.2\rangle & \langle0.8,0.1,0.2\rangle & \langle s_4,s_3,s_1\rangle & \langle s_5,s_2,s_1\rangle\\
\langle0.8,0.2,0.3\rangle & \langle0.8,0.2,0.3\rangle & \langle s_5,s_2,s_1\rangle & \langle s_4,s_1,s_2\rangle\\
\langle0.8,0.2,0.3\rangle & \langle0.7,0.1,0.1\rangle & \langle s_4,s_1,s_2\rangle & \langle s_5,s_2,s_3\rangle\\
\langle0.9,0.1,0.1\rangle & \langle0.8,0.2,0.1\rangle & \langle s_5,s_1,s_3\rangle & \langle s_5,s_1,s_2\rangle
\end{pmatrix},\quad
M^3=\begin{pmatrix}
\langle0.8,0.3,0.1\rangle & \langle0.8,0.1,0.1\rangle & \langle s_5,s_2,s_1\rangle & \langle s_4,s_1,s_1\rangle\\
\langle0.7,0.1,0.2\rangle & \langle0.9,0.2,0.3\rangle & \langle s_4,s_1,s_1\rangle & \langle s_5,s_1,s_1\rangle\\
\langle0.8,0.1,0.1\rangle & \langle0.8,0.2,0.1\rangle & \langle s_5,s_2,s_2\rangle & \langle s_5,s_2,s_3\rangle\\
\langle0.8,0.1,0.1\rangle & \langle0.8,0.1,0.1\rangle & \langle s_5,s_1,s_2\rangle & \langle s_5,s_3,s_2\rangle
\end{pmatrix}\tag{20}$$

Thus, the two developed MAGDM models can be utilized in this example to handle the MAGDM issue with SVNLNHEs.

Model 1. The MAGDM model using the SVNLNHEWAMN and SVNLNHEWGMN operators is applied in the example, and its detailed steps are as follows:

Step 1: using the SVNEWAM operator of Eq. (2) and the LNEWAM operator of Eq. (7), the three decision matrices above are aggregated into the following overall decision matrix:

$$M=\begin{pmatrix}
\langle0.7695,0.1677,0.1682\rangle & \langle0.7648,0.1000,0.1275\rangle & \langle s_{4.7254},s_{2.3050},s_{1.3195}\rangle & \langle s_{4.8108},s_{1.6818},s_{1.5518}\rangle\\
\langle0.7787,0.1682,0.1747\rangle & \langle0.8318,0.1516,0.3000\rangle & \langle s_{4.8108},s_{1.2746},s_{1.3195}\rangle & \langle s_{4.3182},s_{1.5518},s_{1.6818}\rangle\\
\langle0.7648,0.1275,0.1469\rangle & \langle0.7695,0.1569,0.1320\rangle & \langle s_{4.7254},s_{1.8455},s_{2.0000}\rangle & \langle s_{4.6805},s_{2.0000},s_{3.0000}\rangle\\
\langle0.8431,0.1320,0.1320\rangle & \langle0.8484,0.1682,0.1552\rangle & \langle s_{4.6805},s_{1.0000},s_{2.3050}\rangle & \langle s_{5.0000},s_{1.7366},s_{2.0000}\rangle
\end{pmatrix}\tag{21}$$

Step 2: by Eq. (16) or Eq. (17), the aggregated values are:

zn1 = ⟨0.7795, 0.1958, 0.1804⟩, zn2 = ⟨0.7921, 0.1884, 0.2392⟩, zn3 = ⟨0.7751, 0.2049, 0.2231⟩, and zn4 = ⟨0.8290, 0.1761, 0.2178⟩;

or zn1 = ⟨0.7789, 0.2324, 0.1885⟩, zn2 = ⟨0.7876, 0.1934, 0.2464⟩, zn3 = ⟨0.7749, 0.2276, 0.2754⟩, and zn4 = ⟨0.8265, 0.1850, 0.2504⟩.

Step 3: by Eq. (4), the score values F(znj) for Ej (j = 1, 2, 3, 4) are as follows: F(zn1) = 0.8011, F(zn2) = 0.7882, F(zn3) = 0.7824, and F(zn4) = 0.8117; or F(zn1) = 0.7860, F(zn2) = 0.7826, F(zn3) = 0.7573, and F(zn4) = 0.7970.

Step 4: the sorting order of the four alternatives is E4 > E1 > E2 > E3.

Clearly, the sorting orders obtained by the SVNLNHEWAMN operator of Eq. (16) and the SVNLNHEWGMN operator of Eq. (17) are identical in this example.

Model 2. The MAGDM model using the SVNLNHEWAML and SVNLNHEWGML operators can also be applied in the example, and its detailed steps are as follows:

Step 1′: the same as Step 1.

Step 2′: by Eq. (18) or Eq.
(19), the aggregated values are:

lh1 = ⟨s4.6773, s1.1747, s1.0823⟩, lh2 = ⟨s4.7528, s1.1302, s1.4353⟩, lh3 = ⟨s4.6508, s1.2297, s1.3384⟩, and lh4 = ⟨s4.9739, s1.0564, s1.3070⟩;

or lh1 = ⟨s4.6736, s1.3944, s1.1311⟩, lh2 = ⟨s4.7255, s1.1603, s1.4783⟩, lh3 = ⟨s4.6494, s1.3657, s1.6526⟩, and lh4 = ⟨s4.9591, s1.1100, s1.5027⟩.

Step 3′: by Eq. (9), the score values P(lhj) for Ej (j = 1, 2, 3, 4) are as follows: P(lh1) = 0.8011, P(lh2) = 0.7882, P(lh3) = 0.7824, and P(lh4) = 0.8117; or P(lh1) = 0.7860, P(lh2) = 0.7826, P(lh3) = 0.7573, and P(lh4) = 0.7970.

Step 4′: the sorting order of the four alternatives is E4 > E1 > E2 > E3.

Hence, the sorting orders obtained by the SVNLNHEWAML operator of Eq. (18) and the SVNLNHEWGML operator of Eq. (19) are identical in this example.

Obviously, the score values and sorting orders of Model 1 and Model 2 coincide. Moreover, whether SVNEs are converted to LNEs or LNEs to SVNEs in the aggregation operations, the final decision results are identical. Thus, decision makers can choose either Model 1 or Model 2 in MAGDM applications, which indicates that the new techniques are valid and reasonable.

### 6.2. Comparative Analysis with the Existing Neutrosophic MAGDM Models

Since the assessed values in this illustrative example are given as SVNLNHEs, the existing neutrosophic MAGDM models [4, 17] cannot deal with it. Our new techniques, in contrast, can handle neutrosophic MAGDM issues with SVNEs and/or LNEs and show the following highlights and advantages:

(1) The proposed SVNLNHEs can conveniently denote the mixed information of SVNEs and LNEs for assessments over quantitative and qualitative attributes, which suits human judgment and thinking/expression habits, whereas existing neutrosophic expressions cannot represent SVNLNHE information.

(2) The proposed SVNLNHEWAMN and SVNLNHEWGMN operators and the proposed SVNLNHEWAML and SVNLNHEWGML operators provide the necessary aggregation tools for handling MAGDM issues in the SVNLNHE circumstance; the existing SVNEWAM and SVNEWGM operators [17] are only special cases of the SVNLNHEWAMN and SVNLNHEWGMN operators, and the existing LNEWAM and LNEWGM operators [4] are only special cases of the SVNLNHEWAML and SVNLNHEWGML operators. Furthermore, the various existing aggregation operators cannot aggregate SVNLNHEs.

(3) Since the existing MAGDM models with the single evaluation information of SVNEs or LNEs [4, 17] are special cases of our new MAGDM models, the new models are broader and more versatile than the existing ones [4, 17]. Furthermore, the various existing MAGDM models cannot carry out MAGDM problems with SVNLNHE information.

Generally, the new techniques solve the SVNLNHE denotation, aggregation operations, and MAGDM issues in the mixed information situation of SVNEs and LNEs. They are well suited to decision-making issues with quantitative and qualitative attributes and overcome the restriction of the existing decision-making techniques to the single evaluation information of SVNEs or LNEs. Therefore, the new techniques reveal obvious superiorities over the existing techniques in neutrosophic information denotation, aggregation operations, and decision-making methods.
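To make the two models concrete, the following minimal Python sketch reproduces Model 1's Steps 1–3 for alternative E1. It assumes the score functions F(zn) = (2 + x − y − z)/3 (Eq. (4)) and P(lh) = (2r + a − b − c)/(3r) (Eq. (9)); these forms are inferred from the reported score values rather than quoted from the paper, and all function names are illustrative.

```python
# A minimal sketch of Model 1 on alternative E1; score-function forms are
# assumptions consistent with the score values reported above.
R = 6
BETA = [0.25, 0.30, 0.25, 0.20]          # attribute weights
ALPHA = [0.40, 0.35, 0.25]               # decision-maker weights

def svnewam(values, weights):
    """Weighted arithmetic mean of SVNEs (the form of Eq. (2))."""
    x = y = z = 1.0
    for (xi, yi, zi), w in zip(values, weights):
        x *= (1 - xi) ** w
        y *= yi ** w
        z *= zi ** w
    return (1 - x, y, z)

def svnlnhewamn(svnes, lnes, weights, r=R):
    """Eq. (16): LNEs are first converted to SVNEs via f(lh) = (a/r, b/r, c/r)."""
    converted = [(a / r, b / r, c / r) for (a, b, c) in lnes]
    return svnewam(svnes + converted, weights)

# Expert assessments of E1 on (v1, v2) as SVNEs and on (v3, v4) as LNE subscripts.
svne_e1 = [[(0.8, 0.1, 0.2), (0.7, 0.1, 0.1)],   # decision maker g1
           [(0.7, 0.2, 0.2), (0.8, 0.1, 0.2)],   # g2
           [(0.8, 0.3, 0.1), (0.8, 0.1, 0.1)]]   # g3
lne_e1 = [[(5, 2, 2), (5, 2, 3)], [(4, 3, 1), (5, 2, 1)], [(5, 2, 1), (4, 1, 1)]]

# Step 1: aggregate the three experts attribute by attribute. The LNEWAM step
# of Eq. (7) is carried out here in the converted SVNE domain, which yields the
# same linguistic subscripts up to the scale factor r.
zn_row = [svnewam([svne_e1[k][i] for k in range(3)], ALPHA) for i in range(2)]
lh_row = [svnewam([[v / R for v in lne_e1[k][i]] for k in range(3)], ALPHA)
          for i in range(2)]
lh_row = [(R * a, R * b, R * c) for (a, b, c) in lh_row]

# Step 2: aggregate across the four attributes with Eq. (16).
zn1 = svnlnhewamn(zn_row, lh_row, BETA)
print(zn1)  # ~ (0.7795, 0.1958, 0.1804), matching the values reported above

# Step 3: assumed score functions; Eq. (18) equals Eq. (16) scaled by r,
# which is why F(zn_j) and P(lh_j) coincide in the example.
F = (2 + zn1[0] - zn1[1] - zn1[2]) / 3
P = (2 * R + R * zn1[0] - R * zn1[1] - R * zn1[2]) / (3 * R)
print(round(F, 4), round(P, 4))  # both ~ 0.8011
```

Because f⁻¹ simply rescales the SVNE components by r, the linguistic aggregation of Eq. (18) produces r times the components of Eq. (16), which explains why Model 1 and Model 2 report identical scores and rankings above.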
## 7. Conclusion

Due to the lack of the SVNLNHE denotation, operations, and decision-making models in existing neutrosophic theory and applications, the proposed notion of SVNLNHS/SVNLNHE and the defined linguistic and neutrosophic conversion function solved the hybrid neutrosophic information denotation and operational problems of SVNEs and LNEs. The proposed SVNLNHEWAMN, SVNLNHEWGMN, SVNLNHEWAML, and SVNLNHEWGML operators then provided the necessary aggregation algorithms for handling MAGDM issues with SVNLNHEs, and the established MAGDM models solved decision-making issues with quantitative and qualitative attributes in the SVNLNHE circumstance. Since the evaluation values of quantitative and qualitative attributes in the decision-making process are easily represented by SVNEs and LNEs given in view of decision makers' preferences and thinking habits, the managerial implications of this original research will reinforce neutrosophic decision-making methods and applications. Finally, an illustrative example was given and compared with the existing techniques to show the availability and rationality of the new techniques. The new techniques of the SVNLNHE denotation, aggregation algorithms, and MAGDM models not only overcome the insufficiencies of the existing techniques but also are broader and more versatile when dealing with MAGDM issues in the setting of SVNLNHEs.

Regarding future research, these new techniques will be further extended to other areas, such as medical diagnosis, slope risk/instability evaluation, fault diagnosis, and mechanical concept design, in the mixed information situation of SVNEs and LNEs. We shall also develop more aggregation algorithms, such as Hamacher, Dombi, and Bonferroni aggregation operators, and their applications in clustering analysis, information fusion, image processing, and mine risk/safety evaluation in the mixed information situation of both SVNEs and LNEs or both IVNEs and uncertain LNEs.

---

*Source: 1021280-2022-09-20.xml*
2022
# The Vasodilatory Effects of Anti-Inflammatory Herb Medications: A Comparison Study of Four Botanical Extracts

**Authors:** Hong Ping Zhang; Dan-Dan Zhang; Yan Ke; Ka Bian
**Journal:** Evidence-Based Complementary and Alternative Medicine (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1021284

---

## Abstract

Inflammation plays a pivotal role in the development and progression of cardiovascular diseases, in which endothelial dysfunction is a key element. The current study was designed to explore the vasodilatory effects of anti-inflammatory herbs that have been traditionally used in different clinical applications. The total saponins from Actinidia arguta radix (SAA), total flavonoids from Glycyrrhizae radix et rhizoma (FGR), total coumarins from Peucedani radix (CPR), and total flavonoids from Spatholobi caulis (FSC) were extracted. Isometric measurement of vasoactivity was used to observe the effects of the herbal elements on isolated aortic rings with or without endothelium. To understand endothelium-independent vasodilation, the effects of the herb elements on agonist-induced vasocontractility and on the contraction of endothelium-free aortic rings exposed to a Ca2+-free medium were examined. Furthermore, the role of nitric oxide signaling in endothelium-dependent vasodilation was also evaluated. In summary, FGR and FSC exhibit potent anti-inflammatory effects compared to CPR and SAA. FGR exerts the strongest vasodilatory effect, while CPR shows the least. The relaxation induced by SAA and FSC required intact endothelia, and the mechanism of this vasodilation might involve eNOS. CPR-mediated vasorelaxation appears to involve interference with intracellular calcium homeostasis, blocking Ca2+ influx or the release of intracellular Ca2+.

---

## Body

## 1. Introduction

Inflammation plays a pivotal role in the development and progression of several cardiovascular diseases, including atherosclerosis [1]. Numerous epidemiologic studies support the concept that vascular inflammation correlates with an increased risk of atherosclerosis [2]. While inflammation contributes to cardiovascular pathology, the question remains whether inhibition of inflammation prevents or even reverses the progress of vascular diseases. Multiple clinical studies have shown that the use of statins reduces cardiovascular morbidity and mortality [3, 4]. However, a direct test of the inflammatory hypothesis of cardiovascular disease requires an agent that can inhibit inflammation without affecting other components of atherothrombosis, while also exhibiting an acceptable safety profile. To address this issue, a cardiovascular inflammation reduction trial (CIRT: ClinicalTrials.gov ID# NCT01594333) has begun at the Brigham and Women's Hospital and the National Heart, Lung, and Blood Institute (NHLBI) that proposes the use of very-low-dose methotrexate (VLDM, 10 mg weekly) in 7,000 patients with stable coronary artery disease and persistent elevations of high-sensitivity C-reactive protein (Hs-CRP). Despite its anti-inflammatory effects, methotrexate is an antimetabolite drug that is used to treat cancers and has significant side effects at high doses. Therefore, alternative therapeutic options should be considered. Anti-inflammatory traditional Chinese medicines and botanical elements have been studied for years [5–8].
On the other hand, botanically derived elements have been recognized for their beneficial effects on the cardiovascular and metabolic systems [9, 10]. To further explore the potential of using those agents on the cardiovascular system, we studied the vasodilatory effects of four of these herb elements and reviewed their medical applications (Table 1).

Actinidia arguta radix (Tengligen) is a member of the Actinidiaceae family. Pharmacological research has revealed anticancer, immunoregulatory, and hypotensive activities of Actinidia arguta radix [11, 12]. To the best of our knowledge, there are no scientific reports on the blood-pressure-lowering mechanisms of any Actinidia arguta radix extracts.

Glycyrrhizae radix et rhizoma (Gancao) has multiple therapeutic uses, some of which are related to its anti-inflammatory properties. These include treating cough, relieving pain, clearing heat, and eliminating toxins and poisons [13]. Modern pharmacological research has also reported that the flavonoids from Glycyrrhizae radix et rhizoma (FGR) have antioxidant properties [14], with therapeutic benefits including the inhibition of cough and treatment of bacterial infections [15, 16]; however, their effects on vascular contractility are unknown.

Peucedani radix (Qianhu) has been an important agent for treating respiratory symptoms and diseases through the centuries. This herb is traditionally characterized as dispelling wind and removing heat, relieving cough, and resolving phlegm [17], and it has been used to relieve the symptoms of influenza and asthma. Peucedani radix has strong anti-inflammatory properties as one of its therapeutic mechanisms [18, 19]. Many coumarin constituents have been extracted from this herb, and they are reported to be largely responsible for its biological activity [20]. It has been noted that Peucedani radix can exert beneficial effects in hypoxic pulmonary hypertension [21]. However, the action of this drug on the circulatory system is largely unknown.

Spatholobi caulis (Jixueteng) has been traditionally used for irregular menstruation, numbness, and inflammatory arthralgia [22]. According to pharmacological research, it can reduce oxidative stress [23] and inflammation [24]. Several studies have demonstrated in vitro and in vivo cytotoxic effects of Spatholobi caulis extracts on tumor cells [25–28]. It has been suggested that the flavonoids of Spatholobi caulis (FSC) are the major active components of its therapeutic actions [29]. A few studies have demonstrated the impact of Spatholobi caulis on cardiovascular conditions. This botanical agent interferes with platelet aggregation via the glycoprotein IIb/IIIa receptor [30]. It has also been shown to reduce plasma lipid levels in hyperlipidemic quail [31]. In a rat model of cerebral ischemia, Lee et al. found a significant increase in cerebral blood flow after treatment with Spatholobi caulis [32, 33]. Although it is speculated that the blockage of calcium channels is responsible for this action [34], a further vascular pharmacological study of this botanical agent is warranted.

Table 1: Botanical extracts that possess anti-inflammatory properties.
| Medicinal | Nature | Flavor | Functions | Clinical application | Reference |
|---|---|---|---|---|---|
| *Spatholobus suberectus* Dunn | Warm | Bitter, sweet | Promoting blood flow, tonifying blood, regulating menstruation and relieving pain, relaxing sinews, and activating collaterals | Treatment of irregular menstruation, dysmenorrhoea, amenorrhea, rheumatic arthralgia, paralysis or numbness in the limbs, blood deficiency, and chlorosis | Chinese Pharmacopoeia (Version 2015) |
| *Peucedanum praeruptorum* Dunn | Slightly cold | Bitter, pungent | Directing qi downward to resolve phlegm, dispelling pathogenic wind, and clearing heat | Treatment of respiratory symptoms such as phlegm-heat asthma, yellow viscous phlegm, cough, and phlegm induced by wind-heat | Chinese Pharmacopoeia (Version 2015) |
| *Actinidia arguta* (Sieb. & Zucc.) Planch. ex Miq. | Cool | Sour, astringent | Clearing heat and detoxicating, dispelling wind-dampness, diuretic, and haemostatic | Treatment of rheumatism, pain, and jaundice | The national compilation of Chinese herbal medicine |
| *Glycyrrhiza uralensis* Fisch. | Calm | Sweet | Tonifying spleen and qi, clearing heat and detoxicating, dispelling phlegm and suppressing cough, relieving spasm and pain, and harmonizing other herbs | Treatment of deficiency of spleen and stomach, fatigue, palpitations, shortness of breath, cough and phlegm in throat, abdominal pain, spasm of limbs, carbuncle sore, and reducing drug toxicity | Chinese Pharmacopoeia (Version 2015) |

In the present study, we examined the anti-inflammatory effects of the four botanical extracts using LPS- and IFN-γ-stimulated macrophages. Isometric vasoactivity was measured to evaluate the vasodilating properties of the extracts on isolated rat thoracic aortas. The action mechanisms were explored through pharmacological examinations.

## 2. Methods and Materials

### 2.1. Herbs and Chemicals

Herbs were purchased from Shanghai Yang He Tang TCM Pieces, Ltd. Company (Shanghai, China) and authenticated by the Shanghai Institute of Food and Drug Control. Acetylcholine (Ach), phenylephrine (PE), NG-nitro-L-arginine methyl ester (L-NAME), indomethacin (Indo), 1H-[1,2,4]-oxadiazole-[4,3-a]-quinoxalin-1-one (ODQ), glibenclamide (Glib), tetraethylammonium (TEA), prostaglandin 2α (PG2α), BaCl2, angiotensin II (AngII), 5-hydroxytryptamine (5-HT), dopamine (Dopa), endothelin-1 (ET-1), RPMI 1640 medium, IFN-γ, and lipopolysaccharide (LPS) were all purchased from Sigma Chemical Co. (St. Louis, MO, USA). Ethylene glycol bis(2-aminoethyl ether) tetraacetic acid (EGTA) and other inorganic salts were purchased from Sinopharm Chemical Reagent Co., Ltd. (batch number F20060620). Ach, PE, TEA, AngII, 5-HT, dopamine, and ET-1 solutions were prepared with distilled water. Glibenclamide and ODQ solutions were prepared with DMSO. Control experiments demonstrated that the highest DMSO concentration (1 : 400) had no effect on vascular tone.

### 2.2. Cell Cultures

RAW 264.7 cells were used in the current study for the following considerations. First, the RAW 264.7 cell line is a pure clone that can be grown indefinitely in an essentially uniform manner, which is necessary for our drug-screening platform. Second, RAW 264.7 cells are transformed and lack certain signaling pathways, such as activated inflammasomes [35], which suits the purpose of our study, in which the anti-inflammatory effects of the herbs were evaluated. RAW 264.7 cells were obtained from the American Type Culture Collection.
The cells were maintained in complete RPMI 1640 medium supplemented with 10% heat-inactivated FBS and 1.5% sodium bicarbonate at 37°C in a humidified 5% CO2 atmosphere. Cells were plated at a density of 1 × 10^5 cells/well in 96-well plates or 2 × 10^6 cells per 30 mm dish and allowed to attach for 2 hours. For stimulation, the medium was replaced with fresh RPMI 1640, and the cells were then stimulated with 10 U/mL of IFN-γ and 100 ng/mL of LPS in the presence or absence of FGR for the indicated periods.

### 2.3. Experimental Animal and Blood Vessel Ring Preparations

Male Sprague-Dawley rats (250–300 g) were obtained from Shanghai Slac Experimental Company, Ltd. (Shanghai, China). The animal procedures were carried out in strict accordance with the *Guide for the Care and Use of Laboratory Animals* (Shanghai University of Traditional Chinese Medicine). All experiments were performed under license from the Government of China.

The preparation of the vascular rings was performed as described by Zhang et al. [36]. Briefly, the rats were sacrificed by decapitation, and their thoracic aortas were rapidly and carefully dissected out into ice-cold, freshly prepared Krebs-Henseleit (K-H) solution. The aortas were cut into ring segments approximately 3 mm in width. For some aortic rings, the endothelial layer was mechanically removed by gently rubbing the luminal surfaces of the rings back and forth several times.

### 2.4. Recording of Isometric Vascular Tone

Each ring was suspended by means of two L-shaped stainless-steel hooks in an organ bath filled with Krebs-Henseleit solution maintained at 37°C and continuously bubbled with 95% O2 and 5% CO2. The lower hooks were fixed to the bottom of the organ bath, and the upper wires were attached to an isometric force transducer connected to a data acquisition system (PowerLab/4P, ADInstruments, Australia) for continuous recording of tension. The baseline load placed on the aortic rings was 2.0 g.

Examination of endothelial integrity was performed as described by Xing et al. and others [37–39]. Briefly, endothelial integrity or functional removal was verified by the relaxation response to 10 μmol/L acetylcholine in vessels contracted with 1 μmol/L phenylephrine.

### 2.5. Experimental Protocol

#### 2.5.1. Nitric Oxide Production

RAW 264.7 cells were plated in 96-well plates (1 × 10^5/well) and stimulated with 100 ng/mL LPS and 10 U/mL IFN-γ for 24 h. The cell-free culture media were collected and analyzed for nitrite accumulation as an indicator of NO production using the Griess reagent. The NO assay was performed as described by Zhang et al. [39]. Briefly, 100 μL of Griess reagent (0.1% naphthylethylenediamine and 1% sulfanilamide in 5% H3PO4 solution) was added to an equal volume of supernatant from sample-treated cells. The plates were incubated for 10 minutes and then read at 540 nm against a standard curve of sodium nitrite. Percent inhibition was expressed as 100 × [1 − (NO release with sample − spontaneous release)/(NO release without sample − spontaneous release)].

#### 2.5.2. Testing the Effects of FGR, FSC, CPR, and SAA on PE-Induced Constriction

The vasodilatory effects of the four botanical extracts were tested in both endothelium-intact and endothelium-denuded rings constricted by PE (1 μmol/L). Once a plateau of PE contraction was attained, each of the botanical extracts was applied cumulatively according to a concentration gradient.
At the end of each experiment, forskolin was added to induce blood vessel relaxation, and the tension of the aortic rings was recorded.

To probe the mechanisms of vascular relaxation, the nitric oxide synthase inhibitor L-NAME, the cyclooxygenase inhibitor indomethacin, the soluble guanylyl cyclase inhibitor ODQ, the adrenergic β-receptor inhibitor propranolol, the KATP blocker glibenclamide, the KCa blocker TEA, and the KIR blocker BaCl2 were each used to pretreat endothelium-denuded rings for 15 min prior to the addition of 1 μmol/L of phenylephrine. Afterwards, the relaxations induced by each of the botanical extracts were observed, including the concentration-dependent vasodilation.

#### 2.5.3. Measuring the Effects of FGR, FSC, CPR, and SAA on Vasoconstrictors

The endothelium-free aortic rings were first exposed to constrictors at different concentrations: Dopa (0.1, 1, 10, 100, and 1,000 nmol/L), 5-HT (10, 100, 1,000, 10,000, and 100,000 nmol/L), Ang II (0.1, 1, 10, 100, and 1,000 nmol/L), K+ (10.00, 15.85, 25.12, 39.81, 63.10, and 100.00 mmol/L), Vaso (0.1, 1, 10, 100, and 1,000 nmol/L), ET-1 (10, 25, 50, 75, and 100 nmol/L), PG2α (1, 10, 100, 1,000, and 10,000 nmol/L), and PE (1, 10, 100, 1,000, and 10,000 nmol/L). After washing, the rings were incubated individually with one of the four botanical extracts at its EC50 concentration for 10 minutes. Contractions induced by the vasoconstrictors were then observed again. The level of vasoconstriction in response to 60 mmol/L KCl was used as the maximum (100%).

#### 2.5.4. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Influx

Endothelium-free aortic rings were washed and treated with calcium-free, high-K+ solution (containing 100 μmol/L EGTA and 60 mmol/L KCl). Then, the preparations were incubated and cumulatively exposed to increasing concentrations of CaCl2 (0.4, 0.8, 1.2, 1.6, 2.0, and 2.4 mmol/L). The vasoconstrictor responses to CaCl2 were compared between four groups treated with each of the botanical extracts and a control group. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+ media was used as the maximum (100%).

#### 2.5.5. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Release

Endothelium-free aortic rings were washed and exposed to calcium-free Krebs-Henseleit solution (containing 100 μmol/L EGTA) for 10 minutes. After this, 1 μmol/L of phenylephrine was added, which resulted in small tonic contractions mainly caused by the release of intracellular calcium. Once a plateau of PE contraction was attained, the bath solution was replaced with calcium-free Krebs-Henseleit solution (containing 100 μmol/L EGTA) for 5 minutes. Four groups exposed to each of the botanical extracts at its EC50 concentration were compared with a control group. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+ media was used as the maximum (100%).

#### 2.5.6. Effect of Four Botanical Extracts on Organ Tissue Viability

The effects of the four botanical extracts on the viability of freshly isolated aortic organ tissue were tested by repeatedly treating the same aortic rings, with or without endothelium, with the extracts. The multiple treatments did not affect the contractility of the vessel induced by 60 mmol/L K+. The vasodilation of the aortic rings in response to acetylcholine also remained intact after several applications of the botanical extracts.

### 2.6. Statistical Analysis

All results are expressed as mean ± SD.
Statistical significance was analyzed using unpaired Student's t-tests for comparisons between two groups. A value of P < 0.05 was considered statistically significant.
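As a minimal sketch of the two quantitative steps described in this section, the snippet below computes the Griess-assay percent inhibition using the formula given in Section 2.5.1 and runs an unpaired Student's t-test with scipy. The nitrite readings are illustrative placeholders, not data from this study.

```python
# A minimal sketch: percent inhibition from Griess-assay nitrite readings
# (Section 2.5.1) and an unpaired Student's t-test. Values are placeholders.
from scipy import stats

def percent_inhibition(no_with_sample, no_without_sample, spontaneous):
    """100 x [1 - (NO with sample - spontaneous)/(NO without sample - spontaneous)]."""
    return 100 * (1 - (no_with_sample - spontaneous)
                  / (no_without_sample - spontaneous))

print(percent_inhibition(no_with_sample=12.0, no_without_sample=40.0,
                         spontaneous=2.0))  # ~73.7% inhibition

# Unpaired two-group comparison; P < 0.05 is considered significant.
control = [40.1, 38.7, 41.5, 39.9, 40.8, 39.2]   # placeholder nitrite readings
treated = [12.3, 14.1, 11.8, 13.5, 12.9, 13.0]
t_stat, p_value = stats.ttest_ind(control, treated)  # unpaired Student's t-test
print(f"t = {t_stat:.2f}, P = {p_value:.2g}")
```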
## 3. Results

### 3.1. FGR, FSC, CPR, and SAA Blocked LPS and IFN-γ-Induced NO Production in RAW 264.7 Cells

RAW 264.7 cells were stimulated with 10 U/mL of IFN-γ and 100 ng/mL of LPS, which upregulate the expression of iNOS. iNOS plays a key role in inflammatory action, and targeting the de novo regulation of iNOS is a therapeutic strategy for inflammation-related diseases [40].
RAW 264.7 cells were stimulated with 10 U/mL of IFN-γ and 100 ng/mL of LPS with or without pretreatment with the four botanical extracts, and the concentration of nitrite was measured 24 hours after stimulation. As shown in Figure 1, the total flavonoids from Glycyrrhizae radix et rhizoma (FGR) and the total flavonoids from Spatholobi caulis (FSC) significantly suppressed the IFN-γ- and LPS-induced production of NO in a concentration-dependent manner; the maximal inhibition achieved (at 200 mg/L) was 75.06% and 39.44%, respectively. In contrast, higher concentrations of the total saponins from Actinidia arguta radix (SAA) and the total coumarins of Peucedani radix (CPR) were required to suppress the IFN-γ- and LPS-induced production of NO; the maximal inhibition by SAA and CPR (at 200 mg/L) was 29.69% and 33.65%, respectively (Figure 1).

Figure 1: Effects of the four botanical extracts on nitrite accumulation in macrophages stimulated by LPS plus IFN-γ. P∗ < 0.05, P∗∗ < 0.01 versus the model group (M); n = 6.

### 3.2. FGR, FSC, CPR, and SAA-Induced Vasodilation

Ach-elicited relaxation in aortic rings was used to verify intact versus denuded endothelium (Figure 2). FGR and CPR relaxed isolated aortic rings in a concentration-dependent and endothelium-independent manner. The maximum relaxation by FGR of aortic rings with and without endothelium was 91.28% ± 5.15% and 84.36% ± 23.80%, respectively, and the maximum relaxation by CPR of rings with and without endothelium was 75.51% ± 21.30% and 57.07% ± 18.63%, respectively. The half maximal effective concentration (EC50) was 17 mg/L for FGR and 61 mg/L for CPR in endothelium-denuded aortic rings, as shown in Figure 3(a) (a sketch of how such EC50 values can be estimated is given after Section 3.5).

Figure 2: Concentration-response curves showing endothelium-dependent relaxation by Ach in PE-pretreated rat aortic rings with intact endothelia (+Endo) and without intact endothelia (−Endo). n = 5, P∗∗ < 0.01 versus +Endo.

Figure 3: Concentration-response curves showing relaxation by the four botanical extracts in PE-pretreated rat aortic rings with intact endothelia (+Endo + Control) and without intact endothelia (−Endo + Control). The effects of exposure to 10 μmol/L ODQ on the FSC (b-A), SAA (b-C), FGR (a-A), and CPR (a-B) groups of PE (1 μmol/L) pretreated rings (−Endo + ODQ); the concentration-response curves of FSC (b-B) and SAA (b-D) with Indo pretreatment; the concentration-response curves of FSC (b-A) and SAA (b-C) with L-NAME pretreatment. P∗ < 0.05, P∗∗ < 0.01 versus the −Endo group; n = 4.

SAA and FSC relaxed isolated aortic rings in a concentration-dependent and endothelium-dependent manner. The maximum relaxation of isolated aortic rings by SAA with and without endothelium was 81.66% ± 7.36% and 5.20% ± 1.62%, respectively, and the maximum relaxation induced by FSC with and without endothelium was 70.70% ± 6.12% and 7.53% ± 14.08%, respectively. The EC50 was 45 mg/L for SAA and 40 mg/L for FSC in aortic rings with intact endothelium, as shown in Figure 3(a).

To evaluate the involvement of NO/cGMP signaling in endothelium-dependent vasodilation, the aortic rings were pretreated with ODQ (10 μmol/L) or L-NAME (100 μmol/L) for 15 minutes each. The soluble guanylate cyclase (sGC) inhibitor ODQ affected FGR- and CPR-induced vasodilation (Figure 3(a)).
The FSC- and SAA-induced relaxations of the aortic tissue were inhibited by pretreatment with ODQ or the nitric oxide synthase blocker L-NAME in a concentration-dependent manner (Figure 3(b)).

To investigate the involvement of the cyclooxygenase (COX)/PGI2 pathway, one set of aortic tissue was pretreated with indomethacin (10 μmol/L), a nonselective inhibitor of COX. The relaxation curves of FSC and SAA were not significantly altered by blockage of the PGI2 pathway (Figure 3(b)).

### 3.3. Effects of FGR and CPR on Endogenous Vasoconstrictors

PE, 5-HT, Ang II, ET-1, PG2α, Vaso, and Dopa are all endogenous vasoconstrictors that play key roles in maintaining vascular tension [41]. To study endothelium-independent vasodilation, the effects of the herb elements on vasocontractility were examined. Aortic rings without endothelium were pretreated with 17 mg/L of FGR or 61 mg/L of CPR. FGR exerted inhibitory effects on the vasocontraction induced by Dopa, Ang II, ET-1, and Vaso in a dose-dependent fashion (Figure 4(a)); the maximal inhibitions by FGR were 38.40%, 50.71%, 59.58%, and 33.67% for Dopa-, AngII-, ET-1-, and Vaso-induced contractions, respectively. However, FGR failed to suppress vasocontraction induced by PE, PGF2α, and 5-HT (see Supplemental Figure 1 in the Supplementary Material available online at https://doi.org/10.1155/2017/1021284). CPR significantly inhibited vasoconstriction in the presence of Ang II, Dopa, PGF2α, 5-HT, PE, Vaso, and ET-1 by 86.75%, 59.57%, 74.55%, 41.84%, 64.60%, 79.51%, and 60.55%, respectively (Figure 4(b)).

Figure 4: Effects of the four botanical extracts on endothelium-denuded aortic tissue exposed to endogenous vasoconstrictors. Inhibition by FGR (a): the contraction curves of Dopa, AngII, Vaso, and ET-1. Inhibition by CPR (b): the contraction curves of Dopa, PGF2α, AngII, 5-HT, PE, Vaso, and ET-1. P∗ < 0.05, P∗∗ < 0.01 versus the +Endo control group; n = 5.

### 3.4. Effects of FGR and CPR on Potassium Channels

Potassium channels are important to vascular relaxation. There are many types of potassium channels in vascular smooth muscle, including calcium-activated potassium channels (KCa), ATP-sensitive K+ channels (KATP), and inwardly rectifying potassium channels (KIR). To test the possible involvement of K+ channels in the relaxations induced by FGR and CPR, endothelium-denuded rings were preincubated with the KCa blocker TEA, the KATP blocker glibenclamide, or the KIR blocker BaCl2 for 15 minutes. In each case, neither FGR- nor CPR-induced vascular relaxation was inhibited by glibenclamide, TEA, or BaCl2 (Figure 5).

Figure 5: Concentration-response curves showing relaxation induced by FGR (a) and CPR (b) compared to control in endothelium-free tissues pretreated with potassium channel inhibitors: 3 mmol/L TEA, 10 μmol/L Glib, and 100 μmol/L BaCl2; n = 6.

### 3.5. Effects of FGR and CPR on Extracellular Calcium Influx and Intracellular Calcium Release

Endogenous vasoconstrictors, such as PE, contract vascular smooth muscle mainly through the activation of receptor-operated calcium channels (ROCC), while KCl mainly activates potential-dependent Ca2+ channels, all of which result in both extracellular calcium influx and intracellular calcium release.
### 3.3. Effects of FGR and CPR on Endogenous Vasoconstrictors

PE, 5-HT, Ang II, ET-1, PGF2α, Vaso, and Dopa are endogenous vasoconstrictors that play key roles in maintaining vascular tone [41]. To study endothelium-independent vasodilation, the effects of the extracts on vasocontractility were examined in endothelium-denuded aortic rings pretreated with 17 mg/L of FGR or 61 mg/L of CPR. FGR exerted concentration-dependent inhibitory effects on the contractions evoked by Dopa, Ang II, ET-1, and Vaso (Figure 4(a)); the maximal inhibitions by FGR were 38.40%, 50.71%, 59.58%, and 33.67% for Dopa-, AngII-, ET-1-, and Vaso-induced contractions, respectively. However, FGR failed to suppress the contractions induced by PE, PGF2α, and 5-HT (see Supplemental Figure 1 in the Supplementary Material available online at https://doi.org/10.1155/2017/1021284). CPR significantly inhibited the contractions evoked by Ang II, Dopa, PGF2α, 5-HT, PE, Vaso, and ET-1 by 86.75%, 59.57%, 74.55%, 41.84%, 64.60%, 79.51%, and 60.55%, respectively (Figure 4(b)).

Figure 4: Effects of the botanical extracts on endothelium-denuded aortic tissue exposed to endogenous vasoconstrictors. (a) Contraction curves of Dopa, AngII, Vaso, and ET-1 inhibited by FGR. (b) Contraction curves of Dopa, PGF2α, AngII, 5-HT, PE, Vaso, and ET-1 inhibited by CPR. ∗P < 0.05, ∗∗P < 0.01 versus +Endo control group; n = 5.

### 3.4. Effects of FGR and CPR on Potassium Channels

Potassium channels are important in vascular relaxation, and vascular smooth muscle expresses several types, including calcium-activated K+ channels (KCa), ATP-sensitive K+ channels (KATP), and inwardly rectifying K+ channels (KIR). To test the possible involvement of K+ channels in the relaxations induced by FGR and CPR, endothelium-denuded rings were preincubated for 15 minutes with the KCa blocker TEA (3 mmol/L), the KATP blocker glibenclamide (10 μmol/L), or the KIR blocker BaCl2 (100 μmol/L). In each case, neither the FGR- nor the CPR-induced vascular relaxation was inhibited by glibenclamide, TEA, or BaCl2 (Figure 5).

Figure 5: Concentration-response curves showing relaxation induced by FGR (a) and CPR (b) compared with control in endothelium-denuded tissues pretreated with the potassium channel inhibitors: 3 mmol/L TEA, 10 μmol/L Glib, or 100 μmol/L BaCl2; n = 6.

### 3.5. Effects of FGR and CPR on Extracellular Calcium Influx and Intracellular Calcium Release

Endogenous vasoconstrictors such as PE contract vascular smooth muscle mainly through activation of receptor-operated calcium channels (ROCC), whereas KCl mainly activates voltage-dependent Ca2+ channels; both routes result in extracellular calcium influx and intracellular calcium release. To determine whether calcium-mediated vasoconstriction is affected by FGR and CPR, endothelium-denuded aortic rings were exposed to Ca2+-free K-H solution, in which the addition of 1 μmol/L PE induced small tonic contractions most likely driven by the release of intracellular Ca2+ from endoplasmic reticulum stores. Under this extracellular Ca2+-free condition, CPR reduced the PE-induced contractions more effectively than FGR (Figure 6).

Figure 6: Effects of the botanical extracts on calcium influx and cytoplasmic calcium release. The concentration-response curves of CaCl2 in Ca2+-free media were inhibited by FGR (a) and CPR (b); the transient contraction induced by PE in Ca2+-free media was inhibited by FGR (c) and CPR (d). Maximal (100%) contraction is the 60 mmol/L KCl-induced contraction. ∗∗P < 0.01 versus control; n = 5–7.

Depolarization-elicited, voltage-dependent Ca2+ influx was then tested in high-K+ medium (Figure 6). The K+ (60 mmol/L)-stimulated, Ca2+-induced vasoconstriction was not inhibited by 17 mg/L of FGR but was suppressed by 61 mg/L of CPR.
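Contraction data in Sections 3.3–3.5 are expressed relative to the contraction evoked by 60 mmol/L KCl, which the Methods define as the 100% reference. A short sketch of that normalization follows; the tension readings are invented for illustration.

```python
# Normalize ring tension to the 60 mmol/L KCl-induced contraction,
# which the Methods define as the 100% reference. Tensions (grams)
# below are invented, not study data.
KCL_MAX_TENSION_G = 2.4  # this ring's response to 60 mmol/L KCl

def pct_of_kcl_max(tension_g):
    return 100.0 * tension_g / KCL_MAX_TENSION_G

# A vasoconstrictor response before and after extract pretreatment:
before_g, after_g = 2.0, 0.9
print(pct_of_kcl_max(before_g))                 # ~83.3% of KCl maximum
print(pct_of_kcl_max(after_g))                  # 37.5%
print(100.0 * (before_g - after_g) / before_g)  # 55.0% inhibition
```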
## 4. Discussion

The total saponins from Actinidia arguta radix (SAA), total flavonoids from Glycyrrhizae radix et rhizoma (FGR), total coumarins from Peucedani radix (CPR), and total flavonoids from Spatholobi caulis (FSC) were extracted and used in the present study. All four anti-inflammatory herbal extracts relaxed thoracic aortic rings in a concentration-dependent manner.
The rank order of the EC50 for relaxation by these extracts was as follows: Glycyrrhizae radix et rhizoma < Spatholobi caulis < Actinidia arguta radix < Peucedani radix.

The vascular relaxation evoked by SAA is endothelium dependent, and the vasodilatory effect of this extract from Actinidia arguta radix (Tengligen) is blocked by ODQ, a soluble guanylyl cyclase (sGC) inhibitor. Thus, our study is the first to reveal that an NO-cGMP-dependent pathway is critical for the action of SAA. Corosolic acid, a major component of the saponins from Actinidia arguta, is known to possess various biological properties, including antidiabetic, antiobesity, and anti-inflammatory activities [42–44]. The compound's efficacy in diabetes has led to the development of Glucosol (or GlucoFit), a commercially available product marketed primarily in Japan and the United States as a dietary supplement for weight loss and blood sugar balance. Inflammation and oxidative stress affect lipid and glucose metabolism and insulin resistance, which is linked to mitochondrial function [10]. TEO (2α,3α,24-trihydroxyurs-12-en-28-oic acid), a corosolic acid analogue, decreased the mitochondrial membrane potential and altered mitochondrial ultrastructure, which may underlie its antioxidative stress effects [45]; cGMP itself has also been reported to act on mitochondrial function [46]. On the other hand, corosolic acid has been shown to suppress glioblastoma cell proliferation by inhibiting the activation of signal transducer and activator of transcription-3 and nuclear factor-kappa B in tumor cells and tumor-associated macrophages [47]. Our analysis of GEO databases (National Cancer Institute) revealed a statistically significant reduction of sGC transcript levels in human glioma specimens. Pharmacologically manipulating endogenous cGMP generation in glioma cells, either by stimulating pGC with ANP/BNP or by blocking PDE with 3-isobutyl-1-methylxanthine/zaprinast, caused significant inhibition of glioma cell proliferation and colony formation. Our study proposes the new concept that suppressed expression of sGC, a key enzyme in the NO/cGMP pathway, may be associated with an aggressive course of glioma; sGC/cGMP signaling-targeted therapy may therefore be a favorable alternative to chemotherapy and radiotherapy for glioma and perhaps other tumors [48].

The relaxation induced by FSC was inhibited by L-NAME, indicating the involvement of NO in the vasodilatory action of this extract. Spatholobi caulis is a traditional blood-activating and stasis-dispelling herbal medicine that has been used to treat diseases related to blood stasis syndrome by inhibiting platelet aggregation and stimulating hematopoiesis. A recent study further revealed that FSC has proangiogenic activity in human umbilical vein endothelial cells (HUVECs) as well as in zebrafish [49]. In an LPS-activated RAW 264.7 cell model, a Spatholobi caulis MeOH extract (containing flavonoids) inhibited the expression of iNOS and COX-2 and suppressed the production of proinflammatory cytokines such as IL-1β and IL-6 [50]. Genistein, an isoflavonoid from this herb, has been reported to decrease the generation of ROS and malondialdehyde [51]. In mammalian cells, NO is produced by a family of NO synthases (NOS), of which three isoforms have been identified: neuronal NOS (nNOS), inducible NOS (iNOS), and endothelial NOS (eNOS).
In the vascular system, NO is generated from the conversion of L-arginine to L-citrulline by eNOS, which requires Ca2+/calmodulin, FAD, FMN, and tetrahydrobiopterin (BH4) as cofactors. Under inflammatory pathological conditions, the cofactors of eNOS can be oxidized, and eNOS then shifts to producing superoxide anion instead of NO. This state, referred to as "eNOS uncoupling," may further enhance inflammation [52]. Considering the significant anti-inflammatory effect of FSC, which markedly inhibited the expression of iNOS and proinflammatory cytokines, we speculate that the vasodilatory effect of FSC may be partially due to its support of eNOS function through antioxidative properties.

Glycyrrhizae radix (licorice root) is among the most widely used herbs in TCM. Licorice, the root extract of Glycyrrhiza glabra L., is used as a medicine for various diseases. Anti-inflammatory and antiallergic activities have been attributed to one of its main constituents, glycyrrhizin, and are mainly ascribed to the action of the aglycone, β-glycyrrhetinic acid, which has a steroid-like structure and is believed to have immunomodulatory properties [53]. Glycyrrhizin inhibits liver cell injury and is given intravenously for the treatment of chronic viral hepatitis and cirrhosis in Japan [54, 55]. It has also proven effective in the treatment of autoimmune hepatitis in one clinical trial [56]. We demonstrate a significant vasodilatory effect of FGR (total flavonoids from Glycyrrhizae radix et rhizoma) and show that pretreatment with FGR shifted the contraction curves of Dopa, AngII, Vaso, and ET-1 to the right. These endogenous vasoconstrictors regulate vascular tone via their respective receptors, mostly G protein-coupled, on smooth muscle. Although their overall mechanisms of action differ, these G protein-coupled receptors activate PLC, which generates DAG and IP3: DAG activates protein kinase C, which promotes myosin light chain phosphorylation, while IP3 induces calcium release from intracellular stores or activates VDCCs in the cell membrane, thereby regulating intracellular calcium concentration and vascular tone [57]. However, FGR failed to block Ca2+ influx or intracellular Ca2+ release. Glycyrrhetic acid, the active metabolite in licorice, inhibits 11-β-hydroxysteroid dehydrogenase type 2, with a resultant cortisol-driven mineralocorticoid effect and a tendency toward elevated sodium and reduced potassium levels. This aldosterone-like action is the fundamental basis for understanding the pharmacology of the extract [58]. However, glucocorticoids inhibit eNOS gene expression and reduce NO release through glucocorticoid receptor-mediated signaling [59], and they directly potentiate the contractions of rabbit and dog aortic strips to epinephrine and norepinephrine [60, 61]. Thus, the specific mechanisms underlying the relaxation of vascular smooth muscle by FGR need further study.

Khellactone (dihydroseselin) coumarins possess various activities, including calcium channel blockade and inhibition of platelet aggregation [62, 63]. Khellactone coumarins with the 3′S,4′S configuration (praeruptorins A, B, C, and D) were first isolated from the dried roots of P. praeruptorum (Peucedani radix), which is commonly used in Traditional Chinese Medicine (TCM) for the treatment of cough and upper respiratory infections and as an antipyretic, antitussive, and mucolytic agent.
Using spontaneously hypertensive rats as an experimental model, praeruptorin C was shown to improve vascular hypertrophy by decreasing smooth muscle cell (SMC) size and collagen content and by increasing NO production [64]. The vasodilatory effects of praeruptorin A have been confirmed in isolated rabbit tracheas and pulmonary arteries, as well as in swine coronary artery [65, 66]. In our experimental setting, the vascular relaxation induced by total coumarins from Peucedani radix (CPR) appears unrelated to sGC/cGMP and is instead associated with blockade of both VDCC and ROCC.

## 5. Conclusion

The present study shows that extracts from four herbs relax thoracic aortic tissue isolated from rats. Glycyrrhizae radix et rhizoma and Peucedani radix induced vasorelaxation independent of intact endothelium; however, their respective mechanisms of action appear to differ. Vasorelaxation induced by Peucedani radix appears to be mainly related to effects on intracellular calcium homeostasis, specifically the inhibition of Ca2+ influx and intracellular Ca2+ release. Dopa-, AngII-, Vaso-, and ET-1-induced vasoconstriction was inhibited by Glycyrrhizae radix et rhizoma, but the details of its mechanism of action need further study. The vasorelaxation induced by Spatholobi caulis and Actinidia arguta radix is endothelium dependent, and their mechanisms of relaxation may involve the NO-cGMP pathway. The distinct vasodilatory effects of these four anti-inflammatory botanical extracts are significant and novel, paving the way not only for further mechanistic study but also for the design of new herbal formulas for preventive and/or therapeutic use.

---
*Source: 1021284-2017-11-28.xml*
--- ## Abstract Inflammation plays a pivotal role in the development and progression of cardiovascular diseases, in which, the endothelium dysfunction has been a key element. The current study was designed to explore the vasodilatory effect of anti-inflammatory herbs which have been traditionally used in different clinical applications. The total saponins fromActinidia arguta radix (SAA), total flavonoids fromGlycyrrhizae radix et rhizoma (FGR), total coumarins fromPeucedani radix (CPR), and total flavonoids fromSpatholobi caulis (FSC) were extracted. The isometric measurement of vasoactivity was used to observe the effects of herbal elements on the isolated aortic rings with or without endothelium. To understand endothelium-independent vasodilation, the effects of herb elements on agonists-induced vasocontractility and on the contraction of endothelium-free aortic rings exposed to a Ca2+-free medium were examined. Furthermore, the role of nitric oxide signaling in endothelium-dependent vasodilation was also evaluated. In summary, FGR and FSC exhibit potent anti-inflammatory effects compared to CPR and SAA. FGR exerts the strongest vasodilatory effect, while CPR shows the least. The relaxation induced by SAA and FSC required intact endothelia. The mechanism of this vasodilation might involve eNOS. CPR-mediated vasorelaxation appears to involve interference with intracellular calcium homeostasis, blocking Ca2+ influx or releasing intracellular Ca2+. --- ## Body ## 1. Introduction Inflammation plays a pivotal role in the development and progression of several cardiovascular diseases, including atherosclerosis [1]. Numerous epidemiologic studies support the concept that vascular inflammation correlates with an increased risk of atherosclerosis [2]. While inflammation contributes to cardiovascular pathology, the question remains whether inhibition of inflammation prevents or even reverses the progress of vascular diseases. Multiple clinical studies have shown that the use of statins reduces cardiovascular morbidity and mortality [3, 4]. However, a direct test of the inflammatory hypothesis of cardiovascular disease requires an agent that can inhibit inflammation without affecting other components of atherothrombosis, while also exhibiting an acceptable safety profile. To address this issue, a cardiovascular inflammation reduction trial (CIRT: ClinicalTrials.gov.ID# NCT01594333) has begun at the Brigham and Women’s Hospital and the National Heart, Lung, and Blood Institute (NHLBI) that proposes the use of very-low-dose-methotrexate (VLDM, 10 mg weekly) on 7,000 patients with stable coronary artery disease and persistent elevations of high-sensitivity C-reactive protein (Hs-CRP). Despite its anti-inflammatory effects, methotrexate is an antimetabolite drug that is used to treat cancers and has significant side effects at high doses. Therefore, alternative therapeutic options should be considered.The anti-inflammatory traditional Chinese medicines as well as botanic elements have been studied for years [5–8]. On the other hand, botanically derived elements have been recognized for the beneficial effect on cardiovascular and metabolism systems [9, 10]. To further explore the potential of using those agents on the cardiovascular system, we studied the vasodilatory effects of four of these herb elements and reviewed their medical applications (Table 1).Actinidia arguta radix (Tengligen) is a member of the Actinidiaceae family. 
Pharmacology research has revealed anticancer, immune regulation and hypotensive activity fromActinidia arguta radix [11, 12]. To the best of our knowledge, there are no scientific reports on the blood-pressure lowering mechanisms of anyActinidia arguta radix extracts.Glycyrrhizaeradix et rhizoma (Gancao) has multiple therapeutic uses, some of which are related to its anti-inflammatory properties. These include treating cough, relieving pain, clearing heat, and eliminating toxins and poisons [13]. Modern pharmacology research has also reported that the flavonoids fromGlycyrrhizaeradix et rhizome (FGR) have antioxidant properties [14] with therapeutic benefits including the inhibition of cough and treatment of bacterial infections [15, 16]; however, its effects on vascular contractility is unknown.Peucedani radix (Qianhu) has been an important agent for treating respiratory symptoms and diseases through the centuries. This herb is traditionally characterized as dispelling wind and removing heat, relieving cough, and resolving phlegm [17] and has been used to relieve the symptoms of influenza and asthma.Peucedani radix has strong anti-inflammatory properties as one of its therapeutic mechanisms [18, 19]. Many coumarin constituents have been extracted from this herb and they are reported to be responsible, in major, for its biological activity [20]. It has been noted thatPeucedani radix can exert beneficial effects in hypoxic pulmonary hypertension [21]. However, the action of this drug on the circulatory system is largely unknown.Spatholobi caulis (Jixueteng) has been traditionally used for irregular menstruation, numbness, and inflammatory arthralgia [22]. According to pharmacology research, it can reduce oxidative stress [23] and inflammation [24]. Several studies have demonstrated in vitro and in vivo cytotoxic effects ofSpatholobicaulis extracts on tumor cells [25–28]. It has been suggested that the flavonoids ofSpatholobi caulis (FSC) are the major active components of its therapeutic actions [29]. A few studies have demonstrated the impact ofSpatholobi caulis on cardiovascular conditions. This botanical agent interferes with platelet aggregation via interference at the glycoprotein IIb/IIIa receptor [30]. It has also been shown to reduce plasma lipid levels in hyperlipidemic quail [31]. In a rat model of cerebral ischemia, Lee et al. found a significant increase in cerebral blood flow after treatment withSpatholobi caulis [32, 33]. Although it is speculated that the blockage of calcium channels is responsible for this action [34], a further vascular pharmacological study of this botanical agent is warranted.Table 1 Botanical extracts that possess anti-inflammatory properties. Medicinals Photos Nature of medicinals Flavor of medicinals Functions Clinical application Reference Spatholobus suberectusDunn Warm Bitter, sweet Promoting blood flow, tonifying blood, regulating menstruation and relieving pain, relaxing sinews, and activating collaterals Treatment of irregular menstruation, dysmenorrhoea, amenorrhea, rheumatic arthralgia, paralysis or numbness in the limbs, blood deficiency, and chlorosis Chinese Pharmacopoeia (Version 2015) Peucedanum praeruptorumDunn Slightly cold Bitter, pungent Directing qi downward to resolve phlegm, dispelling pathogenic wind, and clearing heat Treatment of respiratory symptoms such as phlegm-heat asthma, yellow viscous phlegm, cough, and phlegm induced by wind-heat Chinese Pharmacopoeia (Version 2015) Actinidia arguta(Sieb. &Zucc.) Planch. 
ex Miq Cool Sour, astringent Clearing heat and detoxicating, dispelling wind-dampness, diuretic, and haemostatic Treatment of rheumatism, pain, and jaundice The national compilation of Chinese herbal medicine Glycyrrhiza uralensisFisch Calm Sweet Tonifying spleen and qi, clearing heat and detoxicating, dispelling phlegm and suppressing cough, relieving spasm and pain, mixed herbs Treatment of deficiency of spleen and stomach, fatigue, palpitations, shortness of breath, cough and phlegm in throat, abdominal pain, spasm of limbs, carbuncle sore, and reduce drug toxicity Chinese Pharmacopoeia (Version 2015)In the present study, we examined anti-inflammatory effects of the four botanical extracts by using LPS and IFN-γ-stimulated macrophages. The isometric vasoactivity was measured to evaluate the vasodilating properties of the extracts on isolated rat thoracic aortas. The action mechanisms were explored through pharmacological examinations. ## 2. Methods and Materials ### 2.1. Herbs and Chemicals Herbs were purchased from Shanghai Yang He Tang TCM Pieces, Ltd. Company (Shanghai, China) and authenticated by the Shanghai Institute of Food and Drug Control. Acetylcholine (Ach), phenylephrine (PE), NG-nitro-L-arginine methyl ester (L-NAME), indomethacin (Indo), 1H-[1,2,4]-oxadiazole-[4,3-a]-quinoxalin-1-one (ODQ), glibenclamide (Glib), tetraethylammonium (TEA), prostaglandin 2α (PG2α), BaCl2, angiotensin II (AngII), 5-hydroxytryptamine (5-HT), dopamine (Dopa), endothelin-1 (ET-1), RPMI 1640 medium, IFN-γ, and lipopolysaccharide (LPS) were all purchased from Sigma Chemical Co. (St. Louis, MO, USA). Ethylene glycol bis (2-aminoethyl ether) tetraacetic acid (EGTA) and other inorganic salts were all purchased from Sinopharm Chemical Reagent Co., Ltd. (Batch number F20060620). Ach, PE, TEA, AngII, 5-HT, dopamine, and ET-1 solutions were prepared with distilled water. Glibenclamide and ODQ solutions were prepared with DMSO. Control experiments demonstrated that the highest DMSO concentration (1 : 400) had no effect on vascular tone. ### 2.2. Cell Cultures RAW 264.7 cells were used in current study for the following considerations. First, the RAW 264.7 cell line is a pure clone that can be grown in a pretty much identical and indefinitely manner which is necessary for our drug screen platform. Second, RAW 264.7 cells are transformed and are not functional for certain signaling pathways such as activated inflammasomes [35], which will benefit the purpose of our designed study in which anti-inflammatory effect of the herbs will be evaluated. RAW 264.7 cells were obtained from the American Tissue Culture Collection. The cells were maintained in complete RPMI 1640 media supplemented with 10% heat-inactivated FBS and 1.5% sodium bicarbonate at 37°C in a humidified 5% CO2 atmosphere. Cells were plated at a density of 1 × 105 cells/well in 96-well plates or 2 × 106 cells in each 30 mm dish and allowed to attach for 2 hours. For stimulation, the media were replaced with fresh RPMI 1640, and the cells were then stimulated with 10 U/ml of IFN-γ and 100 ng/mL of LPS in the presence or absence of FGR for the indicated periods. ### 2.3. Experimental Animal and Blood Vessel Ring Preparations Male Sprague-Dawley rats (250–300 g) were obtained from Shanghai Slac Experimental Company, Ltd. (Shanghai, China). The animal procedures were carried out in strict accordance with theGuide for the Care and Use of Laboratory Animals (Shanghai University of Traditional Chinese Medicine). 
All experiments were performed under license from the Government of China.The preparation of the vascular rings was performed as described by Zhang et al. [36]. Briefly, the rats were sacrificed by decapitation and their thoracic aortas were rapidly and carefully dissected away into ice-cold freshly prepared Krebs-Henseleit (K-H) solution. The aortas were cut into ring segments of approximately 3 mm wide. For some aortic rings, the endothelial layer was mechanically removed by gently rubbing the luminal surfaces of the aortic rings back and forth several times. ### 2.4. Recording of Isometric Vascular Tone Each ring was suspended by means of two L-shape stainless-steel hooks in an organ bath filled with Krebs-Henseleit solution maintained at 37°C while being continuously infused with bubbled 95% O2 and 5% CO2. The lower hooks were fixed to the bottom of the organ bath and the upper wires were attached to an isometric force transducer connected to a data acquisition system (PowerLab/4P ADInstruments, Australia) for continuous recording of tension. The baseline load placed on the aortic rings was 2.0 g.Examination of endothelial integrity was performed as described by Xing et al. and others [37–39]. Briefly, endothelial integrity or functional removal was verified by the appropriate relaxation response to 10 μmol/L acetylcholine on 1 μmol/L phenylephrine contracted vessels. ### 2.5. Experimental Protocol #### 2.5.1. Nitric Oxide Production RAW 264.7 cells were plated in 96-well plates (1 × 105/well) and stimulated with 100 ng/mL LPS and 10 U/mL IFN-γ for 24 h. The cell-free culture media were collected and analyzed for nitrite accumulation as an indicator of NO production using the Griess reagent. The NO assay was performed as described by Zhang et al. [39]. Briefly, 100 μL of Griess reagent (0.1% naphthylethylenediamine and 1% sulfanilamide in 5% H3PO4 solution) was added to an equal volume of supernatant from sample-treated cells. The plates were incubated for 10 minutes and then were read at 540 nm against a standard curve of sodium nitrite. Percent inhibition was expressed as 100 × [1 − (NO release with sample − spontaneous release)/(NO release without sample − spontaneous release)]. #### 2.5.2. Testing the Effects of FGR, FSC, CPR, and SAA on PE-Induced Constriction The vasodilatory effects of the four botanical extracts were tested in both endothelium-intact and endothelium-denuded rings constricted by PE (1μmol/L). Once a plateau of PE contraction was attained, each of the botanical extracts was applied cumulatively according to a concentration gradient. At the end of each experiment, forskolin was added to induce blood vessel relaxation and the tension of aortic rings was recorded.To attempt to understand the mechanisms of vascular relaxation, nitric oxide synthase inhibitor L-NAME, cyclooxygenase inhibitor indomethacin, soluble guanylyl cyclase inhibitor ODQ, adrenergicβ-receptor inhibitor propranolol, KATP blocker glibenclamide, KCa blocker TEA, and KIR blocker BaCl2 were individually used to pretreat endothelium-denuded rings for 15 min, respectively, prior to addition of 1 μmol/L of phenylephrine. Afterwards, relaxations induced by each of the botanical extracts were observed, including the concentration-dependent vasodilation. #### 2.5.3. Measuring the Effects of FGR, FSC, CPR, and SAA on Vasoconstrictors The endothelium-free aortic rings were first exposed to constrictors at different concentrations. 
This included Dopa (0.1, 1, 10, 100, and 1,000 nmol/L), 5-HT (10, 100, 1000, 10,000, and 100,000 nmol/L), Ang II (0.1, 1, 10, 100, and 1,000 nmol/L), K+ (10.00, 15.85, 25.12, 39.81, 63.10, and 100.00 mmol/L), Vaso (0.1, 1, 10, 100, and 1,000 nmol/L), ET-1 (10, 25, 50, 75, and 100 nmol/L), PG2α (1, 10, 100, 1,000, and 10,000 nmol/L), and PE (1, 10, 100, 1,000, and 10,000 nmol/L). After washing, the rings were incubated individually with one of the four botanical extracts at concentrations of EC50 for 10 minutes. Contractions induced by vasoconstrictors were again observed. The level of vasoconstriction in response to 60 mmol/L KCl was used as the maximum (100%). #### 2.5.4. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Influx Endothelium-free aorta rings were washed and treated with calcium-free, high-K+ solution (containing 100 μmol/L EGTA and 60 mmol/L KCl). Then, the preparations were incubated and cumulatively exposed to increasing concentrations of CaCl2 (0.4, 0.8, 1.2, 1.6, 2.0, and 2.4 mmol/L). The vasoconstrictor responses to CaCl2 were compared between four groups using each of the botanical extracts as well as a control group. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media was used as the maximum (100%). #### 2.5.5. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Release Endothelium-free aortic rings were washed and exposed to calcium-free Krebs-Henseleit solution (containing 100μmol/L EGTA) for 10 minutes. After this, 1 μmol/L of phenylephrine was added. This resulted in small tonic contractions that were mainly caused by the release of intracellular calcium. Once a plateau of PE contraction was attained, the bath solution was instead in calcium-free Krebs-Henseleit solution (containing 100 μmol/L EGTA) for 5 minutes. Four groups were exposed to each of the botanical extracts at a concentration of EC50 in addition to a control group, and these groups were compared. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media is used as the maximum (100%). #### 2.5.6. Effect of Four Botanical Extracts on Organ Tissue Viability The effects of four botanical extracts on the viability of freshly isolated aortic organ tissue were tested by repeatedly treating the extracts with the same aortic rings either with or without endothelium. The multiple treatments did not affect the contractility of the vessel induced by 60 mmol/L K+. The vasodilation towards acetylcholine of the aortic rings was also intact after several times of applications of botanical extracts. ### 2.6. Statistical Analysis All of results are expressed as mean ± SD. Statistical significance was analyzed using unpaired Student’st-tests for comparisons between two groups. A value of P<0.05 was considered statistically significant. ## 2.1. Herbs and Chemicals Herbs were purchased from Shanghai Yang He Tang TCM Pieces, Ltd. Company (Shanghai, China) and authenticated by the Shanghai Institute of Food and Drug Control. Acetylcholine (Ach), phenylephrine (PE), NG-nitro-L-arginine methyl ester (L-NAME), indomethacin (Indo), 1H-[1,2,4]-oxadiazole-[4,3-a]-quinoxalin-1-one (ODQ), glibenclamide (Glib), tetraethylammonium (TEA), prostaglandin 2α (PG2α), BaCl2, angiotensin II (AngII), 5-hydroxytryptamine (5-HT), dopamine (Dopa), endothelin-1 (ET-1), RPMI 1640 medium, IFN-γ, and lipopolysaccharide (LPS) were all purchased from Sigma Chemical Co. (St. Louis, MO, USA). 
Ethylene glycol bis (2-aminoethyl ether) tetraacetic acid (EGTA) and other inorganic salts were all purchased from Sinopharm Chemical Reagent Co., Ltd. (Batch number F20060620). Ach, PE, TEA, AngII, 5-HT, dopamine, and ET-1 solutions were prepared with distilled water. Glibenclamide and ODQ solutions were prepared with DMSO. Control experiments demonstrated that the highest DMSO concentration (1 : 400) had no effect on vascular tone. ## 2.2. Cell Cultures RAW 264.7 cells were used in current study for the following considerations. First, the RAW 264.7 cell line is a pure clone that can be grown in a pretty much identical and indefinitely manner which is necessary for our drug screen platform. Second, RAW 264.7 cells are transformed and are not functional for certain signaling pathways such as activated inflammasomes [35], which will benefit the purpose of our designed study in which anti-inflammatory effect of the herbs will be evaluated. RAW 264.7 cells were obtained from the American Tissue Culture Collection. The cells were maintained in complete RPMI 1640 media supplemented with 10% heat-inactivated FBS and 1.5% sodium bicarbonate at 37°C in a humidified 5% CO2 atmosphere. Cells were plated at a density of 1 × 105 cells/well in 96-well plates or 2 × 106 cells in each 30 mm dish and allowed to attach for 2 hours. For stimulation, the media were replaced with fresh RPMI 1640, and the cells were then stimulated with 10 U/ml of IFN-γ and 100 ng/mL of LPS in the presence or absence of FGR for the indicated periods. ## 2.3. Experimental Animal and Blood Vessel Ring Preparations Male Sprague-Dawley rats (250–300 g) were obtained from Shanghai Slac Experimental Company, Ltd. (Shanghai, China). The animal procedures were carried out in strict accordance with theGuide for the Care and Use of Laboratory Animals (Shanghai University of Traditional Chinese Medicine). All experiments were performed under license from the Government of China.The preparation of the vascular rings was performed as described by Zhang et al. [36]. Briefly, the rats were sacrificed by decapitation and their thoracic aortas were rapidly and carefully dissected away into ice-cold freshly prepared Krebs-Henseleit (K-H) solution. The aortas were cut into ring segments of approximately 3 mm wide. For some aortic rings, the endothelial layer was mechanically removed by gently rubbing the luminal surfaces of the aortic rings back and forth several times. ## 2.4. Recording of Isometric Vascular Tone Each ring was suspended by means of two L-shape stainless-steel hooks in an organ bath filled with Krebs-Henseleit solution maintained at 37°C while being continuously infused with bubbled 95% O2 and 5% CO2. The lower hooks were fixed to the bottom of the organ bath and the upper wires were attached to an isometric force transducer connected to a data acquisition system (PowerLab/4P ADInstruments, Australia) for continuous recording of tension. The baseline load placed on the aortic rings was 2.0 g.Examination of endothelial integrity was performed as described by Xing et al. and others [37–39]. Briefly, endothelial integrity or functional removal was verified by the appropriate relaxation response to 10 μmol/L acetylcholine on 1 μmol/L phenylephrine contracted vessels. ## 2.5. Experimental Protocol ### 2.5.1. Nitric Oxide Production RAW 264.7 cells were plated in 96-well plates (1 × 105/well) and stimulated with 100 ng/mL LPS and 10 U/mL IFN-γ for 24 h. 
The cell-free culture media were collected and analyzed for nitrite accumulation as an indicator of NO production using the Griess reagent. The NO assay was performed as described by Zhang et al. [39]. Briefly, 100 μL of Griess reagent (0.1% naphthylethylenediamine and 1% sulfanilamide in 5% H3PO4 solution) was added to an equal volume of supernatant from sample-treated cells. The plates were incubated for 10 minutes and then were read at 540 nm against a standard curve of sodium nitrite. Percent inhibition was expressed as 100 × [1 − (NO release with sample − spontaneous release)/(NO release without sample − spontaneous release)]. ### 2.5.2. Testing the Effects of FGR, FSC, CPR, and SAA on PE-Induced Constriction The vasodilatory effects of the four botanical extracts were tested in both endothelium-intact and endothelium-denuded rings constricted by PE (1μmol/L). Once a plateau of PE contraction was attained, each of the botanical extracts was applied cumulatively according to a concentration gradient. At the end of each experiment, forskolin was added to induce blood vessel relaxation and the tension of aortic rings was recorded.To attempt to understand the mechanisms of vascular relaxation, nitric oxide synthase inhibitor L-NAME, cyclooxygenase inhibitor indomethacin, soluble guanylyl cyclase inhibitor ODQ, adrenergicβ-receptor inhibitor propranolol, KATP blocker glibenclamide, KCa blocker TEA, and KIR blocker BaCl2 were individually used to pretreat endothelium-denuded rings for 15 min, respectively, prior to addition of 1 μmol/L of phenylephrine. Afterwards, relaxations induced by each of the botanical extracts were observed, including the concentration-dependent vasodilation. ### 2.5.3. Measuring the Effects of FGR, FSC, CPR, and SAA on Vasoconstrictors The endothelium-free aortic rings were first exposed to constrictors at different concentrations. This included Dopa (0.1, 1, 10, 100, and 1,000 nmol/L), 5-HT (10, 100, 1000, 10,000, and 100,000 nmol/L), Ang II (0.1, 1, 10, 100, and 1,000 nmol/L), K+ (10.00, 15.85, 25.12, 39.81, 63.10, and 100.00 mmol/L), Vaso (0.1, 1, 10, 100, and 1,000 nmol/L), ET-1 (10, 25, 50, 75, and 100 nmol/L), PG2α (1, 10, 100, 1,000, and 10,000 nmol/L), and PE (1, 10, 100, 1,000, and 10,000 nmol/L). After washing, the rings were incubated individually with one of the four botanical extracts at concentrations of EC50 for 10 minutes. Contractions induced by vasoconstrictors were again observed. The level of vasoconstriction in response to 60 mmol/L KCl was used as the maximum (100%). ### 2.5.4. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Influx Endothelium-free aorta rings were washed and treated with calcium-free, high-K+ solution (containing 100 μmol/L EGTA and 60 mmol/L KCl). Then, the preparations were incubated and cumulatively exposed to increasing concentrations of CaCl2 (0.4, 0.8, 1.2, 1.6, 2.0, and 2.4 mmol/L). The vasoconstrictor responses to CaCl2 were compared between four groups using each of the botanical extracts as well as a control group. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media was used as the maximum (100%). ### 2.5.5. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Release Endothelium-free aortic rings were washed and exposed to calcium-free Krebs-Henseleit solution (containing 100μmol/L EGTA) for 10 minutes. After this, 1 μmol/L of phenylephrine was added. This resulted in small tonic contractions that were mainly caused by the release of intracellular calcium. 
Once a plateau of PE contraction was attained, the bath solution was instead in calcium-free Krebs-Henseleit solution (containing 100 μmol/L EGTA) for 5 minutes. Four groups were exposed to each of the botanical extracts at a concentration of EC50 in addition to a control group, and these groups were compared. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media is used as the maximum (100%). ### 2.5.6. Effect of Four Botanical Extracts on Organ Tissue Viability The effects of four botanical extracts on the viability of freshly isolated aortic organ tissue were tested by repeatedly treating the extracts with the same aortic rings either with or without endothelium. The multiple treatments did not affect the contractility of the vessel induced by 60 mmol/L K+. The vasodilation towards acetylcholine of the aortic rings was also intact after several times of applications of botanical extracts. ## 2.5.1. Nitric Oxide Production RAW 264.7 cells were plated in 96-well plates (1 × 105/well) and stimulated with 100 ng/mL LPS and 10 U/mL IFN-γ for 24 h. The cell-free culture media were collected and analyzed for nitrite accumulation as an indicator of NO production using the Griess reagent. The NO assay was performed as described by Zhang et al. [39]. Briefly, 100 μL of Griess reagent (0.1% naphthylethylenediamine and 1% sulfanilamide in 5% H3PO4 solution) was added to an equal volume of supernatant from sample-treated cells. The plates were incubated for 10 minutes and then were read at 540 nm against a standard curve of sodium nitrite. Percent inhibition was expressed as 100 × [1 − (NO release with sample − spontaneous release)/(NO release without sample − spontaneous release)]. ## 2.5.2. Testing the Effects of FGR, FSC, CPR, and SAA on PE-Induced Constriction The vasodilatory effects of the four botanical extracts were tested in both endothelium-intact and endothelium-denuded rings constricted by PE (1μmol/L). Once a plateau of PE contraction was attained, each of the botanical extracts was applied cumulatively according to a concentration gradient. At the end of each experiment, forskolin was added to induce blood vessel relaxation and the tension of aortic rings was recorded.To attempt to understand the mechanisms of vascular relaxation, nitric oxide synthase inhibitor L-NAME, cyclooxygenase inhibitor indomethacin, soluble guanylyl cyclase inhibitor ODQ, adrenergicβ-receptor inhibitor propranolol, KATP blocker glibenclamide, KCa blocker TEA, and KIR blocker BaCl2 were individually used to pretreat endothelium-denuded rings for 15 min, respectively, prior to addition of 1 μmol/L of phenylephrine. Afterwards, relaxations induced by each of the botanical extracts were observed, including the concentration-dependent vasodilation. ## 2.5.3. Measuring the Effects of FGR, FSC, CPR, and SAA on Vasoconstrictors The endothelium-free aortic rings were first exposed to constrictors at different concentrations. This included Dopa (0.1, 1, 10, 100, and 1,000 nmol/L), 5-HT (10, 100, 1000, 10,000, and 100,000 nmol/L), Ang II (0.1, 1, 10, 100, and 1,000 nmol/L), K+ (10.00, 15.85, 25.12, 39.81, 63.10, and 100.00 mmol/L), Vaso (0.1, 1, 10, 100, and 1,000 nmol/L), ET-1 (10, 25, 50, 75, and 100 nmol/L), PG2α (1, 10, 100, 1,000, and 10,000 nmol/L), and PE (1, 10, 100, 1,000, and 10,000 nmol/L). After washing, the rings were incubated individually with one of the four botanical extracts at concentrations of EC50 for 10 minutes. Contractions induced by vasoconstrictors were again observed. 
The level of vasoconstriction in response to 60 mmol/L KCl was used as the maximum (100%). ## 2.5.4. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Influx Endothelium-free aorta rings were washed and treated with calcium-free, high-K+ solution (containing 100 μmol/L EGTA and 60 mmol/L KCl). Then, the preparations were incubated and cumulatively exposed to increasing concentrations of CaCl2 (0.4, 0.8, 1.2, 1.6, 2.0, and 2.4 mmol/L). The vasoconstrictor responses to CaCl2 were compared between four groups using each of the botanical extracts as well as a control group. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media was used as the maximum (100%). ## 2.5.5. Measuring the Effects of FGR, FSC, CPR, and SAA on Calcium Release Endothelium-free aortic rings were washed and exposed to calcium-free Krebs-Henseleit solution (containing 100μmol/L EGTA) for 10 minutes. After this, 1 μmol/L of phenylephrine was added. This resulted in small tonic contractions that were mainly caused by the release of intracellular calcium. Once a plateau of PE contraction was attained, the bath solution was instead in calcium-free Krebs-Henseleit solution (containing 100 μmol/L EGTA) for 5 minutes. Four groups were exposed to each of the botanical extracts at a concentration of EC50 in addition to a control group, and these groups were compared. The level of vasoconstriction in response to 60 mmol/L K+ in normal Ca2+-media is used as the maximum (100%). ## 2.5.6. Effect of Four Botanical Extracts on Organ Tissue Viability The effects of four botanical extracts on the viability of freshly isolated aortic organ tissue were tested by repeatedly treating the extracts with the same aortic rings either with or without endothelium. The multiple treatments did not affect the contractility of the vessel induced by 60 mmol/L K+. The vasodilation towards acetylcholine of the aortic rings was also intact after several times of applications of botanical extracts. ## 2.6. Statistical Analysis All of results are expressed as mean ± SD. Statistical significance was analyzed using unpaired Student’st-tests for comparisons between two groups. A value of P<0.05 was considered statistically significant. ## 3. Results ### 3.1. FGR, FSC, CPR, and SAA Blocked LPS and IFN-γ-Induced NO Production in RAW 264.7 Cells 264.7 cells were stimulated by 10 U/ml of IFN-γ and 100 ng/ml of LPS that can upregulate the expression of iNOS. iNOS has a key role in inflammatory action. Targeting de novo regulation of iNOS is the therapeutic strategy to cure inflammation-related diseases [40]. RAW 264.7 cells were stimulated by 10 U/ml of IFN-γ and 100 ng/ml of LPS with and without pretreatment of four botanical extracts. The concentration of nitrite was measured at 24 hours after the stimulation. As shown in Figure 1, total flavonoids fromGlycyrrhizaeradix et rhizoma (FGR) and total flavonoids fromSpatholobi caulis (FSC) significantly suppressed the IFN-γ and LPS-induced production of NO in a dose-dependent fashion. LPS and IFN-γ-induced NO in RAW 264.7 cells were inhibited by FGR and FSC in a concentration-dependent manner. The maximal inhibition achieved (at 200 mg/L) was 75.06% and 39.44%, respectively, for the two drugs. However, higher concentrations of total saponin fromActinidia arguta radix (SAA) and total coumarins ofPeucedani radix (CPR) were required to suppress the IFN-γ and LPS-induced production of NO. 
The maximal inhibition achieved of SAA and CPR (at 200 mg/L) was 29.69% and 33.65%, respectively (Figure 1).Figure 1 Effects of four botanical extracts on nitrite accumulation in macrophages stimulated by LPS plus IFN-γ, P∗<0.05, P∗∗<0.01 versus model group (M), n=6. ### 3.2. FGR, FSC, CPR, and SAA-Induced Vasodilation Ach-elicited relaxation in aorta rings was used for evaluating intact and deleted endothelium (Figure2). FGR and CPR relaxed isolated aortic rings in a dose-dependent and endothelium-independent manner. The maximum relaxation by FGR of the aortic rings with or without endothelium was at concentrations of 91.28%±5.15% and 84.36%±23.80%, respectively. The maximum relaxation by CPR of rings with or without endothelium was at concentrations of 75.51%±21.30% and 57.07%±18.63%, respectively. The half maximal effective concentration (EC50) was 17 mg/L for FGR and 61 mg/L for CPR for aortic rings with absent endothelium as shown in Figure 3(a).Figure 2 Concentration-response curves showing endothelium-dependent relaxation by Ach with PE pretreated rat aortic rings with intact endothelia (+Endo) and without intact endothelia (−Endo).n=5, P∗∗<0.01 versus +Endo.Figure 3 Concentration-response curves showing relaxation by four botanical extracts with PE pretreated rat aortic rings with intact endothelia (+Endo + Control) and without intact endothelia (−Endo + Control). The effects of exposure to 10μmol/L ODQ on the FSC (b-A), SAA (b-C), FGR (a-A), and CPR (a-B) groups of PE (1 μmol/L) pretreated rings (−Endo + ODQ). The concentration-response curves of FSC (b-B) and SAA (b-D) with pretreatment with Indo. The concentration-response curves of FSC (b-A) and SAA (b-C) with pretreatment with L-NAME. P∗<0.05, P∗∗<0.01 versus −Endo group, n=4.SAA and FSC relaxed isolated aortic rings in a dose-dependent and endothelium-dependent manner. The maximum relaxation of isolated aortic rings by SAA with and without endothelium was at concentrations of81.66%±7.36% and 5.20%±1.62%, respectively. The maximum relaxation induced by FSC with and without endothelium was at concentrations of 70.70%±6.12% and 7.53%±14.08%, respectively. The EC50 was 45 mg/L for SAA and 40 mg/L for FSC for aortic rings with intact endothelium as shown in Figure 3(a).To evaluate the involvement of the NO/cGMP signaling in endothelium-dependent vasodilation, the aortic rings were pretreated with ODQ (10μmol/L) or L-NAME (100 μmol/L) for 15 minutes each. Soluble guanylate cyclase (sGC) inhibitor ODQ affected FGR and CPR-induced vasodilation (Figure 3(a)). The FSC and SAA-induced relaxations of the aortic tissue were inhibited by pretreatment with ODQ or nitric oxide synthase blocker L-NAME in a concentration-dependent manner (Figure 3(b)).To investigate the involvement of the cyclooxygenase (COX)/PGI2 pathway, one set of aortic tissue was pretreated with indomethacin (10 μmol/L), a nonselective inhibitor of COX. The relaxation curves by FSC or SAA were not significantly altered by the blockage of PGI2 pathway (Figure 3(b)). ### 3.3. Effects of FGR and CPR on Endogenous Vasoconstrictors PE, 5-HT, Ang II, ET-1,PG2α, Vaso, and Dopa are all endogenous vasoconstrictors which play key roles in maintaining vasculature tension [41]. To study endothelium-independent vasodilation, the effects of herb elements on vasocontractility were examined. Aortic rings without endothelium were pretreated with 17 mg/L of FGR and 61 mg/L of CPR, respectively. 
FGR exerted inhibitory effects on the vasocontraction by Dopa, Ang II, ET-1, and Vaso in a dose-dependent fashion (Figure 4(a)). The maximal inhibitions on vasocontractions by FGR were 38.40%, 50.71%, 59.58%, and 33.67% for Dopa, AngII, ET-1, and Vaso-induced contractilities, respectively. However, FGR failed to suppress vasocontraction induced by PE, PGF, and 5-HT (see Supplemental Figure 1 in Supplementary Material available online at https://doi.org/10.1155/2017/1021284). CPR significantly inhibited vasoconstriction in the presence of Ang II, Dopa, PGF2α, 5-HT, PE, Vaso, and ET-1 by 86.75%, 59.57%, 74.55%, 41.84%, 64.60%, 79.51%, and 60.55%, respectively (Figure 4(b)).Figure 4 Effects of the four botanical extracts on endothelium-denuded aortic tissue that were exposed to endogenous vasoconstrictors. Inhibited by FGR (a): the contraction curves of Dopa, AngII, Vaso, and ET-1. Inhibited by CPR (b): the contraction curves of Dopa, PGF2α, AngII, 5-HT, PE, Vaso, and ET-1. P∗<0.05, P∗∗<0.01 versus +Endo control group, n=5. ### 3.4. Effects of FGR and CPR on Potassium Channels Potassium channels are important to vascular relaxation. There are many types of potassium channels in vascular smooth muscle including calcium-activated potassium channel (KCa), ATP-sensitive K+ channels (KATP), and inwardly rectifying potassium channels (KIR). To test the possible involvement of K+ channels in relaxations induced by FGR and CPR, endothelium-denuded rings were preincubated with KCa blocker (TEA) at 100 mmol/L, KATP blocker (glibenclamide) at 10 mmol/L, and KIR blocker BaCl2 at 100 mmol/L, respectively, for 15 minutes. In each case, the FGR- and CPR-induced vascular relaxation was not inhibited by glibenclamide, TEA, or BaCl2. Glibenclamide, TEA, or BaCl2 did not inhibit vascular relaxation by FGR. We also used glibenclamide, TEA, or BaCl2 to preincubate the endothelium-denuded rings, which did not inhibit vascular relaxation induced by CPR (Figure 5).Figure 5 Concentration-response curves showing relaxation induced by FGR (a) and CPR (b) compared to control in endothelium-free tissues pretreated with potassium channel inhibitors: 3 mmol/L TEA, 10μmol/L Glib, and 100 μmol/L BaCl2, n=6. (a) (b) ### 3.5. Effects of FGR and CPR on Extracellular Calcium Influx and Intracellular Calcium Release Endogenous vasoconstrictors, such as PE, contract vascular smooth muscle mainly through the activation of receptor-operated calcium channels (ROCC), while KC1 mainly activates potential-dependent Ca2+ channels, all of which result in both extracellular calcium influx and intracellular calcium release. To confirm whether calcium-mediated vasoconstriction is affected by FGR and CPR, aortic ring samples denuded of endothelium were exposed to Ca2+-free K-H solutions, and the addition of 1 μmol/L PE induced small tonic contractions which were most likely activated by the release of intracellular Ca2+ from endoplasmic reticulum stores. CPR reduced PE-induced contractions better than FGR under extracellular Ca2+-free condition (Figure 6).Figure 6 Effects of four botanical extracts on calcium channel and cytoplasmic calcium release. The concentration-response curves of CaCl2 in Ca2+-free media were inhibited by FGR (a) and CPR (b); maximal (100%) contraction was represented by 60 mmol/L KCl-induced contractions. Effects of four botanical extracts on the transient contraction induced by PE in Ca2+ free media. 
The effect of PE in Ca2+-free media was inhibited by FGR (c) and CPR (d); maximal (100%) contraction was represented by the 60 mmol/L KCl-induced contraction. ∗∗P<0.01 versus control, n=5–7.

Depolarization elicited by voltage-dependent Ca2+ influx under a high concentration of K+ was also tested, as shown in Figure 6. The data suggested that the K+ (60 mmol/L)-stimulated, Ca2+-induced vasoconstriction was not inhibited by 17 mg/L of FGR but was suppressed by 61 mg/L of CPR.
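As the Figure 6 caption indicates, contraction data in these assays are normalized to the 60 mmol/L KCl-induced maximum (defined as 100%), and extract effects are expressed as percent inhibition of the control response. A minimal sketch of that arithmetic follows; the tension readings and variable names are hypothetical, not the authors' measurements.

```python
# Normalize developed tension to the 60 mmol/L KCl maximum (100%) and
# express the extract effect as percent inhibition of the control response.
# All tension values (mN) are hypothetical, for illustration only.
kcl_max_tension = 12.0   # tension evoked by 60 mmol/L KCl
control_tension = 9.0    # vasoconstrictor response without extract
treated_tension = 4.8    # vasoconstrictor response after extract pretreatment

control_pct = 100.0 * control_tension / kcl_max_tension
treated_pct = 100.0 * treated_tension / kcl_max_tension
inhibition_pct = 100.0 * (1.0 - treated_tension / control_tension)

print(f"Control response: {control_pct:.1f}% of KCl maximum")
print(f"Treated response: {treated_pct:.1f}% of KCl maximum")
print(f"Inhibition by extract: {inhibition_pct:.1f}%")
```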
## 4. Discussion

The total saponins from Actinidia arguta radix (SAA), total flavonoids from Glycyrrhizae radix et rhizoma (FGR), total coumarins from Peucedani radix (CPR), and total flavonoids from Spatholobi caulis (FSC) were extracted and used in the current studies. All four anti-inflammatory herbal extracts relaxed thoracic aortic rings in a concentration-dependent manner. The rank order of the EC50 values for relaxation by these extracts was as follows: Glycyrrhizae radix et rhizoma < Spatholobi caulis < Actinidia arguta radix < Peucedani radix.

The vascular relaxation evoked by SAA is endothelium-dependent, and the vasodilatory effect of the element from the radix and stemma of Actinidia arguta (Teng Li Gen) is blocked by ODQ, a soluble guanylyl cyclase (sGC) inhibitor. Thus, our study is the first to reveal that an NO-cGMP-dependent pathway is critical for the action of SAA. Corosolic acid, a major component of the saponins from Actinidia arguta, is known to possess various biological properties, including antidiabetic, antiobesity, and anti-inflammatory activities [42–44]. The compound's efficacy in diabetes has resulted in the development of Glucosol (or GlucoFit), a commercially available product primarily marketed in Japan and the United States as a dietary supplement for weight loss and blood sugar balance.
Inflammation and oxidative stress affect metabolism through lipid and glucose metabolism and insulin resistance, which are linked to mitochondrial function [10]. TEO (2α,3α,24-trihydroxyurs-12-en-28-oic acid), a corosolic acid analogue, decreased the mitochondrial membrane potential and altered mitochondrial ultrastructure, which may underlie its antioxidative stress effects [45]. Moreover, cGMP has been reported to exert an action on mitochondrial function [46]. On the other hand, corosolic acid has been shown to suppress glioblastoma cell proliferation by inhibiting the activation of signal transducer and activator of transcription-3 and nuclear factor-kappa B in tumor cells and tumor-associated macrophages [47]. Our analysis of Gene Expression Omnibus (GEO) databases revealed a statistically significant reduction of sGC transcript levels in human glioma specimens. Pharmacologically manipulating endogenous cGMP generation in glioma cells, through either stimulating pGC with ANP/BNP or blocking PDE with 3-isobutyl-1-methylxanthine/zaprinast, caused significant inhibition of proliferation and colony formation of glioma cells. Our study proposes the new concept that suppressed expression of sGC, a key enzyme in the NO/cGMP pathway, may be associated with an aggressive course of glioma. Therapy targeting sGC/cGMP signaling may be a favorable alternative to chemotherapy and radiotherapy for glioma and perhaps other tumors [48].

The relaxation induced by FSC was inhibited by L-NAME, indicating the involvement of NO in the vasodilatory action of this extract. Spatholobi caulis is a traditional blood-activating and stasis-dispelling herbal medicine, which has been used to treat diseases related to blood stasis syndrome by inhibiting platelet aggregation and stimulating hematopoiesis. A recent study further revealed that FSC exhibits proangiogenic activity in human umbilical vein endothelial cells (HUVECs) as well as in zebrafish [49]. In an LPS-activated RAW 264.7 cell model, the Spatholobi caulis MeOH extract (containing flavonoids) inhibited the expression of iNOS and COX-2 and suppressed the production of proinflammatory cytokines such as IL-1β and IL-6 [50]. Genistein, an isoflavonoid from the herb, has been reported to decrease the generation of ROS and malondialdehyde [51]. In mammalian cells, NO is produced by a family of NO synthases (NOS). Three NOS isoforms have been identified: neuronal NOS (nNOS), inducible NOS (iNOS), and endothelial NOS (eNOS). In the vascular system, NO is generated from the conversion of L-arginine to L-citrulline by eNOS, which requires Ca2+/calmodulin, FAD, FMN, and tetrahydrobiopterin (BH4) as cofactors. Under inflammatory pathological conditions, the cofactors of eNOS can be oxidized, and eNOS then shifts to producing superoxide anion instead of NO. This state is referred to as the "uncoupled state of eNOS" (eNOS uncoupling), which may further enhance inflammation [52]. Considering the significant anti-inflammatory effect of FSC, which markedly inhibited the expression of iNOS and proinflammatory cytokines, we speculate that the vasodilatory effect of FSC may be partially due to its promotion of eNOS function through antioxidative properties.

Radix Glycyrrhizae (licorice root) is the most widely used herbal component in TCM. Licorice, the root extract of Glycyrrhiza glabra L., is used as a medicine for various diseases.
Anti-inflammatory as well as antiallergic activities have been attributed to one of its main constituents, glycyrrhizin. These activities are mainly ascribed to the action of the aglycone, beta-glycyrrhetinic acid, which has a steroid-like structure and is believed to have immunomodulatory properties [53]. Glycyrrhizin inhibits liver cell injury and is given intravenously for the treatment of chronic viral hepatitis and cirrhosis in Japan [54, 55]. It has also proven effective in the treatment of autoimmune hepatitis in one clinical trial [56]. We demonstrate a significant vasodilatory effect of FGR (total flavonoids from Glycyrrhizae radix et rhizoma) and reveal that pretreatment with FGR shifted the contraction curves of Dopa, Ang II, Vaso, and ET-1 to the right. These endogenous vasoconstrictors regulate vascular tone via their respective receptors (mostly G protein-coupled) in smooth muscle. Although their overall mechanisms of action differ, these G protein-coupled receptors commonly activate PLC, which generates DAG and IP3. DAG activates protein kinase C, which in turn promotes phosphorylation of myosin light chains. IP3 induces calcium release from the intracellular calcium pool or activates VDCCs in the cell membrane to regulate intracellular calcium concentration and vascular tone [57]. However, FGR failed to block Ca2+ influx or intracellular Ca2+ release. Glycyrrhetic acid, the active metabolite in licorice, inhibits 11-β-hydroxysteroid dehydrogenase type 2, with a resultant cortisol-induced mineralocorticoid effect and a tendency toward elevated sodium and reduced potassium levels. This aldosterone-like action is the fundamental basis for understanding the pharmacology of the extract [58]. However, glucocorticoids inhibit eNOS gene expression and reduce NO release through glucocorticoid receptor-mediated signaling [59]. Glucocorticoids also directly potentiate the contractions of rabbit and dog aortic strips in response to epinephrine and norepinephrine [60, 61]. Thus, the specific mechanisms underlying the relaxation of vascular smooth muscle by FGR need further study.

Khellactone (dihydroseselin) coumarins possess various activities, including calcium channel blockade and inhibition of platelet aggregation [62, 63]. Khellactone coumarins with the 3′S,4′S configuration (praeruptorins A, B, C, and D) were first isolated from the dried roots of P. praeruptorum (Peucedani radix), which is commonly used in Traditional Chinese Medicine (TCM) for the treatment of cough and upper respiratory infections and as an antipyretic, antitussive, and mucolytic agent. Using spontaneously hypertensive rats as an experimental model, praeruptorin C was shown to improve vascular hypertrophy by decreasing smooth muscle cell size and collagen content and by increasing NO production [64]. The vasodilatory effects of praeruptorin A were confirmed in isolated rabbit tracheas and pulmonary arteries, as well as in swine coronary artery [65, 66]. In our experimental setting, the vascular relaxation induced by total coumarins from Peucedani radix (CPR) may not be related to sGC/cGMP but is associated with blockade of both VDCC and ROCC.

## 5. Conclusion

The present study shows that extracts from four herbs relaxed thoracic aorta tissues isolated from rats. Glycyrrhizae radix et rhizoma and Peucedani radix induced vasorelaxation independent of intact endothelium; however, their respective mechanisms of action appear to be different.
Vasorelaxation induced by Peucedani radix appears to be mainly related to effects on intracellular calcium homeostasis, specifically the inhibition of Ca2+ influx and intracellular Ca2+ release. Dopa-, Ang II-, Vaso-, and ET-1-induced vasoconstriction was inhibited by Glycyrrhizae radix et rhizoma, but the details of its mechanism of action need further study. The vasorelaxation induced by Spatholobi caulis and Actinidia arguta radix is endothelium-dependent, and their mechanisms of relaxation may involve the NO-cGMP pathway. The distinct vasodilatory effects of these four anti-inflammatory botanical extracts are significant and novel; they pave the way not only for further mechanistic studies but also for the design of new herbal formulas for preventive and/or therapeutic use.

---

*Source: 1021284-2017-11-28.xml*
# Biologically Active and Antimicrobial Peptides from Plants

**Authors:** Carlos E. Salas; Jesus A. Badillo-Corona; Guadalupe Ramírez-Sotelo; Carmen Oliver-Salvador
**Journal:** BioMed Research International (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102129

---

## Abstract

Bioactive peptides are part of an innate response elicited by most living forms. In plants, they are produced ubiquitously in roots, seeds, flowers, stems, and leaves, highlighting their physiological importance. While most of the bioactive peptides produced in plants possess microbicide properties, there is evidence that they are also involved in cellular signaling. Structurally, there is an overall similarity when comparing them with those derived from animal or insect sources. The biological action of bioactive peptides initiates with the binding to the target membrane followed in most cases by membrane permeabilization and rupture. Here we present an overview of what is currently known about bioactive peptides from plants, focusing on their antimicrobial activity and their role in the plant signaling network and offering perspectives on their potential application.

---

## Body

## 1. Introduction

No doubt proteins were designed to be versatile molecules. The number of functions in which they participate during metabolism supports this affirmation. Proteins act in defense, integrating the immunological system; as part of the enzymatic network required during metabolism; as nutrients; as storage, contractile, structural, and motile molecules; as transporters; and as signaling and regulatory mediators. These are well-established functions for which proteins have gained undisputed roles. Aside from these functions, other roles are associated with these molecules, such as antifreezers, sweeteners, and antioxidants. A relatively new role involves their ability to interact with cellular membranes in a nonreceptor-ligand type of binding.

Antimicrobial peptides (AMPs) are often the first line of defense against invading pathogens and play an important role in innate immunity [1]. The list of identified antimicrobial peptides has been growing steadily over the past twenty years. Initially, the skin of frogs and the lymph of insects were shown to contain antimicrobial peptides, but by now over 1500 antimicrobial peptides have been described in living organisms, including microorganisms, insects, amphibians, plants, and mammals [2].

In 1963, Zeya and Spitznagel described a group of basic proteins in leukocyte lysosomes endowed with antibacterial activity [3]. Later, Hultmark et al. [4] purified three inducible bactericidal proteins from the hemolymph of immunized pupae of Hyalophora cecropia. The vaccinated insects survived a subsequent challenge with high doses of the infecting bacteria, indicating the relevance of the bactericidal proteins. Additional research identified a 35-residue peptide (cecropin) as responsible for the antibacterial effect. Further investigation by Boman and other groups confirmed that antimicrobial peptides (AMPs) are distributed ubiquitously in all invertebrates investigated, generating academic and commercial interest [1, 5–9].

Because the rapid increase in drug-resistant infections poses a challenge to conventional antimicrobial therapies, there is a need for alternative microbicides to control infectious diseases [2, 10–13].
Bioactive peptides can fulfill this role because they display antibacterial, antiviral, antifungal, and/or antiparasitic activities. A comparative analysis of these molecules reveals that there are no unique structural requirements useful to discriminate these activities and to facilitate their classification. Most bioactive peptides have a high content of cysteine or glycine residues; the disulphide bridges that may be formed between cysteinyl residues increase their stability. Most of them contain charged amino acids, primarily cationic, and also hydrophobic domains. β-sheet or α-helix structures, looped or extended, or combinations of these domains can be found in natural bioactive peptides [3, 6, 7, 14–24]; their length varies between 12 and 55 residues. There is evidence that cationically charged peptides are relevant for antibacterial or antiviral activity, although a few anionic exceptions also exist.

This review updates information on plant bioactive peptides. When little or no information is available on a specific group, we use examples taken from other life forms, assuming that upcoming studies may reveal information on peptides whose attributes have not yet been found in plants. The review does not cover in detail the antimicrobial mechanism underlying the effect of bioactive peptides, since recent reviews on the subject have been published [4, 5, 11, 14, 15, 25–31].

## 2. Antimicrobial Peptides Isolated from Plants

As mentioned above, AMPs are part of important immunological barriers that counter microbial infections and represent another aspect of the resistance phenomenon known as the hypersensitive response (HR). This phenomenon was described by H. Marshall Ward in cultures of leaf rust (Puccinia dispersa or Puccinia triticina) and by several plant pathologists 100 years ago [1, 5, 7, 8]. The hypersensitive reaction (HR) is considered the maximum expression of plant resistance to pathogen attack and is defined as a rapid death of the plant cells associated with growth restriction and isolation of the pathogen. Cell death that happens during HR is considered a lysosomal type of programmed cell death (PCD) or autophagy [2, 10, 12], unlike mammalian apoptosis. Also, signaling by resistance gene products (RGP) triggered during the HR response is not associated with death effectors (mammalian caspases) or with a death complex equivalent to the mammalian apoptosome. It is hypothesized that RGP signaling is required to initiate deployment of non-HR defenses, most likely via the production of so-called "death signals" like ROS (reactive oxygen species), NO (nitric oxide), and SA (salicylic acid), all of them initiators of resistance in the absence of an HR [3, 14, 16]. Therefore, HR is viewed as part of a continuum of effects mediated by defense elicitors [4, 5, 15, 25, 27–29].

Although many AMPs are generically active against various kinds of infectious agents, they are generally classified as antibacterial, antifungal, antiviral, and antiparasitic. The antibacterial activity of peptides results from their amphiphilic character and the presence of motifs with a high density of positively charged residues within their structure [6–9]. This type of arrangement facilitates peptide attachment and insertion into the bacterial membrane to create transmembrane pores, resulting in membrane permeabilization.
The amphipathic nature of antimicrobial peptides is required for this process, as hydrophobic motifs directly interact with lipid components of the membrane, while hydrophilic cationic groups interact with phospholipid groups also found in the membrane.

The antifungal activity of AMPs was initially attributed to either fungal cell lysis or interference with fungal cell wall synthesis. A comparison of plant antifungal peptides suggests a particular structure-activity arrangement involving polar and neutral amino acids [11–13, 32]. However, as for antibacterial peptides, there are no obvious conserved structural domains clearly associated with antifungal activity. The cell wall component chitin has been implicated as a fungal target for bioactive peptides [6, 7, 15, 17–24]. Peptide binding induces fungal membrane permeabilization and/or pore formation [4, 11, 14, 15, 26, 29–31].

The antiviral effect of some AMPs depends on their interaction with the membrane by electrostatic association with the negative charges of glycosaminoglycans, facilitating binding of the AMP and competing with viruses [11]. Such is the case of the mammalian cationic peptide lactoferrin, which prevents binding of herpes simplex virus (HSV) by binding to heparan moieties and blocking virus-cell interactions [3, 32–34]. Alternatively, defensins (described below) bind to viral glycoproteins, making HSV unable to bind to the surface of host cells [25, 27]. The antiviral effect of peptides can also be explained by obstruction of viral interaction with specific cellular receptors, as shown for the binding of HSV to the putative B5 cell surface membrane protein, which displays a heptad repeat alpha-helix fragment. The effect was demonstrated with a synthetic 30-mer peptide bearing the same sequence found in the heptad repeat, which inhibits HSV infection of B5-expressing porcine cells and human HEp-2 cells [7, 15, 19, 20, 22–24]. Another mechanism involves the interaction between AMP and viral glycoprotein, as shown with a retrocyclin-2 analogue that binds with high affinity (Kd = 13.3 nM) to immobilized HSV-2 glycoprotein B (gB2) while it does not bind to enzymatically deglycosylated gB2 [25, 28]. A less specific interaction between AMPs and viruses causes disruption or destabilization of the viral envelope, yielding viruses unable to infect host cells [15, 17, 19, 21–24]. Finally, peptide-mediated activation of intracellular targets can induce an antiviral effect, as demonstrated with the antiviral peptide NP-1 from rabbit neutrophils, which crosses the cell membrane, migrating into the cytoplasm and organelles, followed by inhibition of viral gene expression in the infected cell. The proposed mechanism involves downregulation of VP16 viral protein entry into the nucleus, which prevents expression of the early viral genes required to propagate viral infection [4, 11, 26, 30, 31].

The initial characterization of molecules displaying AMP activity was followed by isolation of purothionin, the first plant-derived AMP. Purothionin is active against Pseudomonas solanacearum, Xanthomonas phaseoli and X. campestris, Erwinia amylovora, Corynebacterium flaccumfaciens, C. michiganense, C. poinsettiae, C. sepedonicum, and C. fascians [25]. Since then, several plant peptides have been discovered. The major groups include thionins (types I–V), defensins, cyclotides, 2S albumin-like proteins, and lipid transfer proteins [15, 19, 22–24].
Other less common AMPs include knottin-peptides, impatiens, puroindolines, vicilin-like, glycine-rich, shepherins, snakins, and heveins (Table 1) [35–44].

Table 1 Selected plant antimicrobial peptides.

| Peptide | Biological activity | Peptide size | Reference |
|---|---|---|---|
| Thionins (types I–V) | Antibacterial | 45–47 residues | [15, 22–24] |
| Thionein: alpha-1-purothionin (Triticum aestivum) | Antibacterial | 5 kDa, 45 residues | [15, 25, 81] |
| Cyclotides: kalata B1 and B2 (Oldenlandia affinis) | Antibacterial, antifungal, insecticide, nematicide | 28–37 residues | [15, 19, 22–24] |
| 2S albumin-like (Malva parviflora, Raphanus sativus) | Antibacterial, allergen | 105 residues | [15, 24] |
| Lipid transfer proteins (LTPs) (Zea mays) | Antibacterial | 90–95 residues | [15, 22–24] |
| Knottin-peptides: PAFP-S (Phytolacca americana), knottin-type (Mirabilis jalapa) | Antibacterial | 36–37 residues | [15, 35–43] |
| Puroindolines: PINA and PINB (Triticum aestivum) | Antibacterial | 13 kDa | [15, 35–43] |
| Snakins (Solanum tuberosum) | Antibacterial | 63 residues, 6.9 kDa | [15, 35–43] |
| Heveins (Hevea brasiliensis) | Antibacterial and antifungal | 43 residues, 4.7 kDa | [15, 35–43] |
| Peptides (Phaseolus vulgaris) | Antibacterial and antifungal | 2.2 and 6 kDa | [2, 49, 50] |
| Peptide PvD1 (Phaseolus vulgaris) | Antibacterial and antifungal | 6 kDa | [60, 75] |
| Defensin-like (Phaseolus vulgaris) | Antibacterial | 7.3 kDa | [15, 50] |
| Defensins (Triticum aestivum and Hordeum vulgare) | Antibacterial and antifungal | 5 kDa | [25, 53] |
| Lunatusin (Phaseolus lunatus) | Antibacterial^a and antiviral | 7.0 kDa | [45] |
| Vulgarinin (Phaseolus vulgaris) | Antibacterial, antifungal, and antiviral | 7.0 kDa | [46] |
| Hispidulin (Benincasa hispida) | Antibacterial and antifungal | 5.7 kDa | [48] |
| Lc-def (Lens culinaris) | Antifungal | 47 residues | [37, 79] |
| Cicerin (Cicer arietinum) | Antifungal and antiviral | 8.2 kDa | [49, 60, 61] |
| Arietin (Cicer arietinum) | Antifungal and antiviral | 5.6 kDa | [36, 49, 60, 61] |
| Peptide So-D1 (Spinacia oleracea) | Antifungal and antibacterial | 22 residues | [36, 44] |
| Ay-AMP (Amaranthus hypochondriacus) | Antifungal | 3.18 kDa | [47] |
| PR1, PR2 chitinases (Vitis vinifera) | Antifungal | 26 and 43 kDa | [19, 38, 41, 64] |
| Proteins from latex of Calotropis procera (CpLP) | Antifungal | 13 kDa | [38, 60, 61] |
| Proteinases from Carica candamarcensis, Carica papaya, and Cryptostegia grandiflora (Cg24-I) | Antifungal | 23–25 kDa | [36, 60, 61] |
| Impatiens (Impatiens balsamina): Ib-AMP1, Ib-AMP2, Ib-AMP3, and Ib-AMP4 | Antibacterial | 20 residues | [36, 52, 53, 57] |
| Shepherins (Capsella bursa-pastoris) | Antibacterial and antifungal | 28 residues | [38, 41] |
| Vicilin-like (Macadamia integrifolia) | Antibacterial and antifungal | 45 residues | [38] |
| Peptides^a (Brassica napus) | Antiviral | ND | [82] |
| Proteinases from Ananas comosus, Carica papaya, Ficus carica, and Asclepias sinaica | Anthelmintic | 23–24 kDa | [52, 53, 57] |

^a Mitogenic activity; ND: not determined.

Full isolation of plant AMPs has been attained in some cases. Such is the case of lunatusin, a peptide with a molecular mass of 7 kDa purified from Chinese lima bean (Phaseolus lunatus L.) (Table 1). Lunatusin exerted antibacterial action on Bacillus megaterium, Bacillus subtilis, Proteus vulgaris, and Mycobacterium phlei. The peptide also displays antifungal activity towards Fusarium oxysporum, Mycosphaerella arachidicola, and Botrytis cinerea. Interestingly, the antifungal activity was retained after incubation with trypsin [45].

Another peptide, named vulgarinin, from seeds of haricot beans (Phaseolus vulgaris), with a molecular mass of 7 kDa, showed antibacterial action against Mycobacterium phlei, Bacillus megaterium, B.
subtilis, and Proteus vulgaris, and antifungal activity against Fusarium oxysporum, Mycosphaerella arachidicola, Physalospora piricola, and Botrytis cinerea. Its antifungal activity was also retained after incubation with trypsin. Another example is a peptide from Amaranthus hypochondriacus seeds that displays antifungal activity (Table 1) [46, 47].

Both lunatusin and vulgarinin inhibited HIV-1 reverse transcriptase and inhibited translation in a cell-free rabbit reticulocyte lysate system, suggesting a similarity of action between these two peptides and that their antimicrobial activity might be linked to protein synthesis [46]. Lunatusin also elicited a mitogenic response in mouse splenocytes [45] and proliferation of the breast cancer MCF-7 cell line, while vulgarinin inhibited proliferation of the leukemia L1210 and M1 cell lines and the breast cancer MCF-7 cell line [46].

A peptide named hispidulin was purified from seeds of the medicinal plant Benincasa hispida, which belongs to the Cucurbitaceae family (Table 1). Hispidulin exhibits a molecular mass of 5.7 kDa, is composed of 49 amino acid residues, and displays broad and potent inhibitory effects against various human bacterial and fungal pathogens [48]. Two additional antifungal peptides with novel N-terminal sequences, designated cicerin and arietin, were isolated from seeds of chickpea (Cicer arietinum). These peptides exhibited molecular masses of approximately 8.2 and 5.6 kDa, respectively. Arietin expressed higher translation-inhibitory activity in a rabbit reticulocyte lysate system and higher antifungal potency toward Mycosphaerella arachidicola, Fusarium oxysporum, and Botrytis cinerea than cicerin. Both lack mitogenic and anti-HIV-1 reverse transcriptase activities [2, 49, 50].

There are also some studies on AMPs from dry seeds of Phaseolus vulgaris cv. brown kidney bean; these AMPs exhibit antifungal and antibacterial activity [2, 50, 51]. Another AMP (So-D1-7) was isolated from a crude cell wall preparation from spinach leaves (Spinacia oleracea cv. Matador) and was active against Gram-positive (Clavibacter michiganensis) and Gram-negative (Ralstonia solanacearum) bacterial pathogens, as well as against fungi such as Fusarium culmorum, F. solani, Bipolaris maydis, and Colletotrichum lagenarium [44].

Antiparasitic peptides are another group of bioactive peptides. Following an initial report describing the lethal effect of magainin, isolated from Xenopus skin, on Paramecium caudatum, another peptide (cathelicidin) confirmed the antiparasitic activity of AMPs [52–56].

Anthelmintic activity is also a recognized feature attributed to plant proteinases (Table 1). For instance, bromelain, the stem enzyme of Ananas comosus (Bromeliaceae), shows an anthelmintic effect against Haemonchus contortus [52, 53] similar to that of the reference drug pyrantel tartrate. A similar effect was confirmed with proteinases from papaya (Carica papaya), pineapple (A. comosus), fig (Ficus carica), and Egyptian milkweed (Asclepias sinaica) in vitro against the rodent gastrointestinal nematode Heligmosomoides polygyrus [57]. The anthelmintic effect cannot be fully explained by the proteolytic action of these enzymes, as the inhibited enzymes partially preserve antiparasitic activity. It is suggested that selected domains within the proteinase molecule, distinct from the active site, could be responsible for the antiparasitic effect (unpublished observations).
The notion that specific regions within a protein are responsible for the biocide effect is supported by the observation that some AMPs become functional upon protein hydrolysis, as with egg [58, 59] and milk protein hydrolysates [58, 60–63]. At present, there are not many studies on plant protein hydrolysates with antibiotic properties; this situation encourages the search in protein databases for motifs featuring the signature of AMPs.

Plant proteinases also display antifungal activity, as demonstrated with latex proteinases from Calotropis procera, Carica candamarcensis, and Cryptostegia grandiflora [27, 60, 61]. Using a collection composed of Colletotrichum gloeosporioides, Fusarium oxysporum, F. solani, Rhizoctonia solani, Neurospora sp., and Aspergillus niger, fungal germination, growth, and IC50 were determined. The observed IC50 for Rhizoctonia solani with proteinases from C. procera was 20.7 ± 1.6 μg/mL, while that with proteinases from C. candamarcensis was 25.3 ± 2.4 μg/mL. Chitinases, chitinolytic enzymes found in different plants, also display antifungal activity [64].

Plant Defensins. There is no consensus about the size of defensins. According to some authors, defensins are AMPs that range from 18 to 48 amino acids, while other groups define them as having 12–54 residues. Regardless of their size, they contain several conserved cysteinyl residues structuring disulphide bridges that contribute to their stability. Two kinds of defensins have been described, α-defensin and β-defensin; the latter probably emerged earlier, based on its similarity with insect forms. Defensins are among the best-characterized cysteine-rich AMPs in plants [27, 65]. All known members of this family have four disulphide bridges and are folded into a globular structure that includes three β-strands and an α-helix [65, 66]. Initially, these proteins were described in human neutrophils [66, 67], more specifically in granules of phagocytes and intestinal Paneth cells [67–71]. Later, they were described in human, chimpanzee, rat, mouse, marine arthropods, plants, and fungi [68–71].

Defensins are structurally classified into four categories, which correlate with morphological and/or developmental changes in fungi following treatment with defensins [72–75]. Defensins of group I cause inhibition of Gram-positive bacteria and fungi, and fungal inhibition occurs with marked morphological distortions of hyphae (branching); those of group II are active against fungi, without inducing hyphal branching, and are inactive against bacteria; those of group III are active against Gram-positive and Gram-negative bacteria but are inactive against fungi; while those of group IV are active against Gram-positive and Gram-negative bacteria and against fungi, without causing hyphal branching. The selective action assigned to these four groups of defensins suggests that specific determinants within each group are responsible for targeting different groups of infectious agents.

Several defensins have been purified from plants. The PvD1 defensin from Phaseolus vulgaris (cv. Perola) seeds is a 6 kDa peptide (Table 1). Its N-terminus has been sequenced, and comparative analysis in databases shows high similarity with sequences of defensins isolated from other plant species. PvD1 has been shown to inhibit the growth of the yeasts Candida albicans, C. parapsilosis, C. tropicalis, C. guilliermondii, Kluyveromyces marxianus, and Saccharomyces cerevisiae. PvD1 also inhibits phytopathogenic fungi including Fusarium oxysporum, F. solani, F. lateritium, and Rhizoctonia solani [51, 72].
Analysis of cloned PvD1 cDNA yielded a fragment of 314 bp encoding a 47-amino-acid polypeptide displaying strong similarity with plant defensins from Vigna unguiculata (93%), Cicer arietinum (95%), and Pachyrhizus erosus (87%).

An antifungal peptide with a defensin-like sequence and a molecular mass of 7.3 kDa was purified from dried seeds of Phaseolus vulgaris "cloud bean" (Table 1). The peptide exerted antifungal activity against Mycosphaerella arachidicola with an IC50 value of 1.8 μM, and it was also active against Fusarium oxysporum with an IC50 value of 2.2 μM [52]. From lentil (Lens culinaris), a 47-amino-acid-residue defensin (Lc-def) was purified from germinated seeds (Table 1). Its molecular mass (5.4 kDa) and complete amino acid sequence were determined. Lc-def has eight cysteines forming four disulphide bonds; it shows high sequence homology with defensins from legumes and exhibits activity against Aspergillus niger [50, 76].

A 5.4 kDa antifungal peptide, with an N-terminal sequence highly similar to defensins and with inhibitory activity against Mycosphaerella arachidicola (IC50 = 3 μM), Setosphaeria turcica, and Bipolaris maydis, was isolated from the seeds of Phaseolus vulgaris cv. brown kidney bean (Table 1). The antifungal activity of the peptide against M. arachidicola was stable over a wide pH range (3–12) and progressively declined at pH values below and above this range. Similarly, its activity remains stable between 0 and 80°C and partially declines between 90 and 100°C. Deposition of Congo red at the hyphal tips of M. arachidicola was induced by this peptide, indicating inhibition of hyphal growth. The lack of antiproliferative activity of the brown kidney bean antifungal peptide toward tumor cells, in contrast to the presence of such activity in other antifungal AMPs, suggests that different domains are responsible for the antifungal and antiproliferative activities [50].

The biotechnological potential of defensins became evident following experiments aimed at increasing plant resistance to pathogens by genetic transformation of various recipient plants. In a number of cases, increased resistance to specific pathogens was obtained in transgenic plants overexpressing a defensin gene [24].

## 3. Peptides from Plant Hydrolysates

Plant protein hydrolysates represent an option for the production of bioactive peptides. Hydrolysis can be done enzymatically or under acidic conditions; the former is preferred because it is milder and effectively produces bioactive peptides from a variety of sources, such as legumes, rice, and chia seeds. In particular, studies with enzymatic hydrolysates from leguminous plants, like common bean (P. vulgaris L.), are relevant since this is a fundamental ingredient of the human diet in several cultures and represents up to 10% of total proteins ingested in developing countries [77, 78].

The characterization of bioactive peptides released by hydrolysis demonstrates that they preserve their nutritional value and that at least some of them behave as biologically active substances. Protein hydrolysates show antioxidant, antitumoral, antithrombotic, antimicrobial, or antihypertensive activities, thus qualifying as functional foods [77, 79].
In particular, total hydrolysates (TH) or peptide fractions from legumes such as chickpea, soybean, pea, lentil, mung bean, and common bean demonstrate important antioxidant and angiotensin-I-converting enzyme (ACE) inhibitory activities [79, 80].

Our studies using concentrates of enzymatic hydrolysates from three common bean (P. vulgaris L.) varieties, black (PB), azufrado higuera (AH), and pinto saltillo (PS), show evidence of antimicrobial activity. The bactericidal activity determined by growth inhibition demonstrated that ten out of twelve bacterial strains were inhibited by these THs and also by the 3–10 kDa peptide fraction obtained by subsequent ultrafiltration of the TH. The ultrafiltrate fraction from the TH with a cutoff of 1 kDa (<1 kDa) also demonstrated antimicrobial activity against Shigella dysenteriae for each of the bean varieties (PB, AH, and PS) at 0.1, 0.4, and 0.3 mg/mL, respectively [81]. A similar antimicrobial activity was seen in Phaseolus lunatus beans digested with pepsin followed by pancreatin [81]. Both the TH and the partially purified peptide fraction (<10 kDa) exhibited antimicrobial activity against Staphylococcus aureus and Shigella flexneri. The largest antimicrobial effect was seen with the <10 kDa fraction, and the determined MIC was 0.39 mg/mL against S. aureus and 0.99 mg/mL for S. flexneri [81].

Antiretroviral activity has also been described in alcalase hydrolysates of rapeseed (Brassica napus) protein. The antiviral effect against human immunodeficiency virus (HIV) is due to inhibition of the viral protease, possibly by a 6 kDa peptide. When the rapeseed hydrolysate was purified by size-exclusion chromatography, two fractions of 6 kDa enriched in this protease inhibitor were isolated [82].
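MIC values like those reported in this section are typically read from a serial-dilution growth assay: the MIC is the lowest tested concentration at which growth (e.g., OD600) is suppressed below a chosen cutoff relative to an untreated control. A minimal sketch of that logic follows; the readings, the 90% inhibition cutoff, and the function name are illustrative assumptions, not the authors' protocol.

```python
def mic_from_dilutions(concs_mg_ml, od600, od600_control, cutoff=0.90):
    """Return the lowest concentration whose relative growth inhibition
    meets the cutoff, or None if no tested concentration qualifies."""
    qualifying = [
        c for c, od in zip(concs_mg_ml, od600)
        if (1.0 - od / od600_control) >= cutoff
    ]
    return min(qualifying) if qualifying else None

# Hypothetical two-fold dilution series and OD600 readings.
concs = [1.56, 0.78, 0.39, 0.20, 0.10]   # peptide fraction, mg/mL
ods = [0.02, 0.03, 0.04, 0.25, 0.48]     # growth at each concentration
control = 0.50                           # untreated control OD600

print(mic_from_dilutions(concs, ods, control))  # -> 0.39
```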
## 4. Role of Peptides in Plant Signalling

Since plants are stationary, attached to the earth, they must withstand aggression from the predatory activities of herbivores (including man) and pathogens, as well as environmental variations like changes in water supply and temperature and man-made aggressions. To successfully meet these challenges, they have developed an efficient signaling network to elicit appropriate cellular responses. As in mammals, their signaling processes rely on efficient and specific interactions between organic molecules or simple ions (ligands) and their receptors to communicate and respond to these signals.

As a result, many plant peptides and proteins evolved as signaling molecules and play a key role in homeostasis, defense, growth, differentiation, and senescence. Most of these actions require the coaction of hormones (auxin, ethylene, abscisic acid (ABA), gibberellic acid, and cytokinins) acting as coregulators in these processes. As part of their defense strategies, a group of peptides evolved to inactivate microorganisms menacing essential plant functions. The antimicrobial peptides comprising this category are discussed in the previous section.

In this section, we focus on peptides whose main established functions provide a physiological attribute to the plant. It should be noted, however, that a peptide might participate in a defense strategy against infectious agents while being at the same time a component of a metabolic function of the host plant without intervention of an infective agent. Some examples that illustrate this situation include a defensive peptide of 7.45 kDa from white cloud beans (Phaseolus vulgaris cv.) that shows reverse transcriptase inhibitory activity when probed in vitro [83, 84]. This type of effect does not follow a logical evolutionary explanation, unless a yet-unidentified retroviral form is found in plants. In another similar situation, it has been shown that purothionin, the AMP from wheat endosperm, can substitute for thioredoxin f from spinach chloroplasts in the dithiothreitol-linked activation of chloroplast fructose-1,6-bisphosphatase, suggesting a role for the thiol carrier during regulation of redox molecules [83, 85].

Human β-defensins also display diverse immune-related functions in addition to their antimicrobial activity. Such is the case of human β-defensin-2, which promotes histamine release and prostaglandin D2 production in mast cells. The immunomodulatory role of β-defensin-2 has been further studied following the finding that β-defensin-2 binds to the chemokine receptor CCR-6, the cognate receptor for macrophage inflammatory protein-3α/CCL20 [85, 86]. Secretion of macrophage inflammatory protein-3α along with other cytokines is linked to migration of immature dendritic cells from blood to the skin and from sites of inflammation to local lymph nodes, triggering activation of memory-specific T cells [86, 87]. In addition, β-defensins are associated with stimulation of toll-like receptor-4, thus serving as an additional mechanism for amplification of the innate host defense response [87, 88]. In summary, it is evident that at least some antimicrobial molecules evolved from host metabolites and share other functions.

In plants, most of these signaling molecules are found in seeds, highlighting the necessity to preserve the genetic material that represents the informational basis to sustain the species. Following in silico screening in A. thaliana, about 15 peptide families were identified, plus additional groups described in other species, most of them monocots [88, 89]. Several partial repositories are available: secreted peptides in A. thaliana obtained by in silico analysis of unannotated sequences [89, 90]; PhytAMP, a database dedicated to antimicrobial plant peptides (http://phytamp.pfba-lab-tun.org/main.php) [90, 91]; C-PAmP, a database of computationally predicted plant antimicrobial peptides (http://bioserver-2.bioacademy.gr/Bioserver/C-PAmP/) [2, 91]; the antimicrobial peptide database, which includes an algorithm to determine Boman's index (http://aps.unmc.edu/AP/FAQ.php) [2, 92]; and attempts to identify specific families of signaling peptides [88, 92]. However, no comprehensive database is available that deposits all the signaling peptides described to date. The annotation of these sequences would be valuable to identify and catalogue the new peptide sequences that continuously emerge.
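Databases such as the one cited above score candidate sequences with simple physicochemical descriptors (Boman's index among them). The sketch below computes two cruder descriptors that this review repeatedly invokes, approximate net charge at neutral pH and cysteine/glycine content, together with the 12–55-residue length window mentioned earlier; the example sequence, thresholds, and function name are our illustrative assumptions, not the APD algorithm.

```python
def amp_descriptors(seq: str) -> dict:
    """Crude AMP-style descriptors: approximate net charge at pH 7,
    Cys/Gly content, and whether the length falls in the 12-55 window."""
    seq = seq.upper()
    positive = sum(seq.count(aa) for aa in "KR")  # Lys, Arg (His ~neutral at pH 7)
    negative = sum(seq.count(aa) for aa in "DE")  # Asp, Glu
    return {
        "length_ok": 12 <= len(seq) <= 55,
        "net_charge": positive - negative,
        "cys_gly_fraction": (seq.count("C") + seq.count("G")) / len(seq),
    }

# Example: a hypothetical cationic, cysteine-rich candidate sequence.
print(amp_descriptors("RQCKAQGRCLQSCGGGKCVNGHCVCY"))
```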
Signaling peptides encompass a myriad of highly diversified sequences showing variation within and across species and without a common phylogenetic origin. These circumstances defy efforts to classify them as a single group [88, 93–95]. A classification attempt based on their suggested functions includes homeostatic, innate immune (defensive), expansion and proliferation, organ maintenance and organogenesis, and sexual functions. Three peptide classes, the plant natriuretic peptides (PNP), phytosulfokines (PSK), and rapid alkalinization factors (RALF), participate in homeostatic functions. PNP has been purified from several species [93–96]. A number of effects are attributed to PNP, such as H+, K+, and Na+ fluxes in roots, probably mediated by cGMP [96–98], a transient increase of cGMP levels, water uptake in mesophyll cells, water exit from the xylem, and osmotic-dependent protoplast swelling [97–99]. Unconfirmed evidence suggests that a leucine-rich brassinosteroid receptor (AtBRI1) displaying guanylyl cyclase activity and a kinase-like structure could act as the natriuretic peptide receptor [99, 100].

PSKs are sulfated pentapeptides containing two sulfated Tyr residues and are synthesized as precursors. The ligand acts on phytosulfokine receptors (PSKR), which are leucine-rich repeat receptors displaying guanylate cyclase activity [100, 101].

The rapid alkalinization factor (RALF) and its homologues (RALF-like) are 5 kDa peptides expressed in a tissue-specific manner. Their role in roots is associated with control of hair growth by modulation of intra- and extracellular pH [101, 102]. Indirect effects, such as K+ and Ca2+ currents, are linked to proton-pump changes [102, 103]. Some of the actions attributed to RALF may also involve the participation of abscisic acid [103–105].

The meristematic region at the top of the shoot responds to many actions related to growth and differentiation of the plant. The apical meristem contains stem cells that generate signaling peptides following a genetic program influenced by the surrounding habitat. The CLE family includes several groups of peptides capable of triggering signaling pathways. CLV3 is a 13-residue peptide of this family that plays a fundamental role by promoting stem cell differentiation during meristematic development [104–106]. A battery of transgenic assays using recombinant forms of CLE peptides showed that overexpression of 10 CLE genes, like the CLV3 positive control, resulted in growth arrest at the shoot apical meristem [106, 107]. Contrary to the initial observation that fully active CLV3 is 13 residues long, a recent report provides evidence that CLV3 must contain five additional N-terminal residues that are critical for optimal activity in vitro [107–110].

The identified receptor for CLV3 is CLV1, plus the isoforms CLV2 and CRN [108–112]. These leucine-rich repeat receptors are membrane-associated and display a cytoplasmic kinase domain. Additional genes, including POL, KAPP, and WUS, likely act as downregulators of this pathway [111–113]. Senescence-controlling proteins have also been identified; BAX inhibitor-1, the evolutionarily conserved cell death suppressor found in yeast, is also present in plants. It seems that BAX inhibitor-1 acts by delaying methyl jasmonate-induced senescence [106, 113]. A similar situation is encountered at the other end (the root meristem), where CLE peptides influence root growth as well. Overexpression of CLE peptides following transformation assays was linked to root growth inhibition for CLV3, CLV9, CLV10, CLV11, and CLV13, while overexpression of CLE2, CLE4, CLE5, CLE6, CLE7, CLE18, CLE25, and CLE26 was associated with root growth induction [106, 114]. Overall, it seems that these CLE peptides keep a balance between differentiation and stem cell status.

Vascular meristematic development is controlled by a twelve-amino-acid CLE peptide designated by Ito et al. [114, 115] as tracheary element differentiation inhibitory factor (TDIF). The cognate receptor (TDR) contains leucine-rich repeat and kinase domains, as described earlier, and is located at the membrane of procambial cells.
Its putative role involves suppression of xylem vessel differentiation [115, 116].

The self-incompatibility response during fertilization of hermaphrodite plants is another example of a peptide signaling mechanism. In Brassicaceae, the pollen determinant and ligand is the S-locus pollen peptide (SP11) [116–118]. The interaction between SP11 and the S-locus receptor kinase (SRK) triggers a signaling cascade leading to inhibition of self-pollination. Structural features of the ligand and the receptor play an important role in this interaction, in such a way that interaction between noncognate ligand-receptor pairs fails to occur. Aside from SP11, additional pollen factors might be needed for the appropriate interaction between SP11 and the SRK receptor [118–120].

An additional signaling pathway involves the genesis of the stomatal pores on leaves that regulate gas exchange with the environment. In A. thaliana, this family of ligands, designated "epidermal patterning factor-like" (EPFL), contains eleven members ranging in size between 5 and 9 kDa. While EPF1 and EPF2 inhibit stomatal formation, EPFL9 stimulates it [119–121]. A recent report shows evidence that EPFL5 represses stomatal development by inhibiting meristemoid maintenance in A. thaliana [121, 122]. The membrane receptors transducing the EPFL signal are ER, ERL1, and ERL2, as described by Shpak et al. [122, 123]. Plant pores adjust their opening/closure in response to nutritional needs and humidity by changing the turgor pressure of guard cells through the intervention of CO2 and ABA, leading to an increase in Ca2+ sensitivity (for a review, see [123, 124]). Also, the number of stomatal cells varies as a function of CO2 via a light-induced mechanism. A recent review discusses the various pathways involved in stomatal development in A. thaliana [124, 125].

## 5. Perspectives

Biologically active peptides represent an excellent example of the advantage of an evolutionary process capable of selecting assortments of amino acids with antimicrobial activity. In the likely event of evolutionary changes within the target offender, new forms of peptides naturally emerge to counter the resistant infectious agent. Changing the assortment of amino acids and/or their order in the peptide are simple alternatives that have evolved successfully in living systems over millennia. Research is needed to elucidate the strategies adopted by the life forms producing AMPs to counter the defensive plots of invading germs.

Several options are available to improve the quality, selectivity, durability, and safety of AMPs. For instance, the functional and immunological properties of proteins can be improved by partial hydrolysis, and the resulting hydrolysate can be used in food systems as an additive for beverages and infant formulae, as a food texture enhancer, or as a pharmaceutical ingredient [125, 126]. Bioactive peptides can be computationally modeled, genetically manipulated, and expressed in different systems to serve a practical purpose. In addition to their microbicide activities, other intriguing functions (opioid, antithrombotic, immunomodulatory, and antihypertensive) are emerging [58, 126, 127]. These attributes provide natural alternatives with the potential to be used as food ingredients in a variety of applications [58, 127].

Another promising application of AMPs relates to their use on bacterial biofilms.
Biofilms are thin layers of microorganisms that colonize surfaces such as implants, dental plaque, ear, skin, and intestine, occasioning highly challenging infections and diseases. Several studies demonstrate the efficacy of AMPs in blocking biofilm formation. Singh et al. [127, 128] showed that lactoferrin and LL-37 (a human cathelicidin AMP) or its derivative blocked formation of P. aeruginosa biofilms at concentrations lower than those required to kill the planktonic cells, reduced the biofilm thickness of colonized P. aeruginosa by 60%, and destroyed the microcolony structures of treated biofilms. LL-37 was also found effective against both Gram-positive and Gram-negative bacteria [128, 129]. In addition, AMPs have the potential to be used in treating persister cells, which are latent phenotypic variants highly tolerant to antibiotics [129, 130].

Since membrane integrity is essential for bacterial survival regardless of the metabolic stage of the cell, and because AMPs target the membrane, they show good potential to kill persister microbes. In a recent study, a synthetic cationic peptide, (RW)NH2, was found to kill more than 99% of E. coli HM22 persister cells in planktonic culture [15, 19, 22–24, 130].

---

*Source: 102129-2015-03-01.xml*
102129-2015-03-01_102129-2015-03-01.md
39,094
Biologically Active and Antimicrobial Peptides from Plants
Carlos E. Salas; Jesus A. Badillo-Corona; Guadalupe Ramírez-Sotelo; Carmen Oliver-Salvador
BioMed Research International (2015)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/102129
102129-2015-03-01.xml
2015
# Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem **Authors:** Qiang Lu; Jun Ren; Zhiguang Wang **Journal:** Computational Intelligence and Neuroscience (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1021378 --- ## Abstract A researcher can infer mathematical expressions of functions quickly by using his professional knowledge (called Prior Knowledge). But the results he finds may be biased and restricted to his research field due to the limitations of his knowledge. In contrast, the Genetic Programming (GP) method can discover fitted mathematical expressions from a huge search space by running evolutionary algorithms, and its results can be generalized to accommodate different fields of knowledge. However, since GP has to search a huge space, it finds results rather slowly. Therefore, in this paper, a framework connecting Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the GP search space. The PFK is built on a Deep Belief Network (DBN), which can identify candidate formulas that are consistent with the features of the experimental data. By using these candidate formulas as seeds of a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on the regression of eight benchmark problems. The experimental results confirm that PFK-GP can reduce the search space and obtain a significant improvement in the quality of SR. --- ## Body ## 1. Introduction Symbolic regression (SR) is used to discover mathematical expressions of functions that fit the given data, guided by the criteria of accuracy, simplicity, and generalization. As distinct from linear or nonlinear regression, which efficiently optimizes the parameters of a prespecified model, SR seeks appropriate models and their parameters simultaneously in order to gain better insight into the dataset. Without any prior knowledge of physics, kinematics, or geometry, some natural laws described by mathematical expressions, such as Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation, can be distilled from experimental data by the Genetic Programming (GP) method on SR [1]. Since SR is an NP-hard problem, evolutionary algorithms have been proposed to find approximate solutions, such as Genetic Programming (GP) [2], Gene Expression Programming (GEP) [3], Grammatical Evolution (GE) [4, 5], Analytic Programming (AP) [6], and Fast Evolutionary Programming (FEP) [7]. Moreover, recent research on the SR problem has incorporated machine learning (ML) algorithms [8–10]. All of the above algorithms randomly generate a candidate population, but none of them can use the features of known functions to construct mathematical expressions adapted to describing the features of the given data. Therefore, these algorithms may have to explore a huge search space consisting of all possible combinations of functions and their parameters. Nevertheless, a researcher typically analyzes data, infers mathematical expressions, and obtains results according to his professional knowledge. After obtaining experimental data, he observes the data distribution and its features and analyzes them with his knowledge. Then, he tries to create some mathematical models based on natural laws.
He can obtain the values of the coefficients in these models through regression analysis or other mathematical methods, and he evaluates the formulas, which are mathematical models with these coefficient values, using various fitness functions. If the researcher finds formulas that fit the experimental data, he can transform and simplify them and then obtain the final formula that represents the data. Furthermore, his rich experience and knowledge help him reduce the search-space complexity so that he can find the best-fitting mathematical expression rapidly. As researchers use their knowledge to discover the best-fitting formulas, methods that inject domain knowledge into the process of SR problem solving have been proposed to improve performance and scalability on complex problems [11–13]. The domain knowledge, which is manually created from the researcher's intuition and experience, consists of various formulas that are prior solutions to specific problems. If the domain knowledge automatically generated fitted formulas to be used in the evolutionary search without the researcher's involvement, solving the SR problem would be quicker. A key challenge is how to build and utilize the domain knowledge just as the researcher does. In this paper, we present a framework connecting Prior Formula Knowledge and GP (PFK-GP) to address this challenge: (i) We classify a researcher's domain knowledge into a PFK Base (PFKB) and inference ability after analyzing the process by which a researcher discovers formulas from experimental data (Section 2). The PFKB provides two primary functions, classification and recognition, whose aim is to generate feature functions that can represent the features of the experimental data. (ii) To implement classification, we use the deep learning method DBN [14, 15], which, compared with shallow learning methods (Section 3.1), can classify experimental data into a particular mathematical model that is consistent with the data features. However, the classification method may lead to overfitting, because it can only categorize experimental data into known formula models, that is, those in the set of training formula models. (iii) Therefore, recognition is used to overcome the overfitting. It can extract mathematical models of functions that exhibit full or partial features of the experimental data. Three algorithms, GenerateFs, CountSamePartF, and CountSpecU (see Algorithms 2, 3, and 4), are designed to implement recognition. For example, from the dataset generated by f(x) = exp(sin(x) + x^3/(8∗10^5)), the basic functions sin, exp, and cube can be found by these three algorithms; a numerical sketch of this example follows the list. In Figure 1, the function sin shows the periodicity of the data, and exp or cube shows the growth rate of the data. Therefore, these basic functions (called feature functions) can describe some features of the dataset. (iv) The inference ability corresponds to the search ability of the evolutionary algorithm. Just as researchers infer mathematical models, GP is used to combine, transform, and verify these models. The feature functions generated by the PFKB are selected and combined into the candidate population by the algorithm randomGenP (see Algorithm 5). With this candidate population, GP converges quickly because it searches for answers in a limited space composed of feature functions.
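As a concrete illustration of the feature functions in item (iii) above, the following sketch generates samples from f(x) = exp(sin(x) + x^3/(8∗10^5)) and applies two naive diagnostics: working on log(y) (the data are positive), a cubic trend captures the growth component (exp/cube), and a dominant FFT peak in the detrended residual reveals the periodic component (sin). This is a minimal sketch with thresholds and diagnostics of our own choosing, not the paper's CountSamePartF/CountSpecU algorithms.

```python
import numpy as np

def f(x):
    # Example target from the paper: f(x) = exp(sin(x) + x^3 / (8 * 10^5))
    return np.exp(np.sin(x) + x**3 / (8.0 * 10**5))

x = np.linspace(0.0, 150.0, 4096)
y = f(x)

# Work on log(y), where the structure is additive: log y = sin(x) + x^3/8e5.
z = np.log(y)

# Naive growth check: a cubic trend explains the slow growth of log(y).
trend = np.poly1d(np.polyfit(x, z, 3))(x)
print("growth span of trend:", trend.max() - trend.min())  # ~4.2 here

# Naive periodicity check: the detrended residual should be dominated
# by a single FFT peak if a sin/cos component is present.
resid = z - trend
spec = np.abs(np.fft.rfft(resid))
k = spec[1:].argmax() + 1
freq = k / (x[-1] - x[0])               # cycles per unit x
print("dominant period ~", 1.0 / freq)  # close to 2*pi ≈ 6.28
peak_ratio = spec[k] / (np.median(spec[1:]) + 1e-12)
print("periodic component likely:", peak_ratio > 10.0)  # arbitrary threshold
```

On this dataset the recovered dominant period is close to 2π, which is exactly the kind of cue that points the search toward sin as a feature function.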
Through experiments on eight benchmark problems (Table 5, E1–E8), the results demonstrate that PFK-GP, compared with Pareto-optimization GP [16, 17], shows significant improvements in accuracy and convergence. Figure 1: The function f(x) = exp(sin(x) + x^3/(8∗10^5)). ## 2. Background ### 2.1. Definition and Representation of Mathematical Expression In this section, we define concepts related to the SR problem and show how to represent them with a BNF grammar. For the SR problem, the word "formula" is the general term for a mathematical expression that fits the given data. We define a formula model as a special mathematical model in which formulas share the same relationships and variables and differ only in their coefficient values. Relationships can be described by operators, such as algebraic operators, functions, and differential operators (http://en.wikipedia.org/wiki/Mathematical_model). Therefore, a formula model is a set whose elements are formulas. For example, the two formulas 0.1∗sin(x) + 0.7∗log(x) and 0.3∗sin(x) + 0.9∗log(x) belong to the formula model a1∗sin(x) + a2∗log(x). Data represented by different formulas of one formula model may have similar features, such as data distributions, relationships between different variables, and data change laws, because these formulas share the same relationships. To represent a formula model and its corresponding formulas, we define the following BNF grammar:
F ::= C | S
C ::= S "(" C ")" | S
S ::= B "(" AX, AX ")" | U "(" AX ")"
B ::= "+" | "-" | "∗" | "/"
U ::= "sqrt" | "log" | "tanh" | "sin" | "cos" | "exp" | "tan" | "abs" | "quart" | "cube" | "square" | ...
A ::= a1 | a2 | a3 | ... | an
X ::= x1 | x2 | x3 | ... | xm
where F is a formula model, X is a parameter set, A is a coefficient set, B is a set of binary functions, U is a set of unary functions, S is a set of atomic functions that contain no subfunctions, and C is a set of complex functions built from complex functions in C and atomic functions in S. With these definitions, any formula and its corresponding model can be expressed in this BNF grammar. For instance, the formula exp(sin(x) + x^3/(8∗10^5)) is represented by F and C, its subfunction sin(x) is represented by U, and the constants 8, 3, and 10^5 are elements of A. With these BNF expressions, a formula model can be transformed into a tree, and the tree is a candidate individual in the population of GP solving the SR problem. Every subtree of the tree is a subformula that can exhibit particular data features. A subtree that shows features of the experimental data is called a feature subtree. The more feature subtrees a tree has, the more likely it is to fit the data. How to construct trees consisting of feature subtrees is a key step of our method, implemented by the algorithm randomGenP (see Algorithm 5). ### 2.2. The Process of Researcher Analyzing Data The process by which a researcher solves SR problems is shown in Figure 2. He depends heavily on experience obtained through a long-term accumulation of study and research. After collecting experimental data, he discovers regular patterns in the data using methods of data analysis and visualization. He then constructs formula models consistent with these regular patterns according to his experience. After that, he computes the coefficient values of the formula models using appropriate regression methods and obtains formulas from the different formula models.
According to the results of evaluating these formulas, he chooses the formula that best fits the data. If the formula cannot represent the data features, he reselects a new formula model and repeats the above steps until a fitting formula is found. Figure 2: The process by which researchers study the SR problem. We think the researcher's experience and knowledge play two roles in solving the SR problem. One role is Prior Formula Knowledge (PFK), which helps a researcher quickly find fitted formulas that match the features of the experimental data. Through study and work, the researcher accumulates domain knowledge of the various characteristics of formula models. When the researcher observes experimental data, he can apply this domain knowledge to recognize and classify the data. The other role is the ability of inference and deduction, which helps the researcher combine, transform, and verify mathematical expressions. We conclude that the PFK provides two primary functions: classification and recognition. Classification. When the features of the experimental data accord with the characteristics of one formula model in the PFK, the dataset can be categorized into that model. The prerequisite for classification is that different formula models have different characteristics in the PFK Base. As shown in Figure 3, six families of curves are generated by six formula models taking different coefficient values. Curves in the same family show similar data features, while curves in different families show different data features. Therefore, we can infer that the curves (including surfaces and hypersurfaces) generated by different formula models can be classified according to their data features. Figure 3: Curves generated by six formula models with different coefficient values. Although many machine learning algorithms, such as linear regression [18], SVM [19], Boosting [20], and PCVMs [21], can be used to identify and classify data, it is difficult for these algorithms to classify such curves. That is because these algorithms depend on features extracted manually from the data, and the features of complex curves are hard to represent in a feature vector built from the researcher's experience. In contrast to these algorithms, DL can automatically extract features and performs well in recognizing complex patterns, as in images [15], speech [22], and natural language [23]. The GenerateFs algorithm (see Algorithm 2), based on the DBN, is used to classify the data. Recognition. Some formulas represent salient features of the curves generated by a formula model. For example, after observing the curve in Figure 1, a researcher can easily infer that sin or cos is one of the formulas constituting the curve, because the data show periodicity. Therefore, such formulas are called feature functions, which can be recognized or extracted by the PFK. The algorithms CountSamePartF and CountSpecU (see Algorithms 3 and 4) are built to recognize the feature functions. Recognition helps the researcher overcome the overfitting of classification results: classification can only identify formula models from the training set, whereas recognition identifies subformula models that are consistent with local data features. The ability of inference and deduction is one of the main measures for evaluating the performance of artificial intelligence methods.
In the SR problem, GP, compared with other methods such as logical reasoning, statistical learning, and genetic algorithms, is an effective method for searching fitting formulas because it can seek appropriate formula models and their coefficient values simultaneously by evolving a population of formula individuals. Therefore, in this paper, we use GP as the method for inferring and deducing formulas. To optimize GP, researchers have proposed various approaches: optimal parsimony pressure [24], Pareto front optimization [17], and its age-fitness method [25] are used to control bloat and premature convergence in GP. To reduce the space complexity of searching formulas, methods from arithmetic [26] and machine learning have been injected into GP. In this paper, with the algorithm randomGenP (see Algorithm 5) for generating the population and the method of Pareto front optimization, PFK-GP can search formula models in an appropriate space and find the right formulas quickly.
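To make the tree representation of Section 2.1 concrete before moving on, the sketch below encodes formulas as expression trees in Python; the node types mirror the BNF sets (B for binary operators, U for unary functions, A for coefficients, X for variables). This is a minimal illustration under our own naming conventions, not code from the paper.

```python
import math
from dataclasses import dataclass
from typing import Union

# Expression-tree nodes mirroring the BNF sets of Section 2.1.
@dataclass
class Bin:       # B "(" AX, AX ")"
    op: str
    left: "Node"
    right: "Node"

@dataclass
class Un:        # U "(" AX ")"
    fn: str
    arg: "Node"

@dataclass
class Coef:      # element of A (a concrete coefficient value)
    value: float

@dataclass
class Var:       # element of X
    index: int

Node = Union[Bin, Un, Coef, Var]

UNARY = {"sin": math.sin, "exp": math.exp, "cube": lambda v: v ** 3}
BINARY = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
          "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def evaluate(node: Node, x: list) -> float:
    """Recursively evaluate an expression tree at the point x."""
    if isinstance(node, Coef):
        return node.value
    if isinstance(node, Var):
        return x[node.index]
    if isinstance(node, Un):
        return UNARY[node.fn](evaluate(node.arg, x))
    return BINARY[node.op](evaluate(node.left, x), evaluate(node.right, x))

# exp(sin(x) + cube(x)/(8*10^5)) as a tree; sin(x) and the cube(...) branch
# are the kind of feature subtrees that PFK-GP seeds the population with.
tree = Un("exp", Bin("+", Un("sin", Var(0)),
                     Bin("/", Un("cube", Var(0)), Coef(8e5))))
print(evaluate(tree, [100.0]))  # f(100) = exp(sin(100) + 100^3 / 8e5)
```

In this encoding, GP crossover and mutation operate by swapping or replacing subtrees, so a feature subtree such as Un("sin", Var(0)) can be preserved as a building block across generations.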
## 3. Genetic Programming with Prior Formula Knowledge Base

### 3.1. Formula Prior Knowledge Base

The PFKB needs the ability to identify and classify a formula model F based on data features. Although the features of different formula models differ, it is difficult to extract features from the data these models generate, because different formula models can produce seemingly similar yet distinct features. Under the definitions above, the features of the functions in the set S are pairwise different, while the features of a function s∈S and a function c∈C may be similar when c contains s as a component. As shown in Figure 4, the functions sin(x), cube(x), and exp(x) belonging to S constitute members of C, such as exp(sin(x)) and sin(x)+cube(x)/(8∗10^5), and are composed into the final function exp(sin(x)+cube(x)/(8∗10^5)). Here sin(x) produces periodicity, log(x) produces a slow variation trend, and cube(x) produces a fast variation trend, so the features of the three functions are different. However, cube(x) and sin(x)+cube(x)/(8∗10^5) share similar features, because both show a fast variation trend. Shallow learning methods such as SVM can hardly identify the complex features of the functions shown in Figure 5.

Figure 4: Illustration of the DBN framework.

Figure 5: Accuracy results of SVM and DBN in classifying P1–P39.

In this paper, DBN is used to classify data into a particular formula model according to features that are automatically extracted from the data. Generally, a DBN is made up of multiple layers, each represented by a Restricted Boltzmann Machine (RBM). As an RBM is a universal approximator of discrete distributions [27], we can expect that a DBN with many RBM layers can recognize the features of data generated by a complex function, just as a Convolutional Neural Network (CNN) classifies images [28].
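To make the RBM building block concrete, here is a minimal numpy sketch of contrastive-divergence (CD-1) training for a single layer; the layer sizes, learning rate, and toy batch are illustrative assumptions rather than the paper's settings, and stacking plus supervised fine-tuning proceed as in TrainDBN (Algorithm 1).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One RBM layer: 100 visible units (e.g. a sampled curve) -> 50 hidden features.
n_vis, n_hid, lr = 100, 50, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

v0 = rng.random((40, n_vis))                  # toy batch of 40 "curves" in [0, 1)
for epoch in range(200):
    ph0 = sigmoid(v0 @ W + b_h)               # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)             # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)  # CD-1 gradient estimate
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
# The next RBM is trained on ph0 (this layer's features); after stacking, the
# whole network is fine-tuned with labels, as in TrainDBN (Algorithm 1).
```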
The process by which the DBN recognizes formulas is illustrated in Figure 4. The DBN extracts features from the data layer by layer, so the lower RBM layers can represent simple functions and the higher RBM layers can compose them into more complex functions. We use data generated by the different formula models F (see Table 7) as training samples and train the DBN on them. The model finally obtained by DBN training is the PFKB, which is aimed at identifying the formula model that can represent the features of the data. The training process is outlined in algorithm TrainDBN (Algorithm 1), which follows the same steps as in [14, 15].

Algorithm 1: TrainDBN (training a DBN to generate the PFKB).
Input: X1, Y1, X2, Y2 (X1, Y1 are training data; X2, Y2 are testing data)
Output: PFKB
(1) Initial(DL, opts) // initialize the structure of DL and the parameters opts
(2) DL = DLsetup(DBN, X1, opts) // layer-wise pretraining of DL
(3) DL = DLtrain(DBN, X1, opts) // build up and train each layer of DL
(4) nn = DLunfoldtonn(DBN, opts) // after training each layer, pass the parameters to nn
(5) PFKB = nntrain(nn, X1, Y1, opts) // fine-tune the whole deep architecture
(6) accuracy = nntest(PFKB, X2, Y2) // accuracy measures the quality of the PFKB; if it is too small, retrain after adjusting the model architecture or parameters
(7) return PFKB

Algorithm 2: GenerateFs.
Input: X, PFKB, s (X is the dataset; s is the number of formula models whose features can describe the dataset)
Output: Fs (vector of formula models used in generating the initial population of GP)
(1) Ftemp = predictModel(PFKB, X) // intermediate result of prediction with the PFKB, not yet sorted by fitness
(2) Ftemp = sortModelByFit(Ftemp) // sort the models in order of decreasing fitness
(3) for i = 1 : s
(4)   Fs(i) = Ftemp(i)
(5) end
(6) return Fs

Algorithm 3: CountSamePartF.
Input: t, Fs
Output: C (local expressions whose frequency of occurrence is at least t, sorted by frequency)
(1) C = F = ∅
(2) for each pair fi, fj in Fs
(3)   Fij = fi ∩ fj
(4)   for each cm in Fij
(5)     if cm ∈ F
(6)       change F's element cm^v to cm^(v+1) // v counts how many times cm appears
(7)     else
(8)       add cm^1 into F
(9)   end
(10) end
(11) for each cm^v in F
(12)   if v ≥ t
(13)     add cm^v into C
(14) end
(15) sort(C)
(16) return C

Algorithm 4: CountSpecU.
Input: t, Fs
Output: specU (unary functions whose frequency of occurrence is at least t, sorted by frequency)
(1) U = F = ∅
(2) for each fi in Fs
(3)   for each um in fi
(4)     if um ∈ F
(5)       change F's element um^v to um^(v+1) // v counts how many times um appears
(6)     else
(7)       add um^1 into F
(8)   end
(9) end
(10) for each um^v in F
(11)   if v ≥ t
(12)     add um^v into specU
(13) end
(14) return specU

Algorithm 5: randomGenP.
Input: Fs, C, specU, B, U, n (n is the number of individuals to generate; B and U form the candidate function library)
Output: P (population)
(1) I = Fs ∪ C
(2) for each i in I
(3)   add i into P
(4) end
(5) k = |specU|
(6) Q = specU + B + U // add the elements of specU, B, and U into the queue Q successively; elements of specU enter Q with higher multiplicity so they are drawn more often
(7) P_temp = traditionalRandomIndividual(Q, k)
(8) add P_temp into P
(9) k = n − |I| − k
(10) P_temp = traditionalRandomIndividual(B + U, k)
(11) add P_temp into P
(12) return P
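As a concrete reading of the counting in Algorithm 4, here is a small Python sketch of the CountSpecU logic; representing formula models as plain expression strings and tokenizing them with a regular expression are assumptions of this sketch (CountSamePartF in Algorithm 3 applies the same counting to the intersections fi ∩ fj).

```python
import re
from collections import Counter

U = {"sqrt", "log", "tanh", "sin", "cos", "exp", "tan",
     "abs", "quart", "cube", "square"}

def count_spec_u(Fs, t):
    """Tally unary functions across the recognized models Fs and keep those
    occurring at least t times, sorted by decreasing frequency."""
    counts = Counter(tok for f in Fs
                     for tok in re.findall(r"[a-z]+", f) if tok in U)
    return [(u, v) for u, v in counts.most_common() if v >= t]

Fs = ["sqrt(x0)/tan(x1) + sin(x1)",
      "tan(x0)/exp(x1) * (log(x2) - tan(x3))",
      "cos(x0)*sin(x1) + tan(x1)"]
print(count_spec_u(Fs, 2))   # [('tan', 4), ('sin', 2)]
```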
The PFKB changes only when the set of formula models changes. If no new formula models need to be trained in an application, the algorithm TrainDBN is not executed again; when the number of trained formula models is large enough, new formula models rarely appear and the PFKB seldom changes. In this paper, TrainDBN is performed exactly once, in order to generate the PFKB.

### 3.2. Classification and Recognition with PFKB

The problem of classifying and recognizing a formula model from data has two cases: either the data can be represented by a particular formula model from the PFKB, or it cannot. In the first case, we exploit the PFKB to identify formula models of the data by DBN classification. Based on the ordered results of DBN classification, we obtain a set of formula models Fs = {f1, …, fs} that are most similar to the features of the data. This case is handled by algorithm GenerateFs; the algorithm is fast because the PFKB has already been built by TrainDBN and s is a small integer.

In the second case, when a researcher observes laws hidden in experimental data, he often tries to find some formulas in C that are consistent with partial features of the data. We therefore propose the following two assumptions.

Assumption 1. The more formula models in the set Fs (the result of running GenerateFs) share the same subformula model pf, the more strongly pf expresses the features of the data. To compute the shared pf in Fs, we express each formula model as an expression string and seek the common parts of two models by taking the intersection of the two strings (ignoring the elements of the sets X and A). The intersection of two expressions is defined as
$$f_i \cap f_j = \{c_1, \ldots, c_k\}, \quad c_m \cap c_n = \emptyset \ (m \neq n), \quad c_n \in C, \ c_n \notin S, \quad 1 \le m, n \le k. \tag{1}$$
For example, for f1 = z + a∗cos x + tan x/(exp x + log x) and f2 = z + a∗cos x + abs(x)/(exp x + log x), we have f1 ∩ f2 = {z + a∗cos x, exp(x) + log(x)}. The method that collects the subformula models pf whose frequency of occurrence in Fs is at least the threshold t is described as algorithm CountSamePartF.

To verify Assumption 1, we use the dataset from E0 (see Table 5) as the testing set and obtain the identification result Fs = {P18, P15, P34, P11, P3} from Table 7 through algorithm GenerateFs. The intersections of the top two formulas, P18 ∩ P15, are sqrt(x0)/tan(x1) and tan(x0)/exp(x1), which are partial mathematical expressions of E0. We also use the dataset from Table 6 to test E0 and obtain the identification result Fs = {T7, T6, T8, T9, T1} through GenerateFs; here T7 ∩ T6 = {sqrt(x0)/tan(x1)} and T8 ∩ T9 = {tan(x0)/exp(x1), cos(x0)∗sin(x1)}. We find that the elements occurring more frequently in the intersection sets are more likely to express some of the data features. These two experiments illustrate that Assumption 1 is reasonable.
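A rough Python sketch of the intersection in (1): it enumerates the complete (balanced-parenthesis) subexpressions of each formula string and keeps those appearing in both. Canonicalizing coefficients (set A) and dropping atomic functions (the c ∉ S condition) are omitted here, so this is an approximation of what CountSamePartF consumes, not the authors' implementation.

```python
import re

def subexpressions(f):
    """All complete parenthesized subexpressions of f, with any function
    name directly in front of the opening parenthesis included."""
    f = f.replace(" ", "")
    subs = set()
    for i, ch in enumerate(f):
        if ch != "(":
            continue
        depth = 0
        for j in range(i, len(f)):                 # walk to the matching ')'
            depth += (f[j] == "(") - (f[j] == ")")
            if depth == 0:
                name = re.search(r"[a-z]*$", f[:i]).group()
                subs.add(name + f[i:j + 1])
                break
    return subs

f1 = "z + a*cos(x) + tan(x)/(exp(x)+log(x))"
f2 = "z + a*cos(x) + abs(x)/(exp(x)+log(x))"
print(subexpressions(f1) & subexpressions(f2))
# shared parts include 'cos(x)' and '(exp(x)+log(x))'
```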
Assumption 2. If a function u ∈ U occurs in the Fs obtained by GenerateFs and the number of its occurrences is at least the threshold t, we conclude that u expresses some local feature of the data. The functions b ∈ B, except x^y, are common functions with a high probability of occurrence in mathematical expressions, so they can hardly express special data features. Compared with B, a function u ∈ U can express distinctive data features; for instance, sin(x) indicates periodicity of the data and log(x) indicates extreme increase or decrease. The method that collects the special functions u expressing local data features is outlined as algorithm CountSpecU.

To verify Assumption 2, we again choose the dataset generated from E0 (see Table 5) as the testing data and apply the CountSpecU algorithm to compute the special u among Fs = {P18, P15, P34, P11, P3}. The result is shown in Table 1. We find that the result specU = {tan, cos, sqrt, exp, sin} (sin and cos are operators of the same kind) consists of parts of E0. Hence the u set gained by algorithm CountSpecU can express local features of the dataset.

Table 1: The result of U in Fs computed by algorithm CountSpecU (see Algorithm 4).

| u in Fs | Frequency of occurrence |
| --- | --- |
| tan | 3 |
| cos | 2 |
| sqrt | 1 |
| exp | 1 |
| log | 1 |

### 3.3. GP with Prior Formula Knowledge Base

To deal with the SR problem, GP is executed to automatically compose and evolve mathematical expressions. The process of GP is similar to the process in which a researcher transforms formula models and obtains fitting formulas based on his knowledge. Since the algorithms in the PFKB, which were designed by analyzing how a researcher infers fitted formulas, can recognize formula models that are consistent with data features, we inject the formula models recognized by the PFKB into the GP process in order to reduce the search space and speed up the discovery of right solutions.

When initializing the GP algorithm, we select candidate formulas from Fs, C, and specU, the sets gained by the above PFKB algorithms, as individuals of the GP population; the PFKB is thereby injected into population generation. Such a population helps preserve data features as much as possible and reduces the search space, because these individuals commonly have good fitness values; with it, GP converges faster and the accuracy of the SR results improves. However, it may bias the results, so some random individuals must be imported into the population. The population is created as follows. First, the elements of the sets Fs and C are inserted into the population. Then the set specU and the candidate function sets B and U are merged into a new candidate function queue Q, in which the elements of specU are weighted twice as heavily as the other elements (because B ∪ C ⊆ specU); elements of specU are thus more likely to become parts of the individuals generated by the method traditionalRandomIndividual [16], which randomly generates k individuals from a given function set. Finally, the rest of the population is created by traditionalRandomIndividual with the sets B and U. The whole process is described as algorithm randomGenP. Generally, |Fs| + |C| + |specU| < n/2, where n is the number of individuals in the population.

Furthermore, in order to enhance the effect of the PFKB during GP evolution, the method randomGenP is used to create new individuals every few generations of the evolutionary computation. Meanwhile, the Pareto front method [17] is introduced into the algorithm PFK-GP to balance the accuracy of the model against its complexity.
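Before turning to the full loop, here is a small Python sketch of the seeding bias just described (Algorithm 5); reading "twice as heavily" as literally duplicating specU in the queue, and the flat random_tree stand-in for traditionalRandomIndividual, are assumptions of this sketch.

```python
import random

def random_tree(prims):
    """Toy stand-in for traditionalRandomIndividual: a one-node 'tree'."""
    return random.choice(prims) + "(x)"

def seed_population(Fs, C, specU, B, U, n):
    pop = list(Fs) + list(C)                  # recognized models used verbatim
    Q = list(specU) * 2 + list(B) + list(U)   # doubled odds for specU primitives
    k = len(specU)
    pop += [random_tree(Q) for _ in range(k)]         # biased random individuals
    pop += [random_tree(list(B) + list(U)) for _ in range(n - len(pop))]
    return pop[:n]

pop = seed_population(["sin(x)+log(x)"], ["log(x)"], ["sin", "log"],
                      ["+", "-", "*", "/"], ["sin", "cos", "log", "exp"], 10)
print(pop)
```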
The detail of algorithm PFK-GP is shown in Algorithm 6.

Algorithm 6: PFK-GP.
Input: data, PFKB, t1, t2, B, U, n, k, g, interval
Output: F (candidate formula set)
(1) Fs = GenerateFs(data, PFKB)
(2) C = CountSamePartF(t1, Fs)
(3) specU = CountSpecU(t2, Fs)
(4) P = randomGenP(Fs, C, specU, B + U, n)
(5) while (bestFitness <= threshold && i < g)
(6)   P = crossover(P)
(7)   P = mutate(P)
(8)   Pt = ParetoOptimise(P) // prevent the formula models from becoming too complex
(9)   Pt_fitness = EvaluatePopulation(Pt)
(10)  bestFitness, F = Selectbest(Pt, Pt_fitness, k) // choose the best k individuals and the best fitness value among them
(11)  if i mod interval == 0
(12)    P1 = randomGenP(F, C, specU, B + U, n/2)
(13)    P2 = traditionalRandomIndividual(B + U, n/2)
(14)    P = P1 ∪ P2
(15)  else
(16)    P1 = traditionalRandomIndividual(B + U, n − k)
(17)    P = P1 ∪ P
(18)  end
(19)  i++
(20) end
(21) return F
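A runnable skeleton of the control flow in Algorithm 6, with the GP operators reduced to trivial stand-ins: the point is the periodic reseeding from the knowledge base every `interval` generations, which is how the PFKB keeps influencing the search. All names and the `== 0` reading of line (11) are our assumptions.

```python
import random
import types

def pfk_gp(data, seeds, ops, n, k, g, interval, threshold):
    P = ops.seed_population(seeds, n)
    best, F = float("inf"), []
    for i in range(1, g + 1):
        P = ops.mutate(ops.crossover(P))
        Pt = ops.pareto(P)                        # control model complexity
        best, F = ops.select_best(Pt, data, k)    # k elite individuals
        if best <= threshold:
            break
        if i % interval == 0:                     # periodic knowledge reseeding
            P = ops.seed_population(F, n // 2) + ops.random_individuals(n // 2)
        else:
            P = F + ops.random_individuals(n - k)
    return F

# Trivial stand-in operators, only to show the loop executing end to end.
ops = types.SimpleNamespace(
    seed_population=lambda seeds, n: list(seeds) + ["rand"] * (n - len(seeds)),
    random_individuals=lambda n: ["rand"] * n,
    crossover=lambda P: P, mutate=lambda P: P, pareto=lambda P: P,
    select_best=lambda Pt, data, k: (random.random(), Pt[:k]),
)
print(pfk_gp(None, ["sin(x)"], ops, n=10, k=3, g=20, interval=5, threshold=0.05))
```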
## 4. Experiments
In the experiments, we employ the DBN in the DeepLearnToolbox [30] to classify formula models and build the algorithm PFK-GP on top of GPTIPS [29]. The 39 formula models in Table 7 are composed of formulas from [31, 32] together with some formulas created by ourselves. The data generated by these 39 formula models are used as training data for the DBN to create the PFKB. The formula models in Table 5 are used to generate testing data for verifying the accuracy of algorithms GenerateFs and PFK-GP, and the formula models in Table 6 are devoted to validating the two algorithms CountSamePartF and CountSpecU (see Algorithms 3 and 4).

For most formula models from Tables 5, 6, and 7, we sample the parameter values with an equal step from the range [−49, 50]. For some particular formulas we sample with a special equal step from a special numerical range; for example, the value x in sqrt(x) lies in [0, 99], and the value x in log(x) ranges between 1 and 100. We create 500 groups of different parameter values for each formula model. The coefficients in these formula models are taken with an equal step from the range [−2.996, 3.0]. When all coefficients of a formula model take specific values, the formula model generates a formula, namely one sample of the formula model. We create 7500 groups of different coefficients for each formula model, so each formula model has 7500 samples, where each sample consists of 500 groups of parameter values. We take 6000 of these samples as training data and the rest as test data.
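As an illustration of this sampling scheme, here is a small numpy sketch that generates training samples from one formula model; P4 (y = z + a∗sin(x)) is used as the example, and the 50×50 coefficient grid is an illustrative stand-in for the 7500 coefficient groups described above.

```python
import numpy as np

xs = np.linspace(-49, 50, 500)          # 500 equal-step parameter values

def sample_P4(z, a):
    """One coefficient setting of P4 gives one formula, evaluated on the
    parameter grid to form one 500-dimensional DBN input sample."""
    return z + a * np.sin(xs)

coeffs = np.linspace(-2.996, 3.0, 50)   # equal-step coefficient sweep
X = np.array([sample_P4(z, a) for z in coeffs for a in coeffs])
y = np.full(len(X), 4)                  # class label: formula model "P4"
print(X.shape, y.shape)                 # (2500, 500) (2500,)
```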
We adopt DBN as the classification model and compare it with SVM as implemented in the tool libsvm [33]. The training and testing data for both algorithms originate from formula models P1–P39, and the parameter values of DBN and SVM are listed in Table 2. We take the first five formulas of the Fs generated by GenerateFs as the recognition result set; if the test formula is included in this set, the recognition is counted as correct. The recognition accuracy of DBN and SVM is shown in Figure 5: DBN classifies every kind of test data into its fitting formula model, whereas SVM correctly classifies only a few kinds. The overall average accuracy of DBN classification is 99.65%, while that of SVM is 26.72%. This result demonstrates that DBN is more suitable for recognizing data generated by mathematical expressions, because DBN automatically extracts features from the data layer by layer, which resembles the way a formula is composed from its subformulas.

Table 2: The parameter values of DBN and SVM.

| DBN parameter | Value | SVM parameter | Value |
| --- | --- | --- | --- |
| Number of DBN layers | 4 | svm_type | c-svc |
| Size of DBN hidden layers | 50 | Kernel | Gaussian |
| Number of epochs | 200 | Gamma | 0.07 |
| Batch size | 40 | Coef | 0 |
| Momentum | 0 | Cost | 1.0 |
| Alpha | 1 | Degree | 3.0 |
| activation_function | sigm | Shrinking | 1 |

We set the parameters of GP with Pareto optimization (PO-GP) [29] and of PFK-GP as shown in Table 3. For data generated by P13 (see Table 7; coefficient z is −2.098 and a is −2.998), PO-GP and PFK-GP solve the SR problem, respectively. The result is illustrated in Figure 6, where
$$\text{Train error} = \operatorname{mean}\big(\big(y_{\text{train}} - y_{\text{pred,train}}\big)^{2}\big). \tag{2}$$

Table 3: Parameter values of PO-GP and PFK-GP.

| Parameter | Value |
| --- | --- |
| Representation | GPTIPS [29] multigene syntax; number of genes: 1; maximum tree depth: 5 |
| Population size | 50 |
| Number of generations | 1000 |
| Selection | Lexicographic tournament selection |
| Tournament size | 3 |
| Crossover operator | Subtree crossover |
| Crossover probability | 0.85 |
| Mutation operator | Subtree mutation |
| Mutation probability | 0.1 |
| Reproduction probability | 0.05 |
| Fitness | (1/N) ∑ (y − ŷ)² |
| Elitism | Keep 1 best individual |

Figure 6: The evolutionary result of P13 with PO-GP and PFK-GP.

We find that, on the test data of formula model P13, PFK-GP found its best model in the first generation with higher fitness, while PO-GP did not find its best model until the 718th generation, with fitness much lower than that of PFK-GP. PFK-GP obtains the right formulas quickly because the model P13, recognized by algorithm GenerateFs, is inserted into the initial population of the evolutionary computation. Formula models whose characteristics are consistent with the data features in the PFKB can be recognized with high probability and combined into the PFK-GP population; PFK-GP then first searches the coefficients of these formula models and obtains mathematical expressions with good fitness values. Therefore, the algorithm GenerateFs speeds up PFK-GP on SR and improves the accuracy of the SR results.

To test whether PFK-GP can overcome overfitting, a dataset is created by E1, which is not among the training models of the PFKB. The two algorithms PO-GP and PFK-GP are applied to this dataset, running 100 and 1000 generations, respectively; they have similar convergence curves in Figure 8. However, PFK-GP finds results with better fitness than PO-GP, because PFK-GP searches for fitted solutions in a space that includes more functions whose data features accord with E1. Since the initial population generated by the algorithms CountSamePartF and CountSpecU in the PFKB contains subformulas of the formula models recognized by the PFKB, and these subformulas represent data features, PFK-GP can find right formulas that better fit the raw dataset.

To observe the overall performance of PFK-GP, we select six datasets as the testing set: three generated by formula models (P9, P13, P19) from Table 7, which are involved in training the DBN, and three generated by formula models (E1, E4, E6) from Table 5, which are not. The two algorithms PFK-GP and PO-GP are each executed ten times to find the right formulas for the six datasets. The six mean training errors obtained by the two algorithms are shown in Figure 9, and the averages over the six groups of mean training errors are listed in Figure 7, where PFK-GP(E) and PO-GP(E) are the average results over E1, E4, and E6, while PFK-GP(P) and PO-GP(P) are the average results over P9, P13, and P19. Based on the results in Figures 7 and 9, we conclude that the comprehensive performance of PFK-GP is better than that of PO-GP, because PFK-GP uses the method GenerateFs to find the fitting formula model directly and the methods CountSamePartF and CountSpecU to identify subformula models whose data features are consistent with the test set. The best mathematical expressions found by PFK-GP and PO-GP are listed in Table 4.
Table 4: The best mathematical expressions found by PFK-GP.

| Number | Best mathematical expression |
| --- | --- |
| E1 | y = 0.7001∗tan(x1)∗x4 − 5.049 − 0.7001∗x1∗cube(2.575∗cube(x4) − 1.001) |
| E2 | y = 7.214∗sin(x1) + 1.001∗tan(x2) |
| E3 | y = 0.25∗square(x1 + x2 − 6) − 0.2179 |
| E4 | y = 1.001 − 1.332∗(4∗x1 + log(square(x2)))/(2∗square(x2) + 5.585) |
| E5 | y = (x2 − 2.092)/tanh(square(sin(x1))) + 0.8795 |
| E6 | y = 6∗cos(x2)∗sin(x1) − 0.00444 |
| E7 | y = sin(x1) − 6∗x1 + square(x1) + 14 |
| E8 | y = log(x2) + sqrt(x1) + sin(x1) + 0.1823 |

Table 5: Test data used in PFK-GP.

| Number | Formula |
| --- | --- |
| E0 | y = −1.97 + 1.25∗sqrt(x0)/tan(x1) + tan(x0)/exp(x1) + cos(x0)∗sin(x1) |
| E1 | y = exp(2∗x1∗sin(pi∗x4)) + sin(x2∗x3) |
| E2 | y = 3.56 + 7.23∗sin(x0) + tan(x3) |
| E3 | y = (x0 − 3)∗(x3 − 3) + 2∗sin((x0 − 4)∗(x3 − 4)) |
| E4 | y = (quart(x1 − 3) + cube(x2 − 3) − (x2 − 3))/(quart(x2 − 4) + 10) |
| E5 | y = tanh(cos(2∗x0 + x3)) |
| E6 | y = 6∗sin(x0)∗cos(x3) |
| E7 | y = sin(x0) + square(x0) + 5 |
| E8 | y = sqrt(x0) + log(1.2∗x3) + sin(x0) |

Table 6: Test data of the two algorithms CountSamePartF and CountSpecU (see Algorithms 3 and 4).

| Number | Formula |
| --- | --- |
| T1 | P1 |
| T2 | P2 |
| T3 | P3 |
| T4 | P4 |
| T5 | P5 |
| T6 | y = z + a∗sqrt(x0)/tan(x1) + sin(x1) |
| T7 | y = z + a∗sqrt(x0)/tan(x1) + x1 |
| T8 | y = z + a∗tan(x0)/exp(x1) + cos(x0)∗sin(x1) + sin(x1) |
| T9 | y = z + a∗tan(x0)/exp(x1) + cos(x0)∗sin(x1) + x1 |
| T10 | y = z + a∗cos(x0)∗sin(x1) + square(x1) |

Table 7: Training data of algorithm TrainDBN (see Algorithm 1).

| Number | Formula |
| --- | --- |
| P1 | y = z + a∗x0 |
| P2 | y = z + a1∗(x3 + x1)/(a2∗x4) |
| P3 | y = z + a1∗(x3 − x0 + x1/x4)/(a2∗x4) |
| P4 | y = z + a∗sin(x) |
| P5 | y = z + a∗log(x) |
| P6 | y = z + a∗sqrt(x) |
| P7 | y = z − a1∗exp(a2∗x0) |
| P8 | y = z + a1∗sqrt(a2∗x0∗x3∗x4) |
| P9 | y = (a1∗sqrt(x0))/(a2∗log(x1)) ∗ (a3∗exp(x2))/(a4∗square(x3)) |
| P10 | y = z + a1∗(a2∗x1 + a3∗square(x2))/(a4∗cube(x3) + a5∗quart(x4)) |
| P11 | y = z + a1∗cos(a2∗x0)∗x0∗x0 |
| P12 | y = z − a1∗cos(a2∗x0)∗sin(a3∗x4) |
| P13 | y = z − a∗(tan(x0)/tan(x1))∗(tan(x2)/tan(x3)) |
| P14 | y = z − a∗(cos(x0) − tan(x1))∗tanh(x2)/sin(x3) |
| P15 | y = z − a∗(tan(x0)/exp(x1))∗(log(x2) − tan(x3)) |
| P16 | y = a∗x3 |
| P17 | y = a1∗x1 + a2∗x4 |
| P18 | y = (sqrt(x2)/tan(x5))/a |
| P19 | y = (cos(x2)/cube(x5))/a |
| P20 | y = tanh(x2)∗a∗cube(x5) + abs(x1) |
| P21 | y = tanh(abs(x2)∗a + x5∗cube(x5)) + abs(x1) |
| P22 | y = tanh(tan(x5)/a ∗ cube(x5)) + abs(x1) |
| P23 | y = tanh(cos(x2)∗a∗cube(sqrt(x2))) |
| P24 | y = tanh(cos(x2)∗a∗cube(x5)) + abs(x1) |
| P25 | y = z |
| P26 | y = z + x2 |
| P27 | y = (z + x2)/(x0∗x2) |
| P28 | y = (x0 − z1)/(x0 + x2) ∗ (x5 − z2)/(x0∗a1) |
| P29 | y = a∗sqrt(x) |
| P30 | y = a∗log(x) |
| P31 | y = a∗square(x) |
| P32 | y = a∗tanh(x) |
| P33 | y = a∗sin(x) |
| P34 | y = a∗cos(x) |
| P35 | y = a∗exp(x) |
| P36 | y = a∗cube(x) |
| P37 | y = a∗quart(x) |
| P38 | y = a∗tan(x) |
| P39 | y = a∗abs(x) |

Figure 7: Average results over the six groups of mean training errors for PO-GP and PFK-GP.

Figure 8: The SR evolutionary process of E1 with PO-GP and PFK-GP under different numbers of generations.

Figure 9: Training error results for the six datasets generated by E1, E4, E6, P9, P13, and P19, processed by PO-GP and PFK-GP, respectively.

In order to measure the agreement between experimental data and predicted data, the Training Variation Explained (TVE) is defined as
$$\text{TVE} = 1 - \frac{\operatorname{sum}\big(\big(y_{\text{train}} - y_{\text{pred,train}}\big)^{2}\big)}{\operatorname{sum}\big(\big(y_{\text{train}} - \operatorname{mean}\left(y_{\text{train}}\right)\big)^{2}\big)}. \tag{3}$$
The higher the TVE value, the more valid the predicted data.
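Written out as code, (2) is the mean squared training error and (3) is 1 − SSE/SST, that is, the coefficient of determination of the fit on the training data; the toy vectors are only for illustration.

```python
import numpy as np

def train_error(y_train, y_pred):                 # equation (2)
    return np.mean((y_train - y_pred) ** 2)

def tve(y_train, y_pred):                         # equation (3)
    sse = np.sum((y_train - y_pred) ** 2)
    sst = np.sum((y_train - np.mean(y_train)) ** 2)
    return 1.0 - sse / sst

y = np.array([1.0, 2.0, 3.0, 4.0])
yhat = np.array([1.1, 1.9, 3.2, 3.8])
print(train_error(y, yhat), tve(y, yhat))         # small error, TVE near 1
```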
PO-GP and PFK-GP are each run ten times on the datasets generated from the eight prediction models (see Table 5, E1–E8). The eight results of the different datasets processed by the two algorithms are listed in Figure 10, and the maximum, minimum, and average TVE results are listed in Figure 11. From the results in the two figures, the formulas that PFK-GP finds correspond more closely to the experimental formula models than those PO-GP finds.

Figure 10: TVE for models E1–E8, contrasting the random population of traditional PO-GP with PFK-GP.

Figure 11: TVE results of PO-GP compared with PFK-GP on the eight formula models.

## 5. Related Work

The search space of SR is huge even for rather simple basis functions [31]. In order to avoid search space that is too far from the desired output range determined by the training dataset, interval arithmetic [34] and affine arithmetic [26], which can compute bounds on a GP tree expression, have been imported into SR. Although the affine-arithmetic method generates tighter bounds on the expression than the interval-arithmetic method, its accuracy often comes at a high computational cost [35]. Moreover, the search space remains huge, because plentiful candidate expressions fit the data bounds computed by these two arithmetic methods.

In addition to the above arithmetic methods, machine learning methods have been used to compact or reduce the search space of SR. FFX technology uses a pathwise regularized learning algorithm to rapidly prune a huge set of candidate basis functions down to a compact model based on the generalized linear model (GLM); the technology outperforms GP-SR in speed and scalability owing to its simplicity and deterministic nature [8]. However, it may discard correct expressions that lie outside the space of the GLM. A hybrid deterministic GP-SR algorithm [36] was proposed to overcome this problem of missing correct expressions: it extracts candidate basis expressions using FFX and feeds them into GP-SR. The hybrid algorithm obtains its candidate expressions from a linear regression method (pathwise regularization), while our algorithm obtains them by applying the algorithms CountSamePartF, GenerateFs, and CountSpecU.

By applying the expectation-maximization (EM) framework to SR, clustered SR (CSR) can identify and infer symbolic expressions of piecewise functions from unlabelled time-series data [9]. CSR reduces the space of searching for piecewise functions because EM simultaneously searches the subfunction parameters and the latent variables that represent the function-segment information. Abstract expression grammar (AEG) SR was proposed to run the genetic algorithm (GA) process while allowing user control of the search space and of the final output formulas [37]: by understanding the given application, users specify the goal expression of SR and limit the size of the search space using abstract expression grammars. Compared with manually assigning expressions and limiting the search space as in AEG SR, the PFK methods in this paper automatically extract candidate expressions from the dataset by statistical methods and dynamically adjust the search space through GP.

Methods that inject prior or expert knowledge into evolutionary search [12, 13] have been introduced to find effective solutions that make mathematical expressions more compact and interpretable.
In these papers, the prior and expert knowledge consists of solutions, that is, mathematical expressions from previous applications, and the knowledge is merged into GP by inserting randomized pieces of the approximate solution into the population. One of the major differences between these methods and ours is how the prior or expert knowledge is created. The knowledge in [12, 13] consists of existing formula models that come from previous solutions and can be called static knowledge, whereas the knowledge in our method consists of formula models that are consistent with the data features, originates from the algorithms GenerateFs, CountSamePartF, and CountSpecU, and can be called dynamic knowledge, since it changes with the features of the test dataset. Therefore, our method can insert more suitable knowledge into GP.

## 6. Conclusion

In this paper, the PFK-GP method is proposed to deal with the problem of symbolic regression, based on an analysis of how a researcher constructs a mathematical model. The method can understand experimental data features and extract formulas consistent with them. To implement this understanding of data features, PFK-GP first creates, through the DBN method, a PFKB that can extract features from the datasets generated by the training formula models. The experimental results confirm that, compared with SVM, DBN produces better results in extracting features from formula models and classifying test data into the corresponding formula models. Then, classification and recognition are implemented to find formula models that are as similar or related to the experimental data features as possible. For classification, we exploit the algorithm GenerateFs, based on DBN, to match the experimental data with formula models in the PFKB. For recognition, we propose the algorithms CountSamePartF and CountSpecU to obtain subformula models whose local features are consistent with the experimental data. Classification helps the PFKB find formula models consistent with the global data features, while recognition helps it find subformula models consistent with local data features. Finally, the algorithm randomGenP generates the individuals of the evolutionary population according to the results of the above three algorithms. By combining and transforming these individuals, GP can automatically obtain approximate formulas that best fit the experimental data.

Compared with Pareto GP, PFK-GP, which is built on the PFKB with its classification and recognition functions, can explore formulas in the search space of the data features, so it accelerates convergence and improves the accuracy of the formulas obtained.

Obviously, the efficiency of PFK-GP depends on powerful classification and recognition methods based on the PFKB, so improving the accuracy of these two methods is an important part of future work. The two methods depend on how the data features of a formula model are represented. In this paper, two assumptions based on statistics and counting are used to obtain formulas that can express the data features; the features of a formula model are not defined explicitly, the two assumptions are not established by formal proofs, and some uncertainty therefore remains in them. New representations that can express the global or local features of formula models will be investigated in order to find formulas that better fit experimental data.
In addition, rules for transforming and inferring formulas that are similar to researchers' methods will be explored in the evolution of GP.

---
*Source: 1021378-2015-12-24.xml*
1021378-2015-12-24_1021378-2015-12-24.md
67,577
Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem
Qiang Lu; Jun Ren; Zhiguang Wang
Computational Intelligence and Neuroscience (2016)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2016/1021378
1021378-2015-12-24.xml
--- ## Abstract A researcher can infer mathematical expressions of functions quickly by using his professional knowledge (called Prior Knowledge). But the results he finds may be biased and restricted to his research field due to limitation of his knowledge. In contrast, Genetic Programming method can discover fitted mathematical expressions from the huge search space through running evolutionary algorithms. And its results can be generalized to accommodate different fields of knowledge. However, sinceGP has to search a huge space, its speed of finding the results is rather slow. Therefore, in this paper, a framework of connection between Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the space of GP searching. The PFK is built based on the Deep Belief Network (DBN) which can identify candidate formulas that are consistent with the features of experimental data. By using these candidate formulas as the seed of a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on regression of eight benchmark problems. The experimental results confirm that the PFK-GP can reduce the search space and obtain the significant improvement in the quality of SR. --- ## Body ## 1. Introduction Symbolic regression (SR) is used to discover mathematical expressions of functions that can fit the given data based on the rules of accuracy, simplicity, and generalization. As distinct from linear or nonlinear regression that efficiently optimizes the parameters in the prespecified model, SR tries to seek appropriate models and their parameters simultaneously for a purpose of getting better insights into the dataset. Without any prior knowledge of physics, kinematics, and geometry, some natural laws described by mathematical expressions, such as Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation, can be distilled from experimental data by the Genetic Programming (GP) method on SR [1].Since SR is an NP-hard problem, some evolutionary algorithms were proposed to find approximate solutions to the problem, such as Genetic Programming (GP) [2], Gene Expression Programming (GEP) [3], Grammatical Evolution (GE) [4, 5], Analytic Programming (AP) [6], and Fast Evolutionary Programming (FEP) [7]. Moreover, recent researches in SR problem have taken into account machine learning (ML) algorithms [8–10]. All of the above algorithms randomly generate candidate population. But none of them can use various features of known functions to construct mathematical expressions adapted for describing the features of given data. Therefore, these algorithms may exploit huge search space that consists of all possible combinations of functions and its parameters.Nevertheless, a researcher always analyzes data, infers mathematical expressions, and obtains results according to his professional knowledge. After getting experimental data, he observes the data distribution and their features and analyzes them with his knowledge. Then, he tries to create some mathematical models based on natural laws. He can obtain the values of coefficients in these models through regression analysis methods or other mathematical methods. And he evaluates the formulas which are mathematical models with the values by using various fitness functions. 
If the researcher finds some of the formulas that fit the experimental data, he can transform and simplify these formulas and then obtain the final formula that can represent the data. Furthermore, his rich experience and knowledge can help him to reduce the searching space complexity so that he can find the best fit mathematical expression rapidly. As the researchers use their knowledge to discover the best fitted formulas, the methods that inject domain knowledge into the process of SR problem solving have been proposed to improve performance and scalability in complex problem [11–13]. The domain knowledge, which is manually created by the researcher’s intuition and experience, is of various formulas which are prior solutions to special problems. If the domain knowledge automatically generates some fitted formulas that are used in evolutionary search without the researcher involvement, the speed of solving SR problem will be quickened. A key challenge is how to build and utilize the domain knowledge just like the researcher does.In this paper, we present a framework of connection between Prior Formula Knowledge andGP (PFK-GP) to address the challenge:(i) We classify a researcher’s domain knowledge into PFK Base (PFKB) and inference ability after analyzing the process that a research discovered formulas from experimental data (Section 2).PFKB contains two primary functions: classification and recognition. The aim of two functions is to generate feature functions which can represent the feature of experimental data.(ii) In order to implement classification, we use the deep learning methodDBN [14, 15] which, compared with other shallow learning methods (Section 3.1), can classify experimental data into a special mathematical model that is consistent with data features. However, the classification method may lead to overfitting because the method can only categorize experimental data into known formula models which come from the set of training formula models.(iii) Therefore, recognition is used to overcome the overfitting. It can extract mathematical models of functions that can show full or partial features of experimental data. Three algorithmsGenerateFs,CountSamePartF, andCountSpecU (see Algorithms 2, 3, and 4) are designed to implement recognition. For example, from the dataset generated by f(x)=exp⁡(sin⁡(x)+x3/(8∗105)), the basic functions sin, exp, and cube can be found by the above three algorithms. In Figure 1, the function sin shows the periodicity of data, and exp or cube shows the growth rate of data. Therefore, these basic functions (called feature functions) can describe some features of the dataset.(iv) The inference ability is concluded to the searching ability of evolutionary algorithm. As researches infer mathematical models,GP is used to combine, transform, and verify these models. These feature functions that are generated byPFKB are selected to be combined into the candidate population in the light of algorithmrandomGenP (see Algorithm 5). With the candidate population,GP can get convergent result quickly because it searches answers in a limit space which is composed of various feature functions. Through experiment on eight benchmark problems (Table 5  E1–E8), the results demonstrate thatPFK-GP, compared with Pareto optimizationGP [16, 17], shows the significant improvement in accuracy and convergence.Figure 1 The functionf(x)=exp⁡(sin⁡(x)+x3/(8∗105)). ## 2. Background ### 2.1. 
Definition and Representation of Mathematical Expression In this section, we will define concepts about SR problem and show how to represent these concepts by applyingBNF expression. For SR problem, the word “formula” is the general term which describes mathematical expression that fits the given data. We define a formula model is a special mathematical model in which formulas have the same relationships and variables except for different coefficient values. Relationships can be described by operators, such as algebraic operators, functions, and differential operators (http://en.wikipedia.org/wiki/Mathematical_model). Therefore, a formula model is a set where each element is a formula. For example, the two formulas 0.1∗sin⁡x+0.7∗log⁡(x) and 0.3∗sin⁡x+0.9∗log⁡(x) belong to the formula model a1∗sin⁡x+a2∗log⁡(x). Data that are represented by different formulas in one formula model may have similar features which are data distributions, data relationships between different variables, data change laws, and so on, because these formulas have the same relationships.In order to represent a formula model and its corresponding formulas, we define the followingBNF expressions:F≔C∣S,C≔S"("C")"∣S,S≔B”("AX,AX”)"∣U”("AX”)",B≔"+"∣"-"∣"∗"∣"/",U≔”sqrt”∣”log”∣”tanh”∣”sin”∣”cos”∣”exp”∣”tan”∣”abs”∣”quart” ∣ ”cube”∣”square”⋯,A≔a1∣a2∣a3∣⋯∣an,X≔x1∣x2∣x3∣⋯∣xm,whereF is a formula model. X is a parameter set. A indicates a coefficient set. B is a set of binary functions, while U is a set of unary functions. S is a set of atomic functions which does not contain any subfunctions. C is a set of complex functions which contains complex functions in C and atomic functions in S. With the above definitions, any formulas and its corresponding model can be shown by theseBNF expressions. For instance, the formula exp⁡(sin⁡(x)+x3/(8∗105)) is represented by F and C, and its subfunction sin⁡(x) is represented by U. The constants 8, 3, and 105 are shown by elements in A. With these BNF expressions, a formula model can be transformed into one tree. And the tree is a candidate individual in population ofGP solving SR problem. Every subtree in the tree is a subformula which can show special data features. A subtree that shows features of experimental data is called feature-subtree. If a tree has more feature-subtrees, the tree is more likely to fit the data. How to construct the tree consisting of feature-subtrees is a key step in our method which is implemented by the algorithmrandomGenP (see Algorithm 5). ### 2.2. The Process of Researcher Analyzing Data The process that a researcher tries to solve SR problems is shown in Figure2. He depends heavily on his experience which is obtained through a long-term accumulation of study and research. After a researcher collected experimental data, he discovers regular patterns from data by using the methods of data analysis and visualization. He then constructs formula models which were consistent with these regular patterns according to his experiences. After that, he computes the coefficient values in formula models by using appropriate regression methods and obtains some formulas from different formula models. According to results of evaluating these formulas, he chooses the formula that is most fitted to the data. 
If the formula cannot represent data features, he needs to reselect a new formula model and do the above steps until one fitting formula is found.Figure 2 The process that researchers study the SR problem.We think the researcher’s experience and knowledge have two roles in processing SR problem. One role is Prior Formula Knowledge (PFK) which can help a researcher to quickly find fitted formulas that match experimental data features. Through study and work, the researcher accumulates his domain knowledge of various characteristics of formula model. When the researcher observes experimental data, he can apply his domain knowledge to recognize and classify the data. The other is the ability of inference and deduction which can help the researcher to combine, transform, and verify mathematical expression. We conclude that the PFK contains two primary functions: classification and recognition.Classification. when experimental data features are in accord with characteristics of one formula model in PFK, the dataset can be categorized into the model. The prerequisite of classification is that different formula models have different characteristics in PFK Base. As shown in Figure 3, six families of curves are generated by six formula models taking different coefficient values. The curves in the same family show similar data features while the curves in different families show different data features. Therefore, we can infer that the curves (including surfaces and hypersurfaces) generated by different formula models can be classified according to their data features.Figure 3 Curves are generated by six formula models with different coefficient value.Although many machine learning algorithms such as linear regression [18], SVM [19], Boosting [20], and PCVMs [21] can be used to identify and classify data, it is difficult for these algorithms to classify these curves. That is because these algorithms depend on features that are extracted manually from data, while these features from different complex curves are difficult to be represented by a feature vector which is built based on the researcher’s experiences. In contrast to these algorithms, DL can automatically extract features and have a good performance for the recognition of complex curves, such as image [15], speech [22], and natural language [23]. TheGenerateFs algorithm (see Algorithm 2) based onDBN is shown to classify the data.Recognition. Some formulas can represent remark features of curves generated by formula model. For example, after observing the curve in Figure 1, a researcher can easily infer that the formula sin or cos is one of formulas that constitute the curve because data in curve show periodicity. Therefore, these formulas are called feature functions that can be recognized or extracted by PFK. AlgorithmsCountSamePartF andCountSpecU (see Algorithms 3 and 4) are built to recognize the feature functions.Recognition can help the researcher overcome overfitting of results that are generated by classification because classification can help researcher to only identify formula models from training set while recognition can help the researcher identify subformula models that are consistent with local data features.The ability of inference and deduction is one of main measurements for evaluating performance of artificial intelligence methods. 
The ability of inference and deduction is one of the main measures for evaluating the performance of artificial intelligence methods. For the SR problem, GP, compared with other methods such as logical reasoning, statistical learning, and the genetic algorithm, is a revolutionary method of searching for fitting formulas, because it can seek appropriate formula models and their coefficient values simultaneously by evolving a population of formula individuals. Therefore, in this paper, we use GP as the method of inferring and deducing formulas.

To improve GP, researchers have proposed various approaches: optimal parsimony pressure [24], Pareto front optimization [17], and its age-fitness variant [25] are used to control bloat and premature convergence in GP. In order to reduce the space complexity of searching for formulas, methods of arithmetic [26] and machine learning have been injected into GP. In this paper, with the population-generating algorithm randomGenP (see Algorithm 5) and the method of Pareto front optimization, PFK-GP can search for the formula model in the appropriate space and can find the right formulas quickly.
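For reference, the Pareto-front idea used here to balance accuracy against model complexity can be sketched as a nondominated filter over (error, complexity) pairs; this is a generic illustration, not the implementation of [17], and the sample pool is invented.

```python
def pareto_front(candidates):
    """Return the candidates not dominated in (error, complexity);
    lower is better on both axes. Each candidate is (error, complexity, formula)."""
    front = []
    for e, c, f in candidates:
        dominated = any(e2 <= e and c2 <= c and (e2 < e or c2 < c)
                        for e2, c2, _ in candidates)
        if not dominated:
            front.append((e, c, f))
    return front

pool = [(0.9, 3, "a1*x1"), (0.2, 11, "a1*sin(x1)+a2*log(x1)"),
        (0.5, 5, "a1*sin(x1)"), (0.6, 9, "a1*x1+a2*x1")]
print(pareto_front(pool))  # keeps the trade-off curve, drops dominated formulas
```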
## 3. Genetic Programming with Prior Formula Knowledge Base

### 3.1. Formula Prior Knowledge Base

The PFKB needs the ability to identify and classify the formula model F based on data features. Although the features of different formula models differ, it is difficult to extract features from the data they generate, because different formula models can present seemingly similar but actually different features. Under the above definitions of the formula model, the features of the functions in set S are different. The features of a function s ∈ S and a function c ∈ C may be similar if c is a parameter of s. As shown in Figure 4, the functions sin(x), cube(x), and exp(x), belonging to S, constitute functions in C, such as exp(sin(x)) and sin(x) + cube(x)/(8∗10^5), and are shaped into the final function exp(sin(x) + cube(x)/(8∗10^5)). sin(x) shows periodicity in the results; log(x) shows a slow variation trend; cube(x) shows a high variation trend. So the features of the three functions are different. However, there are similar features between cube(x) and sin(x) + cube(x)/(8∗10^5), because both functions have a high variation trend. Shallow learning methods such as SVM can hardly identify the complex features of the functions shown in Figure 5.

Figure 4: Illustration of the DBN framework.

Figure 5: Accuracy of SVM and DBN classifying P1–P39.

In this paper, a DBN is used to classify data into a particular formula model according to features automatically extracted from the data. Generally, a DBN is made up of multiple layers, each represented by a Restricted Boltzmann Machine (RBM). As the RBM is a universal approximator of discrete models [27], we can expect that a DBN with many RBM layers can recognize the features of data generated by complex functions, just as a Convolutional Neural Network (CNN) classifies images [28].
The process of the DBN recognizing formulas is illustrated in Figure 4. The DBN extracts features from the data layer by layer, so the lower RBM layers can represent simple functions and the higher RBM layers can transform the simple functions into more complex functions.

We use the data generated by the different formula models F (see Table 7) as training samples, and the DBN is trained on these samples. The model finally obtained by the DBN training methods is the PFKB, which is aimed at identifying the formula model that can represent the features of the data. The process of DBN training is outlined in the algorithm TrainDBN (Algorithm 1), which uses the same steps as in [14, 15].

Algorithm 1: TrainDBN: training a DBN to generate the PFKB.

```text
Input:  X1, Y1, X2, Y2   (X1, Y1: training data; X2, Y2: testing data)
Output: PFKB
(1) Initial(DL, opts)                // initialize the structure of DL and the parameters opts
(2) DL = DLsetup(DBN, X1, opts)      // layer-wise pre-training of DL
(3) DL = DLtrain(DBN, X1, opts)      // build up and train each layer of DL
(4) nn = DLunfoldtonn(DBN, opts)     // after training each layer, pass the parameters to nn
(5) PFKB = nntrain(nn, X1, Y1, opts) // fine-tune the whole deep architecture
(6) accuracy = nntest(PFKB, X2, Y2)  // accuracy measures the quality of the PFKB; if it is too low,
                                     // retrain after adjusting the model architecture or parameters
(7) return PFKB
```

Algorithm 2: GenerateFs.

```text
Input:  X, PFKB, s   (X: the dataset; s: the number of formula models whose features can show the dataset)
Output: Fs  (formula-model vector used in generating the initial population of GP)
(1) Ftemp = predictModel(PFKB, X)   // intermediate data: the raw prediction of the PFKB, not yet sorted by fitness
(2) Ftemp = sortModelByFit(Ftemp)   // sort the models in order of decreasing fitness
(3) for i = 1 : s
(4)     Fs(i) = Ftemp(i)
(5) end
(6) return Fs
```

Algorithm 3: CountSamePartF.

```text
Input:  t, Fs
Output: C  (local expressions whose frequency of occurrence is larger than t, sorted by frequency)
(1)  C = F = ∅
(2)  for each pair fi, fj in Fs
(3)      Fij = fi ∩ fj
(4)      for each cm in Fij
(5)          if cm ∈ F
(6)              change F's element cm_v to cm_{v+1}   // v counts how many times cm appears
(7)          else
(8)              add cm_1 into F
(9)      end
(10) end
(11) for each cm_v in F
(12)     if v ≥ t
(13)         add cm_v into C
(14) end
(15) sort(C)
(16) return C
```

Algorithm 4: CountSpecU.

```text
Input:  t, Fs
Output: specU  (special unary functions whose frequency of occurrence is larger than t, sorted by frequency)
(1)  U = F = ∅
(2)  for each fi in Fs
(3)      for each um in fi
(4)          if um ∈ F
(5)              change F's element um_v to um_{v+1}   // v counts how many times um appears
(6)          else
(7)              add um_1 into F
(8)      end
(9)  end
(10) for each um_v in F
(11)     if v ≥ t
(12)         add um_v into specU
(13) end
(14) return specU
```

Algorithm 5: randomGenP.

```text
Input:  Fs, C, specU, B, U, n   (n: the number of individuals to generate; B and U: the candidate function library)
Output: P  (population)
(1)  I = Fs ∪ C
(2)  for each i in I
(3)      add i into P
(4)  end
(5)  k = |specU|
(6)  Q = specU + B + U   // append the elements of specU, B, and U to the queue Q in turn
(7)  P_temp = traditionalRandomIndividual(Q, k)
(8)  add P_temp into P
(9)  k = n - |I| - k
(10) P_temp = traditionalRandomIndividual(B + U, k)
(11) add P_temp into P
(12) return P
```
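The counting logic shared by Algorithms 3 and 4 reduces to tallying occurrences across the recognized models and keeping those at or above a threshold. A minimal Python sketch of the CountSpecU side, with invented inputs, follows.

```python
from collections import Counter
import re

def count_spec_u(formulas, unary_ops, t):
    """Tally unary-function occurrences over all formulas in Fs and return
    those whose frequency reaches the threshold t, most frequent first."""
    counts = Counter()
    for f in formulas:
        for token in re.findall(r"[a-z]+", f):
            if token in unary_ops:
                counts[token] += 1
    return [(u, v) for u, v in counts.most_common() if v >= t]

Fs = ["sqrt(x0)/tan(x1)", "tan(x0)/exp(x1)", "cos(x0)*sin(x1)+tan(x2)"]
print(count_spec_u(Fs, {"sqrt", "tan", "exp", "sin", "cos", "log"}, t=1))
# [('tan', 3), ('sqrt', 1), ('exp', 1), ('cos', 1), ('sin', 1)]
```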
The PFKB changes only with the formula models. If no new formula models need to be trained in an application, the algorithm TrainDBN is not executed. When the number of trained formula models is large enough, few new formula models will appear, and the PFKB will seldom change. In this paper, TrainDBN is performed exactly once in order to generate the PFKB.

### 3.2. Classification and Recognition with the PFKB

The problem of how to classify and recognize a formula model from data should be considered from two aspects: either the data can be represented by a particular formula model from the PFKB, or they cannot. In the first case, we exploit the PFKB to identify formula models of the data by DBN classification. Based on the ordered results of DBN classification, we obtain a set of formula models Fs = {f1, …, fs} that are most similar to the features of the data. The process for this case is outlined in the algorithm GenerateFs. The algorithm is fast because the PFKB has already been built by TrainDBN and s is a small integer.

In the second case, when a researcher observes laws hidden in experimental data, he often tries to find formulas in C that are consistent with partial features of the data. Therefore, we propose the following two assumptions.

Assumption 1. The more formula models f share the same subformula model pf in the set Fs returned by GenerateFs, the more strongly pf can express features of the data. In order to compute the shared pf in Fs, we express each formula model as an expression string and seek the common parts by intersecting two strings (ignoring the elements of the sets X and A). The intersection of two expressions is defined as follows:

(1) fi ∩ fj = {c1, …, ck}, where cm ∩ cn = ∅ for m ≠ n, cn ∈ C, cn ∉ S, and 1 ≤ m, n ≤ k, 1 ≤ i, j ≤ k.

For example, for f1 = z + a∗cos(x) + tan(x)/(exp(x) + log(x)) and f2 = z + a∗cos(x) + abs(x)/(exp(x) + log(x)), we have f1 ∩ f2 = {z + a∗cos(x), exp(x) + log(x)}. The method that obtains each pf whose frequency of occurrence in Fs is larger than a threshold t is described as the algorithm CountSamePartF.
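One crude way to approximate the intersection fi ∩ fj of definition (1) is to enumerate the balanced parenthesized substrings of the two expression strings and take the common ones. The sketch below works under that assumption and is an illustrative simplification, not the paper's exact procedure (coefficients are taken as already stripped).

```python
def subexpressions(expr):
    """Collect balanced-parenthesis substrings of a formula string as a crude
    stand-in for its subformula models."""
    subs, stack = set(), []
    for i, ch in enumerate(expr):
        if ch == "(":
            stack.append(i)
        elif ch == ")" and stack:
            j = stack.pop()
            # back up over a preceding function name, if any
            while j > 0 and expr[j - 1].isalnum():
                j -= 1
            subs.add(expr[j:i + 1])
    return subs

f1 = "z+a*cos(x)+tan(x)/(exp(x)+log(x))"
f2 = "z+a*cos(x)+abs(x)/(exp(x)+log(x))"
print(subexpressions(f1) & subexpressions(f2))
# common parts include cos(x) and (exp(x)+log(x)), echoing f1 ∩ f2 in the text
```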
To verify Assumption 1, we use the dataset from E0 (see Table 5) as the testing set and obtain the identification results Fs = {P18, P15, P34, P11, P3} from Table 7 through the algorithm GenerateFs. The intersections of the top two formulas, P18 ∩ P15, are sqrt(x0)/tan(x1) and tan(x0)/exp(x1), which are partial mathematical expressions of E0. We also use the dataset from Table 6 to test E0 and obtain the identification results Fs = {T7, T6, T8, T9, T1} through the GenerateFs algorithm. The intersections are T7 ∩ T6 = {sqrt(x0)/tan(x1)} and T8 ∩ T9 = {tan(x0)/exp(x1), cos(x0)∗sin(x1)}. We find that the elements occurring more frequently in the intersection sets are more likely to express some of the data features. These two experiments illustrate that Assumption 1 is rational.

Assumption 2. If a function u ∈ U appears in the Fs obtained by GenerateFs and the number of occurrences of the same u is larger than a threshold t, we conclude that u can show some local data features. The functions b ∈ B, except x^y, are common functions with a high probability of occurrence in mathematical expressions; it is therefore difficult for them to express special data features. Compared with B, a function u ∈ U can show obvious data features: for instance, sin(x) expresses the periodicity of data, and log(x) expresses extreme increase or decrease. The method that obtains the special functions u showing local data features is outlined as the algorithm CountSpecU.

To verify Assumption 2, we again choose the dataset generated from E0 (see Table 5) as the testing data and apply the CountSpecU algorithm to compute the special u among Fs = {P18, P15, P34, P11, P3}. The result of the algorithm is shown in Table 1. We find that the result specU = {tan, cos, sqrt, exp, sin} (sin and cos being operators of the same kind) is part of E0. Hence, the u set gained by the algorithm CountSpecU can show local features of the dataset.

Table 1: The result of U in Fs computed by the algorithm CountSpecU (see Algorithm 4).

| u in Fs | Frequency of occurrence |
| --- | --- |
| tan | 3 |
| cos | 2 |
| sqrt | 1 |
| exp | 1 |
| log | 1 |

### 3.3. GP with the Prior Formula Knowledge Base

In order to deal with the SR problem, GP is executed to automatically compose and evolve mathematical expressions. The process of GP is similar to the process by which a researcher transforms formula models and obtains fitting formulas based on his knowledge. Since the algorithms of the PFKB, which is created by analyzing how a researcher infers fitted formulas, can recognize formula models consistent with the data features, we feed the formula models recognized by the PFKB into the GP process in order to reduce the search space and increase the speed of discovering right solutions.

When initializing the GP algorithm, we select candidate formulas from Fs, C, and specU as individuals of the GP population. The sets Fs, C, and specU are gained by the above algorithms of the PFKB, so the PFKB is injected into the population-generation process. This population helps preserve data features as much as possible and reduces the search space, because these individuals commonly have good fitness values. With this population, the GP algorithm can speed up convergence and improve the accuracy of the SR results. However, it may bias the results; to overcome this problem, some random individuals must be imported into the population. The process of creating the population is as follows.

First, the elements of the sets Fs and C are inserted into the population. Then, the set specU and the candidate function sets B and U are merged into a new candidate function queue Q, in which the number of elements from specU is twice that of the other elements, because B ∪ C ⊆ specU. The elements of specU are thus more likely to become parts of individuals in the population after applying the method traditionalRandomIndividual [16], which randomly generates k individuals from the given function set. Finally, the rest of the individuals of the population are created by traditionalRandomIndividual with the sets B and U. The population-generation process is described as the algorithm randomGenP.

Generally, |Fs| + |C| + |specU| < n/2, where n is the number of individuals in the population. Furthermore, in order to enhance the effect of the PFKB during GP evolution, the method randomGenP is used to create new individuals every few generations of the evolutionary computation. Meanwhile, the method of the Pareto front [17] is introduced into the algorithm PFK-GP to balance the accuracy against the complexity of the model. The details of the algorithm PFK-GP are shown in Algorithm 6.

Algorithm 6: PFK-GP.
```text
Input:  data, PFKB, t1, t2, B, U, n, k, g, interval
Output: F  (candidate formulas set)
(1)  Fs = GenerateFs(data, PFKB)
(2)  C = CountSamePartF(t1, Fs)
(3)  specU = CountSpecU(t2, Fs)
(4)  P = randomGenP(Fs, C, specU, B + U, n)
(5)  while (bestFitness <= threshold && i < g)
(6)      P = crossover(P)
(7)      P = mutate(P)
(8)      Pt = ParetoOptimise(P)                           // prevent the formula models from becoming too complex
(9)      Pt_fitness = EvaluatePopulation(Pt)
(10)     bestFitness, F = Selectbest(Pt, Pt_fitness, k)   // choose the best k individuals and their best fitness value
(11)     if i mod interval
(12)         P1 = randomGenP(F, C, specU, B + U, n/2)
(13)         P2 = traditionalRandomIndividual(B + U, n/2)
(14)         P = P1 ∪ P2
(15)     else
(16)         P1 = traditionalRandomIndividual(B + U, n - k)
(17)         P = P1 ∪ P
(18)     end
(19)     i++
(20) end
(21) return F
```
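To convey the flavor of this loop without reproducing GPTIPS, the runnable toy below searches only the coefficients of one "recognized" formula model by an elitist mutate-and-reselect cycle; the dataset, the seeding scheme, and the absence of tree crossover and Pareto selection are all simplifying assumptions of this sketch.

```python
import math
import random

random.seed(0)

# Toy data from a hidden formula; the coefficient search below mimics how
# PFK-GP first tunes coefficients of a PFKB-recognized model (an assumption).
xs = [i / 10 for i in range(1, 60)]
ys = [0.3 * math.sin(x) + 0.9 * math.log(x) for x in xs]

def error(ind):
    """Mean squared error of the formula a1*sin(x) + a2*log(x) on the data."""
    a1, a2 = ind
    return sum((a1 * math.sin(x) + a2 * math.log(x) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
best = pop[:10]
for gen in range(200):                        # the evolutionary loop
    pop.sort(key=error)                       # fitness evaluation + selection
    best = pop[:10]
    if error(best[0]) < 1e-8:
        break
    # "mutation": perturb the survivors' coefficients; re-seed the rest randomly
    pop = (best
           + [(a1 + random.gauss(0, 0.1), a2 + random.gauss(0, 0.1)) for a1, a2 in best]
           + [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)])

print(best[0], error(best[0]))                # lands near the true (0.3, 0.9)
```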
## 4. Experiments
In the experiments, we employ the DBN in the DeepLearnToolbox [30] to classify formula models and build the algorithm PFK-GP on top of GPTIPS [29]. The 39 formula models in Table 7 are composed of formulas from [31, 32] and some formulas created by ourselves. The data generated by these 39 formula models are used as training data for the DBN algorithm to create the PFKB. The formula models in Table 5 are used to generate the testing data for verifying the accuracy of the algorithms GenerateFs and PFK-GP. The formula models in Table 6 are devoted to validating the two algorithms CountSamePartF and CountSpecU (see Algorithms 3 and 4).

For most formula models from Tables 5, 6, and 7, we sample the parameter values with an equal step from the range [−49, 50]. For some particular formulas, we sample with a special equal step from a special numerical range: for example, the value x in sqrt(x) is in the range [0, 99], and the value x in log(x) ranges between 1 and 100. We create 500 groups of different parameter values for each formula model. The coefficients of these formula models are fetched with an equal step from the range [−2.996, 3.0]. When all coefficients of a formula model take specific values, the formula model generates a formula, namely, a sample of the formula model. We create 7500 groups of different coefficients for each formula model, so each formula model has 7500 samples, where each sample has 500 groups of different parameter values. We take 6000 of these samples as training data and the others as test data.

We adopt the DBN as the classification model and compare it with SVM as implemented by the tool libsvm [33]. The training and testing data for the two algorithms originate from the formula models P1–P39. The parameter values of the DBN and SVM are given in Table 2. We take the first five formulas from the Fs generated by GenerateFs as the recognition result set; if the test formula is included in the set, we consider the recognition result correct. The recognition accuracy of DBN and SVM is shown in Figure 5. The DBN method classifies all kinds of test data into their fitted formula models, whereas the SVM method can only correctly classify several kinds of test data. The overall average accuracy of DBN classification is 99.65%, while the accuracy of SVM is 26.72%. The result demonstrates that the DBN is more suitable for recognizing data generated by mathematical expressions, because the DBN automatically extracts features from the data, layer by layer, in a way similar to the composition of a formula from its subformulas.

Table 2: The parameter values of the DBN and SVM algorithms.

| DBN parameter | Value | SVM parameter | Value |
| --- | --- | --- | --- |
| Number of DBN layers | 4 | svm_type | c-svc |
| Size of DBN hidden nodes | 50 | Kernel | Gaussian |
| Number of epochs | 200 | Gamma | 0.07 |
| Batch size | 40 | Coef | 0 |
| Momentum | 0 | Cost | 1.0 |
| Alpha | 1 | Degree | 3.0 |
| activation_function | sigm | Shrinking | 1 |
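The sampling scheme just described can be sketched as follows; the helper name sample_model and the use of random coefficient draws (the paper sweeps the coefficient range with an equal step) are assumptions made to keep the demo short.

```python
import math
import random

random.seed(1)

def sample_model(formula, n_coeff_sets, n_points=500):
    """Generate samples of one formula model: each coefficient binding yields
    one formula, evaluated on an equal-step grid of parameter values."""
    # equal-step parameter grid on [-49, 50], as in the text
    xs = [-49 + k * (99 / (n_points - 1)) for k in range(n_points)]
    samples = []
    for _ in range(n_coeff_sets):
        a = random.uniform(-2.996, 3.0)   # coefficient range from the text
        z = random.uniform(-2.996, 3.0)
        samples.append([formula(a, z, x) for x in xs])
    return samples

# e.g. formula model P4: y = z + a*sin(x); small sizes for the demo
p4 = sample_model(lambda a, z, x: z + a * math.sin(x), n_coeff_sets=10)
print(len(p4), len(p4[0]))  # 10 samples, each a 500-point curve
```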
We set the parameters of GP with Pareto optimization (PO-GP) [29] and PFK-GP as shown in Table 3. For the data generated by P13 (see Table 7; coefficient z is −2.098 and a is −2.998), PO-GP and PFK-GP deal with the SR problem, respectively. The result is illustrated in Figure 6, where

(2) Train error = mean((y_train − y_pred_train)^2).

Table 3: Parameter values of GP and PFK-GP.

| Parameter | Value |
| --- | --- |
| Representation | GPTIPS [29] multigene syntax (number of genes: 1; maximum tree depth: 5) |
| Population size | 50 |
| Number of generations | 1000 |
| Selection | Lexicographic tournament selection (tournament size: 3) |
| Crossover operator | Subtree crossover (crossover probability: 0.85) |
| Mutation operator | Subtree mutation (mutation probability: 0.1) |
| Reproduction probability | 0.05 |
| Fitness | (1/N)·Σ(y − ŷ)² |
| Elitism | Keep 1 best individual |

Figure 6: The evolutionary result of P13 with PO-GP and PFK-GP.

We find that, when processing the test data of the formula model P13, PFK-GP found its best model in the first generation with higher fitness, while PO-GP did not find its best model until the 718th generation, with a fitness much lower than that of PFK-GP. PFK-GP obtains the right formulas quickly because the model P13, recognized by the algorithm GenerateFs, is inserted into the initial population of the evolutionary computation. Formula models whose characteristics are consistent with data features in the PFKB can be recognized with high probability and combined into the population of PFK-GP. PFK-GP can first search the coefficients of these formula models and obtain mathematical expressions with good fitness values. Therefore, the algorithm GenerateFs can speed up the process of PFK-GP dealing with SR and improve the accuracy of the SR results.

In order to test whether PFK-GP can overcome overfitting, a dataset is created by E1, which does not exist among the training models of the PFKB. The two algorithms PO-GP and PFK-GP are applied to the dataset, running 100 and 1000 generations, respectively; they have similar convergence curves in Figure 8. However, PFK-GP finds results with better fitness than PO-GP, because PFK-GP searches for the fitted solution in a space that includes more functions whose data features accord with E1. Since the initial population, generated with the algorithms CountSamePartF and CountSpecU of the PFKB, contains subformulas of the formula models recognized by the PFKB and represents their data features, PFK-GP can find right formulas that fit the raw dataset better.

In order to observe the overall performance of PFK-GP, we select six datasets as the testing set. Three of them, generated by the formula models P9, P13, and P19 from Table 7, are involved in the process of training the DBN, while the other three, generated by the formula models E1, E4, and E6 from Table 5, are not. The two algorithms PFK-GP and PO-GP are each executed ten times to obtain the right formulas from the six different datasets. The six mean-training-error results of the two algorithms are shown in Figure 9, and the average results over the six groups of mean training errors are listed in Figure 7. PFK-GP(E) and PO-GP(E) are the average results on E1, E4, and E6, while PFK-GP(P) and PO-GP(P) are the average results on P9, P13, and P19. Based on the results in Figures 7 and 9, we conclude that the comprehensive performance of PFK-GP is better than that of PO-GP, because PFK-GP utilizes the method GenerateFs to find the fitted formula model directly and the methods CountSamePartF and CountSpecU to identify subformula models whose data features are consistent with the test set. The best mathematical expressions found by PFK-GP and PO-GP are listed in Table 4.
Table 4: The best mathematical expressions found by PFK-GP.

| Number | Best mathematical expression |
| --- | --- |
| E1 | y = 0.7001∗tan(x1)∗(x4 − 5.049) − 0.7001∗x1∗cube(2.575∗cube(x4) − 1.001) |
| E2 | y = 7.214∗sin(x1) + 1.001∗tan(x2) |
| E3 | y = 0.25∗square(x1 + x2 − 6) − 0.2179 |
| E4 | y = 1.001 − 1.332∗(4∗x1 + log(square(x2)))/(2∗square(x2) + 5.585) |
| E5 | y = (x2 − 2.092)/(tanh(square(sin(x1))) + 0.8795) |
| E6 | y = 6∗cos(x2)∗sin(x1) − 0.00444 |
| E7 | y = sin(x1) − 6∗x1 + square(x1) + 14 |
| E8 | y = log(x2) + sqrt(x1) + sin(x1) + 0.1823 |

Table 5: Test data used in PFK-GP.

| Number | Formula |
| --- | --- |
| E0 | y = −1.97 + 1.25∗sqrt(x0)/tan(x1) + tan(x0)/exp(x1) + cos(x0)∗sin(x1) |
| E1 | y = exp(2∗x1)∗sin(pi∗x4) + sin(x2∗x3) |
| E2 | y = 3.56 + 7.23∗sin(x0) + tan(x3) |
| E3 | y = (x0 − 3)∗(x3 − 3) + 2∗sin((x0 − 4)∗(x3 − 4)) |
| E4 | y = (quart(x1 − 3) + cube(x2 − 3) − (x2 − 3))/(quart(x2 − 4) + 10) |
| E5 | y = tanh(cos(2∗x0 + x3)) |
| E6 | y = 6∗sin(x0)∗cos(x3) |
| E7 | y = sin(x0) + square(x0) + 5 |
| E8 | y = sqrt(x0) + log(1.2∗x3) + sin(x0) |

Table 6: Test data for the two algorithms CountSamePartF and CountSpecU (see Algorithms 3 and 4).

| Number | Formula |
| --- | --- |
| T1 | P1 |
| T2 | P2 |
| T3 | P3 |
| T4 | P4 |
| T5 | P5 |
| T6 | y = z + a∗sqrt(x0)/tan(x1) + sin(x1) |
| T7 | y = z + a∗sqrt(x0)/tan(x1) + x1 |
| T8 | y = z + a∗tan(x0)/exp(x1) + cos(x0)∗sin(x1) + sin(x1) |
| T9 | y = z + a∗tan(x0)/exp(x1) + cos(x0)∗sin(x1) + x1 |
| T10 | y = z + a∗cos(x0)∗sin(x1) + square(x1) |

Table 7: Training data for the algorithm TrainDBN (see Algorithm 1).

| Number | Formula |
| --- | --- |
| P1 | y = z + a∗x0 |
| P2 | y = z + a1∗x3 + x1/(a2∗x4) |
| P3 | y = z + a1∗(x3 − x0 + x1/x4)/(a2∗x4) |
| P4 | y = z + a∗sin(x) |
| P5 | y = z + a∗log(x) |
| P6 | y = z + a∗sqrt(x) |
| P7 | y = z − a1∗exp(a2∗x0) |
| P8 | y = z + a1∗sqrt(a2∗x0∗x3∗x4) |
| P9 | y = (a1∗sqrt(x0)/(a2∗log(x1)))∗(a3∗exp(x2)/(a4∗square(x3))) |
| P10 | y = z + a1∗(a2∗x1 + a3∗square(x2))/(a4∗cube(x3) + a5∗quart(x4)) |
| P11 | y = z + a1∗cos(a2∗x0)∗x0∗x0 |
| P12 | y = z − a1∗cos(a2∗x0)∗sin(a3∗x4) |
| P13 | y = z − a∗(tan(x0)/tan(x1))∗(tan(x2)/tan(x3)) |
| P14 | y = z − a∗(cos(x0) − tan(x1)∗tanh(x2)/sin(x3)) |
| P15 | y = z − a∗(tan(x0)/exp(x1))∗(log(x2) − tan(x3)) |
| P16 | y = a∗x3 |
| P17 | y = a1∗x1 + a2∗x4 |
| P18 | y = (sqrt(x2)/tan(x5))/a |
| P19 | y = (cos(x2)/cube(x5))/a |
| P20 | y = tanh(x2∗a∗cube(x5)) + abs(x1) |
| P21 | y = tanh(abs(x2)∗a + x5∗cube(x5)) + abs(x1) |
| P22 | y = tanh(tan(x5)/(a∗cube(x5))) + abs(x1) |
| P23 | y = tanh(cos(x2)∗a∗cube(sqrt(x2))) |
| P24 | y = tanh(cos(x2)∗a∗cube(x5)) + abs(x1) |
| P25 | y = z |
| P26 | y = z + x2 |
| P27 | y = (z + x2)/(x0∗x2) |
| P28 | y = ((x0 − z1)/(x0 + x2))∗((x5 − z2)/(x0∗a1)) |
| P29 | y = a∗sqrt(x) |
| P30 | y = a∗log(x) |
| P31 | y = a∗square(x) |
| P32 | y = a∗tanh(x) |
| P33 | y = a∗sin(x) |
| P34 | y = a∗cos(x) |
| P35 | y = a∗exp(x) |
| P36 | y = a∗cube(x) |
| P37 | y = a∗quart(x) |
| P38 | y = a∗tan(x) |
| P39 | y = a∗abs(x) |

Figure 7: Average results from the six groups of mean training errors of PO-GP and PFK-GP.

Figure 8: The SR evolutionary process of E1 with PO-GP and PFK-GP under different numbers of generations.

Figure 9: Training-error results for the six datasets generated by E1, E4, E6, P9, P13, and P19, processed by PO-GP and PFK-GP, respectively.

In order to measure the relativity between the experimental data and the predictive data, Training Variation Explained (TVE) is defined as follows:

(3) TVE = 1 − sum((y_train − y_pred_train)^2) / sum((y_train − mean(y_train))^2).

The higher the TVE value, the more valid the predictive data.
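Both evaluation metrics are direct to compute; a small self-contained sketch with invented arrays follows.

```python
def train_error(y_train, y_pred):
    """Equation (2): mean squared training error."""
    return sum((yt - yp) ** 2 for yt, yp in zip(y_train, y_pred)) / len(y_train)

def tve(y_train, y_pred):
    """Equation (3): Training Variation Explained; higher means a more valid fit."""
    mean_y = sum(y_train) / len(y_train)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_train, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_train)
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.9]
print(train_error(y, y_hat), tve(y, y_hat))
```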
PO-GP and PFK-GP are each run ten times on the datasets generated from the eight prediction models (see Table 5, E1–E8). The results for the eight datasets processed by the two algorithms are listed in Figure 10, and the maximum, minimum, and average TVE results are listed in Figure 11. From the results in the two figures, the formulas found by PFK-GP are more relevant to the experimental formula models than those found by PO-GP.

Figure 10: TVE for the models E1–E8, contrasting the random population of traditional PO-GP with PFK-GP.

Figure 11: TVE results of PO-GP compared with PFK-GP on the eight formula models.

## 5. Related Work

The search space of SR is huge even for rather simple basis functions [31]. In order to avoid search regions too far from the desired output range determined by the training dataset, interval arithmetic [34] and affine arithmetic [26], which can compute the bounds of a GP tree expression, have been introduced into SR. Although the method based on affine arithmetic generates tighter bounds of the expression than the interval arithmetic method, its accuracy often leads to high computational complexity [35]. Moreover, the search space remains huge, because plentiful candidate expressions fit the data bounds computed by the two arithmetic methods.

In addition to the above arithmetic methods, machine learning methods have been used to compact or reduce the search space of SR. The FFX technology uses a pathwise regularized learning algorithm to rapidly prune a huge set of candidate basis functions down to a compact model based on the generalized linear model (GLM); the technology outperforms GP-SR in speed and scalability due to its simplicity and deterministic nature [8]. However, it may abandon correct expressions that are not in the space of the GLM. A hybrid deterministic GP-SR algorithm [36] was proposed to overcome the problem of missing correct expressions: it extracts candidate basis expressions by using FFX and feeds these expressions into GP-SR. The hybrid algorithm utilizes the candidate expressions generated by the linear regression method (pathwise regularization), while our algorithm utilizes the candidate expressions produced by the algorithms CountSamePartF, GenerateFs, and CountSpecU.

By applying the expectation-maximization (EM) framework to SR, clustered SR (CSR) can identify and infer symbolic expressions of piecewise functions from unlabelled time-series data [9]. CSR reduces the space of searching for piecewise functions owing to the fact that EM can simultaneously search the subfunction parameters and the latent variables that represent the information of the function segments. Abstract expression grammar (AEG) SR was proposed to steer the genetic algorithm (GA) process, allowing user control of the search space and of the final output formulas [37]. Understanding the given application, users can specify the goal expression of SR and limit the size of the search space by using abstract expression grammars. Compared with manually assigning expressions and limiting the search space as in AEG SR, in this paper, the PFK methods automatically extract candidate expressions from the dataset by statistical methods and dynamically adjust the search space through GP.

The methods that inject prior or expert knowledge into the evolutionary search [12, 13] were introduced to find effective solutions that make mathematical expressions more compact and interpretable.
In these papers, the prior and expert knowledge consists of solutions, that is, mathematical expressions from some applications. The knowledge is merged into GP by inserting randomized pieces of the approximate solution into the population. One of the major differences between these methods and ours is how the prior or expert knowledge is created. The knowledge in [12, 13] is an existing formula model that comes from previous solutions and can be called static knowledge. In contrast, the knowledge in our method is the formula model consistent with the data features, originating from the algorithms GenerateFs, CountSamePartF, and CountSpecU; it can be called dynamic knowledge, since it changes with the features of the test dataset. Therefore, our method can insert more suitable knowledge into GP.

## 6. Conclusion

In this paper, the PFK-GP method is proposed to deal with the problem of symbolic regression, based on an analysis of how a researcher constructs a mathematical model. The method can understand experimental data features and extract formulas consistent with them. In order to implement the function of understanding data features, PFK-GP first creates, through the DBN method, the PFKB, which can extract features from test datasets generated by the training formula models. The experimental results confirm that, compared with SVM, the DBN produces better results in extracting features from formula models and classifying test data into their corresponding formula models. Then, the methods of classification and recognition are implemented to find formula models that are as similar or related to the experimental data features as possible. For classification, we exploit the algorithm GenerateFs, based on the DBN, to match the experimental data with formula models in the PFKB. For recognition, we propose the algorithms CountSamePartF and CountSpecU to obtain subformula models whose local features are consistent with the experimental data. Classification helps the PFKB find formula models consistent with the whole data features, while recognition helps it find subformula models consistent with local data features. Finally, the algorithm randomGenP is used to generate the individuals of the evolutionary population according to the results of the above three algorithms. Through combining and transforming these individuals, GP can automatically obtain approximate formulas that best fit the experimental data.

Compared with Pareto GP, PFK-GP, which is built on the PFKB with the functions of classification and recognition, can explore formulas in the search space of the data features; it can therefore accelerate convergence and improve the accuracy of the obtained formulas.

Obviously, the high efficiency of PFK-GP depends on powerful methods of classification and recognition based on the PFKB. Improving the accuracy of these two methods is thus an important part of future work. The two methods depend on the representation of the data features of the formula model. In this paper, two assumptions based on statistics and counts are used to obtain the formulas that can show the data features; the features of the formula model are not defined explicitly, and the two assumptions are not established by formal proofs, so there are some uncertainties in them. Therefore, new representations that can show the whole or local features of formula models will be researched in order to find formulas that better fit the experimental data.
In addition, rules of formula transformation and inference that are similar to researchers' methods will be explored in the evolution of GP.

---
# Vibration Analysis of Aeroengine Blisk Structure Based on a Prestressed CMS Super-Element Method

**Authors:** Zhijun Li; Wenjun Yang; Huiqun Yuan
**Journal:** Shock and Vibration (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1021402

---

## Abstract

For the vibration analysis of an aeroengine blisk structure, a prestressed component modal synthesis (CMS) super-element method is put forward, combining fixed-interface prestressing with a free-interface super-element approach. Based on this method, the natural vibration characteristics of the blisk structure are calculated for different modal truncation numbers. By comparison with the accurate result of the global method, the selection principle of the modal truncation number, which affects the accuracy of the prestressed CMS super-element method, is obtained. The vibration response of a two-stage blisk structure is calculated by this method, and the effects of different blade aspect ratios on the vibration characteristics are discussed. The results show that the prestressed CMS super-element method offers high accuracy and efficiency in blisk vibration analysis. The resonant frequencies in the vibration response are nearly the same for the first-stage and second-stage blisks, both approximately located in the range 588 Hz–599 Hz. The maximum displacement and the maximum dynamic stress occur at the blade tip and the blade root of the first-stage blisk, respectively. The blade aspect ratio is a key factor of blisk vibration; its effects on the natural frequencies differ between the fixed-width and fixed-length conditions. This research provides a theoretical basis for the dynamic design of aeroengine compressor rotor systems.

---

## Body

## 1. Introduction

With the continuing development of aeronautical manufacturing technology, the integrally bladed disk (blisk) has been widely used in newly developed aeroengines. Blade and disk are integrated into a whole bladed-disk system by this advanced technology. The traditional joint of tenon and mortise between blade and disk is eliminated, which greatly simplifies the structure and achieves a light-weighting design. Multiple crack failure at the mortise bottom is avoided, and the reliability of the rotor system is improved significantly.

Many scholars have carried out extensive research on the blisk structure. Ferria et al. [1] developed a numerical blisk model for the flutter stability of a subsonic turbine and pointed out the conditions that affect the stability of the turbine system. Di Maio and Ewins [2] implemented a practical method to measure the vibration response of a simplified blisk structure with Scanning Laser Doppler Vibrometer systems. Ji et al. [3] put forward a multidisciplinary optimization method for the design of the compressor structure, and the method was verified to maintain the harmony and consistency of the blisk structure well. Bhaumik et al. [4] conducted a theoretical study of the failure mechanism of a turbine blisk; criteria for avoiding failure were given according to the performance of the turbine material. S. Lu and F.-J. Lu [5] presented a weight-lightening optimized design of the blisk structure with a guarantee of structural safety. In the field of crack growth, Xu et al.
[6] developed measures to improve the reliability of an axial compressor blisk system.

Due to the limitation of experimental conditions and the lack of appropriate experimental methods, the major approach in the analysis of structural dynamics is finite element analysis (FEA). However, for large complex structures such as the aeroengine bladed-disk system, the number of degrees of freedom (DOFs) can reach one million in the discrete model. The corresponding kinematic equations cannot be solved efficiently, and it is difficult to carry out dynamic analysis with the finite element method. Even where FEA is feasible, a lot of time is eventually consumed, so calculation efficiency cannot be guaranteed. From the review of related studies, it can be found that the computational method for blisk dynamics requires further research. The dynamic substructure method, built on modal reduction technology, therefore becomes an appropriate solution method.

The theoretical method and practical application of modal synthesis technology have been researched by related scholars [7–10]. Hurty first established the concepts of modal coordinates and the modal synthesis method [11], which laid the foundation of the fixed-interface modal synthesis method. Then, Bampton and Craig Jr. [12] proposed an improved method to make the fixed-interface method simpler and more practical; this method eliminated the boundary between rigid modes and constraint modes and no longer distinguished between them. Hou [13] and Goldman [14] explored the modal synthesis method with a free interface; because the effect of the higher-order substructure modes is ignored, the accuracy of this method is challenged. MacNeal [15] and Rubin [16] introduced the residual stiffness to account for the effect of the truncated higher-order modes, and the global precision was improved. Further improving Rubin's method, Wang and Du [17] put forward a dynamic substructure technique with double coordination, in which the residual stiffness is regarded as a Ritz basis and the modified free-interface method is brought into the framework of Ritz analysis; comprehensive precision and efficiency were improved greatly. With the reduction technology, Leung [18] condensed the internal coordinates of each substructure onto the coordinates of the substructure interface; the motion equation of the system was established according to the conditions of displacement coordination and force equilibrium. Based on Leung's method, Yun et al. [19] developed a super-element modal synthesis method by the frequency conversion of dynamic modes. Although there has been definite progress in the dynamic substructure technique, its application to dynamic analysis still needs further research.

For the analysis of the dynamic characteristics of an aeroengine blisk structure, in our research a prestressed component modal synthesis (CMS) method is proposed, combining fixed-interface prestressing with a free-interface super-element approach. Based on this method, the natural vibration characteristics of the blisk structure are calculated for different modal truncation numbers. By comparison with the accurate result of the global method, the selection principle of the modal truncation number, which affects the accuracy of the prestressed CMS super-element method, is obtained. The vibration response of the two-stage blisk structure is calculated by this method, and the effects of different blade aspect ratios on the vibration characteristics are discussed.

## 2. Dynamics Modeling of Aeroengine Blisk Structure
### 2.1. The Prestressed CMS Super-Element Method

The blisk structure of an aeroengine system is very complex, and it is a challenge for numerical simulation because the number of finite elements is enormous. In this research, the prestressed CMS super-element method is proposed to analyze the dynamic characteristics of the blisk structure. This method can handle structures with very large numbers of elements, and it offers higher precision and efficiency than the traditional method.

The prestressed CMS super-element method is based on modal synthesis technology and uses matrix reduction to lower the model order. For the $i$th substructure finite element model, the general undamped free vibration equation can be expressed as

$$M_i \ddot{q}_i + K_i q_i = 0, \quad i = 1, 2, \ldots, n, \tag{1}$$

where $M_i$ is the mass matrix, $K_i$ is the stiffness matrix, $q_i$ is the displacement vector, and $n$ is the number of substructures.

Components such as blades are very thin in one or two directions. Under the action of centrifugal force, the stress state may affect the structural natural frequencies and the dynamic response, so the effect of centrifugal stiffening should be considered in the analysis of rotor dynamic characteristics. In this research, a linear stress analysis is first performed in the static state, and the centrifugal load is transformed into the structural prestress matrix $S$. The dynamic equation including centrifugal stiffening can then be expressed as

$$M_i \ddot{q}_i + K'_i q_i = 0, \quad i = 1, 2, \ldots, n, \tag{2}$$

where $K'_i$ is the stiffness matrix including the prestress effect. The displacement vector and the coefficient matrices in (2) are partitioned into the master DOFs on the boundary and the slave DOFs away from the boundary; the subscript $m$ denotes the master DOFs and the subscript $s$ denotes the slave DOFs. The partitioned form is

$$\begin{bmatrix} M_{mm} & M_{ms} \\ M_{sm} & M_{ss} \end{bmatrix}_i \begin{Bmatrix} \ddot{q}_m \\ \ddot{q}_s \end{Bmatrix}_i + \begin{bmatrix} K'_{mm} & K'_{ms} \\ K'_{sm} & K'_{ss} \end{bmatrix}_i \begin{Bmatrix} q_m \\ q_s \end{Bmatrix}_i = \begin{Bmatrix} 0 \\ f \end{Bmatrix}_i, \tag{3}$$

where $f_i$ is the interfacial force. The substructure coordinates are transformed by

$$q_i = \Phi_i \begin{Bmatrix} p_m \\ p_s \end{Bmatrix}_i. \tag{4}$$

Here $p$ is the substructure modal coordinate,

$$\Phi_i = \begin{bmatrix} E & 0 \\ -{K'_{ss}}^{-1} K'_{sm} & \Phi_1 \end{bmatrix}_i$$

is the fixed-interface coordinate transformation matrix [12], $\Phi_1$ is the substructure eigenvector matrix with the boundary nodes fixed, and $E$ is the unit matrix. Substituting (4), the dynamic equation (2) can be expressed in modal coordinates as

$$\bar{M}_i \ddot{p}_i + \bar{K}_i p_i = 0, \quad i = 1, 2, \ldots, n, \tag{5}$$

where $\bar{M}_i = \Phi_i^{T} M_i \Phi_i$ and $\bar{K}_i = \Phi_i^{T} K'_i \Phi_i$. Based on the DOF reduction method [11], the following expression is obtained from (4):

$$q_i = \Phi'_i \begin{Bmatrix} p_m \\ p_a \end{Bmatrix}_i. \tag{6}$$

In (6), $p_a$ is the reduced modal coordinate transformed from the generalized coordinate $p_s$, and $\Phi'_i$ is the new transformation matrix

$$\Phi'_i = \begin{bmatrix} E & 0 \\ -{K'_{ss}}^{-1} K'_{sm} & \Phi'_1 \end{bmatrix}_i,$$

where $\Phi'_1$ is the low-order modal set remaining after truncation of the higher-order modes. The modal coordinate vector in (5) thus becomes $p_i = \{p_m, p_a\}_i^{T}$.

To account for the rigid connection between substructures, the coordinate transformation [12]

$$P = \beta z \tag{7}$$

is used to transform the nonindependent coordinates $P = [p_1^T, p_2^T, \ldots, p_n^T]^T$ into the generalized coordinates of the global structure. The free vibration equation of the assembled system is then

$$M \ddot{z} + K z = 0. \tag{8}$$

Through the reduction of DOFs, $M$ and $K$ are much smaller than the mass and stiffness matrices of the original system in generalized coordinates. Thus, the natural frequencies and mode shapes can be obtained from (8).
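The reduction (2)–(6) is compact enough to sketch directly. The following NumPy fragment is a minimal, illustrative implementation for a single small, dense substructure; the function name `craig_bampton_reduce`, the dense-matrix setting, and the use of `scipy.linalg.eigh` are our own choices for illustration (a production FE code would use sparse solvers), but the steps mirror the constraint-mode and fixed-interface-mode construction of $\Phi'_i$ above.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton_reduce(M, K, master, n_modes):
    """Fixed-interface reduction of one substructure, following (2)-(6).

    M, K    : full mass and (prestressed) stiffness matrices, dense ndarrays
    master  : indices of the boundary (master) DOFs
    n_modes : modal truncation number (kept fixed-interface modes)
    Returns the reduced matrices of (5) and the transformation matrix of (6).
    """
    n = M.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)

    # Partition the matrices as in (3)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    Mss = M[np.ix_(slave, slave)]

    # Constraint-mode block -Kss^{-1} Ksm of the transformation matrix
    Psi = -np.linalg.solve(Kss, Ksm)

    # Fixed-interface normal modes Phi_1' (boundary DOFs clamped),
    # truncated to the lowest n_modes
    _, vecs = eigh(Kss, Mss)
    Phi1 = vecs[:, :n_modes]

    # Assemble Phi' of (6): identity on the master DOFs,
    # [Psi, Phi1] on the slave DOFs
    nm = master.size
    Phi = np.zeros((n, nm + n_modes))
    Phi[np.ix_(master, np.arange(nm))] = np.eye(nm)
    Phi[np.ix_(slave, np.arange(nm))] = Psi
    Phi[np.ix_(slave, nm + np.arange(n_modes))] = Phi1

    # Reduced matrices of (5)
    return Phi.T @ M @ Phi, Phi.T @ K @ Phi, Phi
```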
### 2.2. Physical Model and Analysis Process

The blisk is a new manufacturing technology for aeroengines. The tenon-and-mortise joint is eliminated, so the weight of the tenon-mortise connection and its supporting structure is greatly reduced; in addition, bolts, nuts, locking plates, and other connectors are no longer needed. Compared with a traditional bladed disk, the hub of a blisk is thinner and the bore diameter is larger. Figure 1 shows the structure of the first two stages of blisk in an aeroengine compressor. The number of blades is 38 in the first stage and 53 in the second stage.

Figure 1: Two-stage blisk structure model: (a) three-dimensional geometry and (b) finite element mesh.

In the finite element model of the blisk structure, the blade and disk parts are meshed with the element Solid 185. As the blades on the disk are leaned, the joint zone between blade and disk is meshed with the element Solid 187. The total number of elements is 400124 and the total number of nodes is 561878. The blade material is the titanium alloy TA11 with density 4400 kg·m−3, elasticity modulus 114 GPa, and Poisson's ratio 0.3. The disk material is the titanium alloy TC17 with density 8200 kg·m−3, elasticity modulus 166 GPa, and Poisson's ratio 0.3.

To apply the prestressed CMS super-element method, substructure models of the two-stage blisk are established in this research. The first-stage blisk and the second-stage blisk are each packed as one substructure, as shown in Figure 2. The first-stage substructure contains 194825 elements and 275406 nodes; the second-stage substructure contains 205299 elements and 287432 nodes.

Figure 2: Substructure model of the two-stage blisk: (a) the first-stage substructure and (b) the second-stage substructure.

After the FEA model of each substructure is established, the DOFs on the hub tube are constrained for further analysis. To account for the effect of centrifugal force, the operating rotational speed is applied, and a prestressed analysis with fixed interfaces is performed for each substructure. The prestress option is then activated and the constrained master DOFs are released. The generation process of substructure mode synthesis is performed with free interfaces, and the super-elements are created. In the subsequent application process, the super-elements of the substructures are connected and the vibration analysis of the whole model is carried out. Finally, the dynamic response at the super-element master DOFs is expanded in order to each inner DOF of every super-element; the complete solution of the dynamic response is obtained, and the expansion process of the prestressed CMS method is accomplished.

The analysis process of the prestressed CMS super-element method is shown in Figure 3.

Figure 3: Analysis process of the prestressed CMS method.
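The coupling step described above, connecting the super-elements and solving the assembled system of (7)–(8), can be sketched in the same spirit. The fragment below rigidly joins two reduced substructures that share the same interface DOFs, assuming those interface coordinates are ordered first in each reduced model (as produced by the sketch in Section 2.1); the function name `synthesize_two` and this layout convention are our own assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh

def synthesize_two(M1, K1, M2, K2, n_iface):
    """Rigidly couple two reduced substructures sharing the same n_iface
    interface DOFs (ordered first in each model) and solve (8).

    The transformation P = beta z of (7) keeps the shared interface
    coordinates once and appends the internal modal coordinates of
    both substructures.
    """
    n1, n2 = M1.shape[0], M2.shape[0]
    a1, a2 = n1 - n_iface, n2 - n_iface            # internal modal coords
    nz = n_iface + a1 + a2                         # independent coords z

    beta = np.zeros((n1 + n2, nz))
    beta[:n_iface, :n_iface] = np.eye(n_iface)                # sub 1 interface
    beta[n_iface:n1, n_iface:n_iface + a1] = np.eye(a1)       # sub 1 modal
    beta[n1:n1 + n_iface, :n_iface] = np.eye(n_iface)         # sub 2 interface
    beta[n1 + n_iface:, n_iface + a1:] = np.eye(a2)           # sub 2 modal

    Mb = np.block([[M1, np.zeros((n1, n2))], [np.zeros((n2, n1)), M2]])
    Kb = np.block([[K1, np.zeros((n1, n2))], [np.zeros((n2, n1)), K2]])

    M = beta.T @ Mb @ beta                         # system matrices of (8)
    K = beta.T @ Kb @ beta
    w2, Z = eigh(K, M)                             # natural modes from (8)
    freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
    return freqs_hz, Z
```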
## 3. Vibration Characteristics of Blisk Structure

### 3.1. Accuracy Verification of the Prestressed CMS Method

The modal truncation number is a key factor in the calculation accuracy of the prestressed CMS method. To obtain a suitable modal truncation number, the natural vibration characteristics of the blisk structure are calculated at different modal truncation numbers. The substructure models of the two-stage blisk have been established, and the operating speed of the blisk structure is 11383 rpm. The DOFs of the nodes in the hub section are constrained as the boundary conditions. With the prestressed CMS method, the natural frequencies of the two-stage blisk are solved at different modal truncation numbers, and the result of the global method is taken as the accurate value. Here N and M denote the modal truncation numbers of the first-stage blisk and the second-stage blisk, respectively.

Figure 4 shows that the result of the prestressed CMS method at M = 35 and N = 45 is far from the frequency of the global method. As M and N increase, the results of the prestressed CMS method move much closer to the accurate solution. When M = 45 and N = 60, the result of the prestressed CMS method is basically consistent with the result of the global method.
When the modal truncation number is increased further, for example to M = 60 and N = 75, the calculation accuracy of the prestressed CMS method becomes higher still. This shows that the requirement on calculation precision can be satisfied.

Figure 4: Natural frequencies of the blisk structure at different modal truncation numbers.

To further examine the calculation efficiency of the prestressed CMS method, its computing time is compared with that of the global method at different modal truncation numbers, as shown in Table 1.

Table 1: Comparison of computing time for the two methods.

| Method | Computing time (h) |
| --- | --- |
| M = 35, N = 45 | 3.7 |
| M = 40, N = 55 | 4.0 |
| M = 45, N = 60 | 5.1 |
| M = 60, N = 75 | 6.5 |
| Global method | 8.0 |

As shown in Table 1, the computing time of the prestressed CMS method increases with the modal truncation number. When the modal truncation number rises to a certain value, the computing time of the prestressed CMS method becomes nearly equal to that of the global method. The calculation accuracy of the prestressed CMS method improves significantly as the modal truncation number increases, but when the computing time grows too long, the efficiency advantage of the prestressed CMS method is lost. Both calculation accuracy and efficiency should therefore be considered when determining the modal truncation number.

From the above analysis, the prestressed CMS method has sufficiently high precision at M = 45 and N = 60. Compared with the global method, the calculation efficiency is improved by 36% (the computing time drops from 8.0 h to 5.1 h, and (8.0 − 5.1)/8.0 ≈ 36%). Hence M = 45 and N = 60 are selected as the modal truncation numbers for the dynamic analysis of the two-stage blisk structure. The results are compared with those of the global method in Table 2.

Table 2: Comparison of natural frequencies at M = 45 and N = 60.

| Mode order | Global method (Hz) | Prestressed CMS method (Hz) | Vibration shape |
| --- | --- | --- | --- |
| 1–10 | 434.42–439.53 | 434.45–439.58 | Blade vibration of the first-stage blisk |
| 11–38 | 440.36–442.86 | 440.38–442.90 | Blade vibration of the first-stage blisk |
| 39 | 546.07 | 546.10 | Blade vibration of the second-stage blisk |
| 40–43 | 568.14–586.83 | 568.21–586.90 | Blade vibration of the second-stage blisk |
| 44–91 | 591.65–598.57 | 591.70–599.83 | Blade vibration of the second-stage blisk |

Table 2 shows that the 38 vibration frequencies of the first-stage blisk lie in the range 434 Hz–443 Hz and the 53 vibration frequencies of the second-stage blisk lie in the range 546 Hz–600 Hz. The low-order modes exhibit the bending shape of the blades, and the number of these vibration modes equals the number of blades. The difference between the results of the global method and the prestressed CMS method is very small, which shows that the prestressed CMS method is credible. To ensure the accuracy of the prestressed CMS method, the truncated frequency of each substructure must be greater than the corresponding frequency of the solved system; for the dynamic analysis of the blisk structure, the modal truncation number must therefore be greater than the number of blades. This selection principle of the modal truncation number has been verified by Figure 4 and Table 2.
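A convergence study like the one behind Figure 4 can be scripted around the reduction sketch of Section 2.1. In the fragment below, `M`, `K`, and `master` are assumed to come from a substructure FE model, the "global" reference is the full eigensolution of that same model, and `n_keep` is assumed not to exceed the reduced model size; this is a single-substructure stand-in for the two-stage study, offered only as an illustration.

```python
import numpy as np
from scipy.linalg import eigh

def truncation_study(M, K, master, truncation_numbers, n_keep=10):
    """Sweep the modal truncation number and compare the lowest n_keep
    reduced frequencies against the full ("global") model, in the
    spirit of the study shown in Figure 4."""
    w2_full, _ = eigh(K, M)
    f_full = np.sqrt(w2_full[:n_keep]) / (2.0 * np.pi)
    for n_modes in truncation_numbers:
        M_bar, K_bar, _ = craig_bampton_reduce(M, K, master, n_modes)
        w2, _ = eigh(K_bar, M_bar)
        f_red = np.sqrt(w2[:n_keep]) / (2.0 * np.pi)
        err = np.max(np.abs(f_red - f_full) / f_full)
        print(f"n_modes = {n_modes:3d}: max relative frequency error = {err:.2e}")
```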
### 3.2. Vibration Response of the Two-Stage Blisk Structure

In the operation of the blisk structure, the main source of vibration is the uneven air flow over the blade pressure and suction surfaces. The airflow exciting force can be estimated from aerodynamic calculation, and the velocity and pressure on the rotor blades can be determined by experiment. The airflow exciting force at the average radius of the air passage can be expressed as a Fourier series, that is, as a superposition of harmonics. For the blisk system, the practical exciting form is very complex and depends strongly on the working conditions, so the airflow exciting force on the blade surface is simplified here to a single-point form: the aerodynamic force is applied at the tip of the leading edge of each blade, and the spatial distribution of the load is required to take the form of a traveling wave.

For the blisk system, the motion equation of forced vibration is

$$M \ddot{x} + C \dot{x} + K x = f(t), \tag{9}$$

where $x(t)$ is the displacement vector and $f(t)$ is the exciting force vector; $M$, $C$, and $K$ are the mass matrix, the viscous damping matrix, and the stiffness matrix, respectively. In the forced response of the blisk structure, the exciting force $f(t)$ is usually expressed as

$$f_j(t) = f_{j0}\, e^{i(\omega t + \phi_j)}, \quad j = 1, 2, \ldots, N, \tag{10}$$

where $f_{j0}$ is the amplitude of the exciting force on the $j$th blade, $\omega$ is the frequency of the exciting force, and $N$ is the number of blades. The phase $\phi_j$ of the exciting force on the $j$th blade is defined as

$$\phi_j = \frac{2\pi r (j-1)}{N}, \quad j = 1, 2, \ldots, N, \tag{11}$$

where $r$ is the order of the exciting force.

#### 3.2.1. Response Analysis of Vibration Displacement

Figure 5 shows the relationship between the maximum displacement and the exciting frequency and compares the vibration displacement responses of the two-stage blisk.

Figure 5: Comparison of displacement responses of the two-stage blisk: (a) displacement responses of the two-stage blisk and (b) enlarged view of the second-stage blisk.

Figure 5 shows that the resonant frequency range of the two-stage blisk is 588 Hz–599 Hz, with the maximum response around 594 Hz. The maximum displacement of the first-stage blisk is clearly much higher than that of the second stage.

Figure 6 shows the displacement contour maps of the two-stage blisk at the resonant frequency of 594 Hz.

Figure 6: Displacement contour maps of the two-stage blisk at the resonant frequency of 594 Hz: (a) front view and (b) arbitrary view.

Figure 6 shows that when the exciting frequency is 594 Hz, the vibration of the two-stage blisk is concentrated mainly in the first-stage blisk. The maximum displacement appears at the blade tip, and the displacement field shows a nodal-diameter mode. The displacement amplitude is relatively small in every region of the second-stage blisk.

#### 3.2.2. Response Analysis of Dynamic Stress

When the load acting on an element changes markedly with time, or each node of the component undergoes significant acceleration under the load, the stress generated by the dynamic load in the component is called dynamic stress. Dynamic stress analysis is the basis for solving problems of dynamic failure of components. From the harmonic response analysis, the dynamic stress of the two-stage blisk under the aerodynamic exciting force is obtained.

Figure 7 shows the relationship between the maximum dynamic stress and the exciting frequency and compares the vibration stress responses of the two-stage blisk.

Figure 7: Comparison of dynamic stress in the two-stage blisk: (a) stress responses of the two-stage blisk and (b) enlarged view of the second-stage blisk.

Figure 7 shows that the response of the dynamic stress is basically consistent with the response of the vibration displacement: the resonant frequency range is 588 Hz–599 Hz, and the response peak occurs at 594 Hz.
Moreover, the maximum dynamic stress of the first-stage blisk is much higher than that of the second-stage blisk.

Figure 8 shows the stress contour maps of the two-stage blisk at the resonant frequency of 594 Hz.

Figure 8: Dynamic stress contour maps of the two-stage blisk at the resonant frequency of 594 Hz: (a) front view and (b) arbitrary view.

Figure 8 shows that the dynamic stress occurs mainly at the blade root of the first-stage blisk, and the stress field shows a nodal-diameter mode. The stress amplitude of the second-stage blisk is relatively small in every region.

To avoid the resonant response of the blisk structure, the frequency of the external load should be kept far away from the resonant frequency of 594 Hz. In the limiting working condition at 594 Hz, the maximum vibration displacement is 6.21 mm and the maximum dynamic stress is 749 MPa; the vibration amplitude and the material strength of the blisk structure both remain within the safe working condition.
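Equations (9)–(11) translate directly into a small harmonic-response sketch. The fragment below assumes, purely for illustration, a reduced model with one retained DOF per blade tip so that the force vector of (10) has one entry per blade; the function names `blade_phases` and `harmonic_response` and the commented frequency sweep are our own scaffolding, not the paper's code.

```python
import numpy as np

def blade_phases(r, N):
    """Inter-blade phases of the traveling-wave excitation, eq. (11)."""
    j = np.arange(1, N + 1)
    return 2.0 * np.pi * r * (j - 1) / N

def harmonic_response(M, C, K, f0, phi, omega):
    """Steady-state amplitude of (9) under the excitation (10):
    with f(t) = f0 * exp(i(omega t + phi)), the response is
    x(t) = X exp(i omega t), X = (K - omega^2 M + i omega C)^{-1} f."""
    f = f0 * np.exp(1j * phi)                 # complex force vector, eq. (10)
    Z = K - omega**2 * M + 1j * omega * C     # dynamic stiffness matrix
    return np.linalg.solve(Z, f)

# Example: N = 38 blades of the first-stage blisk, excitation order r = 2,
# sweeping the exciting frequency through the 588-599 Hz resonance band of
# Figure 5. M, C, K stand for the (reduced) system matrices with one DOF
# per blade tip:
#
# phi = blade_phases(r=2, N=38)
# for f_hz in np.linspace(580.0, 610.0, 61):
#     X = harmonic_response(M, C, K, f0=100.0, phi=phi, omega=2*np.pi*f_hz)
#     print(f_hz, np.max(np.abs(X)))
```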
## 4. Discussion of Blisk Vibration at Different Aspect Ratios

The blade aspect ratio is the ratio of blade length to blade width; it describes the relative length or relative width of the blade. The blade aspect ratio is one of the key factors in blisk vibration characteristics. To explore its effects, the natural frequencies of the blisk structure are discussed at different aspect ratios.

### 4.1. Design and Modeling of the Blisk Structure

To study the effects of the aspect ratio on blisk vibration, blisk models are established at different aspect ratios. The relevant parameters, including the blade inclination and the wheel size, are kept constant in the modeling process, and only the blade aspect ratio is adjusted, so the analysis results can be considered credible.

According to practical experience, the blade aspect ratio λ is set to 1.50, 1.75, 2.00, 2.25, and 2.50, as shown in Tables 3 and 4. The vibration characteristics of the blisk structure are discussed under the conditions of fixed width and fixed length.

Table 3: Design of blade aspect ratios under the fixed-width condition.

| Name | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Length (mm) | 67.50 | 78.75 | 90.00 | 101.25 | 112.50 |
| Width (mm) | 45.00 | 45.00 | 45.00 | 45.00 | 45.00 |
| Aspect ratio | 1.50 | 1.75 | 2.00 | 2.25 | 2.50 |

Table 4: Design of blade aspect ratios under the fixed-length condition.

| Name | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Length (mm) | 90.00 | 90.00 | 90.00 | 90.00 | 90.00 |
| Width (mm) | 60.00 | 51.43 | 45.00 | 40.00 | 36.00 |
| Aspect ratio | 1.50 | 1.75 | 2.00 | 2.25 | 2.50 |

After the blisk models at different aspect ratios are established, the substructure models need to be divided. A substructure can be a natural component of the global structure, or it can be a part separated manually. As shown in Figure 9, the blisk structure is divided into N (N = 38) sectors, and each sector is regarded as one substructure.

Figure 9: Models of the global blisk and a single-sector structure: (a) global blisk structure and (b) single-sector structure.

### 4.2. Blisk Vibration Characteristics at Different Aspect Ratios

Taking the effect of centrifugal force into account, the blisk models at different aspect ratios are analyzed with the prestressed CMS super-element method, and the natural frequencies of the blisk structure are obtained under the conditions of fixed width and fixed length. Because the blisk system has many blades, the frequencies of the same vibration shape are similar; therefore the first 120 natural frequencies of the blisk structure are solved. The variation trends of the natural frequencies with increasing blade aspect ratio, together with a description of the vibration shapes, are given in Table 5.

Table 5: Trends of the blisk natural frequencies with increasing blade aspect ratio.

| Mode order | Fixed width | Fixed length | Description of vibration shape |
| --- | --- | --- | --- |
| 1 | ↘ | ↘ | Radial expansion of blade and wheel |
| 2–38 | ↘ | — | First-order bending vibration of blade |
| 39 | ↘ | ↘ | First-order blade bending with 0 nodal diameters |
| 40–77 | ↘ | ↗ | Distorted vibration of blade |
| 78 | — | — | Wheel vibration with 0 nodal diameters |
| 79–120 | ↘ | ↘ | Blade bending vibration with nodal diameters |

"—" stands for no obvious change, "↗" for rise, and "↘" for decline.

To observe the relationship between the natural frequencies and the aspect ratios, one typical order is selected as the representative of each group of similar frequencies. The frequencies of the 1st, 39th, 77th, 84th, and 120th orders are extracted, as shown in Tables 6 and 7.
Table 6: Natural frequencies (Hz) of typical orders at different aspect ratios under the fixed-width condition.

| Aspect ratio | 1st | 39th | 77th | 84th | 120th |
| --- | --- | --- | --- | --- | --- |
| 1.50 | 99.19 | 839.86 | 1769 | 3341 | 4307 |
| 1.75 | 100.35 | 717.73 | 1508 | 3239 | 3506 |
| 2.00 | 101.66 | 636.40 | 1319 | 2895 | 2979 |
| 2.25 | 103.14 | 578.26 | 1176 | 2520 | 2590 |
| 2.50 | 104.40 | 534.58 | 1062 | 2250 | 2300 |

Table 7: Natural frequencies (Hz) of typical orders at different aspect ratios under the fixed-length condition.

| Aspect ratio | 1st | 39th | 77th | 84th | 120th |
| --- | --- | --- | --- | --- | --- |
| 1.50 | 103.51 | 640.82 | 1094 | 2854 | 2971 |
| 1.75 | 102.52 | 638.58 | 1208 | 2892 | 2993 |
| 2.00 | 101.66 | 636.40 | 1319 | 2895 | 2979 |
| 2.25 | 101.14 | 634.69 | 1430 | 2878 | 2969 |
| 2.50 | 100.63 | 632.64 | 1540 | 2804 | 2966 |

From the data in Tables 6 and 7, the effect curves of the aspect ratio on the natural frequencies are drawn for the fixed-width and fixed-length conditions, as shown in Figure 10.

Figure 10: Effect curves of the natural frequencies at different aspect ratios: (a) fixed-width condition and (b) fixed-length condition.

From the effect curves in Figure 10, it can be seen that under the fixed-width condition every natural frequency of the blisk structure declines as the blade aspect ratio increases. The curves of the high-order frequencies are much steeper, which shows that the effect of the blade aspect ratio on the high-order frequencies is more pronounced than on the low-order frequencies. Under the fixed-length condition, the blade distortion frequencies between the 40th and 77th orders rise noticeably with increasing blade aspect ratio, while the other frequencies show no clear change.
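The two design families of Tables 3 and 4 follow mechanically from the definition λ = length/width, as the short fragment below verifies; the printed values reproduce the table entries (e.g., 90 / 1.75 ≈ 51.43 mm).

```python
# Reproduce the two design families of Tables 3 and 4 from the
# definition of the aspect ratio, lambda = blade length / blade width.
ratios = [1.50, 1.75, 2.00, 2.25, 2.50]

fixed_width = [(lam * 45.0, 45.0) for lam in ratios]    # Table 3: W = 45 mm
fixed_length = [(90.0, 90.0 / lam) for lam in ratios]   # Table 4: L = 90 mm

for lam, (L, W) in zip(ratios, fixed_width):
    print(f"fixed width : lambda = {lam:4.2f}, L = {L:6.2f} mm, W = {W:5.2f} mm")
for lam, (L, W) in zip(ratios, fixed_length):
    print(f"fixed length: lambda = {lam:4.2f}, L = {L:6.2f} mm, W = {W:5.2f} mm")
```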
## 5. Conclusions

In this research, a prestressed CMS super-element method is put forward for the vibration analysis of an aeroengine blisk structure. Based on this method, the dynamic characteristics of the blisk structure are calculated at different modal truncation numbers, and the effects of different blade aspect ratios on the blisk vibration characteristics are discussed. From the above analysis, the following conclusions can be drawn.

(1) Compared with the result of the global method, the accuracy of the prestressed CMS method meets the requirements of blisk dynamic analysis. As the selection principle of the modal truncation number, the truncated natural frequency of each substructure must be greater than the corresponding frequency of the solved system.

(2) The resonant frequencies of the first-stage blisk and the second-stage blisk are basically consistent, lying mainly in the range 588 Hz–599 Hz. The maximum displacement and the maximum dynamic stress appear at the blade tip and the blade root of the first-stage blisk, respectively, and the response shows a nodal-diameter vibration mode.
(3) The effects of the aspect ratio on blisk vibration differ between the fixed-width and fixed-length conditions. Under the fixed-width condition, the natural frequencies of the blisk structure decline as the blade aspect ratio increases, and the effect of the blade aspect ratio is more pronounced on the high-order frequencies; under the fixed-length condition, the blade distortion frequencies rise noticeably.
1021402-2016-09-20_1021402-2016-09-20.md
52,942
Vibration Analysis of Aeroengine Blisk Structure Based on a Prestressed CMS Super-Element Method
Zhijun Li; Wenjun Yang; Huiqun Yuan
Shock and Vibration (2016)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2016/1021402
1021402-2016-09-20.xml
--- ## Abstract For vibration analysis of aeroengine blisk structure, a prestressed component modal synthesis (CMS) super-element method is put forward with the fixed interface prestressing and free interface super-element approach. Based on this method, natural vibration characteristics of blisk structure are calculated at different modal truncation numbers. Comparing with the accurate result of global method, the selection principle of modal truncation number is obtained which affects the accuracy of prestressed CMS super-element method. Vibration response of two-stage blisk structure is calculated by this method, and the effects of different blade aspect ratios have been discussed on vibration characteristics. The results show that prestressed CMS super-element method is in the high accuracy and efficiency on blisk vibration analysis. Resonant frequencies in vibration response are nearly the same between the first-stage blisk and the second-stage blisk, and they are both approximately located in the range 588 Hz–599 Hz. The maximum displacement and dynamic stress are at blade tip and root of the first-stage blisk, respectively. Blade aspect ratio is a key factor of blisk vibration; the effects of blade aspect ratio on natural frequencies are different in the conditions of fixed width and fixed length. This research provides the theoretical basis for dynamic design of aeroengine compressor rotor system. --- ## Body ## 1. Introduction With the increasing development of aeronautical manufacturing technology, a whole system of bladed disk (blisk) has been widely used in newly developed aeroengines. Blade and disk are integrated to the whole system of bladed disk by the advanced technology. Traditional joint of tenon and mortise is removed out between blade and disk; it can greatly simplify the structure to achieve the light-weighting design. Multiple crack failure is avoided in mortise bottom, and the reliability of rotor system is improved significantly.Many scholars have carried out extensive research on the blisk structure. Ferria et al. [1] developed a blisk numerical model on the flutter stability of subsonic turbine and pointed out the conditions which affected the stability of turbine system. Di Maio and Ewins [2] implemented a practical method to measure vibration response of simplified blisk structure by Scanning Laser Doppler Vibrometer systems. Ji et al. [3] put forward a multidisciplinary optimization method for the design of compressor structure, and this method was verified that it could maintain the harmony and consistency of blisk structure well. Bhaumik et al. [4] conducted a theoretical research on the failure mechanism of turbine blisk; the criteria of avoiding failure were given according to the performance of turbine material. S. Lu and F.-J. Lu [5] presented a weight-lighting optimized design of blisk structure with a guarantee of structural safety. In the field of crack growth, Xu et al. [6] developed some measures to improve the reliability of axial compressor blisk system.Due to the limitation of experimental conditions and lack of appropriate experimental methods, major approach is the finite element analysis (FEA) in the analysis of structural dynamics. However, for the large complex structures such as aeroengine blade disk system, the number of degrees of freedom (DOFs) can be up to one million in discrete model. Corresponding kinematic equations cannot be solved efficiently, and it is difficult to develop dynamic analysis with the finite element method. 
Although it can be analyzed with FEA, a lot of time would be eventually consumed. Thus, calculation efficiency cannot be guaranteed. From the review of related studies, it can be found that computational method of blisk dynamics remains for further research. So the method of dynamic substructure becomes an appropriate solution method on the basis of modal reduction technology.Theoretical method and practical application of modal synthesis technology have been researched by related scholars [7–10]. Hurty firstly established the concepts of modal coordinates and modal synthesis method [11], which laid the foundation of fixed interface modal synthesis method. Then, Bampton and Craig Jr. [12] proposed an improved method to make the fixed interface method more simple and practical. This method eliminated the boundary of rigid mode and constraint mode and no longer distinguished between both of them. Hou [13] and Goldman [14] explored the modal synthesis method with free interface. While the effect of higher-order substructure modes is ignored, the accuracy of this method is challenged. MacNeal [15] and Rubin [16] introduced the residual stiffness to consider the effect of higher-order truncated modes; global precision was improved. For further improving Rubin’s method, Wang and Du [17] put forward the dynamic substructure technique with double coordination. In this method, residual stiffness was regarded as the Ritz base; modified free interface method was put into the orbit of Ritz analysis. Comprehensive precision and efficiency were improved greatly. With the reduction technology, Leung [18] concentrated internal coordinates of each substructure to the coordinates of each substructure interface. Motion equation of the system was established according to the conditions of displacement coordination and force equilibrium. Based on Leung’s method, Yun et al. [19] developed a super-element modal synthesis method by the frequency conversion of dynamic modes. Although there is a certain progress in the technique of dynamic substructure, its application still needs to be further researched on dynamic analysis.For the analysis of dynamic characteristics in an aeroengine blisk structure, in our research a prestressed component modal synthesis (CMS) method is proposed with the fixed interface prestressing and free interface super-element approach. Based on this method, natural vibration characteristics of blisk structure are calculated at different modal truncation numbers. Comparing with the accurate result of global method, the selection principle of modal truncation number is obtained which affects the accuracy of prestressed CMS super-element method. Vibration response of two-stage blisk structure is calculated by this method, and the effects of different blade aspect ratios have been discussed on vibration characteristics. ## 2. Dynamics Modeling of Aeroengine Blisk Structure ### 2.1. The Method of Prestressed CMS Super-Element Method Blisk structure of aeroengine system is very complex. It is a challenge for numerical simulation, as the number of finite elements is much enormous. In this research, the prestressed CMS super-element method is proposed to analyze dynamic characteristics of blisk structure. By this method, it can solve the structure with large numbers of elements. 
Moreover, the prestressed CMS super-element method has higher precision and efficiency compared with traditional method.The prestressed CMS super-element method is based on modal synthesis technology, and it is a method which utilizes matrix reduction technology to reduce model order. For theith substructure finite element model, the general undamped free vibration equation can be expressed as(1) M i q ¨ i + K i q i = 0 , i = 1,2 , … , n ,where M i is the mass matrix, K i is the stiffness matrix, q i is the displacement vector, and n is the number of substructures.For components such as blades, they are very thin on the direction of one or two degrees of freedom. At the action of centrifugal force, stress state may affect structural natural frequency and dynamic response. So in the analysis of rotor dynamic characteristics, the effect of centrifugal rigidification should be considered. In our research, the linear stress analysis is developed under the static state; centrifugal load is transformed into structure prestressed matrixS. Then, dynamic equation related to centrifugal rigidification can be expressed as(2) M i q ¨ i + K i ′ q i = 0 , i = 1,2 , … , n ,where K i ′ is the stiffness matrix which considers the matrix of prestressed effect. Displacement vector and coefficient matrix in (2) are divided into the master DOF on the boundary and the slaver DOF beyond the boundary. Here subscripting m denotes the master DOF and subscripting s denotes the slaver DOF. The transformed form is(3) M m m M m s M s m M s s i q ¨ m q ¨ s i + K m m ′ K m s ′ K s m ′ K s s ′ i q m q s i = 0 f i .In the equation,f i is the interfacial force. Coordinates of substructure are transformed by the following formula:(4) q i = Φ i p m p s i .Herep is the coordinate of substructure mode, Φ i = E 0 - K s s ′ - 1 K s m ′ Φ 1 i is the transformed coordinate matrix in the fixed interface [12], Φ 1 is substructure eigenvector in the condition of fixed boundary nodes, and E is the unit matrix. According to formula (4), dynamic equation in formula (2) can be expressed with modal coordinates:(5) M - i p ¨ i + K - i p i = 0 , i = 1,2 , … , n .HereM - i = Φ i T M i Φ i , K - i = Φ i T K i ′ Φ i. Based on the DOF reduction method [11], the following expression can be obtained from formula (4):(6) q i = Φ i ′ p m p a i .In formula (6), p a is the reduced modal coordinate which is transformed from DOF generalized coordinate p s. Φ i ′ is the new transformed coordinate matrix: Φ i ′ = E 0 - K s s ′ - 1 K s m ′ Φ 1 ′ i. Here Φ 1 ′ is the low-order modal set with higher-order truncation. Thus, modal coordinate vector in (5) is transformed as p i = p m , p a i T.To consider the rigid connection between substructures, the following coordinate transformation [12] is utilized to transform nonindependent coordinate P = p 1 T , p 2 T , … , p n T T into the generalized coordinate of global structure.(7) P = β z .Thus, the free vibration equation is established as follows:(8) M z ¨ + K z ˙ = 0 .By the reduction of DOF,M and K are much less than mass matrix and stiffness matrix in the generalized coordinates of original system. Thus, natural frequency and modal shape can be obtained from (8). ### 2.2. Physical Model and Analysis Process Blisk structure is a new manufacturing technology of aeroengine. The joint between tenon and mortise is removed out, so the weight of tenon-mortise connection and supporting structure is reduced greatly. In addition, bolts, nuts, locking plate, and other connectors are no longer needed. 
Compared with traditional blade disk, the hub becomes thinner in blisk structure, and bore diameter becomes larger. Figure1 shows the structure of first two-stage blisk in aeroengine compressor. Number of blades in the first stage is 38, and number of blades in the second stage is 53.Figure 1 Two-stage blisk structure model: (a) three-dimensional geometry and (b) finite element mesh. (a) (b)In the finite element model of blisk structure, parts of blade and disk are established with the element Solid 185. As the blades on disk are leaned, the joint zone of blade and disk is meshed with the element Solid 187. Here total number of elements is 400124 and total number of nodes is 561878. Blade material is titanium alloy TA11, the density is 4400 kg·m−3, elasticity modulus is 114 GPa, and Poisson’s ratio is 0.3. Disk material is titanium alloy TC17, the density is 8200 kg·m−3, elasticity modulus is 166 GPa, and Poisson’s ratio is 0.3.For applying the prestressed CMS super-element method, substructure models of two-stage blisk are established in this research. The first-stage blisk and the second-stage blisk are packed as a substructure, respectively, as shown in Figure2. In the first-stage substructure, it contains 194825 elements and 275406 nodes. In the second-stage substructure, it contains 205299 elements and 287432 nodes.Figure 2 Substructure model of two-stage blisk: (a) the first-stage substructure and (b) the second-stage substructure. (a) (b)After establishing FEA model of substructure, the DOFs in hub tube are constrained for further analysis. Considering the effect of centrifugal force, operating rotational speed is applied. Prestressed analysis with fixed interface is performed in each substructure. Then, prestressed option is set as open; constrained master DOFs are released. Generation process of substructure mode synthesis is performed in the free interface, and super-elements are created. After these, application process is performed. The super-elements of substructures are connected to take the vibration analysis of whole model. Finally, dynamic response of super-element master DOFs is extended into each inner DOF of super-element orderly. Complete solution of dynamic response is obtained, and expansion process of the prestressed CMS method is accomplished.Analysis process of prestressed CMS super-element method is shown in Figure3.Figure 3 Analysis process of the prestressed CMS method. ## 2.1. The Method of Prestressed CMS Super-Element Method Blisk structure of aeroengine system is very complex. It is a challenge for numerical simulation, as the number of finite elements is much enormous. In this research, the prestressed CMS super-element method is proposed to analyze dynamic characteristics of blisk structure. By this method, it can solve the structure with large numbers of elements. Moreover, the prestressed CMS super-element method has higher precision and efficiency compared with traditional method.The prestressed CMS super-element method is based on modal synthesis technology, and it is a method which utilizes matrix reduction technology to reduce model order. For theith substructure finite element model, the general undamped free vibration equation can be expressed as(1) M i q ¨ i + K i q i = 0 , i = 1,2 , … , n ,where M i is the mass matrix, K i is the stiffness matrix, q i is the displacement vector, and n is the number of substructures.For components such as blades, they are very thin on the direction of one or two degrees of freedom. 
## 3. Vibration Characteristics of Blisk Structure

### 3.1. Accuracy Verification of the Prestressed CMS Method

The modal truncation number is a key factor for the calculation accuracy of the prestressed CMS method. To obtain a suitable modal truncation number, the natural vibration characteristics of the blisk structure are calculated at different modal truncation numbers. The substructure models of the two-stage blisk have been established, and the operating speed of the blisk structure is 11383 rpm. The DOFs of the nodes in the hub section are constrained as the boundary conditions. With the prestressed CMS method, the natural frequencies of the two-stage blisk are solved at different modal truncation numbers, taking the result of the global method as the accurate value. N and M are, respectively, the modal truncation numbers of the first-stage and second-stage blisks.

Figure 4 shows that the prestressed CMS result at M = 35 and N = 45 is far from the frequencies of the global method. As M and N increase, the prestressed CMS results approach the accurate solution. At M = 45 and N = 60, the prestressed CMS result is essentially consistent with the global result, and further increasing the truncation numbers, for example to M = 60 and N = 75, raises the accuracy even more.
Therefore, the requirement of calculation precision can be satisfied.

Figure 4 Natural frequency of blisk structure at different modal truncation numbers.

To further examine the calculation efficiency of the prestressed CMS method, its computing time is compared with that of the global method at different modal truncation numbers, as shown in Table 1.

Table 1 The comparison of computing time for the two methods.

| Type of method | Computing time/h |
| --- | --- |
| M = 35, N = 45 | 3.7 |
| M = 40, N = 55 | 4.0 |
| M = 45, N = 60 | 5.1 |
| M = 60, N = 75 | 6.5 |
| Global method | 8.0 |

As shown in Table 1, the computing time of the prestressed CMS method increases with the modal truncation number; once the truncation number rises to a certain value, the computing time approaches that of the global method. Calculation accuracy improves significantly with increasing modal truncation number, but if the computing time grows too long the efficiency advantage of the prestressed CMS method is lost. The modal truncation number should therefore be chosen by balancing accuracy against efficiency.

From the above analysis, the prestressed CMS method is sufficiently accurate at M = 45 and N = 60, while its calculation efficiency is improved by 36% compared with the global method. Hence M = 45 and N = 60 are selected as the modal truncation numbers for the dynamic analysis of the two-stage blisk. The results are compared with the global method in Table 2.

Table 2 The comparison of natural frequencies at M = 45 and N = 60.

| Mode order | Global method/Hz | Prestressed CMS method/Hz | Vibration shape |
| --- | --- | --- | --- |
| 1–10 | 434.42–439.53 | 434.45–439.58 | Blade vibration of the first-stage blisk |
| 11–38 | 440.36–442.86 | 440.38–442.90 | Blade vibration of the first-stage blisk |
| 39 | 546.07 | 546.10 | Blade vibration of the second-stage blisk |
| 40–43 | 568.14–586.83 | 568.21–586.90 | Blade vibration of the second-stage blisk |
| 44–91 | 591.65–598.57 | 591.70–599.83 | Blade vibration of the second-stage blisk |

Table 2 shows that the 38 vibration frequencies of the first-stage blisk lie in the range 434 Hz–443 Hz and the 53 vibration frequencies of the second-stage blisk lie in the range 546 Hz–600 Hz. The low-order natural modes exhibit the bending shape of the blades, and the number of such vibration modes equals the number of blades. The results of the global method and the prestressed CMS method differ very little, which shows that the prestressed CMS method is credible. To ensure the accuracy of the prestressed CMS method, the truncated frequency of each substructure must be greater than the corresponding frequency of the solved system; for the dynamic analysis of a blisk, the modal truncation number must therefore be greater than the number of blades. This selection principle for the modal truncation number is verified by Figure 4 and Table 2.
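The 36% efficiency figure quoted above follows directly from the Table 1 timings; a one-line check with the table values hard-coded:

```python
# Verify the reported efficiency gain at M = 45, N = 60 from Table 1.
t_cms, t_global = 5.1, 8.0  # computing times in hours (Table 1)
speedup = (t_global - t_cms) / t_global
print(f"efficiency improvement: {speedup:.0%}")  # -> 36%
```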
### 3.2. Vibration Response of Two-Stage Blisk Structure

During the operation of a blisk, the main source of vibration is the uneven air flow over the blade pressure and suction surfaces. The airflow exciting force can be estimated from aerodynamic calculations, and the velocity and pressure on the rotor blades can be determined by experiment. The airflow exciting force at the average radius of the air passage can be expressed as a Fourier series, that is, a superposition of harmonics. For a blisk system the practical form of the excitation is very complex and depends strongly on the working conditions. Here the airflow exciting force on the blade surface is simplified to a single-point force: an aerodynamic force is applied at the leading edge of each blade tip, and the spatial distribution of the load is required to take the form of a traveling wave.

For the blisk system, the equation of motion of forced vibration is

$$M \ddot{x} + C \dot{x} + K x = f(t), \tag{9}$$

where $x(t)$ is the displacement vector and $f(t)$ is the exciting-force vector; $M$, $C$, and $K$ are the mass, viscous damping, and stiffness matrices, respectively. In the forced-response analysis of the blisk, the exciting force $f(t)$ is usually expressed as

$$f_j(t) = f_{j0}\, e^{i(\omega t + \phi_j)}, \quad j = 1, 2, \ldots, N, \tag{10}$$

where $f_{j0}$ is the amplitude of the exciting force on the $j$th blade, $\omega$ is the frequency of the exciting force, and $N$ is the number of blades. The phase $\phi_j$ of the exciting force on the $j$th blade is defined as

$$\phi_j = \frac{2\pi r (j-1)}{N}, \quad j = 1, 2, \ldots, N, \tag{11}$$

where $r$ is the order of the exciting force.
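A short sketch of how Eqs. (10)-(11) generate a traveling-wave load set is given below; the amplitude, excitation order, and frequency values are illustrative placeholders of ours, not values from the paper.

```python
# Traveling-wave excitation of Eqs. (10)-(11): one complex force per blade.
import numpy as np

def blade_forces(t, N, r, omega, f0):
    """Complex exciting force f_j(t) on blades j = 1..N (Eq. (10)),
    with inter-blade phases phi_j from Eq. (11)."""
    j = np.arange(1, N + 1)
    phi = 2.0 * np.pi * r * (j - 1) / N          # Eq. (11)
    return f0 * np.exp(1j * (omega * t + phi))   # Eq. (10)

# Illustrative values only: 38 first-stage blades, excitation order r = 2,
# excitation near the 594 Hz resonance discussed below.
f = blade_forces(t=0.0, N=38, r=2, omega=2 * np.pi * 594.0, f0=1.0)
print(f.shape, np.angle(f[:3]))
```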
#### 3.2.1. Response Analysis of Vibration Displacement

Figure 5 shows the relationship between the maximum displacement and the exciting frequency and compares the vibration displacement responses of the two blisk stages.

Figure 5 Comparison diagrams of displacement responses in two-stage blisk: (a) displacement responses of two-stage blisk and (b) enlarged view of the second-stage blisk.

Figure 5 shows that the resonant frequency range of the two-stage blisk is 588 Hz–599 Hz, with the maximum response at about 594 Hz. The maximum displacement of the first-stage blisk is clearly much higher than that of the second stage.

Figure 6 shows the displacement contour maps of the two-stage blisk at the resonant frequency of 594 Hz.

Figure 6 Displacement contour maps of two-stage blisk at resonant frequency 594 Hz: (a) front view of displacement contour map and (b) arbitrary view of displacement contour map.

As seen in Figure 6, when the exciting frequency is 594 Hz, the vibration of the two-stage blisk is concentrated mainly in the first stage. The maximum displacement appears at the blade tips, and the displacement field shows a nodal-diameter mode. The displacement amplitude is relatively small in every region of the second-stage blisk.

#### 3.2.2. Response Analysis of Dynamic Stress

When the load acting on an element changes markedly with time, or each node of the component undergoes significant acceleration under the load, the stress generated by the dynamic load is called dynamic stress. Dynamic stress analysis is the basis for solving problems of dynamic failure of components. From the harmonic response analysis, the dynamic stress of the two-stage blisk under the aerodynamic excitation force is obtained.

Figure 7 shows the relationship between the maximum dynamic stress and the exciting frequency and compares the stress responses of the two blisk stages.

Figure 7 Comparison diagrams of dynamic stress in two-stage blisk: (a) stress responses of two-stage blisk and (b) enlarged view of the second-stage blisk.

Figure 7 shows that the dynamic stress response is essentially consistent with the vibration displacement response: the resonant frequency range is 588 Hz–599 Hz and the response peak occurs at 594 Hz. Moreover, the maximum dynamic stress of the first-stage blisk is much higher than that of the second-stage blisk.

Figure 8 shows the stress contour maps of the two-stage blisk at the resonant frequency of 594 Hz.

Figure 8 Dynamic stress contour maps of two-stage blisk at resonant frequency 594 Hz: (a) front view of stress contour map and (b) arbitrary view of stress contour map.

Figure 8 shows that the dynamic stress concentrates mainly at the blade roots of the first-stage blisk, and the stress field shows a nodal-diameter mode; the stress amplitude of the second-stage blisk is relatively small in every region.

To avoid a resonant response of the blisk structure, the frequency of the external load should be kept far away from the resonant frequency of 594 Hz. Even in the limit working condition at 594 Hz, the maximum vibration displacement is 6.21 mm and the maximum dynamic stress is 749 MPa, so both the vibration amplitude and the material strength of the blisk remain within the safe working condition.
## 4. Discussion of Blisk Vibration at Different Aspect Ratios

The blade aspect ratio is the ratio of blade length to blade width; it represents the relative length (or relative width) of the blade and is one of the key factors affecting blisk vibration characteristics. To explore its effects, the natural frequencies of the blisk structure are discussed at different aspect ratios.

### 4.1. Design and Modeling of Blisk Structure

To study the effect of aspect ratio on blisk vibration, blisk models are established at different aspect ratios. The relevant parameters, including blade inclination and wheel size, are kept constant during modeling, and only the blade aspect ratio is adjusted, so the analysis results can be considered credible.

Based on practical experience, the blade aspect ratio λ is set to 1.50, 1.75, 2.00, 2.25, and 2.50, as shown in Tables 3 and 4. The vibration characteristics of the blisk structure are discussed under the conditions of fixed width and fixed length.

Table 3 Design of blade aspect ratios at the fixed-width condition.

| Name | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Length/mm | 67.50 | 78.75 | 90.00 | 101.25 | 112.50 |
| Width/mm | 45.00 | 45.00 | 45.00 | 45.00 | 45.00 |
| Aspect ratio | 1.50 | 1.75 | 2.00 | 2.25 | 2.50 |

Table 4 Design of blade aspect ratios at the fixed-length condition.

| Name | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Length/mm | 90.00 | 90.00 | 90.00 | 90.00 | 90.00 |
| Width/mm | 60.00 | 51.43 | 45.00 | 40.00 | 36.00 |
| Aspect ratio | 1.50 | 1.75 | 2.00 | 2.25 | 2.50 |

After the blisk models at different aspect ratios are established, the substructure models need to be divided. A substructure may be a natural component of the global structure, or it may be a part separated manually. As shown in Figure 9, the blisk structure is divided into N (N = 38) sectors, and each sector is regarded as a substructure.

Figure 9 Models of global blisk and single-sector structure: (a) global blisk structure and (b) single-sector structure.
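The designs in Tables 3 and 4 follow directly from λ = length/width: at fixed width the length is width × λ, and at fixed length the width is length / λ. A tiny script reproducing both tables (rounded as in the paper) might look like this; the variable names are ours.

```python
# Reproduce the blade designs of Tables 3 and 4 from lambda = length / width.
ratios = [1.50, 1.75, 2.00, 2.25, 2.50]

fixed_width = 45.0   # mm (Table 3)
fixed_length = 90.0  # mm (Table 4)

for lam in ratios:
    length = fixed_width * lam        # Table 3: 67.50 ... 112.50 mm
    width = fixed_length / lam        # Table 4: 60.00 ... 36.00 mm
    print(f"lambda={lam:4.2f}  length={length:6.2f} mm  width={width:5.2f} mm")
```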
### 4.2. Blisk Vibration Characteristics at Different Aspect Ratios

Taking the effect of centrifugal force into account, the blisk models at different aspect ratios are analyzed with the prestressed CMS super-element method, and the natural frequencies of the blisk structure are obtained under the fixed-width and fixed-length conditions. Because a blisk system has many blades, the frequencies of modes with the same vibration shape are similar; therefore, the first 120 natural frequencies of the blisk structure are solved. The variation trends of the natural frequencies with increasing blade aspect ratio, together with descriptions of the vibration shapes, are summarized in Table 5.

Table 5 Effect of increasing blade aspect ratio on blisk natural frequencies ("—" stands for no obvious change, "↗" for rise, and "↘" for decline).

| Mode order | Fixed width | Fixed length | Description of vibration shape |
| --- | --- | --- | --- |
| 1 | ↘ | ↘ | Radial expansion of blade and wheel |
| 2–38 | ↘ | — | First-order bending vibration of blade |
| 39 | ↘ | ↘ | First-order blade bending with 0 nodal diameter |
| 40–77 | ↘ | ↗ | Distorted vibration of blade |
| 78 | — | — | Wheel vibration with 0 nodal diameter |
| 79–120 | ↘ | ↘ | Blade bending vibration with nodal diameter |

To observe the relationship between natural frequencies and aspect ratios, one typical order is selected as the representative of each group of similar frequencies. The frequencies of the 1st, 39th, 77th, 84th, and 120th orders are extracted, as shown in Tables 6 and 7.

Table 6 Natural frequencies (Hz) of typical orders at different aspect ratios under fixed width.

| Aspect ratio | 1st | 39th | 77th | 84th | 120th |
| --- | --- | --- | --- | --- | --- |
| 1.50 | 99.19 | 839.86 | 1769 | 3341 | 4307 |
| 1.75 | 100.35 | 717.73 | 1508 | 3239 | 3506 |
| 2.00 | 101.66 | 636.40 | 1319 | 2895 | 2979 |
| 2.25 | 103.14 | 578.26 | 1176 | 2520 | 2590 |
| 2.50 | 104.40 | 534.58 | 1062 | 2250 | 2300 |

Table 7 Natural frequencies (Hz) of typical orders at different aspect ratios under fixed length.

| Aspect ratio | 1st | 39th | 77th | 84th | 120th |
| --- | --- | --- | --- | --- | --- |
| 1.50 | 103.51 | 640.82 | 1094 | 2854 | 2971 |
| 1.75 | 102.52 | 638.58 | 1208 | 2892 | 2993 |
| 2.00 | 101.66 | 636.40 | 1319 | 2895 | 2979 |
| 2.25 | 101.14 | 634.69 | 1430 | 2878 | 2969 |
| 2.50 | 100.63 | 632.64 | 1540 | 2804 | 2966 |

From the data in Tables 6 and 7, the curves of natural frequency versus aspect ratio are drawn for the fixed-width and fixed-length conditions, as shown in Figure 10.

Figure 10 Effect curves of natural frequencies at different aspect ratios: (a) condition of fixed width and (b) condition of fixed length.

The curves in Figure 10 show that, under the fixed-width condition, every order of natural frequency declines as the blade aspect ratio increases, and the curves of the high-order frequencies are much steeper, indicating that the aspect ratio affects the high-order frequencies more strongly than the low-order ones. Under the fixed-length condition, the blade distortion frequencies between the 40th and 77th orders rise somewhat with increasing aspect ratio, while the other orders show no clear change.

## 5. Conclusions

In this research a prestressed CMS super-element method is put forward for the vibration analysis of aeroengine blisk structures. Based on this method, the dynamic characteristics of the blisk structure are calculated at different modal truncation numbers, and the effects of different blade aspect ratios on the blisk vibration characteristics are discussed. From the above analysis, the following conclusions can be drawn.

(1) Compared with the result of the global method, the accuracy of the prestressed CMS method meets the requirements of blisk dynamic analysis. As the selection principle for the modal truncation number, the truncated natural frequency of each substructure must be greater than the corresponding frequency of the solved system.

(2) The resonant frequencies of the first-stage and second-stage blisks are essentially consistent, lying mainly in the range 588 Hz–599 Hz. The maximum displacement and the maximum dynamic stress appear at the blade tips and blade roots of the first-stage blisk, respectively, and both show a nodal-diameter vibration mode.
(3) The effects of aspect ratio on blisk vibration differ between the fixed-width and fixed-length conditions. Under fixed width, the natural frequencies of the blisk structure decline as the blade aspect ratio increases, and the effect is more pronounced for the high-order frequencies; under fixed length, the blade distortion frequencies show a certain rise.

---

*Source: 1021402-2016-09-20.xml*
2016
# Burnout, Perceived Efficacy, and Job Satisfaction: Perception of the Educational Context in High School Teachers

**Authors:** María del Mar Molero Jurado; María del Carmen Pérez-Fuentes; Leonarda Atria; Nieves Fátima Oropesa Ruiz; José Jesús Gázquez Linares
**Journal:** BioMed Research International (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1021408

---

## Abstract

Burnout is closely related to personal and contextual variables, especially job satisfaction and commitment, and to other, less studied psychological variables such as perceived teaching efficacy and perception of the educational context. Objective. The general objective of this study was to examine the relationships of burnout with perceived educational context, perceived teaching efficacy (personal and collective), and job satisfaction and commitment. Materials and Methods. A battery of instruments was administered to 500 high school teachers at different schools in several Italian provinces. Results. The cluster analysis found that one-third of the high school teachers had high burnout. Evidence was also found associating elevated burnout with low scores in perceived efficacy (personal and collective), low job satisfaction, and low professional commitment. Furthermore, perception of the educational context is less positive when teachers experience high levels of burnout. Finally, the results showed a mediating effect of perceived personal efficacy on the relationship between burnout and job satisfaction. Conclusions. The results are discussed from the perspective of developing teaching autonomy to improve personal efficacy, decrease burnout, and increase job satisfaction in an educational system that reinforces individual and collective competence.

---

## Body

## 1. Introduction

The burnout syndrome is characterized by a range of symptoms related to psychophysical exhaustion, impaired relations, professional inefficacy, and disillusion [1–3]. Some authors have shown that problematic student behavior and dealing with students' families are two main factors in teachers' job stress [4, 5]. At least 30% of teachers have experienced burnout over the last two decades, with similar incidence in different countries, accompanied by negative consequences for health, economic standing, and the degree of commitment and satisfaction in the teaching profession [6]. In this study, one of our objectives was to find homogeneous groups of teachers by burnout level. We then analyzed the state of burnout in the high school context and its relationship with perceived educational context, teaching efficacy, job satisfaction, and commitment.

In the literature on the burnout syndrome, there is evidence of a significant relationship between burnout and the perceived educational context [7–9]. The conclusions of the study by Khani and Mirzaee [7], for example, showed that contextual variables can not only cause burnout in teachers directly but also influence it indirectly by increasing the effect of different stressors. Similarly, research by Klusmann et al. [8], with a large sample of high school teachers from over one hundred German schools, revealed that when the analysis controlled for individual teacher characteristics, disciplinary problems in the classroom predicted a higher level of emotional exhaustion.
In this vein, problems of coexistence, such as fighting and problems with the teaching staff, were also the aspects of most concern to families in different European countries [10]. Emotional exhaustion is associated with higher levels of nonphysical violence [11]. Along another line, research by Skaalvik and Skaalvik [12] with Norwegian teachers showed that emotional exhaustion was related more strongly to time pressure, while depersonalization and reduced personal accomplishment were associated more closely with parent-teacher relationships. Meanwhile, in a study by Hultell and Gustavsson [9], work demands (unsatisfied expectations, work load, role stress, routinization, social isolation, and passive coping strategies) were more closely related to burnout than work resources (autonomy, social support from colleagues, social support from the supervisor or principal, satisfaction with salary, mastery of skills, and active coping strategies), which were more connected to job commitment. García-Arroyo and Osca [13] suggested that burnout coping strategies should be based on nonlinear interactive models, since coping operates as a combined process in which some strategies affect others.

Interest in studying the relationship between burnout and teaching efficacy has been growing over time [14]. However, because teaching efficacy has been conceptualized and measured in different ways in different studies, some authors have preferred to differentiate between personal and collective efficacy [15–19]. Personal efficacy refers to confidence in one's own actions to reach expected results [15], while collective efficacy is defined as the belief in the ability of the school or team of teachers to perform actions leading to the achievement of goals [17–19].

With regard to self-efficacy (personal efficacy) and burnout, the empirical literature reflects a significant negative relationship between the two variables [20–24]. Thus Briones et al. [20] found that teachers' self-efficacy was a direct predictor of personal accomplishment, as was the perception of support received from colleagues. In the same direction, Evers et al. [21] found that self-efficacy beliefs were significantly and positively related to personal accomplishment and, furthermore, had a significant negative relationship with the emotional exhaustion and depersonalization dimensions. Ventura et al. [24] found that employees with more self-efficacy at work perceived more challenging demands and fewer impediments, which in turn was related to stronger commitment and less burnout. Similarly, in healthcare, Molero et al. [23] found that self-efficacy and stress management protected against burnout; in that area, the adaptability dimension of emotional intelligence was the strongest predictor of self-efficacy [25]. Higher levels of self-esteem have also been linked to lower levels of burnout [26]. Less research has been done, however, on the relationship between collective efficacy and burnout, and the results are not significant. For example, in the study by Malinen and Savolainen [22] with a sample of Finnish high school teachers, collective efficacy concerning how discipline was maintained among students did not explain burnout.

There is plentiful precedent in the literature supporting the assumption that job dissatisfaction and burnout are closely related [27–29].
Thus, in the study by Skaalvik and Skaalvik [12], teachers' job dissatisfaction was directly related to emotional exhaustion and diminished personal accomplishment. Conversely, greater teacher job satisfaction at different grade levels was related to the satisfaction of the psychological needs for autonomy, competence, and relatedness in the teaching staff [24], to self-determined motivation [24, 30], to cognitive self-regulation [31], and to stronger social support [32]. Furthermore, job satisfaction has been found empirically to play a positive role in subjective wellbeing [33, 34] and teacher self-concept [35].

The perception of the educational context also has an important role in job satisfaction [7, 12, 35]. Job satisfaction has been found to be indirectly related to all aspects of the school context (support from supervision, time pressure, relationships with parents, and autonomy) through emotional exhaustion and reduced personal accomplishment [12]. Other studies have related job satisfaction, burnout, and teaching efficacy. For example, Skaalvik and Skaalvik [36] found that teacher self-efficacy and two dimensions of teacher burnout (emotional exhaustion and depersonalization) were significantly associated with teacher job satisfaction, and Briones et al. [20] demonstrated that teacher self-efficacy was an indirect predictor of job satisfaction.

With respect to the relationship between job satisfaction and collective efficacy, several studies have found that collective efficacy is not correlated with teacher job satisfaction [22, 36]. Since the role of efficacy in job satisfaction and burnout is a line of research for which further studies, at least in education, have been called, another of our major objectives was to analyze the mediating role of teaching efficacy in the relationship between burnout and job satisfaction.

In addition, with respect to the relationships among burnout, commitment, and educational context, job commitment seems to modulate the relationship between demands and burnout and between resources (personal and job-related) and burnout [37]. Meanwhile, Pérez-Fuentes, Molero, Gázquez, and Oropesa [38], with a large sample of healthcare professionals, found that the interpersonal dimension of emotional intelligence was the strongest predictor of job commitment. Likewise, commitment has been found to be positively associated with self-efficacy and negatively with burnout [39].

Having reviewed the most relevant findings of previous studies on burnout in the high school context and its relationship with different contextual and personal variables, the main objectives and hypotheses of this study are as follows. As mentioned above, one purpose of this study was to find homogeneous groups of teachers by their level of burnout. We also intended to examine the relationships of burnout with perception of the educational context, perceived teaching efficacy (personal and collective), and job satisfaction and commitment; in this sense, we wanted to determine whether there were significant differences between the high and low burnout groups on these variables. Finally, we attempted to take a further step by exploring the mediating role of perceived teaching efficacy in the relationship between burnout and job satisfaction.

## 2. Materials and Methods
### 2.1. Participants

The study sample consisted of 500 7th and 8th grade high school teachers, selected at random from different schools in the Sicilian provinces of Trapani, Agrigento, and Palermo (Italy). Of this sample, 67.2% (n=336) were women and 32.8% (n=164) were men. The most representative group, 40.4%, were teachers aged 46 to 55 (n=202), followed by the group aged 36 to 45, who made up 27.2% (n=136) of the sample; those over 55 comprised 22.6% (n=113); and the least representative group, 9.8% (n=49) of the sample, were the youngest, aged 25 to 35. By level of education, 84.2% (n=421) had undergraduate degrees, 13% (n=65) had Master's degrees, 2.4% (n=12) had diplomas, and 0.4% (n=2) had Ph.D. degrees (Table 1).

Table 1 Distribution of the sample by sociodemographic and professional variables.

| Variable | Category | n | % |
| --- | --- | --- | --- |
| Sex | Male | 164 | 32.8 |
| | Female | 336 | 67.2 |
| Age | From 25 to 35 years | 49 | 9.8 |
| | From 36 to 45 years | 136 | 27.2 |
| | From 46 to 55 years | 202 | 40.4 |
| | Over 55 years | 113 | 22.6 |
| Education | Diploma | 12 | 2.4 |
| | Undergraduate degree | 421 | 84.2 |
| | Master's degree | 65 | 13.0 |
| | Ph.D. | 2 | 0.4 |
| Years of experience | 5 years or less | 61 | 12.2 |
| | From 6 to 10 years | 71 | 14.2 |
| | From 11 to 20 years | 169 | 33.8 |
| | From 21 to 30 years | 125 | 25.0 |
| | Over 30 years | 74 | 14.8 |
| Type of contract | Permanent | 362 | 72.4 |
| | Temporary | 138 | 27.6 |
| Teaching high school | 7th grade | 244 | 48.8 |
| | 8th grade | 256 | 51.2 |

Concerning their professional characteristics, the groups of teachers with seniority (years of experience) of 11 to 20 years and of 21 to 30 years made up 33.8% (n=169) and 25% (n=125) of the sample, respectively. By type of contract, 72.4% (n=362) had a permanent contract and 27.6% (n=138) a temporary one. Finally, although all the teachers in the sample taught high school, 48.8% (n=244) taught seventh grade and 51.2% (n=256) eighth grade.

### 2.2. Instruments

#### 2.2.1. LBQ: Link Burnout Questionnaire [3]

This is a self-report questionnaire which provides new burnout indicators for those who work in the service professions. Santinello [3] revisited the three dimensions studied by the MBI and added a new disillusion scale to enlarge the theoretical tradition of burnout. The four dimensions examined by the LBQ are (1) psychophysical exhaustion, (2) impaired relations, (3) professional inefficacy, and (4) disillusion. Disillusion is manifested by loss of passion and enthusiasm for daily activities; burnout may therefore be characterized as the final state in a long process of disillusion.

The LBQ consists of 24 items with a six-point Likert-type response scale covering four dimensions, each with three positive and three negative elements: the psychophysical dimension (energy-exhaustion), relationships (involvement-deterioration), professional competence (efficacy-inefficacy), and existential expectations (satisfaction-disillusion). The internal consistency of the scales, according to the author's data, varies from .68 (professional inefficacy) to .85 (disillusion). In our case, Cronbach's alpha was .89 for the complete questionnaire and, for the individual scales, .70 for psychophysical exhaustion, .66 for impaired relations, .65 for professional inefficacy, and .82 for disillusion.
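As a reference for how such internal consistencies are computed, a small Cronbach's alpha routine is sketched below; the simulated response matrix is a placeholder of ours, not the study's data.

```python
# Cronbach's alpha for one scale: items are columns of a (respondents x items)
# array, alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Illustrative only: 500 simulated respondents answering a 6-item scale on a
# six-point Likert scale (random data, so alpha will be near zero).
rng = np.random.default_rng(0)
demo = rng.integers(1, 7, size=(500, 6))
print(round(cronbach_alpha(demo), 2))
```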
#### 2.2.2. Assessment Questionnaire for Convictions about Efficacy, Perceived Context, Job Attitudes, and Satisfaction in School Contexts [19]

This questionnaire comprises several scales, each with a seven-point Likert response scale. The perceived personal efficacy scale measures the teacher's conviction of being up to the demands of their role and able to cope with any emergency or eventuality, for example with regard to families or colleagues, in managing the class or problem students. The perceived collective efficacy scale measures the teacher's beliefs about the ability of the school to master complicated tasks and cope with innumerable critical situations, approach problems related to dropping out of school, manage relations with local authorities, and face the demands of school autonomy.

Job satisfaction is the degree of satisfaction with the role, the possibilities for personal growth, and the work environment, and the degree to which personal needs are satisfied by the job. Job commitment is the bond which the person establishes with the organization and their commitment to achieving its objectives.

The scales concerning the perceived educational context are the principal perception scale (the degree to which teachers rate the principal's ability to identify resources within the school, promote cooperation, and set clear objectives), the colleague perception scale (perception of job relations, the work of fellow teachers, and the efficacy of communication among colleagues), the student perception scale (perception of student-teacher relations, students' interest in the subjects taught, and respect for the setting and for persons), the family perception scale (perception of parent-teacher relations, the degree of parent participation, and interest in their children's school life), the technical-auxiliary staff scale (perception of how the technical-auxiliary staff works in terms of competence and flexibility), and the physical environment perception scale (evaluation of the school's facilities, their adequacy for the educational demands, and general safety). Reliability was calculated with Cronbach's alpha coefficient, which varied from .90 (job satisfaction) to .95 (perception of the school principal), with a value of .98 for the complete questionnaire.

### 2.3. Procedure

After consent was received from the administration of the participating schools, a meeting was held with the teachers at which the study was presented, explaining its importance and clarifying its objectives, in order to obtain the approval and participation of all the teachers. The questionnaire was given directly to each teacher for completion at will. Participation in the study was voluntary, and the anonymity of participants was guaranteed. A period of one to two weeks was set for compilation of the data, which took place at the schools in the middle of the school year. SPSS version 23 for Windows was used for data processing and analysis.

#### 2.3.1. Data Analysis

First, bivariate correlations were computed to explore the relationships between the variables. Then a two-stage cluster analysis was performed to find groups of professionals based on their scores on the burnout dimensions.
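SPSS's two-step clustering has no exact open-source equivalent; as a rough stand-in, the grouping step can be illustrated with a k-means partition of the four LBQ dimension scores. Everything below (the data shape, k = 2, the score ranges) is an illustrative assumption, not the study's actual procedure or data.

```python
# Illustrative stand-in for the two-stage cluster analysis: k-means on the
# four LBQ burnout dimensions (k = 2 for "high" vs. "low" burnout groups).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Simulated scores: 500 teachers x 4 LBQ dimensions (exhaustion, impaired
# relations, professional inefficacy, disillusion), six-point scale means.
scores = rng.uniform(1, 6, size=(500, 4))

z = StandardScaler().fit_transform(scores)          # standardize dimensions
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(labels))                          # group sizes
```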
Once the groups or clusters had been identified, a comparison of means was performed using Student's t test for independent samples to determine whether there were significant differences between the burnout groups in their perception of the educational context, perceived efficacy (personal and collective), commitment, and job satisfaction.

Finally, to test the mediating effect of the perceived efficacy variables, a multiple mediation analysis was performed with two chained mediators. The macro for SPSS by Preacher & Hayes [40, 41] was used for computation of the mediation model, applying bootstrapping with coefficients estimated from 5000 bootstrap samples.
## 3. Results

### 3.1. Burnout in High School Teachers

A two-stage cluster analysis was done with the burnout factors to form the groups (Figures 1 and 2). Two groups resulted from the inclusion of these variables, with the following distribution: 32.6% (n=163) of subjects in Cluster 1 and 67.4% (n=337) in Cluster 2. Table 2 summarizes the mean burnout scores for the total sample of teachers and for each of the clusters.

Table 2: Mean scores on burnout for the total sample (N=500) and clusters. Values are M (SD).

| Burnout | Total sample (N=500) | Cluster 1: ▲ burnout (n=163) | Cluster 2: ▼ burnout (n=337) |
| --- | --- | --- | --- |
| Physical exhaustion | 12.54 (4.99) | 17.83 (4.71) | 9.98 (2.47) |
| Impaired relations | 12.53 (4.93) | 17.47 (4.09) | 10.14 (3.24) |
| Professional inefficacy | 10.98 (4.16) | 14.77 (4.28) | 9.14 (2.56) |
| Disillusion | 11.66 (5.77) | 17.67 (5.61) | 8.76 (2.88) |

Figure 1: Cluster composition (N=500). Note: factors are listed in order of importance of input.

Figure 2: Comparison of clusters (N=500).

The first group resulting from the cluster analysis (Cluster 1) was characterized by scores above the mean for the total sample in psychophysical exhaustion (M=17.83), impaired relations (M=17.48), professional inefficacy (M=14.78), and disillusion (M=17.67).
Therefore, the subjects in this cluster were grouped together because of their high levels on all the burnout dimensions. The second group (Cluster 2) comprised teachers with mean scores below those found for the total sample on all the burnout dimensions: psychophysical exhaustion (M=9.98), impaired relations (M=10.14), professional inefficacy (M=9.15), and disillusion (M=8.76). That is, those in Cluster 2 coincided in having scores below the mean on the burnout dimensions.

### 3.2. Burnout in High School Teachers and Its Relationship with Perception of the Educational Context, Efficacy, Commitment, and Job Satisfaction

As shown in Table 3, the four dimensions of burnout correlate negatively with the teachers' perception of the educational context (the management team, colleagues, technical-auxiliary staff, secretarial staff, families, students, and the physical school environment). Burnout was also negatively correlated with personal and collective efficacy, organizational commitment, and job satisfaction.

Table 3: Burnout, perception of the educational context, efficacy, commitment, and job satisfaction: correlation matrix. All coefficients shown are significant at ***p < .001. Variables 5-11 are the perception-of-the-educational-context scales.

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Physical exhaustion | – | | | | | | | | | | | | | |
| 2. Impaired relations | .61 | – | | | | | | | | | | | | |
| 3. Professional inefficacy | .58 | .55 | – | | | | | | | | | | | |
| 4. Disillusion | .66 | .61 | .64 | – | | | | | | | | | | |
| 5. Management team | -.23 | -.27 | -.26 | -.34 | – | | | | | | | | | |
| 6. Colleagues | -.21 | -.20 | -.29 | -.28 | .61 | – | | | | | | | | |
| 7. Tech.-auxiliary staff | -.27 | -.21 | -.25 | -.26 | .53 | .55 | – | | | | | | | |
| 8. Secretarial staff | -.24 | -.12 | -.25 | -.26 | .61 | .49 | .70 | – | | | | | | |
| 9. Families | -.19 | -.31 | -.22 | -.28 | .52 | .58 | .48 | .48 | – | | | | | |
| 10. Students | -.24 | -.40 | -.26 | -.25 | .45 | .52 | .39 | .33 | .71 | – | | | | |
| 11. Physical environment | -.20 | -.16 | -.20 | -.26 | .50 | .47 | .47 | .51 | .47 | .36 | – | | | |
| 12. Perceived personal efficacy | -.38 | -.37 | -.47 | -.43 | .47 | .44 | .40 | .43 | .37 | .40 | .33 | – | | |
| 13. Perceived collective efficacy | -.18 | -.21 | -.23 | -.26 | .67 | .63 | .53 | .55 | .58 | .53 | .54 | .43 | – | |
| 14. Organizational commitment | -.36 | -.37 | -.40 | -.46 | .56 | .56 | .43 | .46 | .49 | .44 | .47 | .58 | .54 | – |
| 15. Job satisfaction | -.42 | -.36 | -.41 | -.46 | .62 | .64 | .47 | .49 | .55 | .49 | .48 | .59 | .57 | .75 |

A Student's t test for independent samples was carried out on the groups classified according to the two-cluster solution, to find any differences between the clusters with respect to the rest of the variables analyzed. As observed in Table 4, there were significant differences between the groups with high and low burnout levels on all aspects related to teacher perception of the educational context, efficacy, commitment, and job satisfaction, with Cluster 1 (burnout dimension scores above the sample mean) showing lower scores on all the dimensions analyzed.

Table 4: Perception of the educational context, efficacy, commitment, and job satisfaction: descriptive statistics and t test by burnout group. ***p < .001.

| Variable | Cluster 1 (▲ burnout): n | M | SD | Cluster 2 (▼ burnout): n | M | SD | t | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Perception of the management team | 163 | 38.85 | 7.38 | 337 | 42.55 | 7.78 | -5.06*** | .000 |
| …colleagues | 163 | 30.90 | 6.31 | 337 | 34.08 | 7.07 | -4.86*** | .000 |
| …technical-auxiliary staff | 163 | 20.76 | 4.89 | 337 | 23.57 | 4.20 | -6.30*** | .000 |
| …secretarial staff | 163 | 11.00 | 2.72 | 337 | 12.21 | 2.18 | -4.94*** | .000 |
| …families | 163 | 18.69 | 4.92 | 337 | 21.07 | 4.87 | -5.08*** | .000 |
| …students | 163 | 19.19 | 4.84 | 337 | 21.77 | 4.51 | -5.85*** | .000 |
| …physical environment | 163 | 19.02 | 5.73 | 337 | 21.65 | 5.07 | -4.98*** | .000 |
| Perceived personal efficacy | 163 | 65.73 | 9.80 | 337 | 72.92 | 8.29 | -8.06*** | .000 |
| Perceived collective efficacy | 163 | 47.06 | 8.48 | 337 | 51.38 | 9.79 | -4.82*** | .000 |
| Organizational commitment | 163 | 31.60 | 7.50 | 337 | 36.70 | 5.94 | -7.60*** | .000 |
| Job satisfaction | 163 | 21.93 | 4.87 | 337 | 25.22 | 3.39 | -7.74*** | .000 |
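To show how one row of Table 4 is computed, the sketch below reruns the independent-samples t test on synthetic job-satisfaction scores generated to match the group sizes, means, and SDs reported above; it is illustrative only and will not reproduce the exact statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic scores matching Table 4's job-satisfaction summaries
high_burnout = rng.normal(loc=21.93, scale=4.87, size=163)  # Cluster 1
low_burnout = rng.normal(loc=25.22, scale=3.39, size=337)   # Cluster 2

# Student's t test for independent samples (equal variances assumed)
t, p = stats.ttest_ind(high_burnout, low_burnout)
print(f"t = {t:.2f}, p = {p:.4f}")
```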
### 3.3. Mediation Model for Estimating Predictors and Paths of Mediation Effects of Perceived Efficacy on Job Satisfaction

Based on the results of the cluster analysis, the burnout groups (recoded: ▼ burnout = 0; ▲ burnout = 1) were taken as the independent or predictor variable and perceived efficacy (personal and collective) as the mediating variables. The multiple mediation model was thus computed with two mediator variables (M1: Epers, perceived personal efficacy; M2: Ecolec, perceived collective efficacy), with job satisfaction as the dependent variable (Figure 3).

Figure 3: Multiple mediation model of perceived efficacy (personal and collective) on the relationship between burnout and job satisfaction.

First, a statistically significant effect [B=-7.19, p<.001] of burnout (X) on perceived personal efficacy (M1) was observed. The second regression analysis took Mediator 2 (perceived collective efficacy) as the outcome variable and included burnout (X) and perceived personal efficacy (M1) in the equation. Personal efficacy had a significant effect on collective efficacy (M2) [B=.41, p<.001], but burnout did not [B=-1.31, p=.137].

In the following regression analysis, the effects of the independent variable and of the two mediators were estimated with job satisfaction as the outcome variable (Y). In all cases, significant effects were observed: personal efficacy [B=.17, p<.001], collective efficacy [B=.16, p<.001], and burnout [B=-1.34, p<.001]. The total effect of the model was also significant [B=-3.28, p<.001].

Finally, the analysis of indirect effects was carried out using bootstrapping, and the data supported significance for Path 1 [ind1: X→M1→Y; B=-1.22, SE=.27, 95% CI (-1.84, -.73)] and Path 2 [ind2: X→M1→M2→Y; B=-.49, SE=.13, 95% CI (-.78, -.28)]. In both cases perceived personal efficacy appeared to mediate the effect of burnout on job satisfaction. However, the indirect effect expressed in Path 3 [ind3: X→M2→Y; B=-.21, SE=.16, 95% CI (-.55, .08)] was not significant.
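The mediation model itself was fitted with the Preacher and Hayes macro; as an illustration of the underlying computation only (not that macro's code), the sketch below estimates the three indirect effects of a serial two-mediator model by ordinary least squares and builds 95% percentile bootstrap intervals from 5000 resamples, mirroring the paths reported above. The inputs are assumed to be equal-length numeric NumPy vectors (X = burnout group, M1 = personal efficacy, M2 = collective efficacy, Y = job satisfaction).

```python
import numpy as np

def ols(X, y):
    """OLS coefficients [intercept, slopes...] via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def indirect_effects(x, m1, m2, y):
    """Indirect effects of the serial model X -> M1 -> M2 -> Y."""
    a1 = ols(x[:, None], m1)[1]                     # X -> M1
    _, a2, d21 = ols(np.column_stack([x, m1]), m2)  # X and M1 -> M2
    _, _cp, b1, b2 = ols(np.column_stack([x, m1, m2]), y)  # _cp = direct effect of X
    return np.array([a1 * b1,        # ind1: X -> M1 -> Y
                     a1 * d21 * b2,  # ind2: X -> M1 -> M2 -> Y
                     a2 * b2])       # ind3: X -> M2 -> Y

def bootstrap_ci(x, m1, m2, y, n_boot=5000, seed=0):
    """95% percentile bootstrap CIs for the three indirect effects."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((n_boot, 3))
    for b in range(n_boot):
        i = rng.integers(0, n, size=n)  # resample cases with replacement
        draws[b] = indirect_effects(x[i], m1[i], m2[i], y[i])
    return np.percentile(draws, [2.5, 97.5], axis=0)
```

An indirect effect is taken to be significant when its percentile interval excludes zero, which is the criterion behind the conclusions for Paths 1-3 above.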
## 4. Discussion

One of the first ideas inferred from the above analysis of burnout is that teachers with high and low burnout levels are clearly distinguished from each other.
The percentages found in this study were distributed as follows: fewer teachers, around a third (32.6%), showed high burnout, while most of them, 67.4%, showed low levels. We can therefore answer our first research hypothesis by showing a significant prevalence of burnout among high school teachers, with the severe risk to health this implies. The incidence is similar to that reported in recent decades and, in view of the characteristics of postmodern society and the current education model, could increase in the coming years if appropriate measures are not taken.

The main findings of our correlation analyses of burnout and its relationship to the perceived educational context, teacher efficacy, job satisfaction, and commitment generally coincide with previous literature and show clear evidence that the dimensions of burnout examined (psychophysical exhaustion, impaired relationships, professional inefficacy, and disillusion) correlate strongly with all the variables above.

Concerning the relationship of burnout to the perceived educational context, in the first place, our data showed a close association between the perception of students and psychophysical exhaustion. The results of Klusmann et al. [8], who found that disciplinary problems in the classroom predicted a higher level of emotional exhaustion, are along this same line. One possible explanation is that teachers with high psychophysical exhaustion could be using more passive strategies to cope with job demands, while teachers with less psychophysical exhaustion were able to rely on personal resources which enabled them to cope actively with conflictive situations in the classroom and to be more professionally committed [9]. In the second place, the results showed a moderate relationship between perception of the principal and disillusion. These results do not coincide with those of the study by Hultell and Gustavsson [9], in which the principal's social support showed a stronger connection with job commitment than with burnout. This may suggest that job commitment plays an important role in the relationship between the perceived educational context and burnout, since it seems to modulate the relationship between demands and resources, on the one hand, and burnout, on the other [37]. Other studies have found that high job satisfaction scores were indirectly related to a better perception of supervisory support through emotional exhaustion and lack of personal accomplishment [12, 35]. Therefore, job commitment and satisfaction could have a mediating role in the relationship between burnout and the perceived educational context, specifically in the dimension of perception of the principal. In any case, the role of these variables should continue to be studied to advance this line of research.

Our data on the relationship between burnout and perceived personal efficacy reflected a strong negative association between perceived personal efficacy and psychophysical exhaustion, coinciding with the results of Evers et al. [21]. In our study, perceived personal efficacy was also significantly negatively associated with the professional inefficacy and disillusion dimensions of burnout. Other studies point in the same direction [22, 24].
Furthermore, other researchers have found that lower perceived self-efficacy predicted higher burnout through a lack of personal accomplishment [20, 21].

The results concerning the correlation between burnout and job satisfaction supported a strong relationship between the burnout dimensions (psychophysical exhaustion, impaired relations, professional inefficacy, and disillusion) and job satisfaction. These data are consistent with those found by other authors [12, 27–29, 36].

Regarding the relationship between burnout and organizational commitment, our study found that job commitment was moderately related to psychophysical exhaustion and more closely to disillusion. Other studies suggest, as mentioned above for the relationship between burnout and perception of the educational context, that job commitment must be considered a modulating variable in the relationship between perception of the educational context and burnout [37]. Ventura et al. [24] found that staff with the most self-efficacy at work perceived more challenging demands and fewer impediments, which in turn was related to more commitment and less burnout. Studies in a healthcare context also support a significant negative correlation between burnout and job commitment [39]; in others, job commitment has been closely related to emotional intelligence [38], and high self-efficacy has been found to exert a protective role against burnout [23].

Based on these findings, we can say that our second research hypothesis, in which we expected teachers with high burnout to have lower scores on the variables analyzed (perception of the educational context, perceived efficacy, job commitment, and satisfaction), was fulfilled. Solid evidence demonstrated that the group of teachers with high burnout showed lower scores in perception of the educational context, professional efficacy, satisfaction, and job commitment. Most of these results coincide with those of other studies on burnout and perception of the educational context [7–9, 12], burnout and personal efficacy [20–24], burnout and job satisfaction [12, 27–29], and, lastly, burnout and job commitment [37].

Finally, the empirical data in this study back our third hypothesis, showing that perceived personal efficacy exerted a mediating effect on the relationship between burnout and job satisfaction; however, the same was not true of collective efficacy. These results are coherent with those found by other researchers. Skaalvik and Skaalvik [36] found that teacher self-efficacy was significantly and positively associated with job satisfaction. Briones et al. [20] found that teacher self-efficacy was an indirect predictor of job satisfaction. Furthermore, with regard to collective efficacy, some researchers have found that perceived collective efficacy did not explain job satisfaction [22, 36].
Therefore, personal factors, specifically perceived personal efficacy (that is, confidence in one's own possibilities and abilities to carry out teaching functions), carry more weight in the relationship between burnout and job satisfaction than social and contextual factors, such as beliefs about school management.

Given the crucial role of the mediating effect of perceived personal efficacy in the relationship between burnout and job satisfaction, action should be taken to improve confidence in one's own possibilities: setting realistic goals adjusted to personal skills and abilities, developing high expectations about one's own performance, cultivating emotional intelligence and positive emotions, increasing mental flexibility and creativity, stimulating personal initiative, and, finally, supporting reference models that orient professional performance and promote learning from experience (learning by doing).

This study has some methodological limitations. It used a cross-sectional design, with the limitations such studies entail, so it would be advisable for this research to be complemented by longitudinal studies. The study population was made up of high school teachers, which must be taken into account when generalizing the results. With respect to the evaluation procedures, we should mention the limitation of relying exclusively on self-reports for measuring burnout; these should be complemented with other measurement instruments (direct observation, interviews). Finally, the results should be replicated and tested in other countries and cultures to broaden their reliability and validity.

This study also poses new questions, for example, the role of motivational variables in personal efficacy and its relationship with job commitment, and how perception of the educational context contributes to the development of teachers' subjective wellbeing (satisfaction with life and happiness).

By way of synthesis, we offer some final reflections on improving future interventions for burnout and job satisfaction in high school teachers: (1) given the importance of emotional and motivational factors in teaching and learning, a teacher training program should be launched on adolescent development and on active strategies for coping with the professional demands of education, increasing teachers' resources for strengthening their commitment and managing stress; (2) the teaching staff should acquire skills and abilities for classroom use of creative, participatory, and dialogical methodologies, through a close, continual, and extended consulting service; and (3) teacher autonomy should be fostered to improve the perception of personal efficacy, through an education system which reinforces individual and collective competence.

## 5. Conclusions

The following conclusions may be drawn. First, a third of high school teachers have a high level of burnout. Our results show that when the teaching staff experiences high burnout levels, perception of the educational context is less positive. In this study, a high burnout level was associated with low scores in perceived efficacy (personal and collective), low job satisfaction, and low professional commitment. In addition, this study demonstrates that perceived personal efficacy exerts a mediating effect on the relationship between burnout and job satisfaction.
In view of these findings, we propose that burnout in high school teachers be prevented by reinforcing job satisfaction and increasing perceived teaching efficacy, and that educational entities provide specific training adapted to the teaching staff, together with supervision, consulting, and extended support in student matters of interest to them, in order to increase autonomy and perceived efficacy. The teaching staff should be given sufficient time and space to assimilate the professional competences related to teaching. Finally, to increase job satisfaction, it is important for teachers' work to be recognized by the educational community (families, school, students) and for teachers to receive affective, social, and economic compensation that balances demands with the results achieved.

---
*Source: 1021408-2019-04-09.xml*
--- ## Abstract Burnout is closely related to personal and contextual variables, especially job satisfaction and commitment, and other less studied psychological variables, such as perception of teaching efficacy or educational context.Objective. The general objective of this study was to examine the relationships of burnout with perceived educational context, perceived teaching efficacy (personal and collective), and job satisfaction and commitment.Materials and Methods. A battery of instruments was administered to 500 high school teachers at different schools in several Italian provinces.Results. The cluster analysis found that one-third of high school teachers had high burnout. Evidence was also found associating elevated burnout with low scores in perceived efficacy (personal and collective), low job satisfaction, and low professional commitment. Furthermore, perception of the educational context is less positive when the teachers experience high levels of burnout. Finally, the results showed the mediating effect of perceived personal efficacy on the relationship between burnout and job satisfaction.Conclusions. The results are discussed from the perspective of developing teaching autonomy on improving personal efficacy, decreasing burnout, and increasing job satisfaction in an educational system which reinforces individual and collective competence. --- ## Body ## 1. Introduction The burnout syndrome is characterized by dealing with a range of symptoms related to psychophysical exhaustion, impaired relations, and professional inefficacy and with disillusion [1–3]. Some authors have shown that problematic behavior of students and dealing with their families are two main factors affecting job stress in teachers [4, 5]. At least 30% of teachers have experienced burnout in the last two decades with similar incidence in different countries, which has been accompanied by negative consequences to health, economic level, and degree of commitment and satisfaction in the teaching profession [6]. In this study, one of our objectives was to find homogeneous groups of teachers by burnout level. Then we analyzed the state of burnout in the high school context and its relationship with perceived educational context, teaching efficacy, job satisfaction, and commitment.In the literature on the burnout syndrome, there is evidence of a significant relationship between burnout and perceived educational context [7–9]. The conclusions of the study by Khani and Mirzaee [7], for example, showed that contextual variables could not only cause burnout in teachers directly but also influence it indirectly by increasing the effect of different stressors. Similarly, research by Klusmann et al. [8], with a large sample of high school teachers from over one hundred German schools, revealed that when the analysis controlled for individual teacher characteristics, disciplinary problems in the classroom predicted a higher level of emotional exhaustion. In this vein, problems of coexistence such as fighting and problems with the teaching staff were also the aspects of most concern to the families in different European countries [10] Emotional exhaustion is associated with higher levels of nonphysical violence [11]. In another line, research by Skaalvik and Skaalvik [12] with Norwegian teachers showed that emotional exhaustion was related more strongly to pressures of time, while depersonalization and the reduced personal accomplishment were associated more intensely with parent-teacher relationships. 
Meanwhile, in a study by Hultell and Gustavsson [9], work demands (unsatisfied expectations, work load, the role of stress, routinization, social isolation, and passive coping strategy) were more closely related to burnout than work resources (autonomy, social support from colleagues, social support from supervisor or principal, satisfaction with salaries, mastery of skills, and active coping strategies) which were more connected to job commitment. García-Arroyo and Osca [13] suggested that burnout coping strategies should be based on nonlinear interactive models, since coping operates in a combined process in which some strategies affect others.Interest in studying the relationship of burnout and teaching efficacy has been growing with time [14]. However, because teaching efficacy has been conceptualized and measured in different ways in different studies, some authors have preferred to differentiate between personal and collective efficacy [15–19]. Personal efficacy refers to confidence in one’s own actions to reach expected results [15] and collective efficacy is defined rather as the belief in the ability of the school or team of teachers to perform actions leading to achievement of goals [17–19].With regard to self-efficacy or personal efficacy and burnout, the empirical literature reflects a significant negative relationship between the two variables [20–24]. Thus Briones et al. [20] found that self-efficacy of teachers was a direct predictor of personal accomplishment, as well as perception of support received from colleagues. In this same direction, Evers et al. [21] found that beliefs of self-efficacy were related significantly and positively to personal accomplishment and, furthermore, had a significant negative relationship with the emotional exhaustion and depersonalization dimensions. Ventura et al. [24] found that employees with more self-efficacy at work perceived more challenging demands and fewer impediments, and this in turn was related to stronger commitment and less burnout. Similarly, in healthcare, Molero et al. [23] found that self-efficacy and stress management protected from burnout. In this area, it was the emotional intelligence dimension of adaptability which was the strongest predictor of self-efficacy [25]. Higher levels of self-esteem have also been linked to lower levels of burnout [26]. However, less research has been done on the relationship between collective self-efficacy and burnout and there are no significant results. For example, in the study by Malinen and Savolainen [22] with a sample of Finnish high school teachers, collective efficacy concerning how discipline was maintained among students did not explain burnout.There is plentiful precedent literature supporting the assumption that job dissatisfaction and burnout maintain a close relationship with each other [27–29]. Thus in the study by Skaalvik and Skaalvik [12], teachers’ job dissatisfaction was directly related to emotional exhaustion and diminished personal accomplishment. On the contrary, greater teacher job satisfaction at different grade levels was related to satisfaction of the psychological needs for autonomy, teaching staff competence, and relations [24], with self-determined motivation [24, 30], with cognitive self-regulation [31], ans with stronger social support [32]. 
Furthermore, job satisfaction has been found empirically to have a positive role in subjective wellbeing [33, 34] and teacher self-concept [35].The perception of the educational context also has an important role in job satisfaction [7, 12, 35]. Job satisfaction has been found to be indirectly related to all the aspects of the school context (support from supervision, pressure of time, relationships with parents, and autonomy), through emotional exhaustion and reduction in personal accomplishment [12]. Other studies have related job satisfaction, burnout, and teaching efficacy. For example, Skaalvik and Skaalvik [36] found that teacher self-efficacy and the two dimensions of teacher burnout (emotional exhaustion and depersonalization) were significantly associated with teacher job satisfaction. Briones et al. [20] demonstrated that teacher self-efficacy was an indirect predictor of job satisfaction.With respect to the relationship between job satisfaction and collective efficacy, several different studies have found that collective efficacy is not correlated with teacher job satisfaction [22, 36]. Since the role of efficacy in job satisfaction and burnout is a line of research where the most studies, at least in education, have been suggested, another of our major objectives was to analyze the mediating role of teaching efficacy in the relationship between burnout and job satisfaction.In addition, with respect to the relationship between burnout, commitment, and educational context, job commitment seems to modulate the relationship between demands and burnout and between resources (personal and related to the job) and burnout [37]. Meanwhile, Pérez-Fuentes, Molero, Gázquez, and Oropesa [38], with a large sample of healthcare professionals, found that the interpersonal dimension of emotional intelligence was the strongest predictor of job commitment. Likewise, commitment was found to be positively associated with self-efficacy and negatively with burnout [39].After analyzing the most relevant findings of previous studies on burnout in the high school context and its relationship with different contextual and personal variables, the main objectives and hypotheses of this study are discussed below. As mentioned a few paragraphs above, one of the purposes of this study was to find homogeneous groups of teachers by their level of burnout. It was also intended to examine the relationships of burnout with both perception of the educational context, perceived teaching efficacy (personal and collective), and job satisfaction and commitment. In this sense, we also wanted to find any significant differences between the high and low burnout groups based on the variables above. Finally, we attempted to take a further step forward by exploring the mediating role of perceived teaching efficacy on the relationship between burnout and job satisfaction. ## 2. Materials and Methods ### 2.1. Participants The study sample consisted of 500 7th and 8th grade high school teachers, selected at random from different schools in the Sicilian provinces of Trapani, Agrigento, and Palermo (Italy). Of this sample, 67.2% (n=336) were women and 32.8% (n=164) were men. The most representative group of 40.4% were teachers aged 46 to 55 (n=202), followed by the group from 36 to 45 years who made up 27.2% (n=136) of the sample, those who were over 55 comprised 22.6% (n=113), and finally, the least representative group of 9.8% of the sample were the youngest, from 25 to 35 years. 
By level of education, 84.2% (n=421) had undergraduate degrees, 13% (n=65) had Master’s degrees, 2.4% (n=12) had diplomas, and 0.4% (n=2) had Ph.D. degrees (Table 1).Table 1 Distribution of the sample by sociodemographic and professional variables. n % Sex Male 164 32.8 Female 336 67.2 Age From 25 to 35 years 49 9.8 From 36 to 45 years 136 27.2 From 46 to 55 years 202 40.4 Over 55 years 113 22.6 Education Diploma 12 2.4 Undergraduate degree 421 84.2 Master’s degree 65 13.0 Ph.D. 2 0.4 Years of experience 5 years or less 61 12.2 From 6 to 10 years 71 14.2 From 11 to 20 years 169 33.8 From 21 to 30 years 125 25.0 Over 30 74 14.8 Type of contract Permanent 362 72.4 Temporary 138 27.6 Teaching high school 7th grade 244 48.8 8th grade 256 51.2Concerning their professional characteristics, the groups of teachers with seniority (years of experience) of 11 to 20 years and from 21 to 30 years made up 33.8% (n=169) and 25% (n=125) of the sample, respectively. By type of contract, 73.4% (n=362) had a permanent contract and 27.6% (n=138) temporary. Finally, although all the teachers in the sample taught high school, 48.8% (n=244) taught seventh grade and 51.2% (n=256) eighth grade. ### 2.2. Instruments #### 2.2.1. LBQ: Link Burnout Questionnaire [3] This is a self-report questionnaire which provides new burnout indicators for those who work in service professions. Santinello [3] reviewed the three dimensions studied by the MBI and added the new disillusion scale to enlarge the theoretical tradition of burnout.The four dimensions examined by the LBQ are (1) psychophysical exhaustion, (2) impaired relations, (3) professional inefficacy, and (4) disillusion. Disillusion is manifested by loss of passion and enthusiasm for daily activities. Burnout may therefore be characterized as the final state in a long process of disillusion.The LBQ consists of 24 items with a six-point Likert-type response scale for study of four dimensions, each with three positive and three negative elements: the psychophysical dimension (energy-exhaustion), relationships (involvement-deterioration), professional competence (efficacy-inefficacy), and existential expectations (satisfaction-disillusion). The internal consistency of the scales, according to data found by the author, varies from .68 (professional inefficacy) to .85 (disillusion). In our case, the Cronbach’s alpha for the complete questionnaire was .89 and for each one of the scales it was psychophysical exhaustion (α=.70), impaired relations (α=.66), professional inefficacy (α=.65), and disillusion (α=.82). #### 2.2.2. 
Assessment Questionnaire for Convictions about Efficacy, Perceived Context, Job Attitudes, and Satisfaction in School Contexts [19] This questionnaire is comprised of several scales with a seven-point Likert response scale: perceived personal efficacy scale (the teacher’s conviction of being up to the demands of their role and coping with any emergency or eventuality, for example, with regard to families or their colleagues, in managing the class or problem students) and perceived collective efficacy scale (the teacher’s beliefs with regard to the ability of the school to dominate complicated tasks and cope with innumerable critical situations, approach problems related to school quitting, manage relations with local authorities, and face the demands of school autonomy).Job satisfaction is the degree of satisfaction with the role, possibilities for personal growth, and work environment and the degree to which personal needs are satisfied by the job.Job commitment is bond which the person establishes with the organization and their commitment to achieve objectives.Scales concerning the perceived educational context are the principal perception scale (degree to which the teachers evaluate the principal’s ability to identify resources within the school, promote cooperation, and set clear objectives), colleague perception scale (perception of job relations, work of fellow teachers, and efficacy of communication among colleagues), student perception scale (perception of student-teacher relations, students’ interest in the subjects taught, and respect for the setting and persons), family perception scale (perception of parent-teacher relations, degree of parent participation, and interest in their children’s school life), technical-auxiliary staff scale (perception of how technical-auxiliary staff works in terms of competence and flexibility), physical environment perception scale (evaluation of the school’s facilities, adequacy for the educational demands, and general safety). Its reliability was calculated using the Cronbach’s alpha coefficient, which varied from .90 (job satisfaction) to .95 (perception of the school principle), and a value of .98 for the complete questionnaire. ### 2.3. Procedure After consent was received from the direction of the participating schools, a meeting was held with the teachers where the study was presented, explaining its importance and clarifying its objectives, to acquire the approval and participation of all the teachers. The questionnaire was given directly to each teacher for completion at will. Participation in the study was voluntary and anonymity of participants was guaranteed. A period of one to two weeks was set for compilation of the data. It was administered at the schools in the middle of the school year. The SPSS version .23 for Windows was used for data processing and analysis. #### 2.3.1. Data Analysis First, bivariate correlations were done to explore the relationships between variables. Then two-stage cluster analysis was done to find the groups of professionals based on their scores in the burnout dimensions. 
When the groups or clusters had been identified, a comparison of means was done using the Student’st test for independent samples to determine the existence of any significant differences between burnout groups with respect to their perception of the educational context, perceived efficacy (personal and collective), commitment, and job satisfaction.Finally, to compare the mediating effect of the perceived efficacy variables, a multiple mediation analysis was performed with two chained mediators. The macro for SPSS by Preacher & Hayes [40, 41] was used for computation of the mediation model. Bootstrapping was applied with coefficients estimated from 5000 bootstrap samples. ## 2.1. Participants The study sample consisted of 500 7th and 8th grade high school teachers, selected at random from different schools in the Sicilian provinces of Trapani, Agrigento, and Palermo (Italy). Of this sample, 67.2% (n=336) were women and 32.8% (n=164) were men. The most representative group of 40.4% were teachers aged 46 to 55 (n=202), followed by the group from 36 to 45 years who made up 27.2% (n=136) of the sample, those who were over 55 comprised 22.6% (n=113), and finally, the least representative group of 9.8% of the sample were the youngest, from 25 to 35 years. By level of education, 84.2% (n=421) had undergraduate degrees, 13% (n=65) had Master’s degrees, 2.4% (n=12) had diplomas, and 0.4% (n=2) had Ph.D. degrees (Table 1).Table 1 Distribution of the sample by sociodemographic and professional variables. n % Sex Male 164 32.8 Female 336 67.2 Age From 25 to 35 years 49 9.8 From 36 to 45 years 136 27.2 From 46 to 55 years 202 40.4 Over 55 years 113 22.6 Education Diploma 12 2.4 Undergraduate degree 421 84.2 Master’s degree 65 13.0 Ph.D. 2 0.4 Years of experience 5 years or less 61 12.2 From 6 to 10 years 71 14.2 From 11 to 20 years 169 33.8 From 21 to 30 years 125 25.0 Over 30 74 14.8 Type of contract Permanent 362 72.4 Temporary 138 27.6 Teaching high school 7th grade 244 48.8 8th grade 256 51.2Concerning their professional characteristics, the groups of teachers with seniority (years of experience) of 11 to 20 years and from 21 to 30 years made up 33.8% (n=169) and 25% (n=125) of the sample, respectively. By type of contract, 73.4% (n=362) had a permanent contract and 27.6% (n=138) temporary. Finally, although all the teachers in the sample taught high school, 48.8% (n=244) taught seventh grade and 51.2% (n=256) eighth grade. ## 2.2. Instruments ### 2.2.1. LBQ: Link Burnout Questionnaire [3] This is a self-report questionnaire which provides new burnout indicators for those who work in service professions. Santinello [3] reviewed the three dimensions studied by the MBI and added the new disillusion scale to enlarge the theoretical tradition of burnout.The four dimensions examined by the LBQ are (1) psychophysical exhaustion, (2) impaired relations, (3) professional inefficacy, and (4) disillusion. Disillusion is manifested by loss of passion and enthusiasm for daily activities. Burnout may therefore be characterized as the final state in a long process of disillusion.The LBQ consists of 24 items with a six-point Likert-type response scale for study of four dimensions, each with three positive and three negative elements: the psychophysical dimension (energy-exhaustion), relationships (involvement-deterioration), professional competence (efficacy-inefficacy), and existential expectations (satisfaction-disillusion). 
The internal consistency of the scales, according to data found by the author, varies from .68 (professional inefficacy) to .85 (disillusion). In our case, the Cronbach’s alpha for the complete questionnaire was .89 and for each one of the scales it was psychophysical exhaustion (α=.70), impaired relations (α=.66), professional inefficacy (α=.65), and disillusion (α=.82). ### 2.2.2. Assessment Questionnaire for Convictions about Efficacy, Perceived Context, Job Attitudes, and Satisfaction in School Contexts [19] This questionnaire is comprised of several scales with a seven-point Likert response scale: perceived personal efficacy scale (the teacher’s conviction of being up to the demands of their role and coping with any emergency or eventuality, for example, with regard to families or their colleagues, in managing the class or problem students) and perceived collective efficacy scale (the teacher’s beliefs with regard to the ability of the school to dominate complicated tasks and cope with innumerable critical situations, approach problems related to school quitting, manage relations with local authorities, and face the demands of school autonomy).Job satisfaction is the degree of satisfaction with the role, possibilities for personal growth, and work environment and the degree to which personal needs are satisfied by the job.Job commitment is bond which the person establishes with the organization and their commitment to achieve objectives.Scales concerning the perceived educational context are the principal perception scale (degree to which the teachers evaluate the principal’s ability to identify resources within the school, promote cooperation, and set clear objectives), colleague perception scale (perception of job relations, work of fellow teachers, and efficacy of communication among colleagues), student perception scale (perception of student-teacher relations, students’ interest in the subjects taught, and respect for the setting and persons), family perception scale (perception of parent-teacher relations, degree of parent participation, and interest in their children’s school life), technical-auxiliary staff scale (perception of how technical-auxiliary staff works in terms of competence and flexibility), physical environment perception scale (evaluation of the school’s facilities, adequacy for the educational demands, and general safety). Its reliability was calculated using the Cronbach’s alpha coefficient, which varied from .90 (job satisfaction) to .95 (perception of the school principle), and a value of .98 for the complete questionnaire. ## 2.2.1. LBQ: Link Burnout Questionnaire [3] This is a self-report questionnaire which provides new burnout indicators for those who work in service professions. Santinello [3] reviewed the three dimensions studied by the MBI and added the new disillusion scale to enlarge the theoretical tradition of burnout.The four dimensions examined by the LBQ are (1) psychophysical exhaustion, (2) impaired relations, (3) professional inefficacy, and (4) disillusion. Disillusion is manifested by loss of passion and enthusiasm for daily activities. 
## 2.3. Procedure

After consent was received from the direction of the participating schools, a meeting was held with the teachers where the study was presented, explaining its importance and clarifying its objectives, to acquire the approval and participation of all the teachers. The questionnaire was given directly to each teacher for completion at will. Participation in the study was voluntary and anonymity of participants was guaranteed.
A period of one to two weeks was set for compilation of the data. The questionnaire was administered at the schools in the middle of the school year. SPSS version 23 for Windows was used for data processing and analysis.

### 2.3.1. Data Analysis

First, bivariate correlations were done to explore the relationships between variables. Then a two-stage cluster analysis was done to find the groups of professionals based on their scores in the burnout dimensions. When the groups or clusters had been identified, a comparison of means was done using the Student’s t test for independent samples to determine the existence of any significant differences between burnout groups with respect to their perception of the educational context, perceived efficacy (personal and collective), commitment, and job satisfaction. Finally, to compare the mediating effect of the perceived efficacy variables, a multiple mediation analysis was performed with two chained mediators. The macro for SPSS by Preacher & Hayes [40, 41] was used for computation of the mediation model. Bootstrapping was applied with coefficients estimated from 5000 bootstrap samples.

## 3. Results

### 3.1. Burnout in High School Teachers

A two-stage cluster analysis was done with the burnout factors to form the groups (Figures 1 and 2). Two groups resulted from inclusion of these variables with the following distribution: 32.6% (n=163) of subjects in Cluster 1 and 67.4% (n=337) in Cluster 2. Table 2 summarizes the mean burnout scores for the total sample of teachers and for each of the clusters.

Table 2: Mean scores on burnout for the total sample (N=500) and clusters.

| Burnout | Total sample (N=500) | Cluster 1 ▲ Burnout (n=163) | Cluster 2 ▼ Burnout (n=337) |
|---|---|---|---|
| Physical exhaustion | M = 12.54 (SD = 4.99) | M = 17.83 (SD = 4.71) | M = 9.98 (SD = 2.47) |
| Impaired relations | M = 12.53 (SD = 4.93) | M = 17.47 (SD = 4.09) | M = 10.14 (SD = 3.24) |
| Professional inefficacy | M = 10.98 (SD = 4.16) | M = 14.77 (SD = 4.28) | M = 9.14 (SD = 2.56) |
| Disillusion | M = 11.66 (SD = 5.77) | M = 17.67 (SD = 5.61) | M = 8.76 (SD = 2.88) |

Figure 1: Cluster composition (N=500). Note. Factors are in the order of importance of input.

Figure 2: Comparison of clusters (N=500).

The first group resulting from the cluster analysis (Cluster 1) was characterized by scores above the mean for the total sample in psychophysical exhaustion (M=17.83), impaired relations (M=17.47), professional inefficacy (M=14.77), and disillusion (M=17.67).
Therefore, the subjects in this cluster were grouped together because of their high levels in all the burnout dimensions. The second group (Cluster 2) comprised teachers with mean scores below those found for the total sample in all the burnout dimensions: psychophysical exhaustion (M=9.98), impaired relations (M=10.14), professional inefficacy (M=9.14), and disillusion (M=8.76). That is, those in Cluster 2 coincided in having scores below the mean on the burnout dimensions.

### 3.2. Burnout in High School Teachers and Its Relationship with Perception of the Educational Context, Efficacy, Commitment, and Job Satisfaction

As shown in Table 3, the four dimensions of burnout correlate negatively with the teachers’ perception of the educational context (the management team, colleagues, technical-auxiliary staff, secretarial staff, families, students, and physical school environment). Burnout was also found to be negatively correlated with personal and collective efficacy, organizational commitment, and job satisfaction.

Table 3: Burnout, perception of the educational context, efficacy, commitment, and job satisfaction. Correlation matrix.

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Physical exhaustion | – | | | | | | | | | | | | | |
| 2. Impaired relations | .61*** | – | | | | | | | | | | | | |
| 3. Professional inefficacy | .58*** | .55*** | – | | | | | | | | | | | |
| 4. Disillusion | .66*** | .61*** | .64*** | – | | | | | | | | | | |
| 5. Management team | −.23*** | −.27*** | −.26*** | −.34*** | – | | | | | | | | | |
| 6. Colleagues | −.21*** | −.20*** | −.29*** | −.28*** | .61*** | – | | | | | | | | |
| 7. Tech.-auxiliary staff | −.27*** | −.21*** | −.25*** | −.26*** | .53*** | .55*** | – | | | | | | | |
| 8. Secretarial staff | −.24*** | −.12*** | −.25*** | −.26*** | .61*** | .49*** | .70*** | – | | | | | | |
| 9. Families | −.19*** | −.31*** | −.22*** | −.28*** | .52*** | .58*** | .48*** | .48*** | – | | | | | |
| 10. Students | −.24*** | −.40*** | −.26*** | −.25*** | .45*** | .52*** | .39*** | .33*** | .71*** | – | | | | |
| 11. Physical environment | −.20*** | −.16*** | −.20*** | −.26*** | .50*** | .47*** | .47*** | .51*** | .47*** | .36*** | – | | | |
| 12. Perceived personal efficacy | −.38*** | −.37*** | −.47*** | −.43*** | .47*** | .44*** | .40*** | .43*** | .37*** | .40*** | .33*** | – | | |
| 13. Perceived collective efficacy | −.18*** | −.21*** | −.23*** | −.26*** | .67*** | .63*** | .53*** | .55*** | .58*** | .53*** | .54*** | .43*** | – | |
| 14. Organizational commitment | −.36*** | −.37*** | −.40*** | −.46*** | .56*** | .56*** | .43*** | .46*** | .49*** | .44*** | .47*** | .58*** | .54*** | – |
| 15. Job satisfaction | −.42*** | −.36*** | −.41*** | −.46*** | .62*** | .64*** | .47*** | .49*** | .55*** | .49*** | .48*** | .59*** | .57*** | .75*** |

Note. Variables 5–11 concern perception of the educational context. ***p < .001.

A Student’s t test for independent samples was carried out on the groups classified based on the two-cluster solution to find any differences between the clusters with respect to the rest of the variables analyzed. As observed in Table 4, there were significant differences between the groups with high and low burnout levels for all aspects related to teacher perception of the educational context, efficacy, commitment, and job satisfaction, where Cluster 1 (burnout dimension scores above the sample mean) showed lower scores in all the dimensions analyzed.

Table 4: Perception of the educational context, efficacy, commitment, and job satisfaction. Descriptive statistics and t test by burnout group.
| | Cluster 1 ▲ Burnout (n=163) M (SD) | Cluster 2 ▼ Burnout (n=337) M (SD) | t | p |
|---|---|---|---|---|
| Perception of the educational context: | | | | |
| …management team | 38.85 (7.38) | 42.55 (7.78) | −5.06*** | .000 |
| …colleagues | 30.90 (6.31) | 34.08 (7.07) | −4.86*** | .000 |
| …technical-auxiliary staff | 20.76 (4.89) | 23.57 (4.20) | −6.30*** | .000 |
| …secretarial staff | 11.00 (2.72) | 12.21 (2.18) | −4.94*** | .000 |
| …families | 18.69 (4.92) | 21.07 (4.87) | −5.08*** | .000 |
| …students | 19.19 (4.84) | 21.77 (4.51) | −5.85*** | .000 |
| …physical environment | 19.02 (5.73) | 21.65 (5.07) | −4.98*** | .000 |
| Perceived personal efficacy | 65.73 (9.80) | 72.92 (8.29) | −8.06*** | .000 |
| Perceived collective efficacy | 47.06 (8.48) | 51.38 (9.79) | −4.82*** | .000 |
| Organizational commitment | 31.60 (7.50) | 36.70 (5.94) | −7.60*** | .000 |
| Job satisfaction | 21.93 (4.87) | 25.22 (3.39) | −7.74*** | .000 |

Note. ***p < .001.

### 3.3. Mediation Model for Estimating Predictors and Paths of Mediation Effects of Perceived Efficacy on Job Satisfaction

Based on the results of the cluster analysis, the burnout groups (recoded: ▼Burnout = 0; ▲Burnout = 1) were taken as the independent or predictor variable and perceived efficacy (personal and collective) as the mediating variables. Thus the multiple mediation model was computed with two mediator variables (M1: Epers and M2: Ecolec), with job satisfaction as the dependent variable (Figure 3).

Figure 3: Multiple mediation model of perceived efficacy (personal and collective) on the relationship between burnout and job satisfaction.

In the first place, a statistically significant effect [B=−7.19, p<.001] of burnout (X) on perceived personal efficacy (M1) was observed. The second regression analysis took Mediator 2 (perceived collective efficacy) as the outcome variable and included burnout (X) and perceived personal efficacy (M1) in the equation. There was a significant effect of personal efficacy [B=.41, p<.001] on collective efficacy (M2), but the effect of burnout was not significant [B=−1.31, p=.137].

In the following regression analysis, the effect of the independent variable and of the two mediators was estimated taking job satisfaction as the outcome variable (Y). In all cases, significant effects were observed: personal efficacy [B=.17, p<.001], collective efficacy [B=.16, p<.001], and burnout [B=−1.34, p<.001]. The total effect of the model was also significant [B=−3.28, p<.001].

Finally, the analysis of indirect effects was carried out using bootstrapping, and the data supported significance for Path 1 [ind1: X→M1→Y; B=−1.22, SE=.27, 95% CI (−1.84, −.73)] and Path 2 [ind2: X→M1→M2→Y; B=−.49, SE=.13, 95% CI (−.78, −.28)]. In both cases perceived personal efficacy seemed to mediate the effect of burnout on job satisfaction. However, the indirect effect expressed in Path 3 [ind3: X→M2→Y; B=−.21, SE=.16, 95% CI (−.55, .08)] was not significant.
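To make the bootstrapped indirect effects concrete, the following is a minimal, illustrative sketch in Python of a serial two-mediator model (X → M1 → M2 → Y) in the spirit of the Preacher & Hayes macro with 5000 resamples. The data are simulated with roughly the magnitudes reported above; the variable names, effect sizes, and helper functions are our assumptions, not the study’s code or data.

```python
# Bootstrapped indirect effects for a serial two-mediator model.
# All data below are SIMULATED for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.integers(0, 2, n).astype(float)            # burnout group (0 = low, 1 = high)
M1 = -7.0 * X + rng.normal(70, 9, n)               # perceived personal efficacy
M2 = 0.4 * M1 + rng.normal(20, 8, n)               # perceived collective efficacy
Y = 0.17 * M1 + 0.16 * M2 - 1.3 * X + rng.normal(5, 3, n)  # job satisfaction

def slopes(y, *cols):
    """OLS slopes (intercept dropped) via least squares."""
    A = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

def indirect_effects(idx):
    a1, = slopes(M1[idx], X[idx])                  # X -> M1
    _, d21 = slopes(M2[idx], X[idx], M1[idx])      # M1 -> M2, controlling X
    _, b1, b2 = slopes(Y[idx], X[idx], M1[idx], M2[idx])
    return a1 * b1, a1 * d21 * b2                  # ind1: X->M1->Y; ind2: X->M1->M2->Y

boot = np.array([indirect_effects(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("ind1 95% CI:", (round(lo[0], 2), round(hi[0], 2)))
print("ind2 95% CI:", (round(lo[1], 2), round(hi[1], 2)))
```

A percentile interval that excludes zero, as reported for ind1 and ind2 above, indicates a significant indirect effect.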
## 4. Discussion

One of the first ideas inferred from the above analysis of burnout is that teachers with high and low burnout levels are clearly distinguished from each other.
The percentages found in this study were distributed as follows: fewer teachers, around a third (32.6%), showed high burnout, while most of them, 67.4%, showed low levels. We can therefore respond to our first research hypothesis by noting the significant prevalence of burnout among high school teachers and the severe risk to health this implies. The incidence is similar to what it was several decades ago and, in view of the characteristics of postmodern society and the current education model, could increase in the coming years if opportune measures are not taken.

The main findings of our study regarding the correlation analyses of burnout and its relationship to the perceived educational context, teacher efficacy, job satisfaction, and commitment generally coincide with analyses in the previous literature and show clear evidence that the dimensions of burnout examined (psychophysical exhaustion, impaired relations, professional inefficacy, and disillusion) strongly correlate with all the variables above.

Concerning the relationship of burnout to the perceived educational context, in the first place, our data showed that there was a close association between the perception of students and psychophysical exhaustion, and vice versa. The results of Klusmann et al. [8], who found that disciplinary problems in the classroom predicted a higher level of emotional exhaustion, were along this same line. One possible explanation for these results is that teachers with high psychophysical exhaustion could be using more passive strategies to cope with job demands, while teachers with less psychophysical exhaustion were able to rely on personal resources which enabled them to cope actively with conflictive situations in the classroom and be more professionally committed [9].

In the second place, the results showed a moderate relationship between perception of the principal and disillusion. These results do not coincide with those of the study by Hultell and Gustavsson [9], in which the principal’s social support showed a stronger connection with job commitment than with burnout. This result may suggest that job commitment plays an important role in the relationship between the perceived educational context and burnout, since it seems to modulate the relationship between demands and resources and burnout [37]. Other studies have found that high scores in job satisfaction were indirectly related to better perception of support from supervision by teaching staff through emotional exhaustion and lack of personal accomplishment [12, 35]. Therefore, job commitment and satisfaction could have a mediating role in the relationship between burnout and the perceived educational context, specifically in the dimension of perception of the principal. In any case, the role of these variables in the perceived educational context and burnout should continue to be studied to progress in this line of research in the future.

Our data on the relationship between burnout and perceived personal efficacy reflected that there was a strong negative association between perceived personal efficacy and psychophysical exhaustion, coinciding with the results found in the study by Evers et al. [21]. In our study, perceived personal efficacy was significantly negatively associated with the professional inefficacy and disillusion burnout dimensions. Other studies point in the same direction [22, 24].
Furthermore, other researchers have found that lower perceived self-efficacy predicted higher burnout due to the lack of personal accomplishment [20, 21].

The results found in relation to the correlation between burnout and job satisfaction supported a strong relationship between the dimensions of burnout (psychophysical exhaustion, impaired relations, professional inefficacy, and disillusion) and job satisfaction. These data are coherent with those found by other authors [12, 27–29, 36].

Our study found that, in the relationship between burnout and organizational commitment, job commitment was related moderately to psychophysical exhaustion and more closely to disillusion. Other studies suggest, as mentioned above concerning the relationship between burnout and perception of the educational context, that job commitment must be considered a modulating variable in the relationship between perception of the educational context and burnout [37]. Ventura et al. [24] found that staff with the most self-efficacy at work perceived more challenging demands and fewer impediments, and this in turn was related to more commitment and less burnout. Studies in a healthcare context also support the existence of a significant negative correlation between burnout and job commitment [39], as do studies in which job commitment was closely related to emotional intelligence [38] and high self-efficacy exerted a protective role against burnout [23].

Based on the findings mentioned, we can say that our second research hypothesis, in which we expected to find that teachers with high burnout would have lower scores in the variables analyzed (perception of the educational context, perceived efficacy, job commitment, and satisfaction), was fulfilled. Solid evidence was found demonstrating that the group of teachers with high burnout showed lower scores in perception of the educational context, professional efficacy, satisfaction, and job commitment. Most of these results coincide with those found in other studies on burnout and perception of the educational context [7–9, 12], burnout and personal efficacy [20–24], burnout and job satisfaction [12, 27–29], and, lastly, burnout and job commitment [37].

Finally, the empirical data in this study back our third hypothesis, showing that perceived personal efficacy exerted a mediating effect on the relationship between burnout and job satisfaction; however, the same was not true of collective efficacy. These results are coherent with those found by other researchers. Skaalvik and Skaalvik [36] found that teacher self-efficacy was significantly and positively associated with job satisfaction. Briones et al. [20] discovered that teacher self-efficacy was an indirect predictor of job satisfaction. Furthermore, with regard to collective efficacy, some researchers have found that perceived collective efficacy did not explain job satisfaction [22, 36].
Therefore, personal factors, specifically perceived personal efficacy, that is, confidence in one’s own possibilities and abilities to carry out teaching functions, carry more weight in the relationship between burnout and job satisfaction than social and contextual factors, such as beliefs about school management.

Given the crucial role of the mediating effect of perceived personal efficacy in the relationship between burnout and job satisfaction, action should be taken to improve confidence in one’s own possibilities: setting realistic goals adjusted to personal skills and abilities, developing high expectations about one’s own performance, cultivating emotional intelligence and positive emotions, increasing mental flexibility and creativity, stimulating personal initiative, and, finally, supporting reference models to orient professional performance, promoting learning from experience (learning by doing).

This study has some methodological limitations. It has a cross-sectional design, with the limitations of such studies, so it would be advisable for this research to be accompanied by longitudinal studies. The study population was made up of high school teachers, which must be taken into account when generalizing the results. With respect to the evaluation procedures, we should mention the limitations of the exclusive use of self-reports for measuring burnout, which should be complemented with other measurement instruments (direct observation, interviews). Finally, the results should be replicated and tested in other countries and cultures to broaden their reliability and validity.

This study also poses new questions, for example, the role of motivational variables in personal efficacy and their relationship with job commitment, and how perception of the educational context contributes to the development of teachers’ subjective wellbeing (satisfaction with life and happiness).

By way of synthesis, we offer some final reflections on possible improvements to future interventions for burnout and job satisfaction of high school teachers: (1) given the importance of emotional and motivational factors in teaching-learning, a training program should be set up to raise teacher awareness of adolescent development and of active coping strategies for professional demands in education, increasing teachers’ resources for improving commitment and managing stress; (2) the teaching staff should acquire skills and abilities for classroom use of creative, participatory, and dialogical methodologies, through a close, continual, and extended consulting service; (3) teacher autonomy should be fostered to improve the perception of personal efficacy through an education system which reinforces individual and collective competence.

## 5. Conclusions

The following conclusions may be drawn. First, a third of high school teachers have a high level of burnout. Our results show that when the teaching staff experiences high burnout levels, perception of the educational context is less positive. In this study, a high burnout level was associated with low scores in perceived efficacy (personal and collective), low job satisfaction, and low professional commitment. In addition, this study demonstrates that perceived personal efficacy exerts a mediating effect on the relationship between burnout and job satisfaction.
In view of these findings, we propose that burnout in high school teachers be prevented by reinforcing job satisfaction and increasing perceived teaching efficacy. Educational entities should provide specific training adapted to the teaching staff, together with supervision, consulting, and extended support in student matters of interest to them, in order to increase autonomy and perceived efficacy. The teaching staff should be given sufficient time and space to assimilate professional competences related to teaching. Finally, to increase job satisfaction, it is important for the teacher’s work to be recognized by the educational community (family, school, students) and for teachers to receive affective, social, and economic compensation that balances demands and results achieved.

--- *Source: 1021408-2019-04-09.xml*
2019
# A Modified Harmony Search Algorithm for Solving the Dynamic Vehicle Routing Problem with Time Windows

**Authors:** Shifeng Chen; Rong Chen; Jian Gao

**Journal:** Scientific Programming (2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1021432

---

## Abstract

The Vehicle Routing Problem (VRP) is a classical combinatorial optimization problem. It is usually modelled in a static fashion; however, in practice, new requests by customers arrive after the initial workday plan is in progress. In this case, routes must be replanned dynamically. This paper investigates the Dynamic Vehicle Routing Problem with Time Windows (DVRPTW), in which customers’ requests either can be known at the beginning of the working day or occur dynamically over time. We propose a hybrid heuristic algorithm that combines the harmony search (HS) algorithm and the Variable Neighbourhood Descent (VND) algorithm. It uses the HS to provide global exploration capabilities and uses the VND for its local search capability. In order to prevent premature convergence of the solution, we evaluate the population diversity by using entropy. Computational results on the Lackner benchmark problems show that the proposed algorithm is competitive with the best existing algorithms from the literature.

---

## Body

## 1. Introduction

The Vehicle Routing Problem (VRP) was first introduced in 1959 [1]. Since then, it has become a central research problem in the field of operations research and an important application in the areas of transportation, distribution, and logistics. The problem involves a depot, some customers, and some cargo. The cargo must be delivered from a central depot to customers at various locations using several vehicles. This is a combinatorial optimization problem in which the goal is to find an optimal solution that satisfies the service requirements, routing restrictions, and vehicle constraints.

Thanks to recent advances in information and communication technologies, vehicle fleets can now be managed in real time. In this context, Dynamic Vehicle Routing Problems (DVRPs) are becoming increasingly important [2–4], and they have been a vital research area for the last three decades. Numerous approaches address a variety of aspects of the problem. Early research on the DVRP can be found in Psaraftis’s work [5], which proposed a dynamic programming approach. Bertsimas and Van Ryzin [6] considered a DVRP model with a single vehicle and no capacity restrictions where requests appear randomly. They characterized the problem by a generic mathematical model that regarded waiting time as the objective function. Regarding the DVRPTW, Chen and Xu [7] proposed a dynamic column generation algorithm for DVRPTWs based on their notion of decision epochs over the planning horizon, which indicate the best times of the day to execute the reoptimization process. de Oliveira et al. [8] addressed the DVRPTW with a capacitated fleet in which customer demands arrive in real time. Their solution was obtained through a metaheuristic approach using an ant colony system. Hong [9] also considered time windows and the fact that some requests may be urgent. This work adopted a continuous reoptimization approach and used a large neighbourhood search algorithm. When a new request arrives, it is immediately considered for inclusion in the current solution; therefore, the large neighbourhood search is run again to obtain a new solution.
de Armas and Melián-Batista [10] tackled a DVRPTW with several real-world constraints. Similar to Hong’s work, they also adopted a continuous reoptimization approach, but they calculated solutions using a variable neighbourhood search algorithm. In some recent surveys, Pillac et al. [11] classify routing problems from the perspective of information quality and evolution. They introduce the notion of degree of dynamism and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems. Bektaş et al. [12] provide another survey in this area, with a deeper and more detailed analysis. Last but not least, Psaraftis et al. [13] shed more light on work in this area over more than three decades by developing a taxonomy of DVRP papers according to 11 criteria.

As we know, for the large-scale DVRPTW, it is very difficult to develop exact methods to solve this type of problem. The majority of the existing studies deal with metaheuristics and intelligent optimization algorithms. The Harmony Search (HS) algorithm is a metaheuristic developed in [14]. This algorithm imitates the behavior of the musical improvisation process: the musician adjusts the resultant tones with the rest of the band, relying on his own memory in the music creation, until the tones reach a wonderful harmony state. HS has been successfully employed by many researchers to solve various complex problems such as university course scheduling [15, 16], nurse rostering [17], water network design [18], and the Sudoku puzzle [19]. Due to its optimization ability, the HS algorithm has also been employed as a search framework to solve VRPs. To solve the Green VRP (an extension of the classic VRP), Kawtummachai and Shohdohji [20] presented a hybrid algorithm based on the HS algorithm in which the HS is hybridized with a local improvement process. They tested the proposed algorithm on a real case from a retail company, and the results indicated that the method could be applied effectively to the case study. Moreover, Pichpibul and Kawtummachai [21] presented a modified HS algorithm for the capacitated VRP and incorporated the probabilistic Clarke-Wright savings algorithm into the harmony memory mechanism to achieve better initial solutions. Then, the roulette wheel selection procedure was employed within the harmony improvisation mechanism to improve its selection strategy. The results showed that the modified HS algorithm is competitive with the best existing algorithms. Recently, Yassen et al. [22] proposed a meta-HS algorithm (meta-HAS) to solve the VRPTW. The comparisons confirmed that the meta-HS produces results that are competitive with other proposed methods.

As described above, the HS algorithm has been successfully applied to solve standard VRPs but has not been applied to solve dynamic VRPs. Solving a dynamic VRP is usually more complex than solving the corresponding standard VRP, as more than one VRP must be solved when dealing with dynamic requests. It is well known that minimizing travel distances of standard VRPs is NP-hard in the general case, so solving dynamic VRPs with the same objective function is also a hard computational task. In this paper, we propose a Modified Harmony Search (MHS) algorithm to solve the DVRPTW. The DVRPTW is related to the static VRPTW: it can be described as a routing problem in which information about the problem can change during the working day.
It is a discrete-time dynamic problem and can be viewed as a series of static VRPTW problems. Therefore, the proposed MHS algorithm includes two parts. The first is static problem optimization: we combine the basic HS algorithm with the Variable Neighbourhood Descent (VND) algorithm, with the goal of achieving the benefits of both approaches to enhance the search. The combined HSVND algorithm distinguishes itself in three aspects. First, the encoding of harmony memory has been improved based on the characteristics of routing in VRPs. Second, the augmented HS is hybridized with an enhanced VND method to coordinate search diversification and intensification effectively; in this scheme, four neighbourhood structures are proposed. Third, in order to prevent premature convergence, we evaluate the population diversity using entropy. The second part is the dynamic customer check and insertion: four rules are employed within the DVRPTW that address the insertion of dynamic requests. Furthermore, the MHS is verified for practical implementation through a comparison study with other recently proposed approaches. The results show that our algorithm performs better than the compared algorithms; its average refusal rate was the smallest among the three compared algorithms. Moreover, the travel distances computed by our algorithm were also the best in 29 out of 30 instances.

The remainder of this paper is structured as follows. Section 2 provides a formal mathematical model of the DVRPTW and briefly describes the notation. Section 3 describes the main framework and details the development of the HSVND algorithm for the DVRPTW. The experimental setting and results are presented in Section 4. Finally, Section 5 summarizes the major conclusions of this article and recommends some possible directions for future research.

## 2. Problem Definition

The DVRPTW can be mathematically modelled by an undirected graph G = (V, E), where V = {v_0, v_1, …, v_n} is the set of vertices, including the depot (v_0) and the n customers (v_1, …, v_n), also called requests, orders, or demands, and E = {(i, j) : i, j ∈ V, i ≠ j} is the set of edges between each pair of vertices. Each vertex v_i has several nonnegative weights associated with it, namely, a request time τ_i, location coordinates (x_i, y_i), a demand q_i, a service time s_i, and an earliest (e_i) and latest (l_i) possible start time for the service, which define a time window [e_i, l_i], while [e_0, l_0] is the service time range of the depot. Each edge (i, j) is associated with a travel time t_ij and a travel distance d_ij.

A total of N customers are to be served by a fixed-size fleet K of identical vehicles; each vehicle k has a nonnegative capacity Q. Every customer has an arrival time a_i and a begin-service time b_i. In particular, a vehicle is only allowed to start service no earlier than the earliest service time e_i, so b_i = max(a_i, e_i); in other words, if a vehicle arrives earlier than e_i, it must wait until e_i. The customers can be divided into two groups, static customers (V_S) and dynamic customers (V_D), according to the time at which the customers made their requests. Customers in V_S, whose locations and demands are known at the beginning of the planning horizon (i.e., time 0), are also called priority customers because they must be serviced within the current day. The locations and demands of customers who belong to the set V_D will be known only at the time of the order request.
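As a small worked illustration of these timing definitions (ours, not the paper’s code), the sketch below propagates arrival times a_i and begin-service times b_i = max(a_i, e_i) along a single route, including the waiting that occurs when a vehicle arrives before e_i. All names are hypothetical.

```python
# Minimal sketch: propagating arrival and begin-service times along a route,
# with b_i = max(a_i, e_i) (the vehicle waits when it arrives early).
def begin_times(route, travel, e, s, depot_open=0.0):
    """route: customer indices; travel[i][j]: travel time between vertices
    (vertex 0 is the depot); e, s: earliest start and service durations."""
    t = depot_open
    prev = 0                          # start from the depot
    begins = []
    for i in route:
        a_i = t + travel[prev][i]     # arrival time at customer i
        b_i = max(a_i, e[i])          # wait until e_i if early
        begins.append(b_i)
        t = b_i + s[i]                # depart after servicing i
        prev = i
    return begins

travel = [[0, 4, 9], [4, 0, 6], [9, 6, 0]]
e = {1: 10.0, 2: 0.0}
s = {1: 5.0, 2: 5.0}
print(begin_times([1, 2], travel, e, s))   # [10.0, 21.0]: waits at customer 1
```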
These customers call in requesting on-site service over time. The objective is to accept as many requests as possible while finding a feasible set of routes with the minimum total travelled distance. The goal we consider here is a hierarchical approach, similar to the goal defined in [10]. The objective functions are considered in lexicographic order as follows: (i) number of refused customers, (ii) total travelled distance, (iii) total number of routes.

Note that [10] included seven objective functions, including total infeasibility (sum of the hours by which the customers’ time windows are exceeded), number of postponed services, number of extra hours (sum of the hours by which the vehicles’ working shifts are exceeded), total travelled distance, total number of routes, and time balance (difference between the longest and shortest route time made by one vehicle). They regarded the time windows as soft constraints; consequently, they needed to consider these additional objective functions. In contrast, because we regard the time windows as hard constraints, the objective functions related to time windows can be removed.

The DVRPTW can be modelled by the following formulas. First, we introduce three binary decision variables ξ_ijk, χ_k, and λ_i, defined as follows: ξ_ijk = 1 if edge (i, j) is travelled by vehicle k, and 0 otherwise; χ_k = 1 if vehicle k is used, and 0 otherwise; λ_i = 1 if the dynamic customer v_i is accepted, and 0 otherwise. Then the model can be described as follows:

(1) min Σ_{i=1}^{N} (1 − λ_i)

(2) min Σ_{(i,j)∈E} Σ_{k∈K} d_ij · ξ_ijk

(3) min Σ_{k=1}^{K} χ_k

subject to

(4) Σ_{i∈V} ξ_ijk = Σ_{i∈V} ξ_jik, 1 ≤ j ≤ n, k ∈ K

(5) Σ_{k∈K} Σ_{j∈V} ξ_ijk = 1, 1 ≤ i ≤ n

(6) Σ_{j∈V} ξ_0jk = Σ_{i∈V} ξ_i0k = 1, k ∈ K

(7) Σ_{i∈V′} Σ_{j∈V} q_i ξ_ijk ≤ Q_k, k ∈ K

(8) a_i = e_0 for i = 0; a_i = b_{i−1} + s_{i−1} + t_{i−1,i} for 1 ≤ i ≤ n+1

(9) b_i = max(a_i, e_i)

(10) z_i = l_0 for i = n+1; z_i = min(z_{i+1} − t_{i,i+1} − s_i, l_i) for 0 ≤ i ≤ n

(11) e_i ≤ b_i ≤ z_i ≤ l_i

(12) ξ_ijk, χ_k, λ_i ∈ {0, 1},

where a_i is the vehicle’s arrival time at customer v_i and b_i is the actual service start time at v_i, so b_i = max(a_i, e_i), as (9) states. In addition, for each customer i, z_i denotes the reverse arrival time, defined by (10).

The objective function (1) minimizes the number of refused customers, (2) minimizes the travel distance, and (3) minimizes the total number of routes. Eq. (4) is a flow conservation constraint: each customer j must have its in-degree equal to its out-degree, which is at most one. Eq. (5) ensures that each customer i (1 ≤ i ≤ n) is visited by exactly one vehicle. Eq. (6) ensures that every route starts and ends at the central depot. Eq. (7) enforces the capacity of each vehicle. Eqs. (8)–(11) define the time windows. Finally, (12) imposes restrictions on the decision variables.

The degree of dynamism of a problem (Dod) [23] represents how many dynamic requests occur in a problem. Let n_s and n_d be the numbers of static and dynamic requests, respectively. Then the Dod is

(13) Dod = n_d / (n_s + n_d) × 100%,

where Dod varies between 0% and 100%. When Dod equals 0%, all the requests are known in advance (a static problem), whereas when it equals 100%, all the requests are dynamic.
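Eq. (13) is simple to compute from the request times. A minimal sketch (names illustrative):

```python
# Minimal sketch of Eq. (13): degree of dynamism from request times tau_i,
# where tau_i = 0 marks a static (time-0) request.
def degree_of_dynamism(request_times):
    n_d = sum(1 for tau in request_times if tau > 0)
    return 100.0 * n_d / len(request_times)

# 6 static requests and 4 that arrive during the day -> Dod = 40%
print(degree_of_dynamism([0, 0, 0, 0, 0, 0, 35, 120, 240, 410]))
```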
Finally, we present some strategies used to accommodate dynamic requests.

### 3.1. The Framework of the General Algorithm

The general algorithm to solve a DVRPTW interactively is summarized in Algorithm 1. First, the routes and deliveries to the static customers are solved by HSVND, our proposed harmony search-based algorithm described in Section 3.2. HSVND returns the best solution found, in which all the static customers have been inserted into routes. Consequently, the fleet can start to deliver goods to the customers based on the routes in this solution. Then, dynamic customers submit requests. When a new dynamic customer request occurs, the algorithm immediately checks the feasibility of servicing the request using Algorithm 4 (discussed in Section 3.3). If the request is acceptable, it then invokes HSVND again to rearrange all the known customer requests that have not been serviced so far. Otherwise, the request is rejected. This dynamic procedure is repeated until there are no new requests. Then, the entire solution, including service for all static and dynamic customers’ requests, is returned by the general algorithm.

Algorithm 1: General algorithm-MHS.
(1) S ← ∅
(2) S ← HSVND(S, V_S, 0) // solve static customers
(3) t ← 0
(4) while (t < l_0) do
(5)   D_t ← CheckRequests(S, t)
(6)   S ← HSVND(S, D_t, t)
(7)   t ← t + 1
(8) end while
(9) return S

The following subsections discuss the HSVND algorithm and the method for inserting dynamic customer requests in more detail.

### 3.2. The Framework of HSVND

In this section, we describe the core of the MHS, which incorporates two known methods: the harmony search (HS) algorithm and the Variable Neighbourhood Descent (VND) algorithm. This hybrid algorithm, called HSVND, benefits from the strengths of both HS and VND: the HS has high global search capability, while the VND excels at local search.

The HS algorithm was proposed by Geem et al. [14] based on the way musicians improvise new harmonies from memory. HS is a population-based evolutionary algorithm in which solutions are represented by harmonies and the population is represented by the harmony memory. HS is an iterative process that starts with a set of initial solutions stored in harmony memory (HM). Each iteration generates a new solution, and the objective function is then used to evaluate its quality. The HS method replaces the worst solution in HM with the new solution if the new solution is of better quality. This process repeats until a predetermined termination criterion is met. The pseudocode of HSVND is given in Algorithm 2.

Algorithm 2: HSVND(S, V, t).
(1) initialize all parameters: HMS, HMCR, PAR, NI, S_best
(2) update the location of each vehicle in S at time t using Eq. (15)
(3) remove the unvisited customers from S and insert them into V
(4) for (i = 1; i ≤ HMS; i++) do // HM initialization
(5)   initialize a solution S_i randomly
(6)   if (f(S_i) < f(S_best)) then
(7)     S_best ← S_i
(8)   end if
(9) end for
(10) repeat
(11)   improvise a new solution S*
(12)   if (S* is infeasible) then
(13)     repair the solution
(14)   end if
(15)   VND(S*)
(16)   if (f(S*) is better than the worst HM member) then
(17)     replace the worst HM member with S* // HM update
(18)   end if
(19)   if (f(S*) < f(S_best)) then
(20)     S_best ← S*
(21)   end if
(22)   compute the population entropy // entropy evaluation
(23)   if (the population entropy increased or remained constant) then
(24)     remove the most frequent harmonies
(25)     regenerate new harmonies
(26)   end if
(27) until a preset termination criterion is met // termination check
(28) return S_best

Algorithm 3: VND(S).
(1) select neighbourhoods N_l, l = 1, …, l_max
(2) l ← 1
(3) while l ≤ l_max do
(4)   find the best solution S* in neighbourhood N_l
(5)   if f(S*) < f(S) then
(6)     S ← S*
(7)     l ← 1
(8)   else
(9)     l ← l + 1
(10)  end if
(11) end while
(12) return S

The HSVND algorithm consists of the following six steps: initialization, evaluation, improvising a new solution from the HM, local search by VND, updating the HM, and checking the termination criteria. After initialization, the hybrid algorithm improves solutions iteratively until a termination criterion is satisfied. The following subsections explain each step of the HSVND algorithm in detail.

#### 3.2.1. Initialization

During initialization, the parameters for HS are initialized first. These parameters are (i) the Harmony Memory Size (HMS), which determines the number of initial solutions in HM; (ii) the Harmony Memory Consideration Rate (HMCR), which determines the rate at which values are selected from HM solutions; (iii) the Pitch Adjustment Rate (PAR), which determines the rate of local improvement; and (iv) the number of improvisations (NI), which corresponds to the maximum number of iterations allowed for improving the solution. Next, the initial solution population is generated randomly and stored in the HM. These solutions are sorted in ascending order with respect to their objective function values.

The population is also initialized. HM is represented by a three-dimensional structure: the rows contain a set of solutions, and the columns contain the vehicles (routes) of each solution S_i. Each vehicle route r_ij can be considered a sequence of customers v_0, v_1, …, v_n, v_{n+1}, where v_0 and v_{n+1} represent the depot. In classical HS, the dimensions of each harmony in HM must be the same [14]. In our study, a harmony represents a set containing m vehicles (routes); therefore, the dimension of each harmony in HM can differ, as shown in

(14) HM = (S_1, S_2, …, S_i, …, S_HMS)^T, with S_i = (r_i1, r_i2, …, r_ij, …, r_i,m_i),

where m_i is the total number of vehicles in solution S_i.

In a DVRP, a decision of waiting or moving must be made when a vehicle finishes servicing a customer. In this paper, we introduce an anticipatory waiting strategy. Assume that the actual time is t, and vehicle k has finished serving customer v_x and is ready to serve the next customer v_y. We allow the vehicle to wait at v_x’s location until b_y − t_{x,y}. Consequently, the vehicle waits at its current location while

(15) b_y − t_{x,y} > t.

Solutions in the population are constructed by generating vehicle routes iteratively. The algorithm sequentially opens routes and fills them with customers until no more customers can be inserted while still keeping the routes feasible. More precisely, the procedure chooses an unrouted customer and tries to insert it into the current route to generate a feasible route that includes the customer. This step is repeated as long as customers can be inserted into the route without violating any constraint.
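The following is a minimal sketch (assumed data layout, not the authors’ implementation) of the variable-dimension harmony memory of Eq. (14) and the anticipatory-waiting test of Eq. (15): routes are plain Python lists of customer indices, and each harmony is a list of routes whose count may differ between rows.

```python
# Illustrative sketch: HM rows are whole solutions whose route counts may
# differ (Eq. (14)); a vehicle keeps waiting while b_y - t_xy > t (Eq. (15)).
import random

def random_solution(customers, n_routes=2):
    """Partition a random permutation of the customers into n_routes routes."""
    perm = random.sample(customers, len(customers))
    return [perm[i::n_routes] for i in range(n_routes)]

def make_harmony_memory(hms, customers):
    """HM as a list of solutions; each solution is a list of routes."""
    return [random_solution(customers, random.choice([2, 3])) for _ in range(hms)]

def should_wait(b_y, t_xy, t_now):
    """Anticipatory waiting test of Eq. (15)."""
    return b_y - t_xy > t_now

hm = make_harmony_memory(hms=4, customers=list(range(1, 9)))
print([len(sol) for sol in hm])                        # route counts differ per row
print(should_wait(b_y=130.0, t_xy=20.0, t_now=100.0))  # True: wait until t = 110
```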
When this is no longer possible, the route is closed and added to the solution S, and the procedure continues with the next vehicle route until all the customers have been inserted into the solution.

#### 3.2.2. Evaluating Population Entropy

The diversity of the harmony memory is measured by the population entropy, defined as follows:

(16) p_i(s_i) = f(s_i) / Σ_{j=1}^{HMS} f(s_j),  E(S) = −(Σ_{i=1}^{HMS} p_i(s_i) · ln p_i(s_i)) / ln(HMS),

where HMS is the harmony memory size, f(s_i) is the objective function value of the ith harmony, and E(S) is the normalized population entropy. In each generation, after updating the HM, the population entropy is computed. If the population entropy increases or remains constant, premature convergence has likely occurred. In that case, the best harmony is retained; for the other harmonies, the algorithm counts their frequencies in the HM, removes the most frequent ones, and generates new harmonies.

#### 3.2.3. Improvising a New Solution

A new solution S = (r_1, r_2, …, r_m) is improvised based on three rules: (i) random selection, (ii) memory consideration, and (iii) pitch adjustment. The random selection rule allows route r_i to be chosen from the whole search range randomly. In addition to random selection, route r_i can be taken from the previous routes for the same vehicle stored in HM, such that r_i ∈ {r_1i, r_2i, …, r_HMS,i}. After a new pitch (route) is taken from memory, it is further considered to determine whether a pitch adjustment is required. Specifically, the new solution is generated as follows: first, create an empty solution S; next, generate a random number p in the range [0, 1]. If p is less than the HMCR, choose one route from HM and add it to S; otherwise, generate a new route randomly and append it to S. Specifically, new pitches (routes) are generated as follows:

(17) r_i ← rand{r_1i, r_2i, …, r_HMS,i} with probability HMCR; randomly generated with probability 1 − HMCR.

When a new route is selected from the HM, it is adjusted with probability PAR by a local search, where PAR ∈ [0, 1]. We produce a random number q between 0 and 1. If q is smaller than the PAR, the current route is modified via a randomly selected neighbouring strategy. The improvisation process ends when the number of vehicles in the new solution S equals the smallest one stored in HM. For the DVRP, different neighbourhood structures have been used to improve the routes locally. We used three neighbourhood search methods, as follows.

Shift. This operation selects a customer randomly and moves it from its current position to a new random position in the same route. For example, in Figure 1(b), customer 2 was selected and moved after the 6th customer.

Figure 1: Pitch-adjusted neighbourhood structures, panels (a)–(d).

Exchange. This operation selects two customers randomly and exchanges their positions in the same route. In Figure 1(c), the positions of customers 2 and 6 were swapped.

Invert. This operation selects two customers randomly and inverts the subsequence between them in the same route. In Figure 1(d), edges (1, 2) and (6, 7) were deleted and edges (1, 6) and (2, 7) were linked; thus, the subsequence from 2 to 6 was inverted.

Note that any local change leading to an infeasible route is discarded; only feasible routes are accepted. Each neighbourhood structure is selected within a specific PAR range, as follows:

(18) r_i ← Shift if 0 ≤ q < (1/3)PAR; Exchange if (1/3)PAR ≤ q < (2/3)PAR; Invert if (2/3)PAR ≤ q < PAR; no change if PAR ≤ q ≤ 1.

As Figure 1 shows, the new solution S may be infeasible because some customers may be missed or repeated.
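The entropy measure of Eq. (16) and the three pitch-adjustment moves translate directly into code. The sketch below is illustrative (the function names are ours) and assumes routes are plain Python lists of customer indices.

```python
# Illustrative sketch: normalized population entropy (Eq. (16)) and the
# shift / exchange / invert moves selected by the PAR ranges of Eq. (18).
import math
import random

def population_entropy(objective_values):
    """E(S) = -(sum p_i ln p_i) / ln(HMS), with p_i = f(s_i) / sum f."""
    total = sum(objective_values)
    ps = [f / total for f in objective_values]
    return -sum(p * math.log(p) for p in ps if p > 0) / math.log(len(ps))

def shift(route):
    r = route[:]
    c = r.pop(random.randrange(len(r)))
    r.insert(random.randrange(len(r) + 1), c)   # reinsert at a random position
    return r

def exchange(route):
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]                     # swap two customers
    return r

def invert(route):
    r = route[:]
    i, j = sorted(random.sample(range(len(r)), 2))
    r[i:j + 1] = reversed(r[i:j + 1])           # reverse the subsequence
    return r

def pitch_adjust(route, par):
    """Apply one move according to the PAR ranges of Eq. (18)."""
    q = random.random()
    if q < par / 3:
        return shift(route)
    if q < 2 * par / 3:
        return exchange(route)
    if q < par:
        return invert(route)
    return route                                # no adjustment

print(pitch_adjust([1, 2, 3, 4, 5, 6, 7], par=0.3))
print(round(population_entropy([120.0, 150.0, 150.0, 200.0]), 3))
```

Note that an entropy of 1 corresponds to a maximally diverse memory (all harmonies weighted equally), which is why a non-decreasing entropy over generations is used here as a premature-convergence signal.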
Therefore, after a new solution is produced during the improvisation process, we must check its feasibility. A repair technique is used to transform an infeasible solution into a feasible one in two steps: first, we identify customers who are either unscheduled or replicated in the new solution; then, we delete any repeated customers from the old route and, finally, either assign the missed customers to any route that can accept them or generate a new route for them.

#### 3.2.4. The VND Method

After improvising a new solution, we use the VND method to further optimize the solution. To make the VND method applicable to a DVRPTW, appropriate neighbourhood structures for DVRPTW solutions must be defined. We propose four different neighbourhood structures, as follows.

Swap. Select one customer from a route and another customer from another route and swap them (see Figure 2(a)).

Figure 2: Neighbourhood structures. (a) Swap. (b) 2-opt*. (c) String-exchange. (d) Relocation.

2-opt*. Choose two routes and exchange the last parts of both routes after choosing two selection points, one from each route (see Figure 2(b)).

String-exchange. Select a subsequence of customers from one route and another subsequence of customers from another route and exchange the two subsequences (where d represents the maximum length of the subsequences) (see Figure 2(c)).

Relocation. Delete one customer from one route and insert it into another route (see Figure 2(d)).

The proposed VND method is shown in Algorithm 3, starting with the first structure N_1. In each iteration, the algorithm finds the solution S* in the current neighbourhood structure with the smallest objective function value (line (4)). In the evaluation step, if f(S*) < f(S), then solution S* becomes the next incumbent solution (line (6)) and the algorithm continues with N_1 (line (7)); otherwise, it explores the next neighbourhood structure (line (9)). When the last structure, N_4, has been applied and no further improvements are possible, the VND method terminates and returns the final locally optimal solution.

#### 3.2.5. Updating the HM

The new solution replaces the worst harmony in the HM if it is better in terms of the objective function value; otherwise, the newly created solution is rejected. Then, the HM is sorted again by objective function value.

#### 3.2.6. Checking the Termination Criteria

The HS procedure continues until a termination criterion is satisfied. We employed three termination criteria: (1) reaching the maximum number of improvisations (NI), (2) no improvement occurring after a certain number of improvisations (CUIN), and (3) the best solution in HM reaching a certain value (CBS). The algorithm stops if any one of the termination criteria is satisfied.

### 3.3. Inserting Dynamic Requests

Required information about a dynamic request, such as its customer location and time window, is acquired when the request arrives. Moreover, at a given time point, vehicles may be servicing a customer, waiting for a customer, or moving towards the next customer. Our DVRPTW algorithm should first determine whether requests arriving at this time point can be accepted and, if so, which vehicles and which time points can service them. The algorithm should reject requests for which no vehicle can be scheduled. To decide whether to accept or reject a request, we shall discuss some insertion rules.
As Figure 1 shows, the new solution S may be infeasible because some customers may be missed or repeated. Therefore, after a new solution is produced during the improvisation process, we must check its feasibility. A repair technique is used to transform an infeasible solution into a feasible one in two steps: first, we identify the customers that are missing from, or duplicated in, the new solution; then, we delete the repeated customers from their old routes and, finally, either assign the missed customers to any route that can accept them or generate a new route for them.

### 3.2.4. The VND Method

After improvising a new solution, we use the VND method to optimize the solution further. To make the VND method applicable to the DVRPTW, it is necessary to define appropriate neighbourhood structures for DVRPTW solutions. We propose the following four neighbourhood structures.

Swap. Select one customer from a route and another customer from a different route, and swap them (see Figure 2(a)).

Figure 2: Neighbourhood structures: (a) Swap; (b) 2-opt*; (c) String-exchange; (d) Relocation.

2-Opt*. Choose two routes and exchange the last parts of both routes after choosing two selection points, one from each route (see Figure 2(b)).

String-Exchange. Select a subsequence of customers from one route and another subsequence of customers from another route, and exchange the two subsequences, where d denotes the maximum length of the subsequences (see Figure 2(c)).

Relocation. Delete one customer from one route and insert it into another route (see Figure 2(d)).

The proposed VND method is shown in Algorithm 3, starting with the first structure $N_1$. In each iteration, the algorithm finds the solution S* in the current neighbourhood structure with the smallest objective function value (line (4)). In the evaluation step, if f(S*) < f(S), then S* becomes the next incumbent solution (line (6)) and the algorithm continues with $N_1$ (line (7)); otherwise, it explores the next neighbourhood structure (line (9)). When the last structure, $N_4$, has been applied and no further improvement is possible, the VND method terminates and returns the final locally optimal solution.

### 3.2.5. Updating the HM

The new solution replaces the worst harmony in the HM if it is better than that harmony in terms of the objective function value; otherwise, the newly created solution is rejected. The HM is then sorted again by objective function value.

### 3.2.6. Checking the Termination Criteria

The HS procedure continues until a termination criterion is satisfied. We employ three termination criteria: (1) reaching the maximum number of improvisations (NI); (2) no improvement occurring after a certain number of improvisations (CUIN); and (3) the best solution in HM reaching a certain value (CBS). The algorithm stops as soon as one of these criteria is satisfied.

## 3.3. Inserting Dynamic Requests

The required information about a dynamic request, such as the customer's location and time window, is acquired when the request arrives. Moreover, at a given time point, a vehicle may be servicing a customer, waiting for a customer, or moving towards the next customer. Our DVRPTW algorithm should first determine whether requests arriving at this time point can be accepted and, if so, which vehicles and which time points can service them; requests for which no vehicle can be scheduled should be rejected. To decide whether to accept or reject a request, we use the following insertion rules.

Rule 1 (time window constraints). Suppose customer $v_h$ requests service at time $\tau_h$, and vehicle k is visiting customer $v_k$ or is on the way to visit that customer according to the planned route r. For the new request to be inserted into the solution, at least one vehicle k must be able to arrive at $v_h$ before the end of the customer's time window, or a new vehicle must be scheduled that can do so. The rule is expressed as follows:

$$b_k + s_k + t_{k,h} \le l_h, \quad \exists k \in K, \tag{19}$$

or

$$\tau_h + t_{0,h} \le l_h \;\wedge\; \tau_h + 2t_{0,h} + s_h \le l_0. \tag{20}$$

Rule 2 (capacity constraints). A customer $v_h$ can be inserted into route r (assume it is served by vehicle k) if doing so does not violate vehicle k's capacity constraint. The rule for checking this condition is expressed as follows:

$$q_h + \sum_{i \in r} q_i \le Q_k, \quad \exists k \in K. \tag{21}$$

Rule 3 (direct insert). A new customer $v_h$ can be inserted between customers $v_x$ and $v_y$ in route r if the vehicle can arrive at $v_h$ before the end of its time window and can begin servicing $v_h$ no earlier than the beginning of its time window. The rule is expressed as follows:

$$b_x + s_x + t_{x,h} \le l_h, \qquad z_y - t_{y,h} - s_h \ge e_h, \qquad b_h \le z_h. \tag{22}$$

Rule 4 (split insert). This rule handles the case in which customer $v_h$ cannot be inserted at any position in route r; if route r can be split into two routes ($r_1$ and $r_2$) such that $v_h$ can be inserted into either of the two new routes (using Rule 3), then the customer can still be accepted. A route $r = (0,\dots,v_x,v_y,\dots,0)$ can be split into $r_1 = (0,\dots,v_x,0)$ and $r_2 = (0,v_y,\dots,0)$ if

$$z_y - t_{y,0} \ge \tau_h. \tag{23}$$

Algorithm 4 describes how dynamic requests are inserted into a route.

Algorithm 4: CheckRequests(S, t).
(1) D_t ← {dynamic customers requesting service at time t}
(2) update the location of each vehicle in S using (15)
(3) for each dynamic customer i in D_t do  // decide whether to accept or reject customer i using insertion Rules 1-4
(4)   if i satisfies Rules 1 and 2 then
(5)     attempt to insert customer i into an existing route by Rule 3
(6)     if this fails, apply Rule 4 to split a route into two new routes
(7)     and attempt the insertion by Rule 3 again on each of the two new routes
(8)   else
(9)     reject i
(10)  end if
(11)  if i is rejected then
(12)    add i to the reject pool
(13)    remove i from D_t
(14)  end if
(15) end for
(16) return D_t
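To illustrate how such rules translate into code, the following C# sketch checks Rule 2 and the conditions of Rule 3 for one candidate insertion point. The `Customer` record, the placeholder `Travel` function, and all identifiers are our own simplifications, with fields mirroring the paper's symbols (E/L: time window start and end, S: service time, Q: demand, B: service start time, Z: latest feasible service start).

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative data carrier; fields mirror the paper's per-customer symbols.
record Customer(int Id, double E, double L, double S, double Q, double B, double Z);

static class InsertionRules
{
    // Placeholder travel time; in the benchmark, travel time equals distance.
    static double Travel(int from, int to) => 1.0;

    // Rule 2 (eq. (21)): the vehicle's capacity must not be exceeded
    // after adding customer h to the route.
    static bool CapacityOk(Customer h, IEnumerable<Customer> route, double capacity)
        => h.Q + route.Sum(c => c.Q) <= capacity;

    // Rule 3 (eq. (22)): h fits between x and y if the vehicle reaches h
    // before l_h, y's latest start leaves room for e_h, and b_h <= z_h.
    static bool DirectInsertOk(Customer h, Customer x, Customer y)
        => x.B + x.S + Travel(x.Id, h.Id) <= h.L
        && y.Z - Travel(y.Id, h.Id) - h.S >= h.E
        && h.B <= h.Z;

    static void Main()
    {
        var x = new Customer(1, E: 0, L: 50, S: 5, Q: 10, B: 20, Z: 45);
        var y = new Customer(2, E: 0, L: 90, S: 5, Q: 10, B: 40, Z: 80);
        var h = new Customer(3, E: 10, L: 60, S: 5, Q: 8, B: 30, Z: 55);

        Console.WriteLine(CapacityOk(h, new[] { x, y }, capacity: 100)); // True
        Console.WriteLine(DirectInsertOk(h, x, y));                      // True
    }
}
```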
## 3.1. The Framework of the General Algorithm

The general algorithm for solving a DVRPTW interactively is summarized in Algorithm 1. First, the routes and deliveries for the static customers are computed by HSVND, our proposed harmony search-based algorithm described in Section 3.2. HSVND returns the best solution found, in which all the static customers have been inserted into routes, and the fleet can then start delivering goods to the customers along the routes in this solution. Subsequently, dynamic customers submit requests. When a new dynamic customer request occurs, the algorithm immediately checks the feasibility of servicing the request (Algorithm 4, discussed in Section 3.3). If the request is acceptable, the algorithm invokes HSVND again to rearrange all the known customer requests that have not yet been serviced; otherwise, the request is rejected. This dynamic procedure is repeated until there are no new requests, and the entire solution, including service for all static and dynamic customers' requests, is returned by the general algorithm.

Algorithm 1: General algorithm (MHS).
(1) S ← ∅
(2) S ← HSVND(S, V_S, 0)  // solve the static customers
(3) t ← 0
(4) while (t < l_0) do
(5)   D_t ← CheckRequests(S, t)
(6)   S ← HSVND(S, D_t, t)
(7)   t ← t + 1
(8) end while
(9) return S

The following subsections discuss the HSVND algorithm and the method for inserting dynamic customer requests in more detail.

## 3.2. The Framework of HSVND

In this section, we describe the core of the MHS, which incorporates two known methods: the harmony search (HS) algorithm and the Variable Neighbourhood Descent (VND) algorithm. This hybrid algorithm, called HSVND, benefits from the strengths of both HS and VND: HS has a strong global search capability, while VND excels at local search.

The HS algorithm was proposed by Geem et al. [14] and is based on the way musicians improvise new harmonies from memory. HS is a population-based evolutionary algorithm in which solutions are represented by harmonies and the population is represented by the harmony memory. HS is an iterative process that starts with a set of initial solutions stored in the harmony memory (HM). Each iteration generates a new solution, whose quality is then evaluated with the objective function. HS replaces the worst solution in HM with the new solution if the new solution is of better quality. This process repeats until a predetermined termination criterion is met. The pseudocode of HSVND is given in Algorithm 2.

Algorithm 2: HSVND(S, V, t).
(1) initialize all parameters: HMS, HMCR, PAR, NI, S_best
(2) update the location of each vehicle in S at time t using (15)
(3) remove the unvisited customers from S and insert them into V
(4) for (i = 1; i ≤ HMS; i++) do  // HM initialization
(5)   initialize a solution S_i randomly
(6)   if (f(S_i) < f(S_best)) then
(7)     S_best ← S_i
(8)   end if
(9) end for
(10) repeat
(11)   improvise a new solution S*
(12)   if (S* is infeasible) then
(13)     repair the solution
(14)   end if
(15)   VND(S*)
(16)   if (f(S*) is better than the worst HM member) then
(17)     replace the worst HM member with S*  // HM update
(18)   end if
(19)   if (f(S*) < f(S_best)) then
(20)     S_best ← S*
(21)   end if
(22)   compute the population entropy  // entropy evaluation
(23)   if (the population entropy increases or remains constant) then
(24)     remove the harmonies with the highest frequency
(25)     regenerate new harmonies
(26)   end if
(27) until a preset termination criterion is met  // checking the termination criterion
(28) return S_best

Algorithm 3: VND(S).
(1) select neighbourhoods N_l, l = 1, ..., l_max
(2) l ← 1
(3) while l ≤ l_max do
(4)   find the best solution S* in neighbourhood N_l
(5)   if f(S*) < f(S) then
(6)     S ← S*
(7)     l ← 1
(8)   else
(9)     l ← l + 1
(10)  end if
(11) end while
(12) return S

The HSVND algorithm consists of six steps: initialization, evaluation, improvising a new solution from the HM, local search by VND, updating the HM, and checking the termination criteria. After initialization, the hybrid algorithm improves solutions iteratively until a termination criterion is satisfied. The preceding subsections (3.2.1-3.2.6) explain each step of the HSVND algorithm in detail.
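For concreteness, here is a compact, generic C# rendering of the VND loop in Algorithm 3. The delegate-based design and the toy objective in `Main` are our own choices for illustration, not part of the paper.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class VndDemo
{
    // Variable Neighbourhood Descent in the shape of Algorithm 3:
    // scan neighbourhoods N_1..N_lmax in order, restart from N_1 on
    // improvement, and advance to the next neighbourhood otherwise.
    static T Vnd<T>(T s, Func<T, double> f,
                    IReadOnlyList<Func<T, IEnumerable<T>>> neighbourhoods)
        where T : class
    {
        int l = 0;
        while (l < neighbourhoods.Count)
        {
            // Best candidate in the current neighbourhood of s (line (4)).
            T best = neighbourhoods[l](s).OrderBy(f).FirstOrDefault();
            if (best != null && f(best) < f(s))
            {
                s = best;   // accept the improvement (line (6))...
                l = 0;      // ...and restart from the first neighbourhood (line (7))
            }
            else
            {
                l++;        // move on to the next neighbourhood structure (line (9))
            }
        }
        return s;
    }

    static void Main()
    {
        // Toy demo: minimize (x - 7)^2 with step-1 and step-5 neighbourhoods.
        Func<int[], double> f = x => (x[0] - 7.0) * (x[0] - 7.0);
        var moves = new List<Func<int[], IEnumerable<int[]>>>
        {
            x => new[] { new[] { x[0] - 1 }, new[] { x[0] + 1 } },
            x => new[] { new[] { x[0] - 5 }, new[] { x[0] + 5 } },
        };
        int[] result = Vnd(new[] { 30 }, f, moves);
        Console.WriteLine(result[0]); // 7
    }
}
```

In the DVRPTW setting, the four neighbourhoods of Section 3.2.4 (Swap, 2-opt*, String-exchange, Relocation) would play the role of the delegates in this sketch.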
## 4. Experimental Results

This section presents the computational results obtained from extensive experiments with the proposed HSVND. We implemented HSVND in the C# programming language, compiled for the .NET Framework 4.5, and executed all the experiments on a PC with an Intel® Pentium® G645 CPU clocked at 2.90 GHz and 2 GB of RAM, running the Windows 7 operating system.

The experiments were performed on the Lackner benchmark, which is derived from the standard Solomon benchmark. In this benchmark, 100 customers are distributed in a Euclidean plane over a 100 × 100 square area, and travel times between customers equal the corresponding distances.
The benchmark is divided into six groups, named R1, R2, C1, C2, RC1, and RC2. Each group contains 8 to 12 instances, for 56 instances in total. To accommodate the dynamic portion of the test, each instance is associated with one of five degrees of dynamism: 10%, 30%, 50%, 70%, and 90%. In the following experiments, we label each instance using the notation "Group-Index-Dod," where Group is the name of the group the instance belongs to, Index is its index within the group (two digits), and Dod is its degree of dynamism. For example, the label "R1-05-70" denotes the fifth instance in group R1, in which 70 of the customers are dynamic.

### 4.1. Parameter Settings

This subsection investigates the HSVND parameters HMS, HMCR, and PAR, as well as the convergence speed. We randomly selected 15 instances for these experiments: "C1-01-50," "C1-05-90," "C2-01-30," "C2-07-10," "R1-03-30," "R1-09-70," "R1-03-10," "R2-03-50," "R2-06-10," "R2-09-70," "RC1-04-50," "RC1-06-90," "RC2-02-70," "RC2-03-90," and "RC2-08-30." For each instance, 10 independent runs were carried out. Because we split the DVRP into a series of static VRPs during the solving process, we use only the static customers available at the beginning of the planning horizon (t = 0) in all experiments, and we set the termination condition of the algorithm to consider only the maximum number of improvisations (NI = 1000). Tables 1-3 show the mean objective function values obtained with different settings of HMS, HMCR, and PAR.

Table 1: The mean results of the HSVND with different HMS values.

| Instance | HMS = 15 | HMS = 20 | HMS = 25 | HMS = 30 | HMS = 35 | HMS = 40 |
|---|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 286.13 | 285.06 | 285.06 | 285.06 | 285.06 | 286.13 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 |
| R1-03-30 | 980.92 | 980.85 | 980.33 | 977.67 | 985.17 | 980.57 |
| R1-03-10 | 1189.33 | 1190.39 | 1188.02 | 1194.07 | 1190.53 | 1192.57 |
| R1-09-70 | 444.40 | 444.85 | 442.59 | 443.04 | 443.85 | 444.10 |
| R2-03-50 | 632.12 | 623.88 | 629.89 | 619.93 | 620.56 | 625.42 |
| R2-06-10 | 904.03 | 890.96 | 909.91 | 878.53 | 910.96 | 917.38 |
| R2-09-70 | 479.90 | 479.28 | 478.23 | 473.05 | 473.93 | 474.35 |
| RC1-04-50 | 758.43 | 757.28 | 757.12 | 757.40 | 757.51 | 756.25 |
| RC1-06-90 | 259.43 | 259.43 | 259.43 | 259.43 | 259.43 | 259.43 |
| RC2-02-70 | 569.12 | 568.58 | 571.81 | 571.81 | 571.81 | 571.81 |
| RC2-03-90 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 |
| RC2-08-30 | 747.95 | 748.86 | 735.77 | 742.39 | 745.45 | 747.59 |

Table 2: The mean results of the HSVND with different HMCR values.

| Instance | HMCR = 0.5 | HMCR = 0.6 | HMCR = 0.7 | HMCR = 0.8 | HMCR = 0.9 |
|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 285.06 | 285.06 | 285.06 | 286.13 | 285.06 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.73 |
| R1-03-30 | 988.65 | 984.84 | 982.53 | 982.12 | 984.08 |
| R1-03-10 | 1195.26 | 1194.61 | 1197.47 | 1194.21 | 1197.23 |
| R1-09-70 | 444.40 | 445.30 | 445.35 | 444.81 | 447.66 |
| R2-03-50 | 636.37 | 623.86 | 624.63 | 614.65 | 646.19 |
| R2-06-10 | 906.32 | 915.68 | 899.75 | 910.53 | 886.24 |
| R2-09-70 | 485.30 | 477.34 | 484.11 | 483.45 | 479.04 |
| RC1-04-50 | 760.15 | 757.74 | 758.86 | 756.36 | 761.30 |
| RC1-06-90 | 259.72 | 259.72 | 259.43 | 259.43 | 259.43 |
| RC2-02-70 | 571.81 | 569.12 | 563.74 | 555.66 | 560.51 |
| RC2-03-90 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 |
| RC2-08-30 | 754.78 | 753.12 | 767.84 | 767.31 | 762.27 |
Table 3: The effect of PAR on the mean objective function values.

| Instance | PAR = 0.3 | PAR = 0.4 | PAR = 0.5 | PAR = 0.6 | PAR = 0.7 |
|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 285.06 | 287.20 | 285.06 | 286.13 | 285.06 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 |
| R1-03-30 | 983.47 | 979.55 | 980.48 | 986.82 | 979.57 |
| R1-03-10 | 1195.97 | 1197.94 | 1195.36 | 1194.18 | 1197.12 |
| R1-09-70 | 444.40 | 445.30 | 444.40 | 445.83 | 444.40 |
| R2-03-50 | 621.23 | 630.18 | 621.72 | 616.62 | 626.34 |
| R2-06-10 | 914.09 | 934.96 | 914.46 | 912.56 | 934.58 |
| R2-09-70 | 490.88 | 482.95 | 478.68 | 483.16 | 487.21 |
| RC1-04-50 | 759.02 | 759.43 | 759.22 | 757.86 | 759.66 |
| RC1-06-90 | 260.01 | 260.30 | 259.43 | 259.43 | 259.72 |
| RC2-02-70 | 568.58 | 571.81 | 571.81 | 563.20 | 571.81 |
| RC2-03-90 | 265.61 | 265.61 | 265.95 | 265.61 | 265.61 |
| RC2-08-30 | 749.31 | 763.16 | 773.31 | 710.28 | 769.66 |

The results in Table 1 indicate that the performance of the HSVND depends on the size of the HM: larger HMS values tend to yield better mean results, that is, solutions with lower objective function values. This may occur because a larger number of solutions in the HM provides more route patterns that are likely to be combined into good new solutions. Therefore, HMS = 30 is chosen for all benchmark instances.

As listed in Table 2, the performance of the proposed algorithm degrades as the share of random selections grows, that is, for HMCR values below 0.8. However, some perturbation is necessary to bring diversity to the HM and avoid local minima. Therefore, based on the experimental results, we suggest a value of 0.8 for the HMCR.

Table 3 shows that small PAR values reduce the convergence rate of the HSVND. Based on the experimental results, we suggest using PAR values of 0.6 or greater.

We also evaluated the convergence speed of the HSVND. In this experiment, we set HMS = 30, HMCR = 0.8, and PAR = 0.6, as determined above. As in the previous experiments, for each instance we ran the HSVND 10 times and terminated it after a maximum of NI = 1000 iterations. The maximum and average of the Continuous Unimproved Iteration Number (CUIN) and the Count of Best Solutions (CBS) are reported in Table 4, where CUIN denotes the number of iterations required to find the next locally optimal solution from the current one, and CBS denotes the number of solutions found during this period that have the same objective value as the current best solution.

Table 4: The maximum and average of CUIN and CBS.

| Instance | CUIN (max) | CUIN (avg.) | CBS (max) | CBS (avg.) |
|---|---|---|---|---|
| C1-01-50 | 509 | 92.37 | 20 | 6.92 |
| C1-05-90 | 911 | 42.95 | 17 | 2.02 |
| C2-01-30 | 101 | 12.31 | 1 | 1.00 |
| C2-07-10 | 131 | 24.66 | 30 | 7.09 |
| R1-03-30 | 662 | 51.77 | 1 | 1.00 |
| R1-03-10 | 611 | 75.46 | 1 | 1.00 |
| R1-09-70 | 356 | 34.12 | 30 | 6.26 |
| R2-03-50 | 152 | 23.45 | 1 | 1.00 |
| R2-06-10 | 560 | 118.37 | 1 | 1.00 |
| R2-09-70 | 563 | 40.15 | 1 | 1.00 |
| RC1-04-50 | 713 | 126.00 | 1 | 1.00 |
| RC1-06-90 | 71 | 30.63 | 1 | 1.00 |
| RC2-02-70 | 152 | 19.34 | 1 | 1.00 |
| RC2-03-90 | 49 | 21.30 | 1 | 1.00 |
| RC2-08-30 | 815 | 148.10 | 1 | 1.00 |

From Table 4, it can be seen that, on average, the HSVND requires at least 13 and at most about 150 iterations to find a new locally optimal solution. During the search, it finds at least 1 and at most about 7 solutions with the same objective value. Consequently, we set the other termination conditions as follows: CUIN = 200 and CBS = 10.

### 4.2. Comparison with Existing Algorithms

To assess the performance of the proposed HSVND algorithm, we ran it on the Lackner benchmark instances and compared it with two existing methods: ILNS [9] and GVNS [10].
The algorithms were compared on the basis of the ratio of refused services, the number of vehicles required, and the total distance travelled. To collect the experimental data, ten separate runs were performed for each instance and each degree of dynamism from the Lackner benchmark. We recorded the best of the ten runs and calculated average values for each group. Boldface items in the original tables indicate matches with the current best-known solutions. We set the HS parameters as follows: HMS = 30, HMCR = 0.8, PAR = 0.6, NI = 1000, CUIN = 200, and CBS = 10. Table 5 lists the comparison results, where the first column indicates the group of instances (G), the second column shows the degree of dynamism (D), and the other columns show the average number of vehicles, the average total distance, the average insertion time, and the refusal ratio for each method. The last row of the table lists the overall average results obtained by the different methods; note that the algorithms were executed on different computers.

Table 5: Comparison of the experimental results of the proposed method with other methods.

| G | D (%) | Veh. HSVND | Veh. ILNS | Veh. GVNS | Dist. HSVND | Dist. ILNS | Dist. GVNS | Ins. time HSVND | Ins. time ILNS | Ins. time GVNS | Refused (%) HSVND | Refused (%) ILNS | Refused (%) GVNS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R1 | 90 | 13.58 | 14.25 | 14.67 | 1214.29 | 1335.94 | 1250.38 | 4.91 | 17.43 | 14.50 | 3.08 | 2.33 | 3.83 |
| | 70 | 13.50 | 14.33 | 14.75 | 1223.57 | 1331.34 | 1267.78 | 6.55 | 21.73 | 10.95 | 2.58 | 1.75 | 3.08 |
| | 50 | 13.92 | 14.08 | 14.58 | 1224.42 | 1295.81 | 1267.47 | 9.28 | 28.27 | 11.84 | 2.00 | 0.67 | 1.92 |
| | 30 | 13.58 | 13.92 | 14.25 | 1214.46 | 1286.63 | 1256.04 | 13.99 | 46.59 | 15.70 | 1.25 | 0.58 | 1.58 |
| | 10 | 13.75 | 13.50 | 14.17 | 1216.82 | 1257.08 | 1250.16 | 21.64 | 67.99 | 15.29 | 0.33 | 0.17 | 0.50 |
| C1 | 90 | 10.44 | 10.78 | 10.67 | 907.93 | 1039.77 | 963.33 | 3.04 | 6.60 | 7.81 | 0.11 | 0.22 | 0.00 |
| | 70 | 10.44 | 10.78 | 11.33 | 888.79 | 1031.68 | 1009.47 | 4.59 | 10.79 | 7.67 | 0.11 | 0.22 | 0.00 |
| | 50 | 10.33 | 10.89 | 11.00 | 865.28 | 1001.18 | 992.97 | 6.91 | 19.01 | 6.22 | 0.11 | 0.22 | 0.00 |
| | 30 | 10.22 | 10.56 | 11.56 | 870.22 | 962.08 | 949.95 | 10.51 | 28.03 | 9.13 | 0.11 | 0.33 | 0.00 |
| | 10 | 10.33 | 10.56 | 10.56 | 852.33 | 895.77 | 898.30 | 16.69 | 15.40 | 13.74 | 0.11 | 0.22 | 0.00 |
| RC1 | 90 | 14.13 | 14.00 | 14.63 | 1465.45 | 1513.94 | 1470.45 | 3.13 | 17.31 | 15.39 | 1.13 | 2.00 | 1.88 |
| | 70 | 13.88 | 13.88 | 14.88 | 1469.64 | 1511.29 | 1489.28 | 4.17 | 25.32 | 13.43 | 1.00 | 1.88 | 2.13 |
| | 50 | 13.63 | 13.63 | 14.50 | 1428.24 | 1514.72 | 1484.01 | 6.77 | 48.78 | 13.72 | 1.00 | 1.38 | 1.75 |
| | 30 | 13.50 | 13.88 | 14.38 | 1426.26 | 1492.22 | 1471.00 | 9.96 | 45.26 | 16.51 | 0.50 | 1.13 | 1.00 |
| | 10 | 13.38 | 13.38 | 13.50 | 1394.37 | 1436.23 | 1417.07 | 16.49 | 83.52 | 23.01 | 0.50 | 1.13 | 0.50 |
| R2 | 90 | 4.73 | 3.55 | 4.00 | 989.84 | 1047.82 | 1086.78 | 6.56 | 13.20 | 16.47 | 0.00 | 0.09 | 0.00 |
| | 70 | 4.82 | 3.64 | 4.36 | 973.08 | 1032.04 | 1078.03 | 10.47 | 20.15 | 12.74 | 0.00 | 0.09 | 0.00 |
| | 50 | 4.82 | 3.82 | 4.55 | 960.29 | 1016.52 | 1071.83 | 16.51 | 30.03 | 11.96 | 0.00 | 0.00 | 0.00 |
| | 30 | 4.91 | 4.91 | 4.73 | 937.70 | 985.59 | 1035.60 | 26.73 | 57.07 | 10.18 | 0.00 | 0.00 | 0.00 |
| | 10 | 4.45 | 6.36 | 5.27 | 938.06 | 950.00 | 1000.00 | 47.68 | 68.58 | 9.48 | 0.00 | 0.09 | 0.00 |
| C2 | 90 | 3.13 | 3.25 | 3.38 | 615.67 | 636.79 | 668.99 | 2.93 | 6.12 | 16.67 | 0.00 | 0.00 | 0.00 |
| | 70 | 3.13 | 3.13 | 3.38 | 613.49 | 636.47 | 672.95 | 3.63 | 10.01 | 14.03 | 0.00 | 0.00 | 0.00 |
| | 50 | 3.00 | 3.13 | 3.13 | 601.62 | 604.98 | 623.10 | 6.46 | 16.80 | 20.25 | 0.00 | 0.00 | 0.00 |
| | 30 | 3.13 | 3.63 | 3.25 | 599.93 | 651.42 | 624.81 | 9.05 | 29.87 | 34.82 | 0.00 | 0.00 | 0.00 |
| | 10 | 3.13 | 3.00 | 3.25 | 596.03 | 594.67 | 615.93 | 15.30 | 59.70 | 80.78 | 0.00 | 0.00 | 0.00 |
| RC2 | 90 | 6.00 | 4.00 | 4.63 | 1122.00 | 1257.19 | 1275.93 | 4.35 | 11.34 | 28.05 | 0.00 | 0.13 | 0.00 |
| | 70 | 6.00 | 3.88 | 5.13 | 1095.71 | 1239.46 | 1234.36 | 6.88 | 19.26 | 16.07 | 0.00 | 0.00 | 0.00 |
| | 50 | 6.13 | 4.25 | 5.88 | 1078.33 | 1190.54 | 1200.26 | 10.82 | 27.84 | 11.46 | 0.00 | 0.13 | 0.00 |
| | 30 | 5.88 | 5.38 | 5.88 | 1064.58 | 1166.04 | 1172.33 | 18.51 | 41.51 | 11.68 | 0.00 | 0.25 | 0.00 |
| | 10 | 5.63 | 6.75 | 6.13 | 1059.94 | 1103.30 | 1153.43 | 32.96 | 55.55 | 13.27 | 0.00 | 0.00 | 0.00 |
| Avg. | | 8.58 | 8.50 | 8.88 | 1030.28 | 1100.62 | 1098.40 | 11.92 | 31.64 | 16.76 | 0.46 | 0.50 | 0.61 |

As shown in Table 5, HSVND outperforms the other methods on average. First, HSVND obtains the smallest average refusal ratio, 0.46%, whereas ILNS refused 0.50% and GVNS refused 0.61% of requests. Both HSVND and GVNS were able to service all requests for groups R2, C2, and RC2; consequently, they obtained the same refusal ratio (0.00%). HSVND performed better than GVNS for groups R1 and RC1 with respect to the refusal ratio, while GVNS performed best for the C1 group. On average, ILNS achieves a better refusal ratio than GVNS. The average number of vehicles required by our algorithm is lower than that of GVNS but slightly larger than that of ILNS: 8.58, 8.50, and 8.88 vehicles for HSVND, ILNS, and GVNS, respectively. Moreover, HSVND performs well in finding short routes; its average distance was 1030.28, while the average distances of ILNS and GVNS were 1100.62 and 1098.40, respectively. Note that HSVND was the best of the three algorithms at optimizing travel distances for all groups. The overall average insertion time also indicates that our proposed algorithm improves on the others; however, note that the computational environments under which the algorithms were executed differ.

Similar to the evaluation performed for GVNS, we also calculated the average performance over 10 executions for each instance and compared it with GVNS, as shown in Table 6. HSVND improved on GVNS in terms of the average total distance in all cases. In 7 of the group-dynamism combinations, HSVND outperformed GVNS simultaneously on the refusal ratio, the average number of vehicles, and the average total distance; in a further 10 combinations in which the refusal ratios are equal, HSVND finds a better solution than GVNS.

Table 6: The average performance comparison of HSVND and GVNS.

| G | D (%) | Refused (%) HSVND | Refused (%) GVNS | Veh. HSVND | Veh. GVNS | Dist. HSVND | Dist. GVNS | Ins. time HSVND | Ins. time GVNS |
|---|---|---|---|---|---|---|---|---|---|
| R1 | 90 | 2.83 | 3.33 | 14.43 | 15.34 | 1253.57 | 1328.74 | 4.65 | 15.89 |
| | 70 | 2.26 | 2.42 | 14.36 | 15.31 | 1262.20 | 1340.38 | 6.75 | 12.45 |
| | 50 | 1.67 | 1.67 | 14.29 | 15.33 | 1256.42 | 1340.17 | 9.32 | 12.32 |
| | 30 | 1.16 | 1.17 | 14.13 | 14.96 | 1243.40 | 1312.73 | 14.01 | 17.21 |
| | 10 | 0.33 | 0.33 | 14.02 | 14.73 | 1240.66 | 1296.59 | 22.09 | 16.69 |
| C1 | 90 | 0.11 | 0.00 | 10.78 | 11.37 | 935.06 | 1092.18 | 2.96 | 8.94 |
| | 70 | 0.11 | 0.00 | 10.80 | 11.92 | 926.97 | 1150.27 | 4.36 | 8.51 |
| | 50 | 0.11 | 0.00 | 10.57 | 11.81 | 901.09 | 1129.58 | 6.88 | 7.32 |
| | 30 | 0.11 | 0.00 | 10.47 | 11.81 | 895.69 | 1081.52 | 10.19 | 10.17 |
| | 10 | 0.11 | 0.00 | 10.50 | 11.36 | 878.01 | 986.99 | 16.50 | 14.87 |
| RC1 | 90 | 1.04 | 1.50 | 14.73 | 15.62 | 1511.27 | 1587.89 | 3.12 | 15.89 |
| | 70 | 0.93 | 1.25 | 14.61 | 15.88 | 1515.20 | 1614.43 | 4.26 | 14.72 |
| | 50 | 0.73 | 0.88 | 14.29 | 15.51 | 1476.00 | 1579.34 | 6.63 | 14.05 |
| | 30 | 0.46 | 0.63 | 14.28 | 15.22 | 1470.15 | 1551.93 | 9.64 | 16.89 |
| | 10 | 0.31 | 0.25 | 13.99 | 14.25 | 1438.48 | 1474.09 | 14.54 | 24.52 |
| R2 | 90 | 0.00 | 0.00 | 4.92 | 3.88 | 1022.27 | 1181.31 | 6.37 | 17.05 |
| | 70 | 0.00 | 0.00 | 4.87 | 4.22 | 1008.85 | 1161.98 | 9.83 | 13.41 |
| | 50 | 0.00 | 0.00 | 4.83 | 4.49 | 992.01 | 1153.79 | 16.44 | 12.58 |
| | 30 | 0.00 | 0.00 | 4.80 | 4.77 | 974.35 | 1112.92 | 26.66 | 10.86 |
| | 10 | 0.00 | 0.00 | 4.34 | 5.49 | 972.57 | 1054.82 | 45.81 | 10.75 |
| C2 | 90 | 0.00 | 0.00 | 3.26 | 3.66 | 634.73 | 749.33 | 2.78 | 17.85 |
| | 70 | 0.00 | 0.00 | 3.30 | 3.71 | 637.30 | 722.45 | 3.91 | 14.91 |
| | 50 | 0.00 | 0.00 | 3.31 | 3.53 | 615.68 | 670.23 | 6.32 | 22.34 |
| | 30 | 0.00 | 0.00 | 3.04 | 3.36 | 610.82 | 670.88 | 9.32 | 35.92 |
| | 10 | 0.00 | 0.00 | 3.05 | 3.53 | 603.39 | 660.93 | 15.20 | 85.73 |
| RC2 | 90 | 0.00 | 0.00 | 6.01 | 8.02 | 1165.36 | 2032.46 | 4.40 | 29.51 |
| | 70 | 0.00 | 0.00 | 6.04 | 4.94 | 1134.00 | 1359.18 | 6.70 | 17.18 |
| | 50 | 0.00 | 0.00 | 5.85 | 5.39 | 1113.19 | 1311.97 | 10.90 | 12.86 |
| | 30 | 0.00 | 0.00 | 5.94 | 5.83 | 1101.98 | 1278.62 | 18.27 | 13.54 |
| | 10 | 0.00 | 0.00 | 5.29 | 6.01 | 1104.00 | 1240.55 | 31.22 | 13.98 |
| Avg. | | 0.41 | 0.45 | 8.84 | 9.38 | 1063.16 | 1207.61 | 11.67 | 17.96 |

Overall, our results demonstrate that HSVND achieved good results compared with the existing methods, as our algorithm is able to strike a good balance between diversification and intensification for DVRPTWs.
## 5. Conclusions

In this paper, we have proposed a Modified Harmony Search for the DVRPTW, called MHS, which is based on the HS algorithm. First, the encoding of the harmony memory has been adapted to the characteristics of routing in VRPs. Second, to provide an effective balance between global diversification and local intensification, an enhanced basic Variable Neighbourhood Descent (VND) is incorporated into the iterative HS. Third, the improvisation of a new harmony has also been improved: in this procedure, to prevent premature convergence of the solution, we evaluate the population diversity by using entropy. Finally, a set of insertion rules is employed within the DVRPTW to decide how dynamic requests are inserted when they arrive. To verify the efficiency of our approach, we carried out numerical experiments using standard benchmarks, and the results were analyzed in depth by comparison with recently proposed algorithms. The comparison results show that the proposed MHS algorithm can obtain better solutions than the other existing algorithms. There are several interesting subjects for future research. One is adapting the MHS heuristic to other dynamic vehicle routing problems; another is extending the DVRPTW by introducing further realistic aspects and constraints.

---

*Source: 1021432-2017-11-28.xml*
# A Modified Harmony Search Algorithm for Solving the Dynamic Vehicle Routing Problem with Time Windows

**Authors:** Shifeng Chen; Rong Chen; Jian Gao
**Journal:** Scientific Programming (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1021432
---

## Abstract

The Vehicle Routing Problem (VRP) is a classical combinatorial optimization problem. It is usually modelled in a static fashion; however, in practice, new customer requests arrive after the initial workday plan is already in progress, and the routes must then be replanned dynamically. This paper investigates the Dynamic Vehicle Routing Problem with Time Windows (DVRPTW), in which customers' requests either are known at the beginning of the working day or occur dynamically over time. We propose a hybrid heuristic algorithm that combines the harmony search (HS) algorithm and the Variable Neighbourhood Descent (VND) algorithm, using HS for its global exploration capability and VND for its local search capability. To prevent premature convergence of the solution, we evaluate the population diversity by using entropy. Computational results on the Lackner benchmark problems show that the proposed algorithm is competitive with the best existing algorithms from the literature.

---

## Body

## 1. Introduction

The Vehicle Routing Problem (VRP) was first introduced in 1959 [1]. Since then, it has become a central research problem in the field of operations research, with important applications in transportation, distribution, and logistics. The problem involves a depot, a set of customers, and cargo that must be delivered from the central depot to customers at various locations using several vehicles. It is a combinatorial optimization problem in which the goal is to find an optimal solution that satisfies the service requirements, routing restrictions, and vehicle constraints.

DVRPs have been a vital research area for the last three decades. Thanks to recent advances in information and communication technologies, vehicle fleets can now be managed in real time; in this context, Dynamic Vehicle Routing Problems (DVRPs) are becoming increasingly important [2-4], and numerous approaches address a variety of aspects of the problem. Early research on DVRPs can be found in the work of Psaraftis [5], who proposed a dynamic programming approach. Bertsimas and Van Ryzin [6] considered a DVRP model with a single vehicle and no capacity restrictions in which requests appear randomly; they characterized the problem by a generic mathematical model that takes waiting time as the objective function. Regarding the DVRPTW, Chen and Xu [7] proposed a dynamic column generation algorithm based on their notion of decision epochs over the planning horizon, which indicate the best times of the day to execute the reoptimization process. de Oliveira et al. [8] addressed a DVRPTW with a capacitated fleet in which customer demands arrive in real time; their solution was obtained through a metaheuristic approach using an ant colony system. Hong [9] also considered time windows and the fact that some requests may be urgent; this work adopted a continuous reoptimization approach and used a large neighbourhood search algorithm: when a new request arrives, it is immediately considered for inclusion in the current solution, and the large neighbourhood search is run again to obtain a new solution. de Armas and Melián-Batista [10] tackled a DVRPTW with several real-world constraints; similar to Hong's work, they also adopted a continuous reoptimization approach, but they calculated solutions using a variable neighbourhood search algorithm.
In some recent surveys, Pillac et al. [11] classify routing problems from the perspective of information quality and evolution; they introduce the notion of degree of dynamism and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems. Bektaş et al. [12] provide another survey in this area, with a deeper and more detailed analysis. Last but not least, Psaraftis et al. [13] shed more light on work in this area over more than three decades by developing a taxonomy of DVRP papers according to 11 criteria.

For large-scale DVRPTWs, it is very difficult to develop exact solution methods, so the majority of existing studies rely on metaheuristics and intelligent optimization algorithms. The Harmony Search (HS) algorithm is a metaheuristic developed in [14] that imitates the process of musical improvisation: the musician adjusts the resulting tones with the rest of the band, relying on his own memory during the creation of the music, until the tones reach a harmonious state. HS has been successfully employed by many researchers to solve various complex problems such as university course scheduling [15, 16], nurse rostering [17], water network design [18], and the Sudoku puzzle [19]. Due to its optimization ability, the HS algorithm has also been employed as a search framework for VRPs. To solve the Green VRP (an extension of the classic VRP), Kawtummachai and Shohdohji [20] presented a hybrid algorithm in which HS is combined with a local improvement process; they tested the proposed algorithm on a real case from a retail company, and the results indicated that the method could be applied effectively to the case study. Moreover, Pichpibul and Kawtummachai [21] presented a modified HS algorithm for the capacitated VRP that incorporates the probabilistic Clarke-Wright savings algorithm into the harmony memory mechanism to achieve better initial solutions; a roulette wheel selection procedure was then employed within the harmony improvisation mechanism to improve its selection strategy. The results showed that the modified HS algorithm is competitive with the best existing algorithms. Recently, Yassen et al. [22] proposed a meta-HS algorithm to solve the VRPTW, and comparisons confirmed that it produces results competitive with other proposed methods.

As described above, the HS algorithm has been successfully applied to standard VRPs but not yet to dynamic VRPs. Solving a dynamic VRP is usually more complex than solving the corresponding standard VRP, since more than one VRP must be solved when dealing with dynamic requests. It is well known that minimizing the travel distance of a standard VRP is NP-hard in the general case, so solving dynamic VRPs with the same objective function is also a hard computational task. In this paper, we propose a Modified Harmony Search (MHS) algorithm to solve the DVRPTW. The DVRPTW is related to the static VRPTW: it can be described as a routing problem in which information about the problem can change during the working day, and it is a discrete-time dynamic problem that can be viewed as a series of static VRPTW problems. The proposed MHS algorithm therefore consists of two parts. The first part is static problem optimization: we combine the basic HS algorithm with the Variable Neighbourhood Descent (VND) algorithm, aiming to obtain the benefits of both approaches and thus enhance the search.
The combined HSVND algorithm distinguishes itself in three respects. First, the encoding of the harmony memory is adapted to the characteristics of routing in VRPs. Second, the augmented HS is hybridized with an enhanced VND method, with four neighbourhood structures, to coordinate search diversification and intensification effectively. Third, in order to prevent premature convergence, the population diversity is evaluated by means of entropy. The second part is the dynamic customer check and insertion: four rules are employed within the DVRPTW to govern the insertion of dynamic requests. Furthermore, the MHS is validated for practical implementation through a comparison study with other recently proposed approaches. The results show that our algorithm performs better than the compared algorithms: its average refusal rate is the smallest among the three compared algorithms, and the travel distances it computes are also the best in 29 out of 30 cases.

The remainder of this paper is structured as follows. Section 2 provides a formal mathematical model of the DVRPTW and briefly describes the notation. Section 3 describes the main framework and details the development of the HSVND algorithm for the DVRPTW. The experimental setting and results are presented in Section 4. Finally, Section 5 summarizes the major conclusions of this article and recommends some possible directions for future research.

## 2. Problem Definition

The DVRPTW can be modelled on an undirected graph G = (V, E), where V = {v_0, v_1, …, v_n} is the set of vertices, comprising the depot (v_0) and the n customers (v_1, …, v_n), also called requests, orders, or demands, and E = {(i, j) : i, j ∈ V, i ≠ j} is the set of edges between each pair of vertices. Each vertex v_i has several nonnegative weights associated with it, namely, a request time τ_i, location coordinates (x_i, y_i), a demand q_i, a service time s_i, and an earliest (e_i) and latest (l_i) possible start time for the service, which define a time window [e_i, l_i]; [e_0, l_0] is the service time range of the depot. Each edge (i, j) is associated with a travel time t_ij and a travel distance d_ij.

A total of N customers are to be served by a fixed-size fleet K of identical vehicles; each vehicle k has a nonnegative capacity Q. Every customer has an arrival time a_i and a begin-service time b_i. In particular, a vehicle is only allowed to start service no earlier than the earliest service time e_i, so b_i = max(a_i, e_i); in other words, if a vehicle arrives earlier than e_i, it must wait until e_i. The customers can be divided into two groups, static customers (V_S) and dynamic customers (V_D), according to the time at which they made their requests. Customers in V_S, whose locations and demands are known at the beginning of the planning horizon (i.e., time 0), are also called priority customers because they must be serviced within the current day. The locations and demands of customers in V_D become known only at the time of the order request; these customers call in requesting on-site service over time.

The objective is to accept as many requests as possible while finding a feasible set of routes with the minimum total travelled distance. The goal we consider here is hierarchical, similar to the goal defined in [10].
The objective functions are considered in lexicographic order as follows:
(i) the number of refused customers;
(ii) the total travelled distance;
(iii) the total number of routes.

Note that [10] included seven objective functions, among them total infeasibility (the sum of the hours by which the customers' time windows are exceeded), the number of postponed services, the number of extra hours (the sum of the hours by which the vehicles' working shifts are exceeded), the total travelled distance, the total number of routes, and time balance (the difference between the longest and shortest route times made by one vehicle). They regarded the time windows as soft constraints and consequently needed to consider these additional objective functions. In contrast, because we regard the time windows as hard constraints, the objective functions related to time windows can be removed.

The DVRPTW can be modelled by the following formulation. First, we introduce three binary decision variables ξ_ijk, χ_k, and λ_i, defined as follows: ξ_ijk = 1 if edge (i, j) is travelled by vehicle k, and 0 otherwise; χ_k = 1 if vehicle k is used, and 0 otherwise; λ_i = 1 if the dynamic customer v_i is accepted, and 0 otherwise. The model is then

$$\min \sum_{i=1}^{N} \left(1-\lambda_i\right) \tag{1}$$

$$\min \sum_{(i,j)\in E} \sum_{k\in K} d_{ij}\,\xi_{ijk} \tag{2}$$

$$\min \sum_{k\in K} \chi_k \tag{3}$$

$$\text{s.t.}\quad \sum_{i\in V} \xi_{ijk} = \sum_{i\in V} \xi_{jik}, \quad 1\le j\le n,\ k\in K \tag{4}$$

$$\sum_{k\in K} \sum_{j\in V} \xi_{ijk} = 1, \quad 1\le i\le n \tag{5}$$

$$\sum_{j\in V} \xi_{0jk} = \sum_{i\in V} \xi_{i0k} = 1, \quad k\in K \tag{6}$$

$$\sum_{i\in V'} \sum_{j\in V} q_i\,\xi_{ijk} \le Q_k, \quad k\in K \tag{7}$$

$$a_i = \begin{cases} e_0, & i=0 \\ b_{i-1} + s_{i-1} + t_{i-1,i}, & 1\le i\le n+1 \end{cases} \tag{8}$$

$$b_i = \max(a_i, e_i) \tag{9}$$

$$z_i = \begin{cases} l_0, & i=n+1 \\ \min\left(z_{i+1} - t_{i,i+1} - s_i,\ l_i\right), & 0\le i\le n \end{cases} \tag{10}$$

$$e_i \le b_i \le z_i \le l_i \tag{11}$$

$$\xi_{ijk},\ \chi_k,\ \lambda_i \in \{0,1\}, \tag{12}$$

where a_i is the vehicle's arrival time at customer v_i and b_i is the actual service start time at v_i, so b_i = max(a_i, e_i), as stated in (9). In addition, for each customer i, z_i denotes the reverse arrival time (the latest time at which service can start without violating any downstream time window), defined in (10).

The objective function (1) minimizes the number of refused customers, (2) minimizes the travel distance, and (3) minimizes the total number of routes. Eq. (4) is a flow conservation constraint: each customer j must have its in-degree equal to its out-degree, which is at most one. Eq. (5) ensures that each customer i (1 ≤ i ≤ n) is visited by exactly one vehicle. Eq. (6) ensures that every route starts and ends at the central depot. Eq. (7) enforces the capacity of each vehicle. Eqs. (8)–(11) define the time windows, and (12) imposes the binary restrictions on the decision variables.

The degree of dynamism of a problem (Dod) [23] represents how many dynamic requests occur in a problem. Let n_s and n_d be the numbers of static and dynamic requests, respectively. Then the Dod is defined as

$$\mathrm{Dod} = \frac{n_d}{n_s + n_d} \times 100\%, \tag{13}$$

which varies between 0% and 100%. When Dod equals 0%, all the requests are known in advance (a static problem), whereas when it equals 100%, all the requests are dynamic. For example, an instance with 30 static and 70 dynamic requests has Dod = 70%.
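To make constraints (7)–(11) concrete, the following minimal Python sketch (our own illustration, not code from the paper; the `Customer` class and `route_feasible` helper are hypothetical names) propagates arrival and begin-service times along a single route and checks capacity and time-window feasibility:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    x: float         # location coordinate x_i
    y: float         # location coordinate y_i
    demand: float    # q_i
    service: float   # s_i
    earliest: float  # e_i
    latest: float    # l_i

def travel_time(a: Customer, b: Customer) -> float:
    # On Solomon-style benchmarks, travel time equals Euclidean distance.
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def route_feasible(route: list, depot: Customer, capacity: float) -> bool:
    """Check capacity, eq. (7), and time windows, eqs. (8), (9), and (11)."""
    if sum(c.demand for c in route) > capacity:         # eq. (7)
        return False
    prev, b_prev, s_prev = depot, depot.earliest, 0.0   # a_0 = e_0 in eq. (8)
    for c in route:
        a = b_prev + s_prev + travel_time(prev, c)      # eq. (8)
        b = max(a, c.earliest)                          # eq. (9): wait until e_i
        if b > c.latest:                                # b_i <= l_i from eq. (11)
            return False
        prev, b_prev, s_prev = c, b, c.service
    # The vehicle must also return to the depot before l_0.
    return b_prev + s_prev + travel_time(prev, depot) <= depot.latest
```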
## 3. Solution Approach

In this section, we present a metaheuristic algorithm to solve the DVRPTW. First, we introduce the framework of the heuristic algorithm to show how it solves DVRPTWs interactively. Then, we discuss the details of the proposed harmony search-based algorithm, including its initialization, the way it improvises new solutions, and its VND-based local search method. Finally, we present the strategies used to accommodate dynamic requests.

### 3.1. The Framework of the General Algorithm

The general algorithm to solve a DVRPTW interactively is summarized in Algorithm 1. First, the routes and deliveries to the static customers are computed by HSVND, our proposed harmony search-based algorithm described in Section 3.2. HSVND returns the best solution found, in which all the static customers have been inserted into routes, and the fleet can then start to deliver goods to the customers based on this solution. Subsequently, dynamic customers submit requests. When a new dynamic customer request occurs, the algorithm immediately checks the feasibility of servicing the request using Algorithm 4 (discussed in Section 3.3). If the request is acceptable, HSVND is invoked again to rearrange all the known customer requests that have not yet been serviced; otherwise, the request is rejected. This dynamic procedure is repeated until there are no new requests, after which the entire solution, including service for all static and dynamic customers' requests, is returned.

Algorithm 1: General algorithm MHS.

(1) S ← ∅
(2) S ← HSVND(S, V_S, 0)  // solve static customers
(3) t ← 0
(4) while (t < l_0) do
(5)   D_t ← CheckRequests(S, t)
(6)   S ← HSVND(S, D_t, t)
(7)   t ← t + 1
(8) end while
(9) return S

The following subsections discuss the HSVND algorithm and the method for inserting dynamic customer requests in more detail.
### 3.2. The Framework of HSVND

In this section, we describe the core of the MHS, which incorporates two known methods: the harmony search (HS) algorithm and the Variable Neighbourhood Descent (VND) algorithm. This hybrid algorithm, called HSVND, benefits from the strengths of both: HS has high global search capability, while VND excels at local search.

The HS algorithm was proposed by Geem et al. [14] and is based on the way musicians improvise new harmonies from memory. HS is a population-based evolutionary algorithm in which solutions are represented by harmonies and the population is represented by the harmony memory. HS is an iterative process that starts with a set of initial solutions stored in the harmony memory (HM). Each iteration generates a new solution, whose quality is evaluated by the objective function; if the new solution is better than the worst solution in the HM, it replaces that worst solution. This process repeats until a predetermined termination criterion is met. The pseudocode of HSVND is given in Algorithm 2.

Algorithm 2: HSVND(S, V, t).

(1) initialize all parameters: HMS, HMCR, PAR, NI, S_best
(2) update the location of each vehicle in S at time t using eq. (15)
(3) remove the unvisited customers from S and insert them into V
(4) for (i = 1; i ≤ HMS; i++) do  // HM initialization
(5)   initialize a solution S_i randomly
(6)   if (f(S_i) < f(S_best)) then
(7)     S_best ← S_i
(8)   end if
(9) end for
(10) repeat
(11)   improvise a new solution S*
(12)   if (S* is infeasible) then
(13)     repair the solution
(14)   end if
(15)   VND(S*)
(16)   if (f(S*) is better than the worst HM member) then
(17)     replace the worst HM member with S*  // HM update
(18)   end if
(19)   if (f(S*) < f(S_best)) then
(20)     S_best ← S*
(21)   end if
(22)   compute the population entropy  // entropy evaluation
(23)   if (the population entropy increases or remains constant) then
(24)     remove the most frequent harmonies
(25)     regenerate new harmonies
(26)   end if
(27) until a preset termination criterion is met  // check termination criterion
(28) return S_best

Algorithm 3: VND(S).

(1) select neighbourhoods N_l, l = 1, …, l_max
(2) l ← 1
(3) while l ≤ l_max do
(4)   find the best solution S* in neighbourhood N_l
(5)   if f(S*) < f(S) then
(6)     S ← S*
(7)     l ← 1
(8)   else
(9)     l ← l + 1
(10)  end if
(11) end while
(12) return S

The HSVND algorithm consists of the following six steps: initialization, evaluation, improvising a new solution from the HM, local search by VND, updating the HM, and checking the termination criteria. After initialization, the hybrid algorithm improves solutions iteratively until a termination criterion is satisfied. The following subsections explain each step in detail.

#### 3.2.1. Initialization

During initialization, the parameters of HS are set first. These parameters are (i) the Harmony Memory Size (HMS), which determines the number of initial solutions in the HM; (ii) the Harmony Memory Consideration Rate (HMCR), which determines the rate at which values are selected from HM solutions; (iii) the Pitch Adjustment Rate (PAR), which determines the rate of local improvement; and (iv) the Number of Improvisations (NI), which is the maximum number of iterations allowed for improving the solution. Next, the initial solution population is generated randomly and stored in the HM, sorted in ascending order of objective function value.

The population is organized as follows. The HM is represented by a three-dimensional matrix: the rows contain a set of solutions, and the columns contain the vehicles (routes) of each solution S_i. Each vehicle route r_ij can be considered a sequence of customers v_0, v_1, …, v_n, v_{n+1}, where v_0 and v_{n+1} represent the depot. In classical HS, the dimensions of each harmony in the HM must be the same [14]. In our study, a harmony represents a set of m vehicles (routes), so the dimension of each harmony in the HM can differ, as shown in

$$\mathrm{HM} = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_i \\ \vdots \\ S_{\mathrm{HMS}} \end{bmatrix} = \begin{bmatrix} r_{11}, r_{12}, \ldots, r_{1j}, \ldots, r_{1m_1} \\ r_{21}, r_{22}, \ldots, r_{2j}, \ldots, r_{2m_2} \\ \vdots \\ r_{i1}, r_{i2}, \ldots, r_{ij}, \ldots, r_{im_i} \\ \vdots \\ r_{\mathrm{HMS},1}, r_{\mathrm{HMS},2}, \ldots, r_{\mathrm{HMS},j}, \ldots, r_{\mathrm{HMS},m_{\mathrm{HMS}}} \end{bmatrix}, \tag{14}$$

where m_i is the total number of vehicles in solution S_i.

In a DVRP, a decision to wait or to move must be made whenever a vehicle finishes servicing a customer. In this paper, we introduce an Anticipatory Waiting strategy. Assume that the actual time is t, and vehicle k has finished serving customer v_x and is ready to serve the next customer v_y. We allow the vehicle to wait at the location of v_x until b_y − t_{x,y}; consequently, the vehicle waits at its current location while

$$b_y - t_{x,y} > t. \tag{15}$$

Solutions in the population are constructed by generating vehicle routes iteratively. The algorithm sequentially opens routes and fills them with customers until no more customers can be inserted while keeping the routes feasible. More precisely, the procedure chooses an unrouted customer and tries to insert it into the current route so as to generate a feasible route that includes the customer. This step is repeated as long as customers can be inserted into the route without violating any constraint. When this is no longer possible, the route is closed and added to the solution S, and the procedure continues with the next vehicle route until all the customers have been inserted into the solution.
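The construction procedure just described can be sketched as follows (an illustrative reading on our part, reusing the hypothetical `route_feasible` helper from the sketch in Section 2; the paper does not prescribe the insertion order, so a randomized first-feasible strategy is assumed here):

```python
import random

def build_solution(customers, depot, capacity):
    """Sequentially open routes and fill each one until no further
    feasible insertion exists (Section 3.2.1 construction procedure)."""
    unrouted = customers[:]
    random.shuffle(unrouted)          # randomized, so each harmony differs
    solution = []
    while unrouted:
        route, inserted = [], True
        while inserted:
            inserted = False
            for c in list(unrouted):
                # Try every position; accept the first feasible insertion.
                for pos in range(len(route) + 1):
                    candidate = route[:pos] + [c] + route[pos:]
                    if route_feasible(candidate, depot, capacity):
                        route, inserted = candidate, True
                        unrouted.remove(c)
                        break
                if inserted:
                    break
        if not route:                 # no remaining customer fits any route
            break
        solution.append(route)        # close this route, open the next one
    return solution, unrouted         # leftover customers cannot be served
```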
#### 3.2.2. Evaluating Population Entropy

The diversity of the harmony memory is measured by the population entropy, defined as

$$p_i(s_i) = \frac{f(s_i)}{\sum_{i=1}^{\mathrm{HMS}} f(s_i)}, \qquad E(S) = -\frac{\sum_{i=1}^{\mathrm{HMS}} p_i(s_i)\,\ln p_i(s_i)}{\ln \mathrm{HMS}}, \tag{16}$$

where HMS is the harmony memory size, f(s_i) is the objective function value of the ith harmony, and E(S) is the normalized population entropy. In each generation, after updating the HM, the population entropy is computed. If the population entropy increases or remains constant, premature convergence is likely to have occurred. In that case, the best harmony is retained; for the remaining harmonies, the algorithm counts their frequencies in the HM, removes the most frequent ones, and generates new harmonies to replace them.

#### 3.2.3. Improvising a New Solution

A new solution S = (r_1, r_2, …, r_m) is improvised based on three rules: (i) random selection, (ii) memory consideration, and (iii) pitch adjustment. The random selection rule allows route r_i to be chosen randomly from the whole search range. Alternatively, under memory consideration, route r_i can be taken from the routes previously stored for the same position in the HM, so that r_i ∈ {r_{1i}, r_{2i}, …, r_{HMS,i}}. After a new pitch (route) is drawn from memory, it is further considered for pitch adjustment. Specifically, the new solution is generated as follows: first, create an empty solution S; next, generate a random number p in the range [0, 1]. If p is less than the HMCR, choose one route from the HM and add it to S; otherwise, generate a new route randomly and append it to S. That is, new pitches (routes) are generated as

$$r_i \longleftarrow \begin{cases} \mathrm{rand}\{r_{1i}, r_{2i}, \ldots, r_{\mathrm{HMS},i}\} & \text{w.p. } \mathrm{HMCR} \\ \text{randomly generated} & \text{w.p. } 1-\mathrm{HMCR}. \end{cases} \tag{17}$$

When a new route is selected from the HM, it is adjusted with probability PAR by a local search, where PAR ∈ [0, 1]. We draw a random number q between 0 and 1; if q is smaller than PAR, the current route is modified by a randomly selected neighbourhood strategy. The improvisation process ends when the number of vehicles in the new solution S equals the smallest number stored in the HM. Different neighbourhood structures have been used in the literature to improve routes locally; here we use three neighbourhood search methods, as follows.

Shift. This operation selects a customer randomly and moves it from its current position to a new random position in the same route. For example, in Figure 1(b), customer 2 was selected and moved after the 6th customer.

Figure 1: Pitch-adjusted neighbourhood structures: (a) original route; (b) Shift; (c) Exchange; (d) Invert.

Exchange. This operation selects two customers randomly and exchanges their positions in the same route. In Figure 1(c), the positions of customers 2 and 6 were swapped.

Invert. This operation selects two customers randomly and inverts the subsequence between them in the same route. In Figure 1(d), edges (1, 2) and (6, 7) were deleted and edges (1, 6) and (2, 7) were added; thus, the subsequence from 2 to 6 was inverted.

Note that any local change leading to an infeasible route is discarded; only feasible routes are accepted. Each neighbourhood structure is controlled by a specific PAR range:

$$r_i \longleftarrow \begin{cases} \text{Shift} & 0 \le q < \tfrac{1}{3}\mathrm{PAR} \\ \text{Exchange} & \tfrac{1}{3}\mathrm{PAR} \le q < \tfrac{2}{3}\mathrm{PAR} \\ \text{Invert} & \tfrac{2}{3}\mathrm{PAR} \le q < \mathrm{PAR} \\ \text{nothing} & \mathrm{PAR} \le q \le 1. \end{cases} \tag{18}$$
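A minimal Python sketch of the entropy evaluation (eq. (16)) and the improvisation rules (eqs. (17) and (18)); this is our own illustration, and the operator helpers below are simplified, unguarded versions of the Shift, Exchange, and Invert moves (feasibility checking is omitted, and `random_route` stands for any route generator):

```python
import math
import random

def normalized_entropy(objective_values):
    """Population entropy of eq. (16): objective values are turned into a
    probability distribution whose Shannon entropy is scaled by ln(HMS)."""
    total = sum(objective_values)
    probs = [f / total for f in objective_values]
    return -sum(p * math.log(p) for p in probs if p > 0) / math.log(len(probs))

def shift(route):
    """Move one randomly chosen customer to a random position."""
    r = route[:]
    if len(r) > 1:
        c = r.pop(random.randrange(len(r)))
        r.insert(random.randrange(len(r) + 1), c)
    return r

def exchange(route):
    """Swap two randomly chosen customers."""
    r = route[:]
    if len(r) > 1:
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
    return r

def invert(route):
    """Reverse the subsequence between two randomly chosen customers."""
    r = route[:]
    if len(r) > 1:
        i, j = sorted(random.sample(range(len(r)), 2))
        r[i:j + 1] = reversed(r[i:j + 1])
    return r

def improvise_route(i, HM, HMCR, PAR, random_route):
    """Eq. (17): memory consideration vs. random selection, followed by
    the PAR-controlled pitch adjustment of eq. (18)."""
    if random.random() < HMCR:                        # w.p. HMCR: from memory
        route = random.choice([sol[i] for sol in HM if i < len(sol)])
        q = random.random()
        if q < PAR / 3:
            route = shift(route)
        elif q < 2 * PAR / 3:
            route = exchange(route)
        elif q < PAR:
            route = invert(route)
        # q >= PAR: leave the route unchanged
    else:                                             # w.p. 1 - HMCR: random
        route = random_route()
    return route
```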
As Figure 1 suggests, the new solution S may be infeasible because some customers may be missing or repeated. Therefore, after a new solution is produced during the improvisation process, we must check its feasibility. A repair technique transforms an infeasible solution into a feasible one in two steps: first, we identify customers who are either unscheduled or replicated in the new solution; then, we delete any repeated customers from the old routes and, finally, either assign the missed customers to any route that can accept them or generate a new route for them.

#### 3.2.4. The VND Method

After improvising a new solution, we use the VND method to optimize it further. To make the VND method applicable to the DVRPTW, appropriate neighbourhood structures for DVRPTW solutions must be defined. We propose four neighbourhood structures, as follows.

Swap. Select one customer from one route and another customer from another route and swap them (see Figure 2(a)).

Figure 2: Neighbourhood structures. (a) Swap; (b) 2-opt*; (c) String-exchange; (d) Relocation.

2-Opt*. Choose two routes and exchange the final segments of both routes after choosing two selection points, one from each route (see Figure 2(b)).

String-Exchange. Select a subsequence of customers from one route and another subsequence from another route and exchange the two subsequences, where d denotes the maximum length of the subsequences (see Figure 2(c)).

Relocation. Delete one customer from one route and insert it into another route (see Figure 2(d)).

The proposed VND method is shown in Algorithm 3, starting with the first structure N_1. In each iteration, the algorithm finds the solution S* with the smallest objective function value in the current neighbourhood structure (line (4)). In the evaluation step, if f(S*) < f(S), then S* becomes the next incumbent solution (line (6)) and the algorithm returns to N_1 (line (7)); otherwise, it explores the next neighbourhood structure (line (9)). When the last structure, N_4, has been applied and no further improvement is possible, the VND method terminates and returns the final locally optimal solution.
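Algorithm 3 can be rendered in a few lines of Python (our sketch; each entry of `neighbourhoods` is assumed to be a callable that enumerates all feasible neighbours of a solution under Swap, 2-opt*, String-exchange, or Relocation, and `f` is the objective function):

```python
def vnd(solution, neighbourhoods, f):
    """Variable Neighbourhood Descent (Algorithm 3): descend through the
    neighbourhood structures, restarting at the first structure whenever
    an improvement is found, and stop when no structure improves."""
    l = 0
    while l < len(neighbourhoods):
        candidates = list(neighbourhoods[l](solution))
        best = min(candidates, key=f) if candidates else None
        if best is not None and f(best) < f(solution):
            solution, l = best, 0   # improvement: restart at N_1
        else:
            l += 1                  # no improvement: try the next structure
    return solution
```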
#### 3.2.5. Updating the HM

The new solution is added to the HM, replacing the worst harmony, if it is better than that worst harmony in terms of the objective function value; otherwise, the newly created solution is discarded. The HM is then re-sorted by objective function value.

#### 3.2.6. Checking the Termination Criteria

The HS procedure continues until a termination criterion is satisfied. We employ three termination criteria: (1) the maximum number of improvisations (NI) is reached; (2) no improvement has occurred over a certain number of consecutive improvisations (CUIN); and (3) the best solution in the HM has been found a certain number of times (CBS). The algorithm stops as soon as any one of these criteria is satisfied.

### 3.3. Inserting Dynamic Requests

The required information about a dynamic request, such as the customer's location and time window, becomes available when the request arrives. Moreover, at a given time point, vehicles may be servicing a customer, waiting for a customer, or moving towards the next customer. Our DVRPTW algorithm must first determine whether requests arriving at this time point can be accepted and, if so, which vehicles can service them and at which times; requests for which no vehicle can be scheduled must be rejected. To decide whether to accept or reject a request, we use the following insertion rules.

Rule 1 (time window constraints). Suppose customer v_h requests service at time τ_h, and vehicle k is visiting customer v_k or is on the way to visit that customer along the planned route r. For the new request to be inserted into the solution, at least one vehicle k must be able to reach v_h before the end of the customer's time window, or a new vehicle must be schedulable that can reach v_h in time and still return to the depot before l_0. The rule is expressed as

$$b_k + s_k + t_{k,h} \le l_h \quad \exists k \in K \tag{19}$$

or

$$\tau_h + t_{0,h} \le l_h \ \wedge\ \tau_h + 2\,t_{0,h} + s_h \le l_0. \tag{20}$$

Rule 2 (capacity constraints). A customer v_h can be inserted into route r (served by vehicle k) only if it does not violate vehicle k's capacity constraint:

$$q_h + \sum_{i \in r} q_i \le Q_k \quad \exists k \in K. \tag{21}$$

Rule 3 (direct insert). A new customer v_h can be inserted between customers v_x and v_y in route r if the vehicle can arrive at v_h before the end of its time window and can serve v_h no earlier than the beginning of its time window:

$$b_x + s_x + t_{x,h} \le l_h, \qquad z_y - t_{y,h} - s_h \ge e_h, \qquad b_h \le z_h. \tag{22}$$

Rule 4 (split insert). This rule governs the case in which customer v_h cannot be inserted at any position in route r. If route r can be split into two routes (r_1 and r_2) such that v_h can be inserted into one of the two new routes (using Rule 3), the customer can still be accepted. A route r = (0, …, v_x, v_y, …, 0) can be split into r_1 = (0, …, v_x, 0) and r_2 = (0, v_y, …, 0) if

$$z_y - t_{y,0} \ge \tau_h. \tag{23}$$

Algorithm 4 describes how dynamic requests are handled.

Algorithm 4: CheckRequests(S, t).

(1) D_t ← {dynamic customers requesting service at time t}
(2) update the location of each vehicle in S using eq. (15)
(3) for (each dynamic customer i in D_t) do  // decide whether to accept or reject customer i using insertion Rules 1–4
(4)   if it satisfies Rules 1 and 2 then
(5)     attempt to insert customer i into an existing route by Rule 3
(6)     if this fails, apply Rule 4 to split a route into two new routes
(7)     and attempt Rule 3 again on each of the two new routes
(8)   else
(9)     reject it
(10)  end if
(11)  if (rejected) then
(12)    add i to the reject pool
(13)    remove i from D_t
(14)  end if
(15) end for
(16) return D_t
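A condensed sketch of the acceptance test (Rules 2 and 3, the core of Algorithm 4); it assumes precomputed begin-service times b_i from eq. (9) and reverse arrival times z_i from eq. (10), with demands q, service times s, time windows (e, l) passed as dictionaries and travel times t keyed by vertex pairs; the function names are ours:

```python
def capacity_ok(vh, route, q, Q):
    """Rule 2, eq. (21): the extra demand must fit the vehicle capacity."""
    return q[vh] + sum(q[i] for i in route) <= Q

def fits_between(vh, vx, vy, b, z, s, t, e, l):
    """Rule 3, eq. (22): can vh be served between vx and vy?"""
    reachable = b[vx] + s[vx] + t[vx, vh] <= l[vh]   # arrive before l_h
    slack_ok = z[vy] - t[vy, vh] - s[vh] >= e[vh]    # leave enough slack
    return reachable and slack_ok                    # full rule also checks b_h <= z_h

def try_accept(vh, routes, q, Q, b, z, s, t, e, l):
    """Scan every route and every consecutive pair (vx, vy) on it; return
    the first feasible insertion point, or None to reject the request.
    (Rule 4, splitting a route, is omitted from this sketch.)"""
    for route in routes:
        if not capacity_ok(vh, route, q, Q):
            continue
        for vx, vy in zip(route, route[1:]):
            if fits_between(vh, vx, vy, b, z, s, t, e, l):
                return route, vx                     # insert vh after vx
    return None
```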
## 4. Experimental Results

This section presents the computational results obtained from extensive experiments with the proposed HSVND. We implemented HSVND in the C# programming language, compiled for the .NET Framework 4.5, and executed all the experiments on a PC with an Intel® Pentium® G645 processor clocked at 2.90 GHz and 2 GB of RAM, running the Windows 7 operating system.

The experiments were performed on the Lackner benchmark, which is derived from the standard Solomon benchmark. In this benchmark, 100 customers are distributed over a 100 × 100 square area in the Euclidean plane, and travel times between customers equal the corresponding distances. The benchmark is divided into six groups, named R1, R2, C1, C2, RC1, and RC2. Each group contains 8 to 12 instances, for 56 instances in total. To accommodate the dynamic portion of the test, each instance is associated with one of five degrees of dynamism: 10%, 30%, 50%, 70%, or 90%. In the following experiments, we label each instance using the notation "Group-Index-Dod," where Group is the group name the instance belongs to, Index is its index within the group (written with two digits), and Dod is its degree of dynamism. For example, the label "R1-05-70" denotes the fifth instance in group R1, with 70 dynamic customers.
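The labelling convention can be parsed mechanically; a tiny helper (ours, purely illustrative) shows the intended decomposition:

```python
def parse_label(label: str):
    """Split a label such as 'R1-05-70' into (group, index, Dod in %)."""
    group, index, dod = label.rsplit("-", 2)
    return group, int(index), int(dod)

assert parse_label("R1-05-70") == ("R1", 5, 70)
```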
### 4.1. Parameter Settings

This subsection investigates the HSVND parameters HMS, HMCR, and PAR, as well as the convergence speed. We randomly selected 15 instances for these experiments: "C1-01-50," "C1-05-90," "C2-01-30," "C2-07-10," "R1-03-30," "R1-09-70," "R1-03-10," "R2-03-50," "R2-06-10," "R2-09-70," "RC1-04-50," "RC1-06-90," "RC2-02-70," "RC2-03-90," and "RC2-08-30." For each instance, 10 independent runs were carried out. Because we split the DVRP into a series of static VRPs during the solving process, we used only the static customers present at the beginning of the planning horizon (t = 0) in all experiments, and we set the termination condition of the algorithm to the maximum number of improvisations alone (NI = 1000). Tables 1–3 show the results of optimizing the objective functions under different settings of HMS, HMCR, and PAR.

Table 1: The mean results of the HSVND with different HMS values.

| Instance | 15 | 20 | 25 | 30 | 35 | 40 |
|---|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 286.13 | 285.06 | 285.06 | 285.06 | 285.06 | 286.13 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 |
| R1-03-30 | 980.92 | 980.85 | 980.33 | 977.67 | 985.17 | 980.57 |
| R1-03-10 | 1189.33 | 1190.39 | 1188.02 | 1194.07 | 1190.53 | 1192.57 |
| R1-09-70 | 444.40 | 444.85 | 442.59 | 443.04 | 443.85 | 444.10 |
| R2-03-50 | 632.12 | 623.88 | 629.89 | 619.93 | 620.56 | 625.42 |
| R2-06-10 | 904.03 | 890.96 | 909.91 | 878.53 | 910.96 | 917.38 |
| R2-09-70 | 479.90 | 479.28 | 478.23 | 473.05 | 473.93 | 474.35 |
| RC1-04-50 | 758.43 | 757.28 | 757.12 | 757.40 | 757.51 | 756.25 |
| RC1-06-90 | 259.43 | 259.43 | 259.43 | 259.43 | 259.43 | 259.43 |
| RC2-02-70 | 569.12 | 568.58 | 571.81 | 571.81 | 571.81 | 571.81 |
| RC2-03-90 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 |
| RC2-08-30 | 747.95 | 748.86 | 735.77 | 742.39 | 745.45 | 747.59 |

Table 2: The mean results of the HSVND with different HMCR values.

| Instance | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 285.06 | 285.06 | 285.06 | 286.13 | 285.06 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.73 |
| R1-03-30 | 988.65 | 984.84 | 982.53 | 982.12 | 984.08 |
| R1-03-10 | 1195.26 | 1194.61 | 1197.47 | 1194.21 | 1197.23 |
| R1-09-70 | 444.40 | 445.30 | 445.35 | 444.81 | 447.66 |
| R2-03-50 | 636.37 | 623.86 | 624.63 | 614.65 | 646.19 |
| R2-06-10 | 906.32 | 915.68 | 899.75 | 910.53 | 886.24 |
| R2-09-70 | 485.30 | 477.34 | 484.11 | 483.45 | 479.04 |
| RC1-04-50 | 760.15 | 757.74 | 758.86 | 756.36 | 761.30 |
| RC1-06-90 | 259.72 | 259.72 | 259.43 | 259.43 | 259.43 |
| RC2-02-70 | 571.81 | 569.12 | 563.74 | 555.66 | 560.51 |
| RC2-03-90 | 265.61 | 265.61 | 265.61 | 265.61 | 265.61 |
| RC2-08-30 | 754.78 | 753.12 | 767.84 | 767.31 | 762.27 |

Table 3: The effect of the PAR on the mean function optimization.

| Instance | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 |
|---|---|---|---|---|---|
| C1-01-50 | 706.23 | 706.23 | 706.23 | 706.23 | 706.23 |
| C1-05-90 | 285.06 | 287.20 | 285.06 | 286.13 | 285.06 |
| C2-01-30 | 535.42 | 535.42 | 535.42 | 535.42 | 535.42 |
| C2-07-10 | 568.77 | 568.77 | 568.77 | 568.77 | 568.77 |
| R1-03-30 | 983.47 | 979.55 | 980.48 | 986.82 | 979.57 |
| R1-03-10 | 1195.97 | 1197.94 | 1195.36 | 1194.18 | 1197.12 |
| R1-09-70 | 444.40 | 445.30 | 444.40 | 445.83 | 444.40 |
| R2-03-50 | 621.23 | 630.18 | 621.72 | 616.62 | 626.34 |
| R2-06-10 | 914.09 | 934.96 | 914.46 | 912.56 | 934.58 |
| R2-09-70 | 490.88 | 482.95 | 478.68 | 483.16 | 487.21 |
| RC1-04-50 | 759.02 | 759.43 | 759.22 | 757.86 | 759.66 |
| RC1-06-90 | 260.01 | 260.30 | 259.43 | 259.43 | 259.72 |
| RC2-02-70 | 568.58 | 571.81 | 571.81 | 563.20 | 571.81 |
| RC2-03-90 | 265.61 | 265.61 | 265.95 | 265.61 | 265.61 |
| RC2-08-30 | 749.31 | 763.16 | 773.31 | 710.28 | 769.66 |

The results in Table 1 demonstrate that the performance of the HSVND depends on the size of the HM: larger HMS values tend to yield better mean results, that is, solutions with lower objective function values.
This may occur because a larger number of solutions in the HM provides more route patterns that are likely to be combined into good new solutions. We therefore chose HMS = 30 for all benchmark instances.

As listed in Table 2, the performance of the proposed algorithm degrades as the number of random selections increases when HMCR values are below 0.8; however, some perturbation is necessary to bring diversity to the HM and avoid local minima. Based on the experimental results, we therefore suggest a value of 0.8 for the HMCR.

Table 3 shows that small PAR values reduce the convergence rate of the HSVND. Based on the experimental results, we suggest using PAR values greater than 0.6.

We also evaluated the convergence speed of the HSVND. In this experiment, we set HMS = 30, HMCR = 0.8, and PAR = 0.6 as described above. As in the previous experiments, we ran the HSVND 10 times for each instance and terminated it after a maximum of NI = 1000 iterations. The maximum and average of the Continuous Unimproved Iteration Number (CUIN) and the Count of Best Solutions (CBS) are reported in Table 4, where CUIN indicates the number of iterations required to move from the current local optimal solution to the next one, and CBS is the number of solutions found during this period that have the same objective value as the current best solution.

Table 4: The maximum and average of CUIN and CBS.

| Instance | CUIN max | CUIN avg. | CBS max | CBS avg. |
|---|---|---|---|---|
| C1-01-50 | 509 | 92.37 | 20 | 6.92 |
| C1-05-90 | 911 | 42.95 | 17 | 2.02 |
| C2-01-30 | 101 | 12.31 | 1 | 1.00 |
| C2-07-10 | 131 | 24.66 | 30 | 7.09 |
| R1-03-30 | 662 | 51.77 | 1 | 1.00 |
| R1-03-10 | 611 | 75.46 | 1 | 1.00 |
| R1-09-70 | 356 | 34.12 | 30 | 6.26 |
| R2-03-50 | 152 | 23.45 | 1 | 1.00 |
| R2-06-10 | 560 | 118.37 | 1 | 1.00 |
| R2-09-70 | 563 | 40.15 | 1 | 1.00 |
| RC1-04-50 | 713 | 126.00 | 1 | 1.00 |
| RC1-06-90 | 71 | 30.63 | 1 | 1.00 |
| RC2-02-70 | 152 | 19.34 | 1 | 1.00 |
| RC2-03-90 | 49 | 21.30 | 1 | 1.00 |
| RC2-08-30 | 815 | 148.10 | 1 | 1.00 |

From Table 4 it can be seen that, on average, the HSVND requires at least about 13 and at most about 150 iterations to find a new local optimal solution. During the search it finds at least one and at most about 7 solutions with the same objective value. Consequently, we set the other termination conditions as follows: CUIN = 200 and CBS = 10.
## 4.2. Comparison with Existing Algorithms

To assess the performance of the proposed HSVND algorithm, we ran it on the Lackner benchmark instances and compared it with two existing methods: ILNS [9] and GVNS [10]. The comparison covers the ratio of refused service, the number of vehicles required, and the total distance travelled. To collect the experimental data, ten separate runs were performed for each instance and for each degree of dynamism from the Lackner benchmark. We recorded the best execution of the ten runs and calculated average values for each group. Items in boldface in these tables indicate matches with the current best-known solution. We set the HS parameters as follows: HMS = 30, HMCR = 0.8, PAR = 0.6, NI = 1000, CUIN = 200, and CBS = 10. Table 5 lists the comparison results, where the first column indicates the group of instances (G), the second column shows the degree of dynamism (D), and the remaining columns show the average number of vehicles, the average total distance, the average insertion time, and the refusal ratio for each algorithm. The last two rows of the original table list the overall average results obtained by the different methods and the characteristics of the computers on which the algorithms were executed.

Table 5. Comparison of the experimental results of the proposed method with other methods (Veh. = average vehicle number; Dist. = average total distance; Ins. = average insertion time; Ref. = ratio of refused service, %).

| G | D | Veh. HSVND | Veh. ILNS | Veh. GVNS | Dist. HSVND | Dist. ILNS | Dist. GVNS | Ins. HSVND | Ins. ILNS | Ins. GVNS | Ref. HSVND | Ref. ILNS | Ref. GVNS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R1 | 90 | 13.58 | 14.25 | 14.67 | 1214.29 | 1335.94 | 1250.38 | 4.91 | 17.43 | 14.50 | 3.08 | 2.33 | 3.83 |
| R1 | 70 | 13.50 | 14.33 | 14.75 | 1223.57 | 1331.34 | 1267.78 | 6.55 | 21.73 | 10.95 | 2.58 | 1.75 | 3.08 |
| R1 | 50 | 13.92 | 14.08 | 14.58 | 1224.42 | 1295.81 | 1267.47 | 9.28 | 28.27 | 11.84 | 2.00 | 0.67 | 1.92 |
| R1 | 30 | 13.58 | 13.92 | 14.25 | 1214.46 | 1286.63 | 1256.04 | 13.99 | 46.59 | 15.70 | 1.25 | 0.58 | 1.58 |
| R1 | 10 | 13.75 | 13.50 | 14.17 | 1216.82 | 1257.08 | 1250.16 | 21.64 | 67.99 | 15.29 | 0.33 | 0.17 | 0.50 |
| C1 | 90 | 10.44 | 10.78 | 10.67 | 907.93 | 1039.77 | 963.33 | 3.04 | 6.60 | 7.81 | 0.11 | 0.22 | 0.00 |
| C1 | 70 | 10.44 | 10.78 | 11.33 | 888.79 | 1031.68 | 1009.47 | 4.59 | 10.79 | 7.67 | 0.11 | 0.22 | 0.00 |
| C1 | 50 | 10.33 | 10.89 | 11.00 | 865.28 | 1001.18 | 992.97 | 6.91 | 19.01 | 6.22 | 0.11 | 0.22 | 0.00 |
| C1 | 30 | 10.22 | 10.56 | 11.56 | 870.22 | 962.08 | 949.95 | 10.51 | 28.03 | 9.13 | 0.11 | 0.33 | 0.00 |
| C1 | 10 | 10.33 | 10.56 | 10.56 | 852.33 | 895.77 | 898.30 | 16.69 | 15.40 | 13.74 | 0.11 | 0.22 | 0.00 |
| RC1 | 90 | 14.13 | 14.00 | 14.63 | 1465.45 | 1513.94 | 1470.45 | 3.13 | 17.31 | 15.39 | 1.13 | 2.00 | 1.88 |
| RC1 | 70 | 13.88 | 13.88 | 14.88 | 1469.64 | 1511.29 | 1489.28 | 4.17 | 25.32 | 13.43 | 1.00 | 1.88 | 2.13 |
| RC1 | 50 | 13.63 | 13.63 | 14.50 | 1428.24 | 1514.72 | 1484.01 | 6.77 | 48.78 | 13.72 | 1.00 | 1.38 | 1.75 |
| RC1 | 30 | 13.50 | 13.88 | 14.38 | 1426.26 | 1492.22 | 1471.00 | 9.96 | 45.26 | 16.51 | 0.50 | 1.13 | 1.00 |
| RC1 | 10 | 13.38 | 13.38 | 13.50 | 1394.37 | 1436.23 | 1417.07 | 16.49 | 83.52 | 23.01 | 0.50 | 1.13 | 0.50 |
| R2 | 90 | 4.73 | 3.55 | 4.00 | 989.84 | 1047.82 | 1086.78 | 6.56 | 13.20 | 16.47 | 0.00 | 0.09 | 0.00 |
| R2 | 70 | 4.82 | 3.64 | 4.36 | 973.08 | 1032.04 | 1078.03 | 10.47 | 20.15 | 12.74 | 0.00 | 0.09 | 0.00 |
| R2 | 50 | 4.82 | 3.82 | 4.55 | 960.29 | 1016.52 | 1071.83 | 16.51 | 30.03 | 11.96 | 0.00 | 0.00 | 0.00 |
| R2 | 30 | 4.91 | 4.91 | 4.73 | 937.70 | 985.59 | 1035.60 | 26.73 | 57.07 | 10.18 | 0.00 | 0.00 | 0.00 |
| R2 | 10 | 4.45 | 6.36 | 5.27 | 938.06 | 950.00 | 1000.00 | 47.68 | 68.58 | 9.48 | 0.00 | 0.09 | 0.00 |
| C2 | 90 | 3.13 | 3.25 | 3.38 | 615.67 | 636.79 | 668.99 | 2.93 | 6.12 | 16.67 | 0.00 | 0.00 | 0.00 |
| C2 | 70 | 3.13 | 3.13 | 3.38 | 613.49 | 636.47 | 672.95 | 3.63 | 10.01 | 14.03 | 0.00 | 0.00 | 0.00 |
| C2 | 50 | 3.00 | 3.13 | 3.13 | 601.62 | 604.98 | 623.10 | 6.46 | 16.80 | 20.25 | 0.00 | 0.00 | 0.00 |
| C2 | 30 | 3.13 | 3.63 | 3.25 | 599.93 | 651.42 | 624.81 | 9.05 | 29.87 | 34.82 | 0.00 | 0.00 | 0.00 |
| C2 | 10 | 3.13 | 3.00 | 3.25 | 596.03 | 594.67 | 615.93 | 15.30 | 59.70 | 80.78 | 0.00 | 0.00 | 0.00 |
| RC2 | 90 | 6.00 | 4.00 | 4.63 | 1122.00 | 1257.19 | 1275.93 | 4.35 | 11.34 | 28.05 | 0.00 | 0.13 | 0.00 |
| RC2 | 70 | 6.00 | 3.88 | 5.13 | 1095.71 | 1239.46 | 1234.36 | 6.88 | 19.26 | 16.07 | 0.00 | 0.00 | 0.00 |
| RC2 | 50 | 6.13 | 4.25 | 5.88 | 1078.33 | 1190.54 | 1200.26 | 10.82 | 27.84 | 11.46 | 0.00 | 0.13 | 0.00 |
| RC2 | 30 | 5.88 | 5.38 | 5.88 | 1064.58 | 1166.04 | 1172.33 | 18.51 | 41.51 | 11.68 | 0.00 | 0.25 | 0.00 |
| RC2 | 10 | 5.63 | 6.75 | 6.13 | 1059.94 | 1103.30 | 1153.43 | 32.96 | 55.55 | 13.27 | 0.00 | 0.00 | 0.00 |
| Avg. |  | 8.58 | 8.50 | 8.88 | 1030.28 | 1100.62 | 1098.40 | 11.92 | 31.64 | 16.76 | 0.46 | 0.50 | 0.61 |

As shown in Table 5, HSVND outperforms the other methods on average. First, HSVND obtains the smallest average refusal ratio, 0.46%, whereas ILNS refused 0.50% and GVNS refused 0.61%. Both HSVND and GVNS were able to service all requests for groups R2, C2, and RC2; consequently, they obtained the same refusal ratio (0.00%). HSVND performed better than GVNS for groups R1 and RC1 with respect to the refusal ratio, while GVNS performed best for the C1 group. On average, ILNS achieves a better refusal ratio than GVNS. The average number of vehicles required by our algorithm is lower than that of GVNS but slightly larger than that of ILNS: 8.58, 8.50, and 8.88 vehicles for HSVND, ILNS, and GVNS, respectively.
However, HSVND revealed good performance in finding short distances: its average distance was 1030.28, while the average distances of ILNS and GVNS were 1100.62 and 1098.40, respectively. Note that HSVND was the best at optimizing travel distances among the three algorithms for all groups. The overall average insertion time also indicates that our proposed algorithm improves on the others; however, note that the computational environments under which the algorithms were executed are different.

Similar to the evaluation performed for GVNS, we also calculated the average performance over the 10 executions for each instance and compared it with GVNS, as shown in Table 6. HSVND improved on GVNS in terms of the average total distance in all cases. For 7 instance types, HSVND outperformed GVNS simultaneously on the refusal ratio, the average number of vehicles, and the average total distance; for another 10 types where the refusal ratios were equal, HSVND found a better solution than GVNS.

Table 6. The average performance comparison of HSVND and GVNS (Ref. = ratio of refused service, %; Veh. = average vehicle number; Dist. = average total distance; Ins. = average insertion time).

| G | D | Ref. HSVND | Ref. GVNS | Veh. HSVND | Veh. GVNS | Dist. HSVND | Dist. GVNS | Ins. HSVND | Ins. GVNS |
|---|---|---|---|---|---|---|---|---|---|
| R1 | 90 | 2.83 | 3.33 | 14.43 | 15.34 | 1253.57 | 1328.74 | 4.65 | 15.89 |
| R1 | 70 | 2.26 | 2.42 | 14.36 | 15.31 | 1262.20 | 1340.38 | 6.75 | 12.45 |
| R1 | 50 | 1.67 | 1.67 | 14.29 | 15.33 | 1256.42 | 1340.17 | 9.32 | 12.32 |
| R1 | 30 | 1.16 | 1.17 | 14.13 | 14.96 | 1243.40 | 1312.73 | 14.01 | 17.21 |
| R1 | 10 | 0.33 | 0.33 | 14.02 | 14.73 | 1240.66 | 1296.59 | 22.09 | 16.69 |
| C1 | 90 | 0.11 | 0.00 | 10.78 | 11.37 | 935.06 | 1092.18 | 2.96 | 8.94 |
| C1 | 70 | 0.11 | 0.00 | 10.80 | 11.92 | 926.97 | 1150.27 | 4.36 | 8.51 |
| C1 | 50 | 0.11 | 0.00 | 10.57 | 11.81 | 901.09 | 1129.58 | 6.88 | 7.32 |
| C1 | 30 | 0.11 | 0.00 | 10.47 | 11.81 | 895.69 | 1081.52 | 10.19 | 10.17 |
| C1 | 10 | 0.11 | 0.00 | 10.50 | 11.36 | 878.01 | 986.99 | 16.50 | 14.87 |
| RC1 | 90 | 1.04 | 1.50 | 14.73 | 15.62 | 1511.27 | 1587.89 | 3.12 | 15.89 |
| RC1 | 70 | 0.93 | 1.25 | 14.61 | 15.88 | 1515.20 | 1614.43 | 4.26 | 14.72 |
| RC1 | 50 | 0.73 | 0.88 | 14.29 | 15.51 | 1476.00 | 1579.34 | 6.63 | 14.05 |
| RC1 | 30 | 0.46 | 0.63 | 14.28 | 15.22 | 1470.15 | 1551.93 | 9.64 | 16.89 |
| RC1 | 10 | 0.31 | 0.25 | 13.99 | 14.25 | 1438.48 | 1474.09 | 14.54 | 24.52 |
| R2 | 90 | 0.00 | 0.00 | 4.92 | 3.88 | 1022.27 | 1181.31 | 6.37 | 17.05 |
| R2 | 70 | 0.00 | 0.00 | 4.87 | 4.22 | 1008.85 | 1161.98 | 9.83 | 13.41 |
| R2 | 50 | 0.00 | 0.00 | 4.83 | 4.49 | 992.01 | 1153.79 | 16.44 | 12.58 |
| R2 | 30 | 0.00 | 0.00 | 4.80 | 4.77 | 974.35 | 1112.92 | 26.66 | 10.86 |
| R2 | 10 | 0.00 | 0.00 | 4.34 | 5.49 | 972.57 | 1054.82 | 45.81 | 10.75 |
| C2 | 90 | 0.00 | 0.00 | 3.26 | 3.66 | 634.73 | 749.33 | 2.78 | 17.85 |
| C2 | 70 | 0.00 | 0.00 | 3.30 | 3.71 | 637.30 | 722.45 | 3.91 | 14.91 |
| C2 | 50 | 0.00 | 0.00 | 3.31 | 3.53 | 615.68 | 670.23 | 6.32 | 22.34 |
| C2 | 30 | 0.00 | 0.00 | 3.04 | 3.36 | 610.82 | 670.88 | 9.32 | 35.92 |
| C2 | 10 | 0.00 | 0.00 | 3.05 | 3.53 | 603.39 | 660.93 | 15.20 | 85.73 |
| RC2 | 90 | 0.00 | 0.00 | 6.01 | 8.02 | 1165.36 | 2032.46 | 4.40 | 29.51 |
| RC2 | 70 | 0.00 | 0.00 | 6.04 | 4.94 | 1134.00 | 1359.18 | 6.70 | 17.18 |
| RC2 | 50 | 0.00 | 0.00 | 5.85 | 5.39 | 1113.19 | 1311.97 | 10.90 | 12.86 |
| RC2 | 30 | 0.00 | 0.00 | 5.94 | 5.83 | 1101.98 | 1278.62 | 18.27 | 13.54 |
| RC2 | 10 | 0.00 | 0.00 | 5.29 | 6.01 | 1104.00 | 1240.55 | 31.22 | 13.98 |
| Avg. |  | 0.41 | 0.45 | 8.84 | 9.38 | 1063.16 | 1207.61 | 11.67 | 17.96 |

Overall, our results demonstrate that HS achieved good results compared with the existing methods, as our algorithm has the ability to strike a good balance between diversification and intensification for the DVRPTW.
## 5. Conclusions

In this paper, we have proposed a Modified Harmony Search for the DVRPTW, called MHS, which is based on the HS algorithm. First, the encoding of the harmony memory has been improved based on the characteristics of routing in VRPs. Second, in order to provide an effective balance between global diversification and local intensification, an enhanced basic Variable Neighbourhood Descent (VND) is incorporated into the iterative HS. Third, the improvisation of a new harmony has also been improved: in this procedure, in order to prevent premature convergence, we evaluate the population diversity using entropy. Finally, when dynamic requests arrive, five rules are employed to address the insertion of the dynamic requests into the current solution.
In order to verify the efficiency of our approach, we carried out numerical experiments using standard benchmarks. The results were analyzed in depth by comparison with recently proposed algorithms. The comparison shows that the proposed MHS algorithm can obtain better solutions than the other existing algorithms. There are several interesting future research subjects to explore. One of them is adapting the MHS heuristic to other dynamic vehicle routing problems. Another prospective research direction is extending the DVRPTW by introducing additional realistic aspects and constraints.

---

*Source: 1021432-2017-11-28.xml*
# Calculation of the C3A Percentage in High Sulfur Clinker

**Authors:** Sayed Horkoss; Roger Lteif; Toufic Rizk
**Journal:** International Journal of Analytical Chemistry (2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/102146

---

## Abstract

The aim of this paper is to clarify the influence of the clinker SO3 on the amount of C3A. The calculation of the cement phase percentages is based on the research work Calculation of the Compounds in Portland Cement, published by Bogue in 1929. The usage of high sulphur fuels, industrial wastes, and tires completely changes the working conditions assumed by Bogue, because the assumed phase compositions may change. The results prove that increasing the amount of SO3 in low alkali clinker decreases the percentage of C3A due to the high incorporation of alumina in the clinker phases, mainly C2S and C3S. The correlation is linear until the clinker SO3 reaches 2%. Above that, the influence of the clinker SO3 becomes undetectable. A new calculation method for the determination of C3A in high sulphur, low alkali clinker is proposed.

---

## Body

## 1. Introduction

Portland cement is a hydraulic material composed primarily of calcium silicates, aluminates, and ferrites. In a rotary kiln, at temperatures reaching 1450°C, clinker nodules are produced from a finely ground, homogenised blend of limestone, shale, and iron ore. The nodules are subsequently ground with gypsum, which serves to control setting, into a fine powder to produce finished Portland cement. The composition and texture (crystal size, abundance, and distribution) of the clinker phases result from complex interactions of the raw feed chemical and mineralogical composition, particle size distribution, feed homogenization, and the heating and cooling regime.

In order to simplify these phenomena, Bogue [1] proposed an approach for the development of the clinker phases. The ferric oxide (Fe2O3) reacts with aluminium oxide (Al2O3) and lime (CaO) to form tetracalcium aluminoferrite (C4AF or Ca4Al2Fe2O10). The remaining aluminium oxide reacts with lime to form tricalcium aluminate (C3A or Ca3Al2O6). The lime reacts with silicon dioxide (SiO2) to form two calcium silicate phases, dicalcium silicate (Belite, C2S or Ca2SiO4) and tricalcium silicate (Alite, C3S or Ca3SiO5). Based on this approach, Bogue proposed four formulae for the calculation of the clinker phase concentrations.

Increasing the amount of high sulphur fuels and the level of waste valorisation in cement kilns completely changes the working conditions assumed by Bogue. The negative influence of sulphate on the percentages of the silicate phases (Alite and Belite) was detected early by XRD, but its influence on tricalcium aluminate (C3A) is still unclear due to contradictory conclusions in the reported literature.

## 2. Influence of Sulphur on Silicate Phases

The sulphates reduce the viscosity and surface tension of the clinker liquid phases, shifting the equilibrium of the melt into an unstable range which is characterized by a low nucleus-forming frequency and a high crystal growth rate, leading to stabilization of the Belite crystals. The incorporation of sulphur in the Belite stabilizes the Belite structure, whereby the uptake of CaO is inhibited and the formation of Alite is suppressed [2]. This phenomenon increases the amount of Belite and decreases that of Alite in the clinker [3].
This reported conclusion was confirmed by many later investigations [2, 4, 5].

## 3. Influence of Sulphur on the Aluminates C3A (Ca3Al2O6)

The composition of the matrix (aluminates and ferrites) is not altered by the level of SO3 in the clinker [6]. In particular, the amount of tricalcium aluminate (Ca3Al2O6) is not affected by the SO3 level [2, 7]. This conclusion is not compatible with the observation of Hamou-Tagnit and Sarker, indicating that increasing the SO3 level increases the amount of Ca3Al2O6 in the clinker [8], whereas the conclusion of Borgholm and Jons showed the opposite [9].

These contradictions in the literature are due to the fact that many parameters can affect the development of C3A (Ca3Al2O6), principally the kiln atmosphere and the raw meal chemistry. Locher and others [10] detected that the kiln atmosphere, especially the amount of oxygen, has an influence on the amount of C3A. A reducing kiln atmosphere partly inhibits the oxidation of the bivalent iron (Fe2+) present in the kiln feed, leading to an increase in the amount of C3A.

The sulphur introduced into the cement kiln from both sides, the kiln feed and the kiln burner, is in chemically reduced form such as S0, S1−, and S2−. These sulphur forms are oxidized to S4+ and S6+ in the kiln system. This oxidizing process consumes some oxygen, leading to a reduction in the amount of oxygen in the kiln system.

The presence of sodium oxide in the kiln feed affects the amount of C3A [11]. One of the forms of sodium oxide in the clinker is Na2O·8CaO·3Al2O3. At the clinkering temperature, this compound reacts with sulphur to produce Na2SO4 and C3A [11].

This study focuses only on the influence of SO3 on the development of C3A (Ca3Al2O6). The other factors listed above were controlled to avoid any interaction with the results.

## 4. Experimental Procedure

The development of tricalcium aluminate (Ca3Al2O6) in cement kiln production is affected by many parameters such as the kiln atmosphere [10] and the amount of sodium oxide in the raw feed [11]. This could be the reason for the large contradictions among the conclusions of the reported investigations.

In order to avoid any interactions, not only from the chemical compounds but also those related to the kiln operation, the clinker samples were selected under stable kiln production conditions. All samples are commercial clinker, sampled from Cimenterie Nationale SAL. The free lime of the clinker was less than 1%, and the percentage of kiln inlet oxygen was around 3%. The samples were analyzed immediately, to avoid any influence from storage and humidity.

The variation of the clinker SO3 was achieved by changing the fuel type in the main burner as follows: (1) fuel oil containing 2.0% S, (2) petroleum coke containing 4.5% S, and (3) petroleum coke containing 6.0% S.

The chemical and mineralogical analyses were done according to the ASTM C114 and ASTM C1365 standards, respectively. The calibration of the ARL 9800 was done using NIST standards. The Claisse fusion machine and the Herzog HTP 40 press were used for sample preparation for the chemical and mineralogical analyses. The KOSH (potassium hydroxide and sucrose) method was implemented in order to determine the percentages of SO3 and Al2O3 in the silicate phases.

## 5. Results and Discussion

The results (Figure 1) showed an obvious influence of the SO3 percentage on the amount of C3A in the low alkali clinker.
In all samples, the measured percentages of C3A were lower than the calculated ones.

Figure 1. Influence of the clinker SO3 on the C3A.

The correlation between the amount of C3A (Ca3Al2O6) and the total percentage of clinker SO3 was linear until the clinker SO3 reached 2%. Beyond that, the influence of the clinker SO3 becomes indistinguishable, since the standard deviation of the results according to ASTM C1365-06 is 0.47.

The results in Table 1 show that the ratio of aluminium oxide to SO3 in the silicate phases varies from 4.18 in the low sulphur clinker to 1.33 in the high sulphur one.

Table 1. Chemical and mineralogical analysis results.

| Sample | Al2O3 | Fe2O3 | SO3 | Na2O | K2O | C3A (calculated) | C3A (measured) | SO3 (silicate phases) | Al2O3 (silicate phases) | Al2O3/SO3 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.62 | 3.88 | 0.69 | 0.10 | 0.37 | 5.68 | 4.38 | 0.22 | 0.92 | 4.18 |
| 2 | 4.53 | 3.89 | 1.08 | 0.10 | 0.26 | 5.42 | 3.91 | 0.55 | 1.24 | 2.25 |
| 3 | 4.48 | 4.06 | 1.14 | 0.10 | 0.28 | 5.00 | 2.72 | 0.51 | 1.10 | 2.16 |
| 4 | 4.63 | 3.86 | 1.27 | 0.10 | 0.26 | 5.74 | 4.11 | 0.60 | 1.22 | 2.03 |
| 5 | 4.45 | 3.79 | 1.28 | 0.10 | 0.33 | 5.38 | 3.70 | 0.56 | 1.25 | 2.23 |
| 6 | 4.75 | 3.91 | 1.28 | 0.09 | 0.21 | 5.97 | 4.13 | 0.67 | 1.24 | 1.85 |
| 7 | 4.84 | 3.87 | 1.52 | 0.09 | 0.27 | 6.28 | 3.90 | 0.68 | 1.37 | 2.01 |
| 8 | 4.74 | 3.95 | 1.63 | 0.10 | 0.31 | 5.88 | 3.64 | 0.71 | 1.48 | 2.08 |
| 9 | 4.51 | 3.82 | 1.65 | 0.09 | 0.34 | 5.49 | 3.64 | 0.66 | 1.40 | 2.12 |
| 10 | 4.73 | 3.81 | 1.73 | 0.09 | 0.35 | 6.09 | 3.30 | 0.65 | 1.35 | 2.08 |
| 11 | 4.51 | 3.85 | 2.23 | 0.10 | 0.39 | 5.44 | 3.28 | 0.80 | 1.29 | 1.61 |
| 12 | 4.55 | 3.70 | 2.76 | 0.10 | 0.33 | 5.80 | 3.13 | 1.05 | 1.40 | 1.33 |

Bonafous and others [12] observed that this ratio is 2. This conclusion was based on their finding that, in the presence of sulphur, 3Si4+ in the silicate phases is substituted by 2Al3+ + S6+. Taylor [13] stated that this ratio is generally somewhat more than 2, even in individual X-ray microanalyses, due to the presence of other substitutions and the accuracy of the results. Our findings, especially the results of samples 2 to 10, conform to these previous conclusions.

The reason for the high ratio in the low sulphur clinker is that at low temperature the Al3+ incorporates first into the silicate phases. The alumina incorporated in the silicate phases can be divided into two groups. The first one enters the structure at low temperature and without the influence of sulphur. In the absence of SO3, there is an excess of nontetrahedral cations, and the number of oxygen atoms lies close to the ideal number for stoichiometric Ca2SiO4 [14]. This suggests interstitial stuffing of large cations as the main mechanism for accommodating Al on the Si site before significant solid solution of S takes place [14]. The second group is incorporated in the silicate phases under the influence of sulphur. This phenomenon is enhanced when the temperature exceeds 1200°C [12].

In the first sample, the ratio Al3+/S6+ is 4.18. In this case, part of the alumina is incorporated into the silicate phases without the influence of sulfate. The amount of SO3 in the silicate phases is 0.22% and that of Al2O3 is 0.92% (Table 1). Based on the findings of Bonafous and others [12], the ratio Al3+/S6+ is 2. The calculated amount of alumina that entered the silicate phase crystals under the influence of sulphur in the first sample is therefore 0.22 × 2 = 0.44%. The amount of alumina that entered the silicate phase crystals without the influence of sulphur in the first sample is 0.92 − 0.44 = 0.48%.

The entry of alumina into the silicate phases reduces the amount available for C3A formation. The amount of the first group is influenced to an important degree by changes in the composition of the ferrite compound [15], and it is compensated by the replacement of Al3+ by other ions in the C3A crystal, mainly Si4+ and Fe3+ [16]. These phenomena minimize the effect of the first group on the amount of C3A.
The measured amount of C3A becomes less than that calculated by Bogue by an average of only 3% [15].

The amount of sulphur in the silicate phases depends on the percentage of Belite. The concentration of sulfate in Belite is 4 to 5 times that in Alite [13]. Regarding the second group, the incorporation of sulfate in the Alite and Belite tends to increase the amount of alumina in the silicate phases [16]. This phenomenon was shown in clinker samples 2 to 10: the incorporation of alumina in the silicate phases increased in correlation with the sulfate, and the ratio Al2O3/SO3 in the silicate phases became around 2.

Bonafous and others [12] explain the ability of the silicate phases, principally the Belite, to accept Al and S simultaneously at higher dosage by the existence of a synergism between both: the presence of AlO4(5−) decreases the negative charge induced by the substitution of SO4(2−) for SiO4(4−) [12].

The calculation of the sulphate phases becomes inaccurate when the percentage of clinker SO3 exceeds 1%. The reason is related to the SO3/alkali ratio. At a lower ratio, sulphate preferentially combines with alkali to generate arcanite (K2SO4) and aphthitalite (K3Na(SO4)2) [17]. Increasing the ratio leads to the development of calcium langbeinite (Ca2K2(SO4)3) [17]. At remarkably high SO3/alkali ratios and SO3 contents, anhydrite (CaSO4) has been detected in particular clinkers [14, 18, 19].

The incorporation of alumina in the silicate phases seems to stop (samples 11 and 12) when the clinker SO3 exceeds 2%. This result conforms to previous findings. Taylor calculated the maximum probable amount of SO3 in the silicate phases to be about 0.8% [13]. Miller and Tang found the largest amount of SO3 present in the silicate phases to be 0.68% [20]. The extra amount of sulphate found above 2% clinker SO3 (samples 11 and 12) could come from the presence of anhydrite in the solid phase, since it is not in correlation with the alumina.

The first group of alumina was calculated from the first sample, which has the lowest clinker sulphate. The second group of alumina in the silicate phases was calculated by deducting the amount of the first group (0.48%) from the total amount of alumina in the silicate phases. The correlation between the total clinker SO3 and the amount of alumina incorporated in the silicate phases under the influence of sulphur is acceptable (Figure 2). The percentage of alumina in the silicate phases forced by the sulphur becomes 0.469 × %SO3 + 0.15.

Figure 2. Relation between the total clinker SO3 and the alumina incorporated in the silicate phases forced by sulphur.

The first group of alumina incorporated in the silicate phases, without the sulphur influence, is compensated by the substitution of other elements, whereas the second one is not (Figure 1). The impurities are one of the main factors in the stabilization of the various clinker crystalline forms. The most important consequence of the occurrence of impurities in the lattices of the matrix clinker compounds is a disagreement between the calculated and the real amounts of phases in the clinker.

The amount of C3A in high sulphur clinker can therefore be calculated by the following formula: %C3A = 2.65 × (%Al2O3 − %Al2O3(silicate phases)) − 1.692 × %Fe2O3, that is,

(1) %C3A = 2.65 × (%Al2O3 − (0.469 × %SO3 + 0.15)) − 1.692 × %Fe2O3.

In the proposed formula, %SO3 is set equal to 2 when the clinker SO3 exceeds 2%, because above this limit the amount of alumina in the silicate phases becomes constant regardless of the clinker SO3.
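As a numerical cross-check of formula (1), the following short Python sketch (the function names are ours, not from the paper) applies both the classical Bogue C3A estimate and the proposed high-sulphur correction to sample 1 of Table 1, reproducing the first row of Table 2:

```python
def c3a_bogue(al2o3, fe2o3):
    # Classical Bogue estimate: %C3A = 2.65 * %Al2O3 - 1.692 * %Fe2O3.
    return 2.65 * al2o3 - 1.692 * fe2o3

def c3a_high_sulphur(al2o3, fe2o3, so3):
    # Modified formula (1): subtract the alumina taken up by the silicate
    # phases, estimated as 0.469 * %SO3 + 0.15, before applying the Bogue
    # coefficients. %SO3 is capped at 2 because the alumina uptake stops
    # increasing above that level.
    so3_eff = min(so3, 2.0)
    return 2.65 * (al2o3 - (0.469 * so3_eff + 0.15)) - 1.692 * fe2o3

# Sample 1 from Table 1: Al2O3 = 4.62, Fe2O3 = 3.88, SO3 = 0.69.
print(round(c3a_bogue(4.62, 3.88), 2))              # 5.68, as in Table 2
print(round(c3a_high_sulphur(4.62, 3.88, 0.69), 2)) # 4.42, vs. 4.38 measured
```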
The C3A amount calculated by the new formula is more realistic than that calculated by the Bogue formula (Table 2).

Table 2. Comparison of the C3A results.

| C3A calculated by Bogue formula | C3A calculated by the new formula | C3A measured according to ASTM C1365-06 |
|---|---|---|
| 5.68 | 4.42 | 4.38 |
| 6.09 | 3.54 | 3.30 |
| 6.28 | 3.99 | 3.90 |
| 5.44 | 2.55 | 3.28 |
| 5.00 | 3.19 | 2.72 |
| 5.38 | 3.39 | 3.70 |
| 5.80 | 2.91 | 3.13 |
| 5.42 | 3.68 | 3.91 |
| 5.88 | 3.45 | 3.64 |
| 5.49 | 3.04 | 3.64 |
| 5.74 | 3.76 | 4.11 |
| 5.97 | 3.98 | 4.13 |

## 6. Conclusion

Increasing the amount of SO3 in the low alkali clinker decreases the percentage of C3A due to the high incorporation of alumina in the silicate phases. The correlation is linear until the clinker SO3 reaches 2%. Beyond that, the SO3 influence becomes undetectable.

In order to be more realistic, the proposed calculation of the C3A percentage takes into consideration the Al2O3 loss. The outcome shows that the newly calculated results match the measured ones more closely than those calculated by the Bogue formula.

---

*Source: 102146-2010-06-27.xml*
# On Nonsmooth Global Implicit Function Theorems for Locally Lipschitz Functions from Banach Spaces to Euclidean Spaces

**Authors:** Guy Degla; Cyrille Dansou; Fortuné Dohemeto
**Journal:** Abstract and Applied Analysis (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1021461

---

## Abstract

In this paper, we establish a generalization of the Galewski-Rădulescu nonsmooth global implicit function theorem to locally Lipschitz functions defined from infinite dimensional Banach spaces into Euclidean spaces. Moreover, we derive, under suitable conditions, a series of results on the existence, uniqueness, and possible continuity of global implicit functions that parametrize the set of zeros of locally Lipschitz functions. Our methods rely on a nonsmooth critical point theory based on a generalization of the Ekeland variational principle.

---

## Body

## 1. Introduction

Many mathematical models involving real or vector-valued functions stand as equations of the form

(1) f(x) = 0.

For complex phenomena, the unknown x is often a vector variable x = (x1, x2, ⋯, xn) belonging to ℝn or to an abstract Banach space having a direct sum decomposition V1 ⊕ V2 ⊕ ⋯ ⊕ Vn. It may even happen that equation (1) is just a state equation depending in fact on a parameter (or a control) h. In this case, it takes the form

(2) F(x, h) = 0,

and the most ambitious aim of mathematical analysis is to know the local or global structure of the solution set F−1(0) by finding out whether it is nonempty, discrete, a graph, or a manifold, etc.

The essence of the implicit function theorem in mathematical analysis is to ascertain whether the solutions to an equation involving parameters exist and may be viewed locally as a function of those parameters, and to know a priori which properties this function might inherit from those of the data. Geometrically, implicit function theorems provide sufficient conditions under which the solution set in some neighborhood of a given solution is the graph of some function. The well-known implicit function theorems deal with a continuous differentiability hypothesis and in such cases are equivalent to inverse function theorems (see [1]). The theorem was originally conceived (in complex variable form, in a pioneering work by Lagrange) over two centuries ago to tackle celestial mechanics problems. Subsequently, it attracted Cauchy, who managed to provide its rigorous version and became known as its discoverer. Later, the generalization of this implicit function theorem to the case of finitely many real variables was proved for the first time by Dini. In this way, the classical theory of implicit functions started with single variables and has progressed through multiple real variables to equations in infinite dimensional spaces, e.g., functional equations involving integral or differential operators. Nowadays, most categories of smooth functions have virtually their own version of the implicit function theorem, and there are special versions adapted to Banach spaces and algebraic geometry and to various types of geometrically degenerate situations. Some of these (such as the Nash-Moser implicit function theorem) are quite sophisticated and have been used in amazing ways to solve important open problems (in Riemannian manifolds, partial differential equations, functional analysis, …) [1].
There are also in the literature [2, 3] some implicit numerical schemes used to approximate the solutions of certain differential equations, which could be regarded as implicit functions in sequence spaces.

Nevertheless, there are interesting phenomena governed by parametric equations with nonsmooth data which need to be stressed and which are attracting more and more researchers. Indeed, implicit function theorems for nondifferentiable functions are less known but are regaining interest in the literature due to their importance in applied sciences that deal with functions having less regularity than smoothness. A few versions have been stated in Euclidean spaces for functions that are continuous with respect to all their variables and (partially) monotone with respect to some of their variables [4, 5].

Recently, Galewski and Rădulescu [6] proved a generalized global implicit function theorem for locally Lipschitz functions F : ℝn × ℝp ⟶ ℝn by using a nonsmooth Palais-Smale condition and a coercivity condition. Their proof is essentially based on the fact that a locally Lipschitz function in finite dimension is almost everywhere differentiable with respect to the Lebesgue measure, according to Rademacher's theorem [7]. It is known that Rademacher's theorem for locally Lipschitz functions has no direct infinite dimensional extension. This explains the difficulty of obtaining conditions for the existence of local or global implicit functions in the case of locally Lipschitz functions defined on infinite dimensional spaces (see [8]). Several works have been done to overcome these difficulties. For example, the papers [9, 10] provided conditions for surjectivity and inversion of locally Lipschitz functions between Banach spaces under assumptions formulated in terms of pseudo-Jacobians.

In this work, our aim is to establish, under suitable conditions, a global implicit function theorem for a locally Lipschitz map F : X × Y ⟶ H, where X, Y are real Banach spaces and H is a real Euclidean space, and to provide conditions under which this implicit function is continuous. This extends Theorem 30 of Galewski and Rădulescu to locally Lipschitz functions in infinite dimension with a relatively simple method compared to those used for this purpose. Knowing that there exist noncoercive functions satisfying the h-condition (see Definition 18 and Remark 19), we work in this paper under the h-condition, using a variational approach and applying a recent nonsmooth version of the Mountain Pass Theorem, namely, Theorem 27.

The contribution of this work is fourfold:

(i) An improvement of the classical Clarke implicit function Theorem 24 for functions F : ℝn × ℝp ⟶ ℝn by replacing ℝp by any Banach space Y (Remark 26). Consequently, by considering the approach used in [6] (Theorem 4) and Remark 26, we prove our first main result (Theorem 31) on the existence and uniqueness of a global implicit function for the equation F(x, y) = 0, where F : ℝn × Y ⟶ ℝn with Y a Banach space.

(ii) The proof of the continuity of the implicit function based on a simple additional hypothesis, Theorem 35.

(iii) The weakening of the coercivity assumption used in [6] by considering a compactness-type condition called the h-condition in [11].

(iv) By our Lemmas 42 and 43, we obtain Theorem 38 on the existence and uniqueness of global implicit functions under the h-condition on the function x ↦ ‖F(x, y)‖α with 0 < α < 2. This is a generalization of the result (49) in the nonsmooth case. It also generalizes the result [12] (Theorem 3.6) in the C1 case.

This article is organized as follows.
In Section 2, we recall some preliminary and auxiliary results on Clarke's generalized gradient, Clarke's generalized Jacobian, and the h-condition for locally Lipschitz functions. Section 3 is devoted to our main results, established under the h-condition, on the existence and uniqueness of a global implicit function for the equation F(x, y) = 0, where F is defined from ℝ^n × Y to ℝ^n and Y is a Banach space, namely Theorems 31, 35, 38, 39, and 40. In Section 4, we give an example of a function satisfying our conditions for the existence of an implicit function but not the conditions of Theorem 1 of [6], which we have extended. It is the energy functional, defined in (139), of a certain differential inclusion problem involving the p-Laplacian [13].

## 2. Preliminaries and Auxiliary Results

Let U be a nonempty open subset of a Banach space X and let f: U ⟶ ℝ be a function. We recall that f is Lipschitz if there exists a constant K > 0 such that for all y and z in U, we have (3) |f(y) − f(z)| ≤ K‖y − z‖. For x ∈ U, f is said to be locally Lipschitz at x if there exists an open neighborhood V ⊂ U of x on which the restriction of f is Lipschitz. We say that f is locally Lipschitz on U if f is locally Lipschitz at every point x ∈ U. We recall that in Euclidean spaces every convex function has this property.

Definition 1. Let f: U ⊂ X ⟶ ℝ be a locally Lipschitz function. Let x ∈ U and v ∈ X\{0}. The generalized directional derivative of f at x in the direction v, denoted by f⁰(x; v), is defined by (4) f⁰(x; v) ≔ limsup_{w⟶x, t⟶0⁺} [f(w + tv) − f(w)]/t.

Observe at once that f⁰(x; v) is a (finite) number for all v ∈ X\{0}. Indeed, let x ∈ V ⊂ U and let K > 0 be such that (3) holds for all y, z ∈ V, with V bounded (without loss of generality). Let (w_m)_{m>0} ⊂ X be a sequence such that w_m ⟶ x and (t_m) a sequence in (0, +∞) such that t_m ⟶ 0. For v ∈ X\{0}, as m ⟶ +∞, the vectors w_m + t_m v eventually belong to V. Indeed, by boundedness of V, there exists ρ > 0 such that ‖x − y‖ < ρ ⇒ y ∈ V, and for m large enough, (5) ‖w_m + t_m v − x‖ ≤ ‖w_m − x‖ + t_m‖v‖ < ρ/2 + ρ/2 = ρ. Thus, there exists m₀ > 0 such that for all m > m₀, (6) |f(w_m + t_m v) − f(w_m)|/t_m ≤ K‖v‖. It follows from (3) and (6) that for all v ∈ X, (7) f⁰(x; v) ≤ K‖v‖.

Remark 2. If f is locally Lipschitz and Gâteaux differentiable at x, then its Gâteaux differential f′_G(x) at x coincides with its generalized gradient; that is, (8) f⁰(x; v) = f′_G(x)·v for all v ∈ X.

Proposition 3. The function v ↦ f⁰(x; v) is positively homogeneous and subadditive.

Proof. The homogeneity is an immediate consequence of Definition 1. We prove the subadditivity. Let v and z be in X. Then, (9) f⁰(x; v + z) = limsup_{w⟶x, t⟶0⁺} [f(w + tv + tz) − f(w)]/t ≤ limsup_{w⟶x, t⟶0⁺} [f(w + tz + tv) − f(w + tz)]/t + limsup_{w⟶x, t⟶0⁺} [f(w + tz) − f(w)]/t ≤ limsup_{r⟶x, t⟶0⁺} [f(r + tv) − f(r)]/t + limsup_{w⟶x, t⟶0⁺} [f(w + tz) − f(w)]/t (with r ≔ w + tz) = f⁰(x; v) + f⁰(x; z).

From Proposition 3 and the Hahn-Banach theorem [14] (p. 62), it follows that there exists at least one linear functional ξ*: X ⟶ ℝ satisfying (10) f⁰(x; v) ≥ ⟨ξ*, v⟩ for all v ∈ X. From (10) and (7), the latter also applied to −v, we obtain (11) |⟨ξ*, v⟩| ≤ K‖v‖ for all v ∈ X. Thus, ξ* ∈ X* (as usual, X* denotes the (continuous) dual of X and ⟨·,·⟩ is the duality pairing between X and X*). We can therefore give the following definition.

Definition 4. Let f: U ⊂ X ⟶ ℝ be locally Lipschitz at a point x ∈ U. Clarke's generalized gradient of f at x, denoted ∂f(x), is the (nonempty) set of all ξ* ∈ X* satisfying (10), i.e., (12) ∂f(x) ≔ {ξ* ∈ X*: ∀v ∈ X, f⁰(x; v) ≥ ⟨ξ*, v⟩}.
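As a concrete illustration of Definitions 1 and 4, consider the standard textbook example f = |·| on X = ℝ (our own worked computation, not taken from the source):

```latex
% Worked example (standard; for illustration only): f(x) = |x| on X = R.
% Generalized directional derivative at x = 0: since |w+tv| - |w| <= t|v|,
% with equality along the choice w = 0,
\[
  f^{0}(0;v) \;=\; \limsup_{w \to 0,\; t \to 0^{+}} \frac{|w+tv| - |w|}{t} \;=\; |v|.
\]
% Clarke's generalized gradient at 0, by (12):
\[
  \partial f(0) \;=\; \{\, \xi \in \mathbb{R} : |v| \ge \xi v \ \text{ for all } v \in \mathbb{R} \,\}
  \;=\; [-1,\,1],
\]
% while at x \neq 0 the function is differentiable and \partial f(x) = \{\operatorname{sgn}(x)\}.
```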
We refer to [15–17] for some of the fundamental results in the calculus of generalized gradients. In particular, we shall need the following.

Proposition 5 (see [18], Chang). If f: U ⟶ ℝ is a convex function, then Clarke's generalized gradient of f at x, defined in (12), coincides with the subdifferential of f in the sense of convex analysis.

Proposition 6 (see [11], Chen). Let X be a real Banach space and f: X ⟶ ℝ a locally Lipschitz function. Then, the function γ: X ⟶ ℝ defined by (13) γ(u) ≔ min_{x*∈∂f(u)} ‖x*‖ for all u ∈ X is well defined and lower semicontinuous.

Proposition 7 (see [15], Proposition 6). If x₀ is a minimizer of f, then 0 ∈ ∂f(x₀).

Remark 8. Let X be an infinite dimensional Banach space and f: X ⟶ ℝ^p a locally Lipschitz mapping. For any finite dimensional subspace L of X, it makes sense to talk about Clarke's generalized Jacobian of the function f_L: L ∍ x ↦ f(x) ∈ ℝ^p at every point x ∈ L.

Notation 9. For a locally Lipschitz function f: ℝ^n ⟶ ℝ^p and x ∈ ℝ^n, we consider the set Ω_f(x) defined by Ω_f(x) ≔ {(x_m)_m sequence in ℝ^n such that x_m ⟶ x and f is differentiable at each x_m}.

Let X, Z be two Banach spaces such that dim Z = n < ∞. Let F: X ⟶ Z be a locally Lipschitz mapping and L a finite dimensional subspace of X. For x ∈ L, we denote by ∂F_L(x) Clarke's generalized Jacobian at the point x of the restriction of F to L, namely of the function (14) F_L: L ⟶ Z; x ↦ F(x).

Let Y be a Banach space and consider a locally Lipschitz function F: ℝ^n × Y ⟶ ℝ^p. For any (x̄, ȳ) ∈ ℝ^n × Y, ∂_x F(x̄, ȳ) denotes Clarke's generalized Jacobian at the point x̄ of the function (15) F(·, ȳ): ℝ^n ⟶ ℝ^p, x ↦ F(x, ȳ).

Let X, Y, Z be three Banach spaces with dim Z < ∞ and F: X × Y ⟶ Z a locally Lipschitz function. For any finite dimensional subspace L of X and for every (x̄, ȳ) ∈ L × Y, ∂_x F_L(x̄, ȳ) will denote Clarke's generalized Jacobian of the function F̃: L ∍ x ↦ F̃(x) ≔ F(x, ȳ) ∈ Z at the point x̄.

Theorem 10 (Rademacher). Let f: ℝ^n ⟶ ℝ be a locally Lipschitz function. Then, f is almost everywhere differentiable with respect to the Lebesgue measure.

According to Rademacher's Theorem 10, we have the following.

Proposition 11 (see [19], Clarke). Let f: ℝ^n ⟶ ℝ be a locally Lipschitz function and x ∈ ℝ^n. If ∂f(x) denotes the set defined by (12), then (16) ∂f(x) = co{lim_{m⟶+∞} f′(x_m): (x_m)_{m∈ℕ} ∈ Ω_f(x)}.

Note that, since f is almost everywhere differentiable with respect to the Lebesgue measure, there exists a sequence (x_m)_{m∈ℕ} ⊂ ℝ^n such that x_m ⟶ x and, for every m ∈ ℕ, f is differentiable at x_m. So Ω_f(x) ≠ ∅. In addition, for any (x_m)_{m∈ℕ} ∈ Ω_f(x) and any v ∈ ℝ^n, we have (17) |f′(x_m)·v| ≤ K‖v‖, where K is the Lipschitz constant of f. This means that (f′(x_m))_m is bounded in L(ℝ^n, ℝ), which is finite dimensional. Then, there exists a subsequence (f′(x_{σ(m)}))_m of (f′(x_m))_m that converges to some x* ∈ L(ℝ^n, ℝ); that is, (18) lim_{m⟶+∞} f′(x_{σ(m)}) = x*. Thus, ∂f(x) is the convex hull of all such limits (18).

Even if the function f is defined from ℝ^n to ℝ^p, applying (17) and (18) component by component, we notice that the set defined by (16) is nonempty, compact, and convex in L(ℝ^n, ℝ^p) (see [20] (Definition 1)). Thus, the characterization of ∂f(x) stated in Proposition 11 extends to locally Lipschitz functions defined from ℝ^n to ℝ^p. In this case, ∂f(x) is called Clarke's generalized Jacobian of the function f at the point x.

Definition 12. Let f: ℝ^n ⟶ ℝ^p be a locally Lipschitz mapping and x ∈ ℝ^n. Clarke's generalized Jacobian of f at x, also denoted by ∂f(x), is defined as follows: (19) ∂f(x) = co{lim_{m⟶+∞} f′(x_m): (x_m)_{m∈ℕ} ∈ Ω_f(x)}.

The following notions will also be useful in the sequel.

Definition 13. Let f: ℝ^n ⟶ ℝ^p be a locally Lipschitz mapping and x ∈ ℝ^n with n ≥ p. We say that ∂f(x) is of maximal rank if every x* ∈ ∂f(x) is surjective.
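The limiting-gradient characterization (19) also suggests a direct numerical experiment: sample points near x where f happens to be differentiable (almost all points, by Theorem 10), collect the derivatives there, and look at their limits. The sketch below (illustrative only; the sampling radius, step size, and tolerances are our own arbitrary choices, not part of the source) does this for f(x) = |x| at x = 0:

```python
import numpy as np

def f(x):
    return abs(x)

def sampled_gradient_limits(x0, n_samples=1000, radius=1e-6, h=1e-10):
    """Approximate the limiting gradients in (19): sample points x_m near x0
    where f(x) = |x| is differentiable (i.e., x_m != 0) and collect f'(x_m)."""
    rng = np.random.default_rng(0)
    grads = []
    for _ in range(n_samples):
        xm = x0 + rng.uniform(-radius, radius)
        if xm == 0.0:          # |.| is not differentiable at 0; skip that null set
            continue
        grads.append((f(xm + h) - f(xm - h)) / (2 * h))  # central difference
    return min(grads), max(grads)

# Clarke's generalized gradient of |.| at 0 is the convex hull of the
# limiting derivatives, i.e., co{-1, +1} = [-1, 1].
lo, hi = sampled_gradient_limits(0.0)
print(lo, hi)   # approximately -1.0 and 1.0
```

The two extreme sampled values ±1 are the limiting gradients, and their convex hull [−1, 1] recovers ∂f(0), in agreement with the worked example after Definition 4.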
Definition 14. Let X be a metric space. A function f: X ⟶ ℝ is said to be (sequentially) lower semicontinuous at a point x ∈ X if, for every sequence (x_m)_{m∈ℕ} ⊂ X such that x_m ⟶ x, we have the inequality (20) f(x) ≤ liminf_{m⟶+∞} f(x_m). If (20) holds for every sequence (x_m)_{m∈ℕ} ⊂ X such that x_m ⇀ x, we say that f is weakly sequentially lower semicontinuous at x.

Remark 15. Let X be a normed vector space and (x_m)_m a sequence in X. If x ∈ X, then (21) x_m ⟶ x ⇒ x_m ⇀ x. It follows that weak sequential lower semicontinuity implies sequential lower semicontinuity. The converse is not true in general. However, in the convex case, the two notions are equivalent.

The following theorem is a generalization of Ekeland's variational principle [21].

Theorem 16 (see [21], J. Chen). Let h: [0, +∞) ⟶ [0, +∞) be a continuous nondecreasing function such that (22) ∫₀^∞ ds/(1 + h(s)) = +∞. Let M be a complete metric space, x₀ ∈ M fixed, and f: M ⟶ ℝ∪{+∞} a lower semicontinuous function, not identically +∞ and bounded from below. Then, for every ε > 0 and y ∈ M such that (23) f(y) < inf_M f + ε, and every λ > 0, there exists some point z ∈ M such that (24) f(z) ≤ f(y), d(z, x₀) ≤ r₀ + r̄, and f(x) ≥ f(z) − (ε/λ)·d(x, z)/(1 + h(d(x₀, z))) for all x ∈ M, where r₀ = d(x₀, y) and r̄ is such that (25) ∫_{r₀}^{r₀+r̄} ds/(1 + h(s)) ≥ λ.

By Theorem 16, one has the following.

Theorem 17 (see [21], J. Chen). Let X be a Banach space, h: [0, +∞) ⟶ [0, +∞) a continuous nondecreasing function such that (26) ∫₀^∞ ds/(1 + h(s)) = +∞, and f: X ⟶ ℝ a locally Lipschitz function, bounded from below. Then, there exists a minimizing sequence (z_m)_m of f such that (27) f⁰(z_m; v − z_m) ≥ −ε_m‖v − z_m‖/(1 + h(‖z_m‖)) for all v ∈ X, where ε_m ⟶ 0⁺ as m ⟶ +∞.

Proof. For each positive integer m, choose y_m ∈ X such that (28) f(y_m) ≤ inf_X f + ε_m. Take x₀ = 0, M = X, and λ = 1 in Theorem 16. Then, there exists z_m ∈ X such that (29) f(z_m) ≤ f(y_m), ‖z_m‖ ≤ ‖y_m‖ + r̄, and f(x) ≥ f(z_m) − ε_m‖x − z_m‖/(1 + h(‖z_m‖)) for all x ∈ X, where r̄ is such that (30) ∫_{‖y_m‖}^{‖y_m‖+r̄} ds/(1 + h(s)) ≥ 1. Consequently, for each x ∈ X, applying the third inequality of (29) along the segment z_m + t(x − z_m), t ⟶ 0⁺, one has (31) f⁰(z_m; x − z_m) ≥ limsup_{t⟶0⁺} [f(z_m + t(x − z_m)) − f(z_m)]/t ≥ −ε_m‖x − z_m‖/(1 + h(‖z_m‖)). Hence, f⁰(z_m; v − z_m) ≥ −ε_m‖v − z_m‖/(1 + h(‖z_m‖)) for all v ∈ X. Moreover, (z_m)_m is obviously a minimizing sequence of f.

Definition 18. Let X be a Banach space, f: X ⟶ ℝ a locally Lipschitz function bounded from below, and h: [0, +∞) ⟶ [0, +∞) a continuous nondecreasing function such that (32) ∫₀^∞ ds/(1 + h(s)) = +∞. We say that (u_m)_{m≥0} ⊂ X is an h-sequence of f if (f(u_m))_m is bounded and f⁰(u_m; v − u_m) ≥ −ε_m‖v − u_m‖/(1 + h(‖u_m‖)) for all v ∈ X, where ε_m ⟶ 0⁺. We say that f satisfies the h-condition if every h-sequence of f possesses a convergent subsequence.

Remark 19. Sometimes the following version of the h-condition is also used: any sequence (u_m)_m ⊂ X such that (f(u_m))_m is bounded and (33) lim_{m⟶∞} (1 + h(‖u_m‖)) γ(u_m) = 0 possesses a convergent subsequence, where γ is defined in Proposition 6. This condition is equivalent to that of Definition 18.

Remark 20. A coercive function defined on ℝ^n satisfies the h-condition regardless of h: along any h-sequence the values (f(u_m))_m are bounded, so coercivity forces (u_m)_m to be bounded, and the Bolzano-Weierstrass theorem yields a convergent subsequence. But a function satisfying the h-condition is not necessarily coercive. Indeed, Section 4 is devoted to an example of a noncoercive function satisfying the h-condition, namely the function defined in (139).
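For orientation, here are two standard calculus checks (our own, not the paper's) of which weights h satisfy the divergence condition (32):

```latex
% h(s) = s (the Cerami-type weight used in Section 4) is admissible:
\[
  \int_0^\infty \frac{ds}{1+s} \;=\; \lim_{R\to\infty}\ln(1+R) \;=\; +\infty .
\]
% h(s) = s^2 grows too fast to be admissible:
\[
  \int_0^\infty \frac{ds}{1+s^2} \;=\; \frac{\pi}{2} \;<\; +\infty .
\]
% h = 0 recovers the usual Palais-Smale-type setting, since the inequality in
% Definition 18 then reduces to f^0(u_m; v - u_m) >= -eps_m ||v - u_m||.
```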
The following is the Weierstrass theorem.

Lemma 21 (see [13], Lemma 2.1). Assume that f: X ⟶ ℝ is a functional on a reflexive Banach space X which is weakly lower semicontinuous and coercive. Then, there exists x* ∈ X such that f(x*) = min_{x∈X} f(x).

Better still, by virtue of Theorem 17, we can prove the following result.

Theorem 22. Let X be a Banach space, h: [0, +∞) ⟶ [0, +∞) a continuous nondecreasing function such that (34) ∫₀^∞ ds/(1 + h(s)) = +∞, and f: X ⟶ ℝ a locally Lipschitz function bounded from below. If f satisfies the h-condition, then f achieves its minimum at some critical point z ∈ X of f.

Proof. By virtue of Theorem 17, there exists a minimizing sequence (z_m)_m of f with (35) f⁰(z_m; v − z_m) ≥ −ε_m‖v − z_m‖/(1 + h(‖z_m‖)) for all v ∈ X, where ε_m ⟶ 0⁺. Since f satisfies the h-condition, (z_m)_m has a convergent subsequence in X. We may assume that z_m ⟶ z in X. Consequently, by the continuity of f, (36) f(z) = lim_{m⟶+∞} f(z_m) = inf_{x∈X} f(x). By Remark 19 and the lower semicontinuity of γ, we get γ(z) = 0.

Theorem 23 (see [22], Clarke). Let f: ℝ^n ⟶ ℝ^n be a locally Lipschitz mapping such that the Clarke generalized Jacobian ∂f(x₀) of f at a point x₀ ∈ ℝ^n is of maximal rank. Then, there exist neighborhoods U and V of x₀ and f(x₀), respectively, and a Lipschitz function g: V ⟶ U such that g(f(u)) = u for all u ∈ U and f(g(v)) = v for all v ∈ V.

The following result is Clarke's implicit function theorem, which will be very useful.

Theorem 24 (see [6], Clarke). Assume that F: ℝ^n × ℝ^p ⟶ ℝ^n is a locally Lipschitz mapping on a neighborhood of a point (x₀, y₀) such that F(x₀, y₀) = 0. Assume further that ∂_x F(x₀, y₀) is of maximal rank. Then, there exist a neighborhood V ⊂ ℝ^p of y₀ and a Lipschitz function G: V ⟶ ℝ^n such that for every y in V, (37) F(G(y), y) = 0 and G(y₀) = x₀.

Remark 25. It is important to point out that Clarke's implicit function theorem (Theorem 24) is a corollary of Clarke's inverse function theorem (Theorem 23), which can be found in the book [23]. Indeed, as is done for example in [24] on page 256, put (38) F̃: ℝ^n × ℝ^p ⟶ ℝ^n × ℝ^p, (x, y) ↦ (F(x, y), y). Then F̃ is locally Lipschitz in a neighborhood of (x₀, y₀). Moreover, wherever the Jacobian matrix DF̃ exists, it has the block form (39) DF̃ = [D_x F, D_y F; 0, I_p], and it follows that the Clarke generalized Jacobian ∂F̃(x₀, y₀) of F̃ at the point (x₀, y₀) is of maximal rank. Then, by Theorem 4 D.3 of [23], there exist U ⊂ ℝ^n × ℝ^p, V ≔ F̃(U) ⊂ ℝ^n × ℝ^p, and f: V ⟶ U which is the inverse of F̃ on U. Obviously, f has the form f(x, y) = (ϕ(x, y), y), where ϕ: ℝ^n × ℝ^p ⟶ ℝ^n. Therefore, (40) for (x, y) ∈ U: F(x, y) = 0 ⇔ f(0, y) = (ϕ(0, y), y) = (x, y) ⇔ x = ϕ(0, y). Thus, we can write G(y) = ϕ(0, y).

If ℝ^p is replaced by an infinite dimensional Banach space Y in Theorem 24, Clarke's generalized Jacobian of the function F̃ above can no longer be defined; in other words, we are no longer in finite dimension, so Theorem 1 of Clarke's work [22] cannot be applied. This remark is very important in the rest of the work.

Remark 26. Let Y be an infinite dimensional Banach space and F: ℝ^n × Y ⟶ ℝ^n a locally Lipschitz mapping on a neighborhood of a point (x₀, y₀) such that F(x₀, y₀) = 0. Assume that Clarke's generalized Jacobian ∂_x F(x₀, y₀) is of maximal rank. Then, there exist a subset V ⊂ Y containing y₀ and a Lipschitz mapping φ: V ⟶ U ⊂ ℝ^n such that for every y ∈ V, we have (41) F(φ(y), y) = 0 and φ(y₀) = x₀. Moreover, we have the following equivalence: (42) for (x, y) ∈ U × V: F(x, y) = 0 ⇔ x = φ(y). Indeed, let M be a finite dimensional subspace of Y with y₀ ∈ M and dim M = m (m < ∞). We consider the map (43) F̃: ℝ^n × M ⟶ ℝ^n, (x, y) ↦ F(x, y). Obviously, F̃ is a locally Lipschitz mapping, and ∂_x F̃(x₀, y₀) = ∂_x F(x₀, y₀) is of maximal rank. Then, by Theorem 24, there exist V ⊂ M, open in M and containing y₀, U ⊂ ℝ^n, open and containing x₀, and a locally Lipschitz mapping φ: V ⟶ U such that conditions (41) and (42) hold.

Here is another result that will serve us in this work.

Theorem 27 (see [11], J. Chen). Let h: [0, +∞) ⟶ [0, +∞) be a continuous nondecreasing function such that (44) ∫₀^∞ ds/(1 + h(s)) = +∞, X a reflexive Banach space, and J: X ⟶ ℝ a locally Lipschitz function. Assume that there exist u₀ ∈ X, u₁ ∈ X and a bounded open neighborhood Ω of u₀ such that u₁ ∉ Ω and (45) inf_{x∈∂Ω} J(x) > max{J(u₀), J(u₁)}. Let M ≔ {g ∈ C([0, 1], X): g(0) = u₀, g(1) = u₁} and c ≔ inf_{g∈M} max_{s∈[0,1]} J(g(s)). If J satisfies the h-condition, then c is a critical value of J and c > max{J(u₀), J(u₁)}.
Lemma 28. Let X be a normed vector space and H a Hilbert space equipped with the inner product ⟨·,·⟩. Let f: X ⟶ H be a locally Lipschitz mapping. Then, the function φ: X ⟶ ℝ defined by (46) φ(x) = ⟨f(x), f(x)⟩ = ‖f(x)‖²_H is locally Lipschitz.

Theorem 29 (see [16], Clarke). Let X be a normed vector space, f: X ⟶ ℝ^n a locally Lipschitz function near x ∈ X, and h: ℝ^n ⟶ ℝ a given C¹ function. Then, (47) ∂(h∘f)(x) ⊂ ∇h(f(x)) ∘ ∂f(x).

Theorem 30 (see [6], Theorem 1). Assume that F: ℝ^n × ℝ^p ⟶ ℝ^n is a locally Lipschitz mapping such that (a1) for any y ∈ ℝ^p, the functional φ_y: ℝ^n ⟶ ℝ given by (48) φ_y(x) = ½‖F(x, y)‖² is coercive, i.e., lim_{‖x‖⟶∞} φ_y(x) = +∞; (a2) for any (x, y) ∈ ℝ^n × ℝ^p, the set ∂_x F(x, y) is of maximal rank. Then, there exists a unique locally Lipschitz function f: ℝ^p ⟶ ℝ^n such that the equations F(x, y) = 0 and x = f(y) are equivalent in the set ℝ^n × ℝ^p.
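The chain rule (47) is the engine of the proofs in the next section: composed with g(u) = ½‖u‖², it converts critical points of φ_y into zeros of F whenever ∂_x F has maximal rank. Here is a minimal one-dimensional instance of (47) (our own illustration):

```latex
% Chain rule (47) with f(x) = |x| on R and h(s) = s^2/2, so that
% (h o f)(x) = x^2/2, which is smooth with derivative x.  At x = 0:
%   grad h(f(0)) = f(0) = 0   and   \partial f(0) = [-1, 1],   hence
\[
  \partial (h\circ f)(0) \;=\; \{0\} \;\subset\;
  \nabla h(f(0)) \circ \partial f(0) \;=\; 0 \cdot [-1,1] \;=\; \{0\}.
\]
% This is exactly the mechanism used below: if 0 lies in \partial\varphi_y(x_y)
% and every x* in \partial_x F(x_y,y) is surjective, then
% <F(x_y,y), x* v> = 0 for all v forces F(x_y,y) = 0.
```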
## 3. Main Results

The following is a generalization of the global implicit function theorem of [6] to the case of locally Lipschitz functions from Banach spaces to Euclidean spaces.

Theorem 31. Let Y be a real Banach space and F: ℝ^n × Y ⟶ ℝ^n a locally Lipschitz function. Suppose that (1) for every y ∈ Y, the function φ_y defined by (49) φ_y(x) = ½‖F(x, y)‖² satisfies the h-condition, where h: [0, +∞) ⟶ [0, +∞) is a continuous nondecreasing function such that (50) ∫₀^∞ ds/(1 + h(s)) = +∞; (2) for all (x, y) ∈ ℝ^n × Y, ∂_x F(x, y) is of maximal rank. Then, there exists a unique function f: Y ⟶ ℝ^n such that the conditions "(x, y) ∈ ℝ^n × Y and F(x, y) = 0" are equivalent to x = f(y). Moreover, for any finite dimensional subspace L of Y, f is locally Lipschitz on L.

Proof. Let y ∈ Y. We prove that there exists a unique element x_y ∈ ℝ^n such that F(x_y, y) = 0. Since φ_y = g∘F(·, y), where by assumption the function F(·, y): ℝ^n ∍ x ↦ F(x, y) ∈ ℝ^n is locally Lipschitz and g: ℝ^n ∍ x ↦ ½‖x‖² = ½⟨x, x⟩_{ℝ^n} ∈ ℝ, it follows from Lemma 28 that φ_y is locally Lipschitz; by assumption (1), it also satisfies the h-condition. Then, by Theorem 22, there is x_y ∈ ℝ^n such that min_{ℝ^n} φ_y = φ_y(x_y). By Proposition 7, we have 0 ∈ ∂φ_y(x_y). Moreover, according to Theorem 29, ∂φ_y(x_y) ⊂ ∇g(F(x_y, y)) ∘ ∂_x F(x_y, y) = {∇g(F(x_y, y)) ∘ x*: x* ∈ ∂_x F(x_y, y)}. Thus, there exists x* ∈ ∂_x F(x_y, y) such that ∇g(F(x_y, y)) ∘ x* = 0, i.e., (51) for all v ∈ ℝ^n, ∇g(F(x_y, y))(x* v) = ⟨F(x_y, y), x* v⟩ = 0. By assumption (2), x*(ℝ^n) = ℝ^n. It follows that F(x_y, y) = 0.

About the uniqueness of x_y ∈ ℝ^n such that F(x_y, y) = 0, we argue by contradiction, supposing that there exists x₁ ≠ x_y in ℝ^n with F(x₁, y) = F(x_y, y) = 0. We use Remark 26. We set e = x₁ − x_y and define the mapping ψ_y: ℝ^n ⟶ ℝ by (52) ψ_y(x) ≔ φ_y(x + x_y) = ½‖F(x + x_y, y)‖². We have ψ_y(0) = ψ_y(e) = 0. Consider ψ_y on the boundary ∂B(0, ρ) of the ball B(0, ρ) ⊂ ℝ^n for some 0 < ρ < ‖e‖. By assumption (2) and Remark 26, there exist V ⊂ Y containing y (not necessarily open in Y, but open in some finite dimensional subspace L ⊂ Y), an open subset U ⊂ ℝ^n containing x_y, and a function ξ: V ⟶ U such that the following equivalence holds: (53) for (x, y) ∈ U × V: F(x, y) = 0 ⇔ x = ξ(y). The function ψ_y is locally Lipschitz (hence continuous), and ∂B(0, ρ) is compact (being closed and bounded). Then, there exists x̄ ∈ ∂B(0, ρ) such that (54) ψ_y(x̄) = min_{∂B(0,ρ)} ψ_y.

We claim that there exists at least one ρ > 0, ρ < ‖e‖, such that min_{‖x‖=ρ} ψ_y > 0. Otherwise, we would have (55) min_{‖x‖=ρ} ψ_y = 0 for all 0 < ρ < ‖e‖; this means that for every positive ρ < ‖e‖, there exists x̄ ∈ ℝ^n, ‖x̄‖ = ρ, such that ψ_y(x̄) = 0. Since U is open around x_y, there exists 0 < ε < ‖e‖ such that (56) ‖x − x_y‖ ≤ ε ⇒ x ∈ U. Let x̄ ∈ ℝ^n with ‖x̄‖ = ε and ψ_y(x̄) = 0. Then, (57) ‖(x̄ + x_y) − x_y‖ = ‖x̄‖ = ε and ψ_y(x̄) = ½‖F(x̄ + x_y, y)‖² = 0 ⇔ F(x̄ + x_y, y) = 0. By (56) and (57), we have x̄ + x_y ∈ U, x̄ + x_y ≠ x_y, and F(x̄ + x_y, y) = 0. It follows from (53) that x̄ + x_y = ξ(y). Thus, x̄ + x_y and x_y are two distinct elements of U with x̄ + x_y = ξ(y) = x_y, which is impossible. In conclusion, (58) there exists ρ < ‖e‖ with inf_{‖x‖=ρ} ψ_y > 0 = max{ψ_y(0), ψ_y(e)}.

The function ψ_y is locally Lipschitz and satisfies the h-condition (because φ_y does). Then, by (58) and Theorem 27 applied to J = ψ_y (with u₀ = 0, u₁ = e, and Ω = B(0, ρ)), ψ_y has a generalized critical point v, which is different from 0 and e since the corresponding critical value satisfies (59) ψ_y(v) > max{ψ_y(0), ψ_y(e)} = 0. We also have (60) 0 ∈ ∂ψ_y(v) = ∂φ_y(x_y + v) ⊂ ∇g(F(x_y + v, y)) ∘ ∂_x F(x_y + v, y). As before, by assumption (2) this implies that F(x_y + v, y) = 0, whence ψ_y(v) = 0. This contradiction with (59) confirms that for every y ∈ Y there exists a unique x_y ∈ ℝ^n such that F(x_y, y) = 0, and we can set f(y) = x_y. Of course, according to Remark 26, for any finite dimensional subspace L of Y, f is locally Lipschitz on L.

An example of a function satisfying the assumptions of Theorem 31, with Y a Banach space, is F: ℝ × Y ⟶ ℝ defined by (61) F(x, y) = 2x + |x| + ‖y‖. Indeed, F defined in (61) is a locally Lipschitz function which is not differentiable, and for any y ∈ Y, the function φ_y: ℝ ⟶ ℝ defined by (62) φ_y(x) = ½|F(x, y)|² = ½(2x + |x| + ‖y‖)² is coercive and consequently satisfies the h-condition. Moreover, for any (x, y) ∈ ℝ × Y, the partial generalized gradient (63) ∂_x F(x, y) = {3} if x > 0, {1} if x < 0, [1, 3] if x = 0, is of maximal rank; namely, for any (x, y) ∈ ℝ × Y, 0 ∉ ∂_x F(x, y) ⊂ ℝ. Indeed, a straightforward argument shows that (64) F(x, y) = 0 ⇔ x = −‖y‖.
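A quick numerical sanity check of example (61) (our own script; the value chosen for ‖y‖ and the grid are arbitrary): minimizing φ_y over a grid recovers the implicit solution x = −‖y‖.

```python
import numpy as np

# Example (61): F(x, y) = 2x + |x| + ||y||, whose unique zero in x is x = -||y||.
def F(x, norm_y):
    return 2 * x + abs(x) + norm_y

def phi(x, norm_y):                 # phi_y(x) = (1/2) F(x, y)^2, as in (62)
    return 0.5 * F(x, norm_y) ** 2

norm_y = 1.7                        # arbitrary illustrative value of ||y||
xs = np.linspace(-5.0, 5.0, 200001)
x_star = xs[np.argmin(phi(xs, norm_y))]

print(x_star)                       # approximately -1.7 = -||y||
assert abs(x_star + norm_y) < 1e-3  # matches (64): F(x, y) = 0 <=> x = -||y||
```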
Despite the regularity conclusion of Theorem 31, we cannot expect in general the continuity of f on the whole of Y. Here is a counterexample.

Remark 32. Let us set Y = ℓ¹, where ℓ¹ stands for the space of real sequences (u_m)_{m∈ℕ} such that ∑_{m=0}^∞ |u_m| < ∞, endowed with the two nonequivalent norms (65) ‖(u_m)‖₁ = ∑_{m=0}^∞ |u_m| and ‖(u_m)‖₂ = (∑_{m=0}^∞ |u_m|²)^{1/2}. For m ∈ ℕ*, define X_m ≔ (1, 1/2, 1/3, ⋯, 1/m, 0, ⋯) ∈ ℓ¹. We claim that (X_m)_m is a bounded sequence with respect to the norm ‖·‖₂ which is unbounded with respect to ‖·‖₁. For m ∈ ℕ*, we have (66) ‖X_m‖₂ = (∑_{k=1}^m 1/k²)^{1/2} and ‖X_m‖₁ = ∑_{k=1}^m 1/k. Then, (67) lim_{m⟶+∞} ‖X_m‖₂ = (∑_{k=1}^{+∞} 1/k²)^{1/2} < +∞, while (68) lim_{m⟶+∞} ‖X_m‖₁ = ∑_{k=1}^{+∞} 1/k = +∞. Now consider the canonical injection I: (ℓ¹, ‖·‖₂) ⟶ (ℓ¹, ‖·‖₁). It is obvious by (67) and (68) that I is not continuous on ℓ¹. However, for any finite dimensional subspace L of ℓ¹, since the restrictions of these norms to L are equivalent, the restriction I_L: (L, ‖·‖₂) ⟶ (L, ‖·‖₁) is Lipschitz.

We add a technical hypothesis to those of Theorem 31 in order to obtain the continuity of the implicit function f.

Definition 33. Let X, Y be two normed vector spaces. We say that a function F: X × Y ⟶ ℝ is coercive with respect to x (the first variable), locally uniformly with respect to y (the second variable), if for any ȳ ∈ Y there exists an open neighborhood V of ȳ in Y such that (69) lim_{‖x‖⟶∞} inf_{y∈V} F(x, y) = +∞.

Lemma 34. Let E be a Euclidean space. Then, every bounded sequence in E with a unique limit point is convergent.

Proof. Let (x_m)_{m≥0} be a bounded sequence of E which has a unique limit point x̄ ∈ E. By the Bolzano-Weierstrass theorem, every subsequence of (x_m)_{m≥0} has a further subsequence converging to x̄. We argue by contradiction, assuming that (x_m)_{m≥0} is not convergent. Then, there exists ε > 0 such that (70) for any k > 0, there exists m_k ≥ k with ‖x_{m_k} − x̄‖ > ε. But (x_{m_k})_k must have a subsequence (x_{m_{k_i}})_i such that (71) x_{m_{k_i}} ⟶ x̄, which contradicts (70).

Theorem 35. Let Y be a real Banach space and F: ℝ^n × Y ⟶ ℝ^n be locally Lipschitz. Suppose that (1) the function χ: ℝ^n × Y ⟶ ℝ defined by (72) χ(x, y) = ½‖F(x, y)‖² is coercive with respect to x, locally uniformly with respect to y; (2) for all (x, y) ∈ ℝ^n × Y, ∂_x F(x, y) is of maximal rank. Then, there exists a unique function f: Y ⟶ ℝ^n such that (73) for (x, y) ∈ ℝ^n × Y: F(x, y) = 0 ⇔ x = f(y). Moreover, f is continuous on the whole of Y.

Proof. Let y ∈ Y. We consider the function φ_y: ℝ^n ⟶ ℝ defined by (74) φ_y(x) ≔ ½‖F(x, y)‖². Since φ_y is coercive (because χ is coercive with respect to x, locally uniformly with respect to y), φ_y satisfies the h-condition for any continuous nondecreasing function h: [0, +∞) ⟶ [0, +∞) such that (75) ∫₀^∞ ds/(1 + h(s)) = +∞. Moreover, F is locally Lipschitz. So, by Theorem 31, we conclude that there exists a unique global implicit function f: Y ⟶ ℝ^n such that (76) for (x, y) ∈ ℝ^n × Y: F(x, y) = 0 ⇔ x = f(y).

It remains to show that f is continuous on the whole of Y. For this, let (y_m)_{m∈ℕ} ⊂ Y be a sequence such that (77) y_m ⟶ ȳ ∈ Y. For all m ∈ ℕ, F(f(y_m), y_m) = 0. Setting x_m ≔ f(y_m), the sequence (χ(x_m, y_m))_m = (½‖F(f(y_m), y_m)‖²)_m is bounded. Since χ is coercive with respect to x, locally uniformly with respect to y, there exists an open subset Q ⊂ Y containing ȳ such that (78) lim_{‖x‖⟶∞} inf_{y∈Q} χ(x, y) = lim_{‖x‖⟶∞} inf_{y∈Q} ½‖F(x, y)‖² = +∞. In addition, by the convergence of (y_m) to ȳ, there exists m₀ ∈ ℕ such that (79) m > m₀ ⇒ y_m ∈ Q. So, (80) for m > m₀: ½‖F(x_m, y_m)‖² ≥ inf_{y∈Q} ½‖F(x_m, y)‖². According to (78) and (80), the sequence (x_m)_{m∈ℕ} = (f(y_m))_m is bounded in ℝ^n. Let x̄ be a limit point of (x_m)_m; there exists a convergent subsequence (x_{m_k})_k of (x_m)_m such that (81) x_{m_k} ⟶ x̄ ∈ ℝ^n. On the other hand, for all k ∈ ℕ, F(x_{m_k}, y_{m_k}) = 0. Then, it follows from (77), (81), and the continuity of F that (82) 0 = lim_{k⟶+∞} F(x_{m_k}, y_{m_k}) = F(x̄, ȳ). Thus, (83) F(x̄, ȳ) = 0 ⇔ x̄ = f(ȳ). So (x_m)_m has a unique limit point x̄ = f(ȳ), and by Lemma 34, (84) x_m ⟶ x̄, that is, (85) f(y_m) ⟶ f(ȳ). From (77) and (85), f is continuous on Y.

As a consequence of our Theorem 31, we have the following nonsmooth global inverse function theorem.

Theorem 36. Assume f: ℝ^n ⟶ ℝ^n is a locally Lipschitz mapping such that (1) for any y ∈ ℝ^n, there exists a continuous nondecreasing function h: ℝ₊ ⟶ ℝ₊ such that (86) ∫₀^∞ ds/(1 + h(s)) = +∞ and the functional φ_y: ℝ^n ⟶ ℝ defined by (87) φ_y(x) = ½‖f(x) − y‖² satisfies the h-condition; (2) for any x ∈ ℝ^n, ∂f(x) is of maximal rank. Then, f is a global homeomorphism of ℝ^n and f^{−1} is locally Lipschitz.

Corollary 37 (see [25], Hadamard-Palais). Let X, Y be finite dimensional Banach spaces. Assume that f: X ⟶ Y is a C¹-mapping such that (1) lim_{‖x‖⟶∞} ‖f(x)‖ = ∞; (2) for any x ∈ X, f′(x) is invertible. Then, f is a diffeomorphism.
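To see Theorem 36 at work on the simplest nonsmooth example (our own illustration, adapted from (61) with the parameter removed), take n = 1 and f(x) = 2x + |x|:

```latex
% f(x) = 2x + |x| on R: locally Lipschitz, not differentiable at 0.
% (2) Maximal rank: \partial f(x) = {3} for x > 0, {1} for x < 0, [1,3] for x = 0,
%     so 0 is never in \partial f(x).
% (1) Since f(x) = 3x for x >= 0 and f(x) = x for x < 0, we have |f(x)| >= |x|,
%     so phi_y(x) = (1/2)(f(x) - y)^2 is coercive, and a coercive functional
%     satisfies the h-condition (Remark 20).
% Conclusion of Theorem 36: f is a global homeomorphism, with inverse
\[
  f^{-1}(y) \;=\;
  \begin{cases}
    y/3, & y \ge 0,\\
    y,   & y < 0,
  \end{cases}
\]
% which is (globally) Lipschitz, as the theorem predicts locally.
```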
Question. Is the conclusion of Theorem 31 still valid under the h-condition imposed on the function τ_y: x ↦ ‖F(x, y)‖^α, where α is a positive constant different from 2?

In fact, according to our Lemmas 42 and 43 and Corollary 44, it is enough to assume in addition that τ_y is locally Lipschitz when 0 < α < 2; this additional hypothesis is not needed when α > 2. We therefore obtain the following result from Theorem 31.

Theorem 38. Let Y be a real Banach space and F: ℝ^n × Y ⟶ ℝ^n a locally Lipschitz mapping. Suppose that (1) for any y ∈ Y, there exists 0 < α < 2 such that the function τ_y: ℝ^n ⟶ ℝ, (88) τ_y(x) = ‖F(x, y)‖^α, is locally Lipschitz and satisfies the h-condition, where h: [0, +∞) ⟶ [0, +∞) is a continuous nondecreasing function such that (89) ∫₀^∞ ds/(1 + h(s)) = +∞; (2) for all (x, y) ∈ ℝ^n × Y, ∂_x F(x, y) is of maximal rank. Then, there exists a unique function f: Y ⟶ ℝ^n such that the conditions "(x, y) ∈ ℝ^n × Y and F(x, y) = 0" are equivalent to x = f(y). Moreover, for any finite dimensional subspace L of Y, f is locally Lipschitz on L.

Proof. Let y ∈ Y. We notice that τ_y = (2φ_y)^{α/2}, where φ_y is defined by (49). Since τ_y satisfies the h-condition, it follows from Lemma 43 and Corollary 44 that φ_y satisfies the h-condition. Thus, we complete the proof by using Theorem 31.

But what happens if we replace ℝ^n by an arbitrary Banach space X in the domain of F?

Theorem 39. Let X, Y be Banach spaces and Z a Euclidean space with dim Z = n < ∞. Let F: X × Y ⟶ Z be a locally Lipschitz function. Assume that (1) for all y ∈ Y, the function φ_y: X ⟶ ℝ defined by (90) φ_y(x) = ½‖F(x, y)‖² satisfies the h-condition, where h: [0, +∞) ⟶ [0, +∞) is a continuous nondecreasing function such that (91) ∫₀^∞ ds/(1 + h(s)) = +∞; (2) for any finite dimensional subspace L of X with dim L = n and for all (x, y) ∈ L × Y, ∂_x F_L(x, y) is of maximal rank. Then, (92) for (x, y) ∈ X × Y: F(x, y) = 0 ⇔ x = 0.

Proof. We use Theorem 31 to prove this result. Firstly, we prove that there exists a unique global implicit function f: Y ⟶ X such that "(x, y) ∈ X × Y and F(x, y) = 0" is equivalent to x = f(y). After that, we will show that f ≡ 0 on Y.

Let y ∈ Y. Since φ_y is bounded from below, locally Lipschitz, and satisfies the h-condition, we see by Theorem 22 that φ_y has a minimum, achieved at a critical point x_y ∈ X. Let L be a finite dimensional subspace of X such that x_y ∈ L and dim L = n. Consider the functions F̃: L × Y ⟶ Z and φ̃_y: L ⟶ ℝ defined, respectively, by (93) F̃(x, y) = F(x, y) and φ̃_y(x) = φ_y(x) = ½‖F(x, y)‖² = ½‖F̃(x, y)‖². By assumption (1), the function F̃ is locally Lipschitz, and φ̃_y is then locally Lipschitz as the composition of F̃ with the C¹ function g, where (94) g: Z ⟶ ℝ; x ↦ ½‖x‖² = ½⟨x, x⟩_Z. Likewise, φ̃_y satisfies the h-condition. It follows that φ̃_y has a minimum on L. Since x_y ∈ L, it is obvious that min_{x∈L} φ̃_y(x) = φ_y(x_y). Thus, by Theorem 29, we obtain (95) 0 ∈ ∂φ̃_y(x_y) = ∂_L φ_y(x_y) ⊂ ∇g(F̃(x_y, y)) ∘ ∂_x F̃(x_y, y) = ∇g(F(x_y, y)) ∘ ∂_x F_L(x_y, y). This means that there exists x* ∈ ∂_x F_L(x_y, y) such that (96) ⟨F(x_y, y), x* h⟩ = 0 for all h ∈ L. Thus, we conclude by assumption (2) that F̃(x_y, y) = F(x_y, y) = 0. It is clear by Theorem 31 that x_y is the only solution of the equation F(x, y) = 0 in L. But the question is the uniqueness of this solution in all of X. Although this is not evident a priori, we answer it affirmatively below.

About the uniqueness of x_y ∈ X such that F(x_y, y) = 0, we argue by contradiction. Suppose that there exists x₁ ≠ x_y such that F(x₁, y) = F(x_y, y) = 0. We then choose another finite dimensional subspace of X (again denoted L to keep the same notation) which contains both x₁ and x_y and satisfies dim L = n. We consider the same functions defined in (93), but with this choice of L. We find that min_{x∈L} φ̃_y(x) = φ̃_y(x_y), and by [15] (Proposition 6), Theorem 29, and assumption (2), we conclude that F̃(x_y, y) = 0. Considering the function ψ̃_y defined as in (52) by (97) ψ̃_y(x) ≔ φ̃_y(x + x_y) = ½‖F(x + x_y, y)‖² and following the same approach as in the proof of Theorem 31, we reach a contradiction. Thus, there exists a unique global implicit function f: Y ⟶ X such that F(f(y), y) = 0 for all y ∈ Y.

It remains to be shown that f ≡ 0 on Y. Indeed, since X is an infinite dimensional Banach space, it is possible to find two n-dimensional subspaces L₁ and L₂ of X such that L₁ ∩ L₂ = {0_X}. Let F₁: L₁ × Y ⟶ Z and F₂: L₂ × Y ⟶ Z be the functions defined by (98) F₁(x, y) = F(x, y) and F₂(x, y) = F(x, y). By assumptions (1) and (2), both functions F₁ and F₂ satisfy the assumptions of Theorem 31. Consequently, there exist two functions φ₁: Y ⟶ L₁ and φ₂: Y ⟶ L₂ such that for all y ∈ Y, F(φ₁(y), y) = F(φ₂(y), y) = 0. Then, by the uniqueness of x_y ∈ X such that F(x_y, y) = 0, we have φ₁(y) = f(y) = φ₂(y) ∈ L₁ ∩ L₂. Thus, f(y) = 0 for all y ∈ Y.

By virtue of Lemma 43, the following theorem is a consequence of Theorem 39.

Theorem 40. Let X, Y be Banach spaces and Z a Euclidean space with dim Z = n < ∞. Let F: X × Y ⟶ Z be a locally Lipschitz function. Assume that (1) for every y ∈ Y, there exists 0 < α < 2 such that the function τ_y: X ⟶ ℝ defined by (99) τ_y(x) = ‖F(x, y)‖^α is locally Lipschitz and satisfies the h-condition, where h: [0, +∞) ⟶ [0, +∞) is a continuous nondecreasing function such that (100) ∫₀^∞ ds/(1 + h(s)) = +∞; (2) for any finite dimensional subspace L of X and for any (x, y) ∈ L × Y, ∂_x F_L(x, y) is of maximal rank. Then, (101) for (x, y) ∈ X × Y: F(x, y) = 0 ⇔ x = 0.
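Note that for α = 1 the extra local Lipschitz hypothesis in Theorems 38 and 40 holds automatically (a standard estimate, our own observation rather than a claim of the paper): since the norm is 1-Lipschitz,

```latex
% For alpha = 1, tau_y(x) = ||F(x,y)|| is locally Lipschitz whenever F is:
% if ||F(x,y) - F(x',y)|| <= L ||x - x'|| on a neighborhood, then by the
% reverse triangle inequality
\[
  \bigl|\, \|F(x,y)\| - \|F(x',y)\| \,\bigr|
  \;\le\; \|F(x,y) - F(x',y)\|
  \;\le\; L\,\|x - x'\| .
\]
% For other exponents 0 < alpha < 2 the hypothesis is genuinely needed, since
% t -> t^alpha is not Lipschitz near t = 0 when alpha < 1.
```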
Remark 41. If α > 2, it is unnecessary to add the local Lipschitz condition to the h-condition in the first assumption of Theorems 38 and 40. Indeed, in Theorem 38 for example, since φ_y is locally Lipschitz and α/2 > 1, it follows from Lemma 43 that τ_y = (2φ_y)^{α/2} is also locally Lipschitz; moreover, by Lemma 43, φ_y satisfies the h-condition if and only if τ_y does.

Lemma 42. Let g: ℝ^n ⟶ ℝ₊ be a locally Lipschitz function and let α > 1. If g^α is locally Lipschitz, then for any x, v ∈ ℝ^n with v ≠ 0, we have (102) (g^α)⁰(x; v) = α·g(x)^{α−1} g⁰(x; v).

Proof. Let (w_m)_m be a sequence in ℝ^n and (t_m)_m ⊂ (0, +∞) another sequence such that (103) w_m ⟶ x and t_m ⟶ 0⁺. For fixed m, the function μ: I_m ⟶ ℝ; t ↦ t^α is differentiable, where (104) I_m ≔ {θg(a_m) + (1 − θ)g(b_m): θ ∈ [0, 1]}, with a_m = w_m + t_m v and b_m = w_m. By the mean value theorem, there exists c_m = θ_m g(a_m) + (1 − θ_m)g(b_m) ∈ I_m, θ_m ∈ [0, 1], such that (105) g(a_m)^α − g(b_m)^α = μ(g(a_m)) − μ(g(b_m)) = μ′(c_m)(g(a_m) − g(b_m)) = α·(θ_m g(a_m) + (1 − θ_m)g(b_m))^{α−1}·(g(a_m) − g(b_m)) = K_m·(g(w_m + t_m v) − g(w_m)), where (106) K_m = α·(θ_m g(w_m + t_m v) + (1 − θ_m)g(w_m))^{α−1}. Then, we have (107) [g^α(w_m + t_m v) − g^α(w_m)]/t_m = K_m·[g(w_m + t_m v) − g(w_m)]/t_m. Since g is continuous, there exist a neighborhood V of x and K > 0 such that (108) g(z) ≤ K for all z ∈ V. It follows from the convergence of (w_m, t_m) to (x, 0), the continuity of g, and (108) that (109) lim_{m⟶+∞} K_m = α·g(x)^{α−1}. By (107), (109), and the fact that (110) limsup_{w⟶x, t⟶0⁺} [g(w + tv) − g(w)]/t = g⁰(x; v), we conclude that (111) (g^α)⁰(x; v) = α·g(x)^{α−1} g⁰(x; v).

Lemma 43. Let g: ℝ^n ⟶ ℝ₊ be a locally Lipschitz function, α > 1, and h: [0, +∞) ⟶ [0, +∞) a continuous nondecreasing function such that (112) ∫₀^∞ ds/(1 + h(s)) = +∞. Then, g^α(x) ≔ g(x)^α is a locally Lipschitz function. Moreover, g satisfies the h-condition if and only if g^α satisfies the h-condition.

Proof. Let x̄ ∈ ℝ^n. There exist an open subset V ∍ x̄ of ℝ^n and k > 0 such that (113) |g(x) − g(y)| ≤ k‖x − y‖ for all x, y ∈ V. Let ρ > 0 be such that B̄_ρ(x̄) ≔ {x ∈ ℝ^n: ‖x̄ − x‖ ≤ ρ} ⊂ V. For x, y ∈ B̄_ρ(x̄), as in the previous Lemma 42, there exists θ ∈ [0, 1] such that (114) g^α(x) − g^α(y) = α·(g(x) − g(y))·(θg(x) + (1 − θ)g(y))^{α−1}. Since g is continuous and B̄_ρ(x̄) is compact, let (115) M = max_{z∈B̄_ρ(x̄)} g(z). Then, we have (116) (θg(x) + (1 − θ)g(y))^{α−1} ≤ M^{α−1}. It follows from (113), (114), and (116) that (117) |g^α(x) − g^α(y)| ≤ αkM^{α−1}‖x − y‖ for all x, y ∈ B_ρ(x̄), where B_ρ(x̄) ≔ {x ∈ ℝ^n: ‖x̄ − x‖ < ρ} ⊂ V is open. Then, g^α is locally Lipschitz.

For the second part of Lemma 43, it suffices to check that (u_m)_{m≥0} is an h-sequence of g if and only if it is an h-sequence of g^α. Let (v_m)_{m≥0} ⊂ ℝ^n be an h-sequence of g. Then, there exist q > 0 and (τ_m)_m ⊂ (0, +∞) with τ_m ⟶ 0⁺ such that (118) g(v_m) ≤ q for all m ≥ 0 and (119) g⁰(v_m; v − v_m) ≥ −τ_m‖v − v_m‖/(1 + h(‖v_m‖)) for all v ∈ ℝ^n. It follows from (118) that (g(v_m)^α)_m is bounded. From Lemma 42 and inequality (119), we deduce that (120) (g^α)⁰(v_m; v − v_m) ≥ −τ̄_m‖v − v_m‖/(1 + h(‖v_m‖)) for all v ∈ ℝ^n, with τ̄_m ≔ τ_m αq^{α−1} ⟶ 0⁺. Thus, (v_m)_{m≥0} is an h-sequence of g^α.

Conversely, let (u_m)_{m≥0} ⊂ ℝ^n be an h-sequence of g^α. Then, there exists p > 0 such that (121) g(u_m)^α ≤ p for all m ≥ 0 and (122) (g^α)⁰(u_m; v − u_m) ≥ −ε_m‖v − u_m‖/(1 + h(‖u_m‖)) for all v ∈ ℝ^n, with ε_m ⟶ 0⁺. It follows from (121) that there exists p̄ > 0 such that (123) g(u_m) ≤ p̄ for all m ≥ 0. By Lemma 42, (122), and (123), we have (124) g⁰(u_m; v − u_m) ≥ −δ_m‖v − u_m‖/(1 + h(‖u_m‖)) for all v ∈ ℝ^n, with δ_m ≔ ε_m αp̄^{α−1} ⟶ 0⁺. Therefore, (u_m)_{m≥0} is also an h-sequence of g. Hence, for any α > 1, g satisfies the h-condition if and only if g^α satisfies the h-condition.

But what about 0 < α < 1?

Corollary 44. Let g: ℝ^n ⟶ ℝ₊ be a function and 0 < α < 1 be such that g^α is locally Lipschitz. Then, g is locally Lipschitz, and for any continuous nondecreasing function h: ℝ₊ ⟶ ℝ₊ such that (125) ∫₀^∞ ds/(1 + h(s)) = +∞, g satisfies the h-condition if and only if g^α satisfies the h-condition.

Proof. We notice that g = (g^α)^{1/α} with 1/α > 1. Then, we apply Lemma 43.
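A quick check of formula (102) on the model case g = |·| (our own computation): take n = 1, g(x) = |x|, and α = 2, so that g^α(x) = x² is smooth.

```latex
% Lemma 42 checked on g(x) = |x| with alpha = 2, so g^alpha(x) = x^2.
% At x != 0: (x^2)' v = 2xv, while alpha g(x)^{alpha-1} g^0(x;v)
%            = 2|x| sgn(x) v = 2xv.                                (agrees)
% At the kink x = 0:
\[
  (g^{\alpha})^{0}(0;v)
  = \limsup_{w \to 0,\; t \to 0^{+}} \frac{(w+tv)^{2} - w^{2}}{t}
  = \limsup_{w \to 0,\; t \to 0^{+}} \bigl( 2wv + t v^{2} \bigr) = 0,
\]
\[
  \alpha\, g(0)^{\alpha-1}\, g^{0}(0;v) \;=\; 2 \cdot 0 \cdot |v| \;=\; 0,
\]
% so (102) holds even where g fails to be differentiable.
```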
## 4. Example of a Noncoercive Function Satisfying the h-Condition

To illustrate that the compactness condition allowing us to obtain the existence of a global implicit function in our main results is weaker than the coercivity used in Theorem 30, we provide in this section an example of a noncoercive, locally Lipschitz function satisfying the h-condition. We follow the idea used by Chen and Tang in [13] (Theorem 3.3).

Let 1 < p < ∞. Define (126) L^p(0, T; ℝ^N) = {u ∈ L¹(0, T; ℝ^N): ∫₀^T |u(t)|^p dt < ∞}, with the norm (127) ‖u‖_p = (∫₀^T |u|^p dt)^{1/p}. For u ∈ L¹_loc(0, T; ℝ^N), u′ is said to be the weak derivative of u if u′ ∈ L¹_loc(0, T; ℝ^N) and (128) ∫₀^T u′ϕ dt = −∫₀^T uϕ′ dt for all ϕ ∈ C₀^∞(0, T; ℝ^N). Let (129) W₀^{1,p}(0, T; ℝ^N) = {u ∈ L^p(0, T; ℝ^N): u(0) = u(T), u′ ∈ L^p(0, T; ℝ^N)}. W₀^{1,p}(0, T; ℝ^N) is a reflexive Banach space (see [13]) with the norm (130) ‖u‖_{W₀^{1,p}} = (∫₀^T (|u|^p + |u′|^p) dt)^{1/p}.

Remark 45 (see [13]). We have the direct decomposition (131) W₀^{1,p}(0, T; ℝ^N) = ℝ^N ⊕ V, where V = {v ∈ W₀^{1,p}(0, T; ℝ^N): ∫₀^T v(t) dt = 0}.

Consider now the functional (132) J(u) = ∫₀^T (1/p)|u′|^p dt, u ∈ W₀^{1,p}(0, T; ℝ^N). We know (see [26]) that J ∈ C¹(W₀^{1,p}(0, T; ℝ^N), ℝ) and that the p-Laplacian operator u ↦ −(|u′|^{p−2}u′)′ is the derivative of J in the weak sense. That is, (133) A = J′: W₀^{1,p}(0, T; ℝ^N) ⟶ (W₀^{1,p}(0, T; ℝ^N))*, (134) ⟨Au, v⟩ = ∫₀^T ⟨|u′(t)|^{p−2}u′(t), v′(t)⟩_{ℝ^N} dt for u, v ∈ W₀^{1,p}(0, T; ℝ^N).

Proposition 46 (see [27], Fan and Zhao). J′ is a mapping of type (S₊); i.e., if (135) u_m ⇀ u and limsup_{m⟶∞} ⟨J′(u_m) − J′(u), u_m − u⟩ ≤ 0, then (u_m)_m has a convergent subsequence in W₀^{1,p}(0, T; ℝ^N).

For every u ∈ W₀^{1,p}(0, T; ℝ^N), set (136) ū = (1/T)∫₀^T u(t) dt and (137) ũ(t) = u(t) − ū. We have the following Poincaré-Wirtinger inequality (see [28]): (138) there exists a > 0 such that ‖ũ‖_∞ ≤ a‖u′‖_p for all u ∈ W₀^{1,p}(0, T; ℝ^N).

We consider the functional φ: W₀^{1,p}(0, T; ℝ^N) ⟶ ℝ defined by (139) φ(u) = ∫₀^T (1/p)|u′|^p dt − ∫₀^T j(t, u) dt, u ∈ W₀^{1,p}(0, T; ℝ^N), where j: [0, T] × ℝ^N ⟶ ℝ is given by the norm ‖·‖ on ℝ^N: (140) j(t, u) = ‖u‖ = (∑_{i=1}^N u_i²)^{1/2} for u ≔ (u₁, u₂, ⋯, u_N) ∈ ℝ^N. Let h(s) = s. Following the same approach as Chen and Tang in [13], we will show that the functional φ satisfies the h-condition and is noncoercive on all of W₀^{1,p}(0, T; ℝ^N).

We show that the function j satisfies the following assumptions: (1) for all u ∈ ℝ^N, t ↦ j(t, u) is measurable; (2) for almost all t ∈ [0, T], u ↦ j(t, u) is locally Lipschitz; (3) for every r > 0, there exists α_r ∈ L¹(0, T) such that for almost all t ∈ [0, T], all ‖u‖ ≤ r, and all w ∈ ∂j(t, u), we have ‖w‖ ≤ α_r(t), where ∂j(t, u) denotes Clarke's generalized gradient of j with respect to the second variable; (4) there exist 0 < μ < p and M > 0 such that for almost all t ∈ [0, T] and all ‖u‖ ≥ M, we have (141) j⁰(t, u; u) < μ j(t, u); (5) j(t, u) ⟶ +∞ uniformly for almost all t ∈ [0, T] as ‖u‖ ⟶ ∞.

Obviously, the function j defined in (140) satisfies conditions (1), (2), and (5). In addition, for (t, u) ∈ [0, T] × ℝ^N, we have the following: (i) if u ≠ 0, ∂j(t, u) = {u/‖u‖}; (ii) if u = 0, ∂j(t, 0) = {w ∈ ℝ^N: ‖y‖ ≥ ⟨w, y⟩ for any y ∈ ℝ^N} = B̄(0, 1); that is, (142) ∂j(t, 0) = B̄(0, 1) ≔ {y ∈ ℝ^N: ‖y‖ ≤ 1}. Indeed, for t ∈ [0, T], the function j(t, ·) is convex, so Clarke's generalized gradient ∂j(t, u) of j(t, ·) at a point u coincides with the subdifferential of j(t, ·) in the sense of convex analysis (see Proposition 5). We recall also that the norm of a Hilbert space is Fréchet differentiable at any point u ≠ 0. Thus, (143) ‖w‖ ≤ 1 for all w ∈ ∂j(t, u). Consequently, according to (143), for every r > 0, taking α_r(t) = 1, we have α_r ∈ L¹(0, T) and (144) ‖u‖ ≤ r ⇒ ‖w‖ ≤ α_r(t) for all w ∈ ∂j(t, u). Thus, the function j satisfies assumption (3).

On the other hand, since 1 < p, taking μ = (1 + p)/2, we have (145) 0 < 1 < μ < p. Moreover, for M > 0, we have (146) ‖u‖ ≥ M ⇒ u ≠ 0 and (147) μ j(t, u) = μ‖u‖. Then, (148) j⁰(t, u; u) = ⟨u/‖u‖, u⟩ = ‖u‖. It follows from (145), (146), and (148) that (149) ‖u‖ ≥ M ⇒ j⁰(t, u; u) < μ j(t, u). Thus, the function j satisfies assumption (4).

Under the previous assumptions, φ is locally Lipschitz (see [13] (Theorem 3.3)).

(1) By (130), for any constant function u ∈ ℝ^N, we have (150) ‖u‖_{W₀^{1,p}} = T^{1/p}‖u‖, ∫₀^T (1/p)|u′|^p dt = 0, and φ(u) = −∫₀^T ‖u‖ dt = −T‖u‖. Then, (151) lim_{‖u‖⟶+∞, u∈ℝ^N} φ(u) = lim_{‖u‖⟶+∞} (−T‖u‖) = −∞. Thus, φ is not coercive.
(2) Let (u_m)_{m≥1} be an h-sequence of φ; i.e., there exists M₁ > 0 such that (152) |φ(u_m)| ≤ M₁ and (153) lim_{m⟶+∞} (1 + ‖u_m‖)γ(u_m) = 0, where (154) γ(u_m) = min_{w*∈∂φ(u_m)} ‖w*‖. Without loss of generality, we suppose that u_m ≠ 0 for all m ≥ 1. According to Proposition 6, let u*_m ∈ ∂φ(u_m) be such that ‖u*_m‖ = γ(u_m). By the definition (134) of the operator A, we have (155) u*_m = Au_m − w_m, with w_m ∈ ∂j(t, u_m). From (153), we have (156) ⟨u*_m, u_m⟩ = ∫₀^T |u′_m(t)|^p dt − ∫₀^T ⟨w_m(t), u_m(t)⟩ dt ≤ ε_m, with ε_m ↓ 0. Thus, it follows from Definition 4 and inequality (156) that (157) ∫₀^T |u′_m(t)|^p dt − ∫₀^T j⁰(t, u_m(t); u_m(t)) dt ≤ ε_m. Since u_m ≠ 0, according to (148), inequality (157) implies (158) ∫₀^T |u′_m(t)|^p dt − ∫₀^T ‖u_m(t)‖ dt ≤ ε_m. From (152), we have (159) −(μ/p)∫₀^T |u′_m(t)|^p dt + ∫₀^T μ‖u_m(t)‖ dt ≤ μM₁. It follows from (158) and (159) that (160) (1 − μ/p)∫₀^T |u′_m(t)|^p dt + (μ − 1)∫₀^T ‖u_m(t)‖ dt ≤ M_m, with M_m = ε_m + μM₁. Since μ > 1, (160) gives (161) (1 − μ/p)∫₀^T |u′_m(t)|^p dt ≤ M_m for m ≥ 1, with M_m ⟶ μM₁. By (161), there exists M₀ > 0 such that for m ≥ 1, (162) ‖u′_m‖_p ≤ M₀. From (161) and the Poincaré-Wirtinger inequality (138), (ũ_m)_m is bounded in W₀^{1,p}(0, T; ℝ^N). Exploiting (152) once again and using (136), we get (163) ∫₀^T ‖u_m(t)‖ dt ≤ (1/p)∫₀^T |ũ′_m|^p dt + M₁ for m ≥ 1. Since (ũ_m)_m is bounded, it follows from (163) that there exists M₂ > 0 such that (164) ∫₀^T ‖u_m(t)‖ dt ≤ M₂ for all m ≥ 1. Thus, there exists M₃ > 0 such that for t ∈ [0, T] and m ≥ 1, (165) ‖u_m(t)‖ ≤ M₃. By (162) and (165), we infer that (u_m)_{m≥1} ⊂ W₀^{1,p}(0, T; ℝ^N) is bounded, and so, passing to a subsequence if necessary, we may assume that (166) u_m ⇀ u in W₀^{1,p}(0, T; ℝ^N) and u_m ⟶ u in C⁰([0, T]; ℝ^N).

Next, we prove that u_m ⟶ u in W₀^{1,p}(0, T; ℝ^N). By Proposition 46, it suffices to prove that the following inequality holds: (167) limsup_{m⟶∞} ⟨Au_m − Au, u_m − u⟩ ≤ 0. In fact, from the choice of the sequence (u_m)_{m≥1}, we have (168) ⟨u*_m, u_m − u⟩ ≤ ε_m ↓ 0. Then, by (155), we have (169) ⟨Au_m, u_m − u⟩ − ∫₀^T ⟨w_m(t), u_m(t) − u(t)⟩_{ℝ^N} dt ≤ ε_m for all m ≥ 1. By assumption (3), (w_m)_m ⊂ L¹(0, T) is bounded and (170) lim_{m⟶∞} ∫₀^T ⟨w_m(t), u_m(t) − u(t)⟩_{ℝ^N} dt = 0. Then, (171) limsup_{m⟶∞} ⟨Au_m, u_m − u⟩ ≤ 0. Since u_m ⇀ u, we obtain (172) limsup_{m⟶∞} ⟨Au_m − Au, u_m − u⟩ ≤ 0. By Proposition 46, (u_m)_m has a convergent subsequence in W₀^{1,p}(0, T; ℝ^N); hence φ satisfies the h-condition.
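A small numerical illustration of the two computations above (our own sketch; the discretization, p = 2, N = 1, T = 1, and the grid size are arbitrary choices): along constant functions the discretized φ of (139) decreases without bound, while the gradient term makes φ grow along oscillating, mean-zero directions.

```python
import numpy as np

# Discretization of phi(u) = int_0^T (1/p)|u'|^p dt - int_0^T |u| dt  (see (139))
# on a uniform grid, with p = 2, N = 1, T = 1 (illustrative choices).
p, T, n = 2.0, 1.0, 1000
t = np.linspace(0.0, T, n + 1)
dt = T / n

def phi(u):
    du = np.diff(u) / dt                       # forward-difference derivative
    grad_term = np.sum((1.0 / p) * np.abs(du) ** p) * dt
    norm_term = np.sum(np.abs(u[:-1])) * dt    # int |u| dt (left Riemann sum)
    return grad_term - norm_term

for c in [1.0, 10.0, 100.0]:
    print("constant c =", c, " phi =", phi(c * np.ones(n + 1)))
    # phi(c) = -T*|c|: decreases to -infinity, so phi is NOT coercive (cf. (151))

for a in [1.0, 10.0, 100.0]:
    u = a * np.sin(2 * np.pi * t / T)          # mean-zero, periodic direction
    print("amplitude a =", a, " phi =", phi(u))
    # here the derivative term (of order a^2) dominates the -a term: phi grows
```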
## 5. An Application

Inspired by the example result of Galewski-Rădulescu [6] (Theorem 7), we provide in this section an existence and uniqueness result for the problem (173) Ax = F(x) + ξ, where ξ ∈ ℝ^n is fixed, A is an n × n matrix which need not be positive definite, negative definite, or symmetric, and F: ℝ^n ⟶ ℝ^n is a locally Lipschitz function.

Theorem 47. Let A be an n × n matrix and F: ℝ^n ⟶ ℝ^n a locally Lipschitz mapping satisfying the following conditions: (1) for any ξ ∈ ℝ^n, there exists a continuous nondecreasing function h: ℝ₊ ⟶ ℝ₊ such that (174) ∫₀^∞ ds/(1 + h(s)) = +∞ and the functional φ_ξ: ℝ^n ⟶ ℝ defined by (175) φ_ξ(x) = ‖Ax − F(x) − ξ‖ satisfies the h-condition; (2) for any x ∈ ℝ^n and every T ∈ ∂F(x), A − T is invertible. Then, problem (173) has a unique solution for each fixed ξ ∈ ℝ^n. Moreover, the map that assigns to each ξ ∈ ℝ^n the unique solution of problem (173) is locally Lipschitz.

Proof. Consider the function f(x) = Ax − F(x) from ℝ^n to itself. By assumption (1) and Lemma 43, for any ξ ∈ ℝ^n, the functional (176) φ_ξ(x) = ½‖f(x) − ξ‖² = ½‖Ax − F(x) − ξ‖² satisfies the h-condition. In addition, by assumption (2), ∂f(x) = A − ∂F(x) is of maximal rank for any x ∈ ℝ^n. Then, we complete the proof by applying Theorem 36.

Example of a matrix A and a function F satisfying the conditions of Theorem 47. Let us take the matrix (177) A = [−3, 1; 2, −1] and the function F defined from ℝ² to ℝ² by (178) F(u) = (x³ + |y|, 2x + |x| + y³), u = (x, y). Considering the Euclidean norm (179) ‖u‖ = (x² + y²)^{1/2} for all u ∈ ℝ², we have (180) ‖(x³, y³)‖² − (½‖u‖³)² = x⁶ + y⁶ − ¼(x² + y²)³ = ¾(x² + y²)(x² − y²)² ≥ 0. It follows that (181) ‖(x³, y³)‖ ≥ ½‖u‖³. On the other hand, for u ∈ ℝ², (182) ‖Au‖ ≤ ‖A‖·‖u‖. From (181) and (182), we have (183) ‖F(u) − Au − ξ‖ = ‖(x³ + |y|, 2x + |x| + y³) − Au − ξ‖ ≥ ‖(x³, y³)‖ − ‖(0, 2x)‖ − ‖(|y|, |x|)‖ − ‖Au‖ − ‖ξ‖ ≥ ½‖u‖³ − 2‖u‖ − ‖u‖ − ‖A‖·‖u‖ − ‖ξ‖ = ‖u‖(½‖u‖² − 3 − ‖A‖) − ‖ξ‖. Hence, for fixed ξ ∈ ℝ², the function φ_ξ: ℝ² ⟶ ℝ defined by (184) φ_ξ(u) = ‖F(u) − Au − ξ‖ is coercive. Consequently, the function φ_ξ satisfies the h-condition.

Let u = (x, y) ∈ ℝ².

(1) If x ≠ 0 and y ≠ 0, then F is differentiable at u and ∂F(u) − A = {J_F(u) − A}, given by (185) ∂F(u) − A = [3x² + 3, sgn(y) − 1; sgn(x), 3y² + 1]. Thus, ∂F(u) − A is one of the following matrices: (186) [3x² + 3, 0; 1, 3y² + 1], [3x² + 3, −2; 1, 3y² + 1], [3x² + 3, −2; −1, 3y² + 1], [3x² + 3, 0; −1, 3y² + 1]. In all these cases, we have det(∂F(u) − A) ≠ 0.

(2) If x < 0 and y = 0, then ∂F(u) is given by (187) ∂F(u) = co{[3x², −1; 1, 0], [3x², 1; 1, 0]} = {[3x², s; 1, 0]: −1 ≤ s ≤ 1}. It follows that (188) ∂F(u) − A = {[3x² + 3, s − 1; −1, 1]: −1 ≤ s ≤ 1}. Then, for T ∈ ∂F(u), there exists s ∈ [−1, 1] such that (189) det(T − A) = (3x² + 3) + (s − 1) = 3x² + s + 2 ≥ 3x² + 1 > 0.

(3) If x > 0 and y = 0, then ∂F(u) is given by (190) ∂F(u) = co{[3x², 1; 3, 0], [3x², −1; 3, 0]} = {[3x², s; 3, 0]: −1 ≤ s ≤ 1}. It follows that (191) ∂F(u) − A = {[3x² + 3, s − 1; 1, 1]: −1 ≤ s ≤ 1}. Then, for T ∈ ∂F(u), there exists s ∈ [−1, 1] such that (192) det(T − A) = (3x² + 3) − (s − 1) = (3x² + 1) + (3 − s) ≥ 3x² + 1 > 0.

(4) If x = 0 and y < 0, then ∂F(u) is given by (193) ∂F(u) = co{[0, −1; 3, 3y²], [0, −1; 1, 3y²]} = {[0, −1; λ, 3y²]: 1 ≤ λ ≤ 3}. It follows that (194) ∂F(u) − A = {[3, −2; λ − 2, 3y² + 1]: 1 ≤ λ ≤ 3}. Then, for T ∈ ∂F(u), there exists λ ∈ [1, 3] such that (195) det(T − A) = 3(3y² + 1) + 2(λ − 2) = 9y² + 2λ − 1 ≥ 2λ − 1 > 0.

(5) If x = 0 and y > 0, then ∂F(u) is given by (196) ∂F(u) = co{[0, 1; 3, 3y²], [0, 1; 1, 3y²]} = {[0, 1; λ, 3y²]: 1 ≤ λ ≤ 3}. It follows that (197) ∂F(u) − A = {[3, 0; λ − 2, 3y² + 1]: 1 ≤ λ ≤ 3}. Then, for T ∈ ∂F(u), (198) det(T − A) = 3(3y² + 1) > 0.

(6) If u = (0, 0), then ∂F(u) is given by (199) ∂F(u) = co{[0, 1; 3, 0], [0, −1; 3, 0], [0, −1; 1, 0], [0, 1; 1, 0]} = {[0, τ; s, 0]: (τ, s) ∈ [−1, 1] × [1, 3]}. It follows that (200) ∂F(0, 0) − A = {[3, τ − 1; s − 2, 1]: (τ, s) ∈ [−1, 1] × [1, 3]}. Then, for T ∈ ∂F(u), there exists (τ, s) ∈ [−1, 1] × [1, 3] such that (201) det(T − A) = 3 − (s − 2)(τ − 1) = (s − 1)(1 − τ) + τ + 2 ≥ τ + 2 > 0.

## 6. Conclusion

We have provided a general nonsmooth global implicit function theorem that yields Galewski-Rădulescu's nonsmooth global implicit function theorem, together with a series of results on the existence, uniqueness, and possible continuity of global implicit functions parametrizing the zeros of locally Lipschitz functions. Our results deal with functions defined on infinite dimensional Banach spaces and thus also generalize Clarke's classical implicit function theorem for functions F: ℝ^n × ℝ^p ⟶ ℝ^n by replacing ℝ^p with an arbitrary Banach space Y. We have worked throughout under the h-condition, which is weaker than the coercivity required in [6]. Our method is based on a variational approach and a recent nonsmooth version of the Mountain Pass Theorem.

More precisely, we first proved Theorem 31 on the existence and uniqueness of the global implicit function for equations F(x, y) = 0, where F: ℝ^n × Y ⟶ ℝ^n is a locally Lipschitz function and Y a Banach space. Secondly, we observed that this extension to infinite dimension need not guarantee the continuity of the global implicit function; we therefore added a hypothesis to Theorem 31 in order to obtain the continuity of the implicit function f (Theorem 35). Moreover, our Lemmas 42 and 43 allowed us to prove further, more general results on the existence and uniqueness of global implicit functions under the h-condition on the function x ↦ ‖F(x, y)‖^α with 0 < α < 2.

---

*Source: 1021461-2022-07-28.xml*
1021461-2022-07-28_1021461-2022-07-28.md
51,226
On Nonsmooth Global Implicit Function Theorems for Locally Lipschitz Functions from Banach Spaces to Euclidean Spaces
Guy Degla; Cyrille Dansou; Fortuné Dohemeto
Abstract and Applied Analysis (2022)
Mathematical Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1021461
1021461-2022-07-28.xml
--- ## Abstract In this paper, we establish a generalization of the Galewski-Rădulescu nonsmooth global implicit function theorem to locally Lipschitz functions defined from infinite dimensional Banach spaces into Euclidean spaces. Moreover, we derive, under suitable conditions, a series of results on the existence, uniqueness, and possible continuity of global implicit functions that parametrize the set of zeros of locally Lipschitz functions. Our methods rely on a nonsmooth critical point theory based on a generalization of the Ekeland variational principle. --- ## Body ## 1. Introduction Many mathematical models involving real or vector-valued functions stand as equations of the form(1)fx=0.For complex phenomena, the unknownx is often a vector-variable x=x1,x2,⋯,xn belonging to ℝn or to an abstract Banach space having a direct sum V1⊕V2⊕⋯⊕Vn. It may even happen that equation (1) is just a state equation depending in fact on a parameter (or a control) h. In this case, it takes the form (2)Fx,h=0,and the most aspiring aim of mathematical analysis is to know the local or global structure of the solution setF−10 by finding out whether it is nonempty, discrete, a graph or a manifold, etc.The essence of the implicit function theorem in mathematical analysis is to ascertain if the solutions to an equation involving parameters exist and may be viewed locally as a function of those parameters and to know a priori which properties this function might inherit from those of the data. Geometrically, implicit function theorems provide sufficient conditions under which the solution set in some neighborhood of a given solution is the graph of some function. The well-known implicit function theorems deal with a continuous differentiability hypothesis and in such cases are equivalent to inverse function theorems (see [1]). It was originally conceived (in the complex variable form in a pioneering work by Lagrange) over two centuries ago to tackle celestial mechanics problems. Subsequently, it attracted Cauchy who managed to provide its rigorous version and became its discoverer. Later, the generalization of this implicit function theorem to the case of finitely many real variables was proved for the first time by Dini. In this way, the classical theory of implicit functions started with single variables and have progressed through multiple real variables to equations in infinite dimensional spaces, e.g., functional equations involving integral or differential operators. Nowadays, most categories of smooth functions have virtually their own version of the implicit function theorem, and there are special versions adapted to Banach spaces and algebraic geometry and to various types of geometrically degenerate situations. Some of these (such as Nash-Moser implicit function theorem) are quite sophisticated and have been used in amazing ways to solve important open problems (in Riemannian manifolds, partial differential equations, functional analysis, …) [1]. There are also in the literature [2, 3] some implicit numerical schemes used to approximate the solutions of certain differential equations and that could be regarded as implicit functions in sequence spaces.Nevertheless, there are interesting phenomena governed by parametric equations with nonsmooth data which need to be stressed and are more and more attracting researchers. 
Indeed, the implicit function theorems for nondifferentiable functions are less known but are regaining interest in the literature due to their importance in applied sciences that deal with functions having less regularity than smoothness. Few versions have been stated in Euclidean spaces for functions that are continuous with respect to all their variables and (partially) monotone with respect to some of their variables [4, 5].Recently, Galewski and Rădulescu [6] proved a generalized global implicit function theorem for locally Lipschitz function F:ℝn×ℝp⟶ℝn, by using a nonsmooth Palais-Smale condition and a coercivity condition. Their proof is essentially based on the fact that a locally Lipschitz function in a finite dimension is almost everywhere differentiable with respect to the Lebesgue measure according to Rademacher’s theorem [7]. It is known that Rademacher’s theorem for locally Lipschitz functions has no direct infinite dimensional extension. This justifies all difficulties to have conditions of existence of local or global implicit function in the case of locally Lipschitz function defined on infinite dimensional space (see [8]). Several works have been done to overcome these difficulties. For example, the papers [9, 10] provided conditions for surjectivity and inversion of locally Lipschitz functions between Banach spaces under assumptions formulated in terms of pseudo-Jacobian.In this work, our aim is to establish under suitable conditions a global implicit function theorem for locally Lipschitz mapF:X×Y⟶H, where X,Y are real Banach spaces and H is a real Euclidean space, and to provide conditions under which this implicit function is continuous. This extends Theorem 30 of Galewski and Rădulescu to the locally Lipschitz functions in infinite dimension with a very relatively simple method compared to those used for this purpose. Knowing that there exist noncoercive functions satisfying the h-condition (see Definition 18 and Remark 19), we work in this paper under the h-condition using a variational approach and applying a recent nonsmooth version of Mountain Pass Theorem, namely, Theorem 27.The contribution of this work is quadruple:(i) An improvement of the classical Clarke’s implicit function Theorem24 for function F:ℝn×ℝp⟶ℝn by replacing ℝp by any Banach space Y (Remark 26). Consequently, by considering the approach used in [6] (Theorem 4) and Remark 26, we prove our first main result (Theorem 31) on the existence and uniqueness of global implicit function theorem for equation Fx,y=0, where F:ℝn×Y⟶ℝn with Y a Banach space(ii) The proof of the continuity of the implicit function based on a simple additional hypothesis, Theorem35(iii) The weakening of the coercivity assumption used in [6] by considering a compactness type condition called h-condition in [11](iv) By our Lemmas42 and 43, we obtain Theorem 38 on the existence and uniqueness of global implicit functions under the h-condition on the function x↦Fx,yα with 0<α<2. This is a generalization of the result (49) in the nonsmooth case. It also generalizes the result [12] (Theorem 3.6) in the C1 caseThis article is organized as follows. In Section2, we recall some preliminary and auxilliary results on Clarke’s generalized gradient, Clarke’s generalized Jacobian, and the h-condition for locally Lipschitz functions. 
Section 3 is devoted to our main results established under the h-condition, on the existence and uniqueness of global implicit function for equation Fx,y=0, where F is defined from ℝn×Y to ℝn and Y is a Banach space, namely, Theorems 31, 35, 38, 39, and 40. In Section 4, we give an example of a function satisfying our conditions of existence of implicit function but not the conditions of Theorem 1 of 6 which we have extended. This is the energy functional defined in (139), of a certain differential inclusion problem involving the p-Laplacian [13]. ## 2. Preliminaries and Auxilliary Results LetU be a nonempty open subset of a Banach space X and let f:U⟶ℝ be a function. We recall that f is Lipschitz if there exists some constant K>0 such that for all y and z in U, we have (3)fy−fz≤Ky−z.Forx∈U, f is said to be locally Lipschitz at x if there exists an open neighborhood V⊂U of x on which the restriction of f is Lipschitz. We will say that f is locally Lipschitz on U if f is locally Lipschitz at every point x∈U. We recall that any convex function has this property in Euclidean spaces.Definition 1. Letf:U⊂X⟶ℝ be a locally Lipschitz function. Let x∈U and v∈X\0. The generalized directional derivative of f at x in the direction v, denoted by f0x;v, is defined by (4)f0x;v≔limsupw⟶xt⟶0+fw+tv−fwt.Observe at once thatf0x;v is a (finite) number for all v∈X\0.Indeed, letx∈V⊂U and let K>0 be such that (3) holds for all y,z∈V, with V bounded (without loss of generality). Let wmm>0⊂X be a sequence such that wm⟶x and tm a sequence of 0;+∞ such that tm⟶0. For v∈X\0, as m⟶+∞, the vectors wm+tmv will belong to V. Indeed, by boundedness of V, there exists ρ>0 such that x−y<ρ⇒y∈V. Then, for m large enough, we have (5)wm+tmv−x≤wm−x+tmv<ρ2+ρ2=ρ.Thus, there existsm0>0 such that for all m>m0, we have (6)fwm+tmv−fwmtm≤Kv.It follows from (3) and (6) that for all v∈X, (7)f0x,v≤Kv.Remark 2. Iff is locally Lipschitz and Gâteaux differentiable at x, then its Gâteaux differential fG′x at x coincides with its generalized gradient. That is, (8)f0x;v=fG′x⋅vforallv∈X.Proposition 3. The functionv↦f0x;v is positively homogeneous and subadditive.Proof. The homogeneity is an immediate consequence of Definition1. We prove the subadditivity. Let v and z be in X. Then, (9)f0x;v+z=limsupw⟶xt⟶0+fw+tv+tz−fwt≤limsupw⟶xt⟶0+fw+tz+tv−fw+tzt+limsupw⟶xt⟶0+fw+tz−fwt≤limsupr⟶xt⟶0+fr+tv−frt+limsupw⟶xt⟶0+fw+tz−fwt,r≔w+tz=f0x;v+f0x;z.From the previous Proposition3 and the Hahn-Banach theorem [14] (p. 62), it follows that there exists at least one linear function ξ∗:X⟶ℝ satisfying (10)f0x;v≥ξ∗,vfor allv∈X. From (10) and (7) also rewritten with −v, we obtain (11)ξ∗,v≤Kvfor allv∈X. Thus, ξ∗∈X∗ (as usual, X∗ denotes the (continuous) dual of X and <.,.> is the duality pairing between X and X∗). Thus, we can give the following definition.Definition 4. Letf:U⊂X⟶ℝ be locally Lipschitz at a point x∈U. Clarke’s generalized gradient of f at x, denoted ∂fx, is the (nonempty) set of all ξ∗∈X∗ satisfying (10), i.e., (12)∂fxs≔ξ∗∈X∗:∀v∈X,f0x;v≥ξ∗,v.We refer to [15–17] for some of the fundamental results in the calculus of generalized gradients. In particular, we shall need the following.Proposition 5 (see [18], Chang). Iff:U⟶ℝ is a convex function, then Clarke’s generalized gradient of f at x, defined in (12), coincides with the subdifferential of f in the sense of convex analysis.Proposition 6 (see [11], Chen). LetX be a real Banach space and f:X⟶ℝ be a locally Lipschitz function. 
Then, the function γ:X⟶ℝ defined by (13)γu≔minx∗∈∂fux∗,forallu∈X, is well defined and lower semicontinuous.Proposition 7 (see [15], Proposition 6). Ifx0 is a minimizer of f, then 0∈∂fx0.Remark 8. LetX be an infinite dimensional Banach space and f:X⟶ℝp be a locally Lipschitz mapping. For any finite dimensional subspace of X, it makes sense to talk about Clarke’s generalized Jacobian of the function fL:L∍x↦fx∈ℝp at every point x∈L.Notation 9. . For a locally Lipschitz functionf:ℝn⟶ℝp and x∈ℝn, we consider the set Ωfx defined by Ωfx≔xmmsequenceinℝn such that xm⟶x and f is differentiable at xm.LetX,Z be two Banach spaces such that dim Z=n<∞. Let F:X⟶Z be a locally Lipschitz mapping and L a finite dimensional subspace of X. For x∈L, we denote by ∂FLx Clarke’s generalized Jacobian at a point x, of the restriction of F to L, namely, the function (14)FL:L⟶Z;x↦Fx.LetY be a Banach space and consider a function F:ℝn×Y⟶ℝp which is locally Lipschitz. For any x¯,y¯∈ℝn×Y, ∂xFx¯,y¯ denotes Clarke’s generalized Jacobian at a point x¯ of the function (15)F⋅,y¯:ℝn⟶ℝp,x↦Fx,y¯.LetX,Y,Z be three Banach spaces with dim Z<∞ and F:X×Y⟶Z a locally Lipschitz function. For any finite dimensional subspace L of X and for every x¯,y¯∈L×Y, ∂xFLx¯,y¯ will denote Clarke’s generalized Jacobian of the function F~:L∍x↦F~x≔Fx,y¯∈Z at a point x¯.Theorem 10 (Rademacher). Letf:ℝn⟶ℝ be a locally Lipschitz function. Then, f is almost everywhere differentiable with respect to Lebesgue measure.According to Rademacher’s Theorem10, we have the following.Proposition 11 (see [19], Clarke). Letf:ℝn⟶ℝ be a locally Lipschitz function and x∈ℝn. If ∂fx denotes the set defined by (12), then (16)∂fx=colimm⟶+∞f′xm:xmm∈ℕ∈Ωfx.Note that, sincef is almost everywhere differentiable with respect to Lebesgue measure, there exists a sequence xmm∈ℕ⊂ℝn such that xm⟶x, and for any m∈ℕ, f is differentiable at xm. So, Ωfx≠∅. In addition for any xmm∈ℕ∈Ωfx and for any v∈ℝn, we have (17)f′xm⋅v≤Kv,where K is the Lipschitz constant of f. This means that f′xmm is bounded in Lℝn,ℝ which has a finite dimension. Then, there exists a subsequence f′xσmm of f′xmm that converges to some x∗∈Lℝn,ℝ. That is, (18)limm⟶+∞f′xσm=x∗.Thus, the convex hull of such limits in (18) is ∂fx.Even if the functionf is defined from ℝn to ℝp, regarding (17) and (18) component by component, we notice that the set defined by (16) is nonempty, compact, and convex in Lℝn,ℝp (see [20] (Definition 1)). Thus, this characterization of ∂fx stated in Proposition 11 is extended to locally Lipschitz functions defined from ℝn to ℝp. In this case, ∂fx is called Clarke’s generalized Jacobian of the function f at a point x.Definition 12. Letf:ℝn⟶ℝp be a locally Lipschitz mapping and x∈ℝn. Clarke’s generalized Jacobian of f at x also denoted by ∂fx is defined as follows: (19)∂fx=colimm⟶+∞f′xm:xmm∈ℕ∈Ωfx.The following notions will also be useful in the sequel.Definition 13. Letf:ℝn⟶ℝp be a locally Lipschitz mapping and x∈ℝn with n≥p. We say that ∂fx is of maximal rank if for all x∗∈∂fx, x∗ is surjective.Definition 14. LetX be a metric space. A function f:X⟶ℝ is said to be (sequentially) lower semicontinuous at a point x∈X, if for all sequence xmm∈ℕ⊂X such that xm⟶x, we have the inequality (20)fx≤liminfm⟶+∞fxm.If for all sequencexmm∈ℕ⊂X such that xm⇀x, (20) holds; we say that f is weakly sequentially lower semicontinuous at x.Remark 15. LetX be a normed vector space and xmm a sequence of X. 
If x∈X, then (21)xm⟶x⇒xm⇀x.It follows that the weakly sequentially lower semicontinuity implies the sequentially lower semicontinuity. But the converse is not generally true. However, in the convex case, these two notions are equivalents.The following theorem is a generalization of Ekeland’s variational principle [21].Theorem 16 (see [21], J. Chen). Leth:0,+∞⟶0,+∞ be a continuous nondecreasing function such that (22)∫0∞ds1+hs=+∞.LetM be a complete metric space, x0∈M fixed, f:M⟶ℝ∪∞ a lower semicontinuous function, not identically +∞, and bounded from below. Then, for every ε>0, and y∈M such that (23)fy<infMf+ε,and every λ>0, there exists some point z∈M such that (24)fz<fy,dz,x0≤r0+r¯,fx≥fz−ελ1+hdx0,zdx,z,∀x∈M,where r0=dx0,y and r¯ is such that (25)∫r0r0+r¯ds1+hs≥λ.By Theorem16, one has the following.Theorem 17 (see [21], J. Chen). LetX be a Banach space, h:0,+∞⟶0,+∞ be a continuous nondecreasing function such that (26)∫0∞ds1+hs=+∞ andf:X⟶ℝ a locally Lipschitz function, bounded from below. Then, there exists a minimizing sequence zmm of f such that (27)f0zm;v−zm1+hzm≥−εmv−zm,∀v∈X,where εm⟶0+ as m⟶+∞.Proof. For each positive integerm, choose ym∈Y be such that (28)fym≤infMf+εm.Takex0=0,X=M, and λ=1 in Theorem 16. Then, there exists zm∈X such that (29)fzm≤fym,zm≤ym+r¯,fx≥fzm−εm1+hzmx−zm,∀x∈X,where r¯ is such that (30)∫ymym+r¯ds1+hs≥1.Consequently, for eachx∈X, one has (31)infε>0δ>0supw<ε0<t<δfzm+w−tx−zm−fzm+wt=infδ>0sup0<t<δfzm+tx−zm−fzmt≥−εmx−zm1+hzm.Hence,f0zm;v−zm1+hzm≥−εmv−zm, for all v∈X.Moreover, obviously,zmm is a minimizing sequence of f.Definition 18. LetX be a Banach space, f:X⟶ℝ be bounded from below, locally Lipschitz function, and h:0,+∞⟶0,+∞ be continuous nondecreasing function such that (32)∫0∞ds1+hs=+∞.We say thatumn≥0⊂X is a h-sequence of f if fumm is bounded and f0um;v−um1+hum≥−εmv−um, for all v∈X, where εm⟶0+. We say that f satisfies the h-condition if any h-sequence of f possesses a convergent subsequence.Remark 19. Sometimes, the following version ofh-condition is also used: Any sequence umm⊂X such that fumm is bounded and (33)limm⟶∞γum1+hum=0 possesses a convergent subsequence, whereγ is defined in Proposition 6. This condition is equivalent to that of Definition 18.Remark 20. A coercive function defined onℝn satisfies the h-condition regardless of h. But a function satisfying the h-condition is not necessary coercive. Indeed, Section 4 is devoted to the exposition of an example of a noncoercive function satisfying the h-condition. It is the function defined in (139).The following is the Weierstrass theorem.Lemma 21 (see [13], Lemma 2.1). Assume thatf:X⟶ℝ is functional on a reflexive Banach space X which is weakly lower semicontinuous and coercive. Then, there exists x∗∈X such that fx∗=minx∈Xfx.Better, by virtue of Theorem17, we can prove the following result.Theorem 22. LetX be a Banach space, h:0,+∞⟶0,+∞ a continuous nondecreasing function such that (34)∫0∞ds1+hs=+∞, andf:X⟶ℝ a locally Lipschitz function and bounded from below. If f satisfies the h-condition, then f achieves its minimum at some critical point z∈X of f.Proof. By virtue of Theorem17, there exists a minimizing sequence zmm of f and (35)f0zm;v−zm1+hzm≥−εmv−zmforallv∈X whereεm⟶0+. Since f satisfies the h-condition, zmm has a convergent subsequence in X. We can assume that zm⟶z in X. Consequently, by the continuity of f, (36)fz=limm⟶+∞fzm=infx∈Xfx.By Remark19 and the lower continuity of γ, we know γz=0.Theorem 23 (see [22], Clarke). 
The following is the Weierstrass theorem.

**Lemma 21** (see [13], Lemma 2.1). Assume that $f : X \to \mathbb{R}$ is a functional on a reflexive Banach space $X$ which is weakly lower semicontinuous and coercive. Then there exists $x^* \in X$ such that $f(x^*) = \min_{x \in X} f(x)$.

Better yet, by virtue of Theorem 17, we can prove the following result.

**Theorem 22.** Let $X$ be a Banach space, $h : [0,+\infty) \to [0,+\infty)$ a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty, \tag{34}$$

and $f : X \to \mathbb{R}$ a locally Lipschitz function bounded from below. If $f$ satisfies the $h$-condition, then $f$ achieves its minimum at some critical point $z \in X$ of $f$.

*Proof.* By virtue of Theorem 17, there exists a minimizing sequence $(z_m)_m$ of $f$ with

$$f^0(z_m;\, v - z_m) \ge -\frac{\varepsilon_m}{1 + h(\|z_m\|)}\, \|v - z_m\| \quad \forall v \in X, \tag{35}$$

where $\varepsilon_m \to 0^+$. Since $f$ satisfies the $h$-condition, $(z_m)_m$ has a convergent subsequence in $X$; we may assume $z_m \to z$ in $X$. Consequently, by the continuity of $f$,

$$f(z) = \lim_{m \to +\infty} f(z_m) = \inf_{x \in X} f(x). \tag{36}$$

By Remark 19 and the lower semicontinuity of $\gamma$, we get $\gamma(z) = 0$; that is, $0 \in \partial f(z)$, so $z$ is a critical point of $f$.

**Theorem 23** (see [22], Clarke). Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be a locally Lipschitz mapping such that the Clarke generalized Jacobian $\partial f(x_0)$ of $f$ at a point $x_0 \in \mathbb{R}^n$ is of maximal rank. Then there exist neighborhoods $U$ and $V$ of $x_0$ and $f(x_0)$, respectively, and a Lipschitz function $g : V \to U$ such that $g(f(u)) = u$ for all $u \in U$ and $f(g(v)) = v$ for all $v \in V$.

The following result is Clarke's implicit function theorem, which will be very useful.

**Theorem 24** (see [6], Clarke). Assume that $F : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$ is a locally Lipschitz mapping on a neighborhood of a point $(x_0, y_0)$ such that $F(x_0, y_0) = 0$. Assume further that $\partial_x F(x_0, y_0)$ is of maximal rank. Then there exist a neighborhood $V \subset \mathbb{R}^p$ of $y_0$ and a Lipschitz function $G : V \to \mathbb{R}^n$ such that, for every $y \in V$,

$$F(G(y), y) = 0, \qquad G(y_0) = x_0. \tag{37}$$

**Remark 25.** It is important to point out that Clarke's implicit function Theorem 24 is a corollary of Clarke's inverse function Theorem 23, which can be found in the book [23]. Indeed, as is done for example in [24] on page 256, put

$$\tilde{F} : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n \times \mathbb{R}^p, \quad (x, y) \mapsto (F(x, y),\, y). \tag{38}$$

$\tilde{F}$ is locally Lipschitz in a neighborhood of $(x_0, y_0)$. Moreover, wherever the Jacobian matrix $D\tilde{F}$ exists, it has the form

$$D\tilde{F} = \begin{pmatrix} D_x F & D_y F \\ 0 & I_p \end{pmatrix}, \tag{39}$$

and it follows that Clarke's generalized Jacobian $\partial \tilde{F}(x_0, y_0)$ of $\tilde{F}$ at the point $(x_0, y_0)$ is of maximal rank. Then, by Theorem 4 D.3 of [23], there exist $U \subset \mathbb{R}^n \times \mathbb{R}^p$, $V := \tilde{F}(U) \subset \mathbb{R}^n \times \mathbb{R}^p$, and $f : V \to U$ which is the inverse of $\tilde{F}$ on $U$. Obviously, $f$ has the form $f(x, y) = (\phi(x, y), y)$, where $\phi : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$. Therefore, for $(x, y) \in U$,

$$F(x, y) = 0 \iff f(0, y) = (\phi(0, y), y) = (x, y) \iff x = \phi(0, y). \tag{40}$$

Thus, we can write $G(y) = \phi(0, y)$.

If $\mathbb{R}^p$ is replaced by an infinite dimensional Banach space $Y$ in Theorem 24, Clarke's generalized Jacobian of the function $\tilde{F}$ above can no longer be defined; in other words, we would no longer be in finite dimension and able to apply Theorem 1 in Clarke's work [22]. This remark is very important in the rest of the work.
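To spell out the linear-algebra step behind the maximal-rank claim in Remark 25 (our addition): a block upper-triangular matrix of the form (39) has determinant

$$\det \begin{pmatrix} M & N \\ 0 & I_p \end{pmatrix} = \det(M)\, \det(I_p) = \det(M),$$

so each such matrix is invertible, hence of maximal rank, exactly when its upper-left block $M$ is; invertibility of the blocks $D_x F$ is what the maximal-rank assumption on $\partial_x F(x_0, y_0)$ provides.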
**Remark 26.** Let $Y$ be an infinite dimensional Banach space and $F : \mathbb{R}^n \times Y \to \mathbb{R}^n$ a locally Lipschitz mapping on a neighborhood of a point $(x_0, y_0)$ such that $F(x_0, y_0) = 0$. Assume that Clarke's generalized Jacobian $\partial_x F(x_0, y_0)$ is of maximal rank. Then there exist a subset $V \subset Y$ containing $y_0$ and a Lipschitz mapping $\varphi : V \to U \subset \mathbb{R}^n$ such that, for every $y \in V$,

$$F(\varphi(y), y) = 0, \qquad \varphi(y_0) = x_0. \tag{41}$$

Moreover, we have the following equivalence:

$$(x, y) \in U \times V, \quad F(x, y) = 0 \iff x = \varphi(y). \tag{42}$$

Indeed, let $M$ be a finite dimensional subspace of $Y$ with $y_0 \in M$ and $\dim M = m < \infty$. Consider the map

$$\tilde{F} : \mathbb{R}^n \times M \to \mathbb{R}^n, \quad (x, y) \mapsto F(x, y). \tag{43}$$

Obviously, $\tilde{F}$ is a locally Lipschitz mapping, and $\partial_x \tilde{F}(x_0, y_0) = \partial_x F(x_0, y_0)$ is of maximal rank. Then, by Theorem 24, there exist $V \subset M$ open in $M$ and containing $y_0$, $U \subset \mathbb{R}^n$ open and containing $x_0$, and a locally Lipschitz mapping $\varphi : V \to U$ such that conditions (41) and (42) hold.

Here is another result that will serve us in this work.

**Theorem 27** (see [11], J. Chen). Let $h : [0,+\infty) \to [0,+\infty)$ be a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty, \tag{44}$$

let $X$ be a reflexive Banach space, and let $J : X \to \mathbb{R}$ be a locally Lipschitz function. Assume that there exist $u_0, u_1 \in X$ and a bounded open neighborhood $\Omega$ of $u_0$ such that $u_1 \notin \Omega$ and

$$\inf_{x \in \partial\Omega} J(x) > \max\{J(u_0), J(u_1)\}. \tag{45}$$

Let $\mathcal{M} := \{g \in C([0,1], X) : g(0) = u_0,\ g(1) = u_1\}$ and $c := \inf_{g \in \mathcal{M}} \max_{s \in [0,1]} J(g(s))$. If $J$ satisfies the $h$-condition, then $c$ is a critical value of $J$ and $c > \max\{J(u_0), J(u_1)\}$.

**Lemma 28.** Let $X$ be a normed vector space and $H$ a Hilbert space equipped with the inner product $\langle \cdot, \cdot \rangle$. Let $f : X \to H$ be a locally Lipschitz mapping. Then the function $\varphi : X \to \mathbb{R}$ defined by

$$\varphi(x) = \langle f(x), f(x) \rangle = \|f(x)\|_H^2 \tag{46}$$

is locally Lipschitz.

**Theorem 29** (see [16], Clarke). Let $X$ be a normed vector space, $f : X \to \mathbb{R}^n$ a locally Lipschitz function near $x \in X$, and $h : \mathbb{R}^n \to \mathbb{R}$ a given $C^1$ function. Then

$$\partial (h \circ f)(x) \subset \{\nabla h(f(x)) \circ x^* : x^* \in \partial f(x)\}. \tag{47}$$

**Theorem 30** (see [6], Theorem 1). Assume that $F : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$ is a locally Lipschitz mapping such that: (a1) for any $y \in \mathbb{R}^p$, the functional $\varphi_y : \mathbb{R}^n \to \mathbb{R}$ given by

$$\varphi_y(x) = \frac{1}{2}\|F(x, y)\|^2 \tag{48}$$

is coercive, that is, $\lim_{\|x\| \to \infty} \varphi_y(x) = +\infty$; (a2) for any $(x, y) \in \mathbb{R}^n \times \mathbb{R}^p$, the set $\partial_x F(x, y)$ is of maximal rank. Then there exists a unique locally Lipschitz function $f : \mathbb{R}^p \to \mathbb{R}^n$ such that the equations $F(x, y) = 0$ and $x = f(y)$ are equivalent in the set $\mathbb{R}^n \times \mathbb{R}^p$.

## 3. Main Results

The following is a generalization of the global implicit function theorem of [6] to the case of locally Lipschitz functions from Banach spaces to Euclidean spaces.

**Theorem 31.** Let $Y$ be a real Banach space and $F : \mathbb{R}^n \times Y \to \mathbb{R}^n$ a locally Lipschitz function. Suppose that

(1) for every $y \in Y$, the function $\varphi_y$ defined by

$$\varphi_y(x) = \frac{1}{2}\|F(x, y)\|^2 \tag{49}$$

satisfies the $h$-condition, where $h : [0,+\infty) \to [0,+\infty)$ is a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty; \tag{50}$$

(2) for all $(x, y) \in \mathbb{R}^n \times Y$, $\partial_x F(x, y)$ is of maximal rank.

Then there exists a unique function $f : Y \to \mathbb{R}^n$ such that the statements "$(x, y) \in \mathbb{R}^n \times Y$ and $F(x, y) = 0$" and "$x = f(y)$" are equivalent. Moreover, for any finite dimensional subspace $L$ of $Y$, $f$ is locally Lipschitz on $L$.

*Proof.* Let $y \in Y$. We first prove that there exists a unique element $x_y \in \mathbb{R}^n$ such that $F(x_y, y) = 0$. Since $\varphi_y = g \circ F(\cdot, y)$, where $F(\cdot, y) : \mathbb{R}^n \ni x \mapsto F(x, y) \in \mathbb{R}^n$ is locally Lipschitz and $g : \mathbb{R}^n \ni x \mapsto \frac{1}{2}\|x\|^2 = \frac{1}{2}\langle x, x \rangle_{\mathbb{R}^n}$, it follows from Lemma 28 that $\varphi_y$ is locally Lipschitz. By assumption (1), $\varphi_y$ satisfies the $h$-condition; hence, by Theorem 22, there is $x_y \in \mathbb{R}^n$ such that $\min_{\mathbb{R}^n} \varphi_y = \varphi_y(x_y)$, and by Proposition 7, $0 \in \partial \varphi_y(x_y)$. Moreover, according to Theorem 29,

$$\partial \varphi_y(x_y) \subset \{\nabla g(F(x_y, y)) \circ x^* : x^* \in \partial_x F(x_y, y)\}.$$

Thus there exists $x^* \in \partial_x F(x_y, y)$ such that $\nabla g(F(x_y, y)) \circ x^* = 0$, that is,

$$\langle F(x_y, y),\, x^* v \rangle = 0 \quad \forall v \in \mathbb{R}^n. \tag{51}$$

By assumption (2), $x^*(\mathbb{R}^n) = \mathbb{R}^n$; it follows that $F(x_y, y) = 0$.

Concerning the uniqueness of $x_y \in \mathbb{R}^n$ such that $F(x_y, y) = 0$, we argue by contradiction, supposing that there exists $x_1 \neq x_y$ in $\mathbb{R}^n$ with $F(x_1, y) = F(x_y, y) = 0$. We use Remark 26. Set $e = x_1 - x_y$ and define the mapping $\psi_y : \mathbb{R}^n \to \mathbb{R}$ by

$$\psi_y(x) := \varphi_y(x + x_y) = \frac{1}{2}\|F(x + x_y, y)\|^2. \tag{52}$$

We have $\psi_y(0) = \psi_y(e) = 0$. Consider $\psi_y$ on the boundary $\partial B(0, \rho)$ of the ball $B(0, \rho) \subset \mathbb{R}^n$ for some $0 < \rho < \|e\|$. By assumption (2) and Remark 26, there exist $V \subset Y$ containing $y$ (not necessarily open in $Y$, but open in some finite dimensional subspace $L \subset Y$), an open subset $U \subset \mathbb{R}^n$ containing $x_y$, and a function $\xi : V \to U$ such that the following equivalence holds:

$$(x, y) \in U \times V, \quad F(x, y) = 0 \iff x = \xi(y). \tag{53}$$

The function $\psi_y$ is locally Lipschitz (hence continuous), and $\partial B(0, \rho)$ is compact (being closed and bounded). Then there exists $\bar{x} \in \partial B(0, \rho)$ such that

$$\psi_y(\bar{x}) = \min_{\partial B(0, \rho)} \psi_y. \tag{54}$$

We claim that there exists at least one $\rho$ with $0 < \rho < \|e\|$ such that $\min_{\|x\| = \rho} \psi_y > 0$. Otherwise, we would have

$$\min_{\|x\| = \rho} \psi_y = 0 \quad \text{for all } 0 < \rho < \|e\|; \tag{55}$$

this means that for every positive $\rho < \|e\|$ there exists $\bar{x} \in \mathbb{R}^n$ with $\|\bar{x}\| = \rho$ and $\psi_y(\bar{x}) = 0$. Since $U$ is open around $x_y$, there exists $0 < \varepsilon < \|e\|$ such that

$$\|x - x_y\| \le \varepsilon \Longrightarrow x \in U. \tag{56}$$

Let $\bar{x} \in \mathbb{R}^n$ with $\|\bar{x}\| = \varepsilon$ and $\psi_y(\bar{x}) = 0$. Then

$$\|(\bar{x} + x_y) - x_y\| = \|\bar{x}\| = \varepsilon, \qquad \psi_y(\bar{x}) = \frac{1}{2}\|F(\bar{x} + x_y, y)\|^2 = 0 \iff F(\bar{x} + x_y, y) = 0. \tag{57}$$

By (56) and (57), we have $\bar{x} + x_y \in U$, $\bar{x} + x_y \neq x_y$, and $F(\bar{x} + x_y, y) = 0$. It follows from (53) that $\bar{x} + x_y = \xi(y)$. Thus $\bar{x} + x_y$ and $x_y$ are two different elements of $U$ with $\bar{x} + x_y = \xi(y) = x_y$, which is impossible. In conclusion,

$$\exists \rho < \|e\| : \quad \inf_{\|x\| = \rho} \psi_y > 0 = \max\{\psi_y(0), \psi_y(e)\}. \tag{58}$$

The function $\psi_y$ is locally Lipschitz and satisfies the $h$-condition (because $\varphi_y$ satisfies this condition). Then, by (58) and Theorem 27 applied to $J = \psi_y$, $\psi_y$ has a generalized critical point $v$ which is different from $0$ and $e$, since the corresponding critical value satisfies

$$\psi_y(v) > \max\{\psi_y(0), \psi_y(e)\} = 0. \tag{59}$$

We also have

$$0 \in \partial \varphi_y(x_y + v) \subset \{\nabla g(F(x_y + v, y)) \circ x^* : x^* \in \partial_x F(x_y + v, y)\}. \tag{60}$$

As before, this implies $F(x_y + v, y) = 0$, that is, $\psi_y(v) = 0$. This contradiction with (59) confirms that for every $y \in Y$ there exists a unique $x_y \in \mathbb{R}^n$ such that $F(x_y, y) = 0$, and we can set $f(y) = x_y$. Of course, according to Remark 26, for any finite dimensional subspace $L$ of $Y$, $f$ is locally Lipschitz on $L$.

An example of a function satisfying the assumptions of Theorem 31, with $Y$ an arbitrary Banach space, is $F : \mathbb{R} \times Y \to \mathbb{R}$ defined by

$$F(x, y) = 2x + |x| + \|y\|. \tag{61}$$

Indeed, $F$ defined in (61) is a locally Lipschitz function which is not differentiable, and for any $y \in Y$ the function $\varphi_y : \mathbb{R} \to \mathbb{R}$ defined by

$$\varphi_y(x) = \frac{1}{2} F(x, y)^2 = \frac{1}{2}\big(2x + |x| + \|y\|\big)^2 \tag{62}$$

is coercive and consequently satisfies the $h$-condition. Moreover, for any $(x, y) \in \mathbb{R} \times Y$, the partial generalized gradient

$$\partial_x F(x, y) = \begin{cases} \{3\}, & x > 0, \\ \{1\}, & x < 0, \\ [1, 3], & x = 0, \end{cases} \tag{63}$$

is of maximal rank; namely, for any $(x, y) \in \mathbb{R} \times Y$, $0 \notin \partial_x F(x, y) \subset \mathbb{R}$. A straightforward argument then shows that

$$F(x, y) = 0 \iff x = -\|y\|. \tag{64}$$
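Here is a quick numerical check of example (61) (our sketch; the function and the claimed implicit solution $x = -\|y\|$ come from (61)-(64), the code is an illustration only). Since $F(\cdot, y)$ is strictly increasing, bisection locates the unique zero:

```python
import numpy as np

# Example (61): F(x, y) = 2x + |x| + ||y||; the unique zero in x is x = -||y||.

def F(x, norm_y):
    return 2.0 * x + abs(x) + norm_y

def implicit_f(norm_y, lo=-1e6, hi=1e6, tol=1e-12):
    """Bisection on x |-> F(x, y); F is strictly increasing in x
    (slope 1 for x < 0, slope 3 for x > 0), so the zero is unique."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid, norm_y) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for norm_y in [0.0, 0.5, 2.0, 10.0]:
    print(norm_y, implicit_f(norm_y), -norm_y)   # middle column matches -||y||
```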
With the conclusion about the regularity of $f$ in Theorem 31, we cannot, in general, expect continuity of $f$ on the whole of $Y$. Here is a counterexample.

**Remark 32.** Let us set $Y = \ell^1$, where $\ell^1$ stands for the space of real sequences $(u_m)_{m \in \mathbb{N}}$ such that $\sum_{m=0}^\infty |u_m| < \infty$, endowed with the nonequivalent norms

$$\|(u_m)\|_1 = \sum_{m=0}^\infty |u_m|, \qquad \|(u_m)\|_2 = \left(\sum_{m=0}^\infty u_m^2\right)^{1/2}. \tag{65}$$

Indeed, for $m \in \mathbb{N}^*$, define $X_m := (1, 1/2, 1/3, \ldots, 1/m, 0, \ldots) \in \ell^1$. We claim that $(X_m)_m$ is a bounded sequence with respect to the norm $\|\cdot\|_2$ which is unbounded with respect to $\|\cdot\|_1$. For $m \in \mathbb{N}^*$, we have

$$\|X_m\|_2 = \left(\sum_{k=1}^m \frac{1}{k^2}\right)^{1/2}, \qquad \|X_m\|_1 = \sum_{k=1}^m \frac{1}{k}. \tag{66}$$

Then,

$$\lim_{m \to +\infty} \|X_m\|_2 = \left(\sum_{k=1}^{+\infty} \frac{1}{k^2}\right)^{1/2} < +\infty, \tag{67}$$

$$\lim_{m \to +\infty} \|X_m\|_1 = \sum_{k=1}^{+\infty} \frac{1}{k} = +\infty. \tag{68}$$

Now consider the canonical injection $I : (\ell^1, \|\cdot\|_2) \to (\ell^1, \|\cdot\|_1)$. It is obvious by (67) and (68) that $I$ is not continuous on $\ell^1$. However, for any finite dimensional subspace $L$ of $\ell^1$, since the restrictions of these norms to $L$ are equivalent on $L$, it follows that $I_L : (L, \|\cdot\|_2) \to (L, \|\cdot\|_1)$ is Lipschitz.
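The contrast between (67) and (68) can be seen numerically; a small sketch (ours, numpy assumed):

```python
import numpy as np

# Remark 32: for X_m = (1, 1/2, ..., 1/m, 0, ...), the l2 norms stay bounded
# (they increase to pi/sqrt(6) ~ 1.2825 since sum 1/k^2 = pi^2/6), while the
# l1 norms are harmonic sums and grow like log(m).

for m in [10, 100, 10_000, 1_000_000]:
    x = 1.0 / np.arange(1, m + 1, dtype=float)
    print(m, np.linalg.norm(x, 2), np.linalg.norm(x, 1))
# The l2 column converges; the l1 column diverges, so the identity map
# from (l1, ||.||_2) to (l1, ||.||_1) cannot be continuous.
```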
We now add some technical hypotheses to those of Theorem 31 in order to obtain the continuity of the implicit function $f$.

**Definition 33.** Let $X, Y$ be two normed vector spaces. We say that a function $F : X \times Y \to \mathbb{R}$ is coercive with respect to $x$ (the first variable), locally uniformly with respect to $y$ (the second variable), if for any $\bar{y} \in Y$ there exists an open neighborhood $V$ of $\bar{y}$ in $Y$ such that

$$\lim_{\|x\| \to \infty}\ \inf_{y \in V} F(x, y) = +\infty. \tag{69}$$

**Lemma 34.** Let $E$ be a Euclidean space. Then every bounded sequence in $E$ with a unique limit point is convergent.

*Proof.* Let $(x_m)_{m \ge 0}$ be a bounded sequence of $E$ which has a unique limit point $\bar{x} \in E$. By the Bolzano-Weierstrass theorem, any subsequence of $(x_m)_{m \ge 0}$ has a further subsequence converging to $\bar{x}$. We argue by contradiction, assuming that $(x_m)_{m \ge 0}$ is not convergent. Then there exists $\varepsilon > 0$ such that

$$\text{for any } k > 0, \text{ there exists } m_k \ge k \text{ with } \|x_{m_k} - \bar{x}\| > \varepsilon. \tag{70}$$

But $(x_{m_k})_k$ must have a subsequence $(x_{m_{k_i}})_i$ such that

$$x_{m_{k_i}} \to \bar{x}, \tag{71}$$

which contradicts (70).

**Theorem 35.** Let $Y$ be a real Banach space and $F : \mathbb{R}^n \times Y \to \mathbb{R}^n$ locally Lipschitz. Suppose that

(1) the function $\chi : \mathbb{R}^n \times Y \to \mathbb{R}$ defined by

$$\chi(x, y) = \frac{1}{2}\|F(x, y)\|^2 \tag{72}$$

is coercive with respect to $x$, locally uniformly with respect to $y$;

(2) for all $(x, y) \in \mathbb{R}^n \times Y$, $\partial_x F(x, y)$ is of maximal rank.

Then there exists a unique function $f : Y \to \mathbb{R}^n$ such that

$$(x, y) \in \mathbb{R}^n \times Y, \quad F(x, y) = 0 \iff x = f(y). \tag{73}$$

Moreover, $f$ is continuous on the whole of $Y$.

*Proof.* Let $y \in Y$. Consider the function $\varphi_y : \mathbb{R}^n \to \mathbb{R}$ defined by

$$\varphi_y(x) := \frac{1}{2}\|F(x, y)\|^2. \tag{74}$$

Since $\varphi_y$ is coercive (because $\chi$ is coercive with respect to $x$, locally uniformly with respect to $y$), it follows that $\varphi_y$ satisfies the $h$-condition for any continuous nondecreasing function $h : [0,+\infty) \to [0,+\infty)$ such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty. \tag{75}$$

Moreover, $F$ is locally Lipschitz. So, by Theorem 31, there exists a unique global implicit function $f : Y \to \mathbb{R}^n$ such that

$$(x, y) \in \mathbb{R}^n \times Y, \quad F(x, y) = 0 \iff x = f(y). \tag{76}$$

It remains to show that $f$ is continuous on the whole of $Y$. For this, let $(y_m)_{m \in \mathbb{N}} \subset Y$ be a sequence such that

$$y_m \to \bar{y} \in Y. \tag{77}$$

For all $m \in \mathbb{N}$, $F(f(y_m), y_m) = 0$. Setting $x_m := f(y_m)$, this implies that the sequence $(\chi(x_m, y_m))_m = (\frac{1}{2}\|F(f(y_m), y_m)\|^2)_m$ is bounded (indeed identically zero). Since $\chi$ is coercive with respect to $x$, locally uniformly with respect to $y$, there exists an open subset $Q \subset Y$ containing $\bar{y}$ such that

$$\lim_{\|x\| \to \infty}\ \inf_{y \in Q} \chi(x, y) = \lim_{\|x\| \to \infty}\ \inf_{y \in Q} \frac{1}{2}\|F(x, y)\|^2 = +\infty. \tag{78}$$

In addition, by the convergence of $y_m$ to $\bar{y}$, there exists $m_0 \in \mathbb{N}$ such that

$$m > m_0 \Longrightarrow y_m \in Q. \tag{79}$$

So,

$$\text{for } m > m_0, \quad \frac{1}{2}\|F(x_m, y_m)\|^2 \ge \inf_{y \in Q} \frac{1}{2}\|F(x_m, y)\|^2. \tag{80}$$

According to (78) and (80), we conclude that the sequence $(x_m)_{m \in \mathbb{N}} = (f(y_m))_m$ is bounded in $\mathbb{R}^n$. Let $\bar{x}$ be a limit point of $(x_m)_m$; thus there exists a convergent subsequence $(x_{m_k})_k$ of $(x_m)$ such that

$$x_{m_k} \to \bar{x} \in \mathbb{R}^n. \tag{81}$$

On the other hand, for all $k \in \mathbb{N}$, $F(x_{m_k}, y_{m_k}) = 0$. Then it follows from (77), (81), and the continuity of the function $F$ that

$$0 = \lim_{k \to +\infty} F(x_{m_k}, y_{m_k}) = F(\bar{x}, \bar{y}). \tag{82}$$

Thus, we have

$$F(\bar{x}, \bar{y}) = 0 \iff \bar{x} = f(\bar{y}). \tag{83}$$

So $(x_m)_m$ is bounded with the unique limit point $\bar{x} = f(\bar{y})$, and by Lemma 34,

$$x_m \to \bar{x}, \tag{84}$$

that is,

$$f(y_m) \to f(\bar{y}). \tag{85}$$

From (77) and (85), $f$ is continuous on $Y$.

As a consequence of our Theorem 31, we have the following nonsmooth global inverse function theorem.

**Theorem 36.** Assume $f : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitz mapping such that

(1) for any $y \in \mathbb{R}^n$, there exists a continuous nondecreasing function $h : \mathbb{R}_+ \to \mathbb{R}_+$ with

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty \tag{86}$$

such that the functional $\varphi_y : \mathbb{R}^n \to \mathbb{R}$ defined by

$$\varphi_y(x) = \frac{1}{2}\|f(x) - y\|^2 \tag{87}$$

satisfies the $h$-condition;

(2) for any $x \in \mathbb{R}^n$, $\partial f(x)$ is of maximal rank.

Then $f$ is a global homeomorphism of $\mathbb{R}^n$ and $f^{-1}$ is locally Lipschitz.

**Corollary 37** (see [25], Hadamard-Palais). Let $X, Y$ be finite dimensional Banach spaces. Assume that $f : X \to Y$ is a $C^1$-mapping such that (1) $\lim_{\|x\| \to \infty} \|f(x)\| = \infty$; (2) for any $x \in X$, $f'(x)$ is invertible. Then $f$ is a diffeomorphism.
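As a concrete scalar illustration of Theorem 36 (ours, reusing the nonsmooth map from example (61)): $f(x) = 2x + |x|$ has $\varphi_y$ coercive for every $y$ (so the $h$-condition holds) and $\partial f(x) \subset [1, 3]$ of maximal rank, and its global inverse is explicit and Lipschitz:

```python
import numpy as np

# Theorem 36 applied to f(x) = 2x + |x|: f(x) = 3x for x >= 0 and x for x < 0,
# so the global inverse is y/3 for y >= 0 and y for y < 0.

f = lambda x: 2.0 * x + np.abs(x)
f_inv = lambda y: np.where(y >= 0.0, y / 3.0, y)

ys = np.linspace(-10.0, 10.0, 1001)
print(np.max(np.abs(f(f_inv(ys)) - ys)))   # ~0: f o f_inv = identity
```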
*Question.* Does the conclusion of Theorem 31 still hold under the $h$-condition imposed on the function $\tau_y : x \mapsto \|F(x, y)\|^\alpha$, where $\alpha$ is a positive constant different from 2?

In fact, according to our two Lemmas 42 and 43 and Corollary 44 below, it is enough to assume in addition that $\tau_y$ is locally Lipschitz in the case $0 < \alpha < 2$; this additional hypothesis is not needed in the case $\alpha > 2$. Therefore, we have the following result, obtained from Theorem 31.

**Theorem 38.** Let $Y$ be a real Banach space and $F : \mathbb{R}^n \times Y \to \mathbb{R}^n$ a locally Lipschitz mapping. Suppose that

(1) for any $y \in Y$, there exists $0 < \alpha < 2$ such that the function $\tau_y : \mathbb{R}^n \to \mathbb{R}$,

$$\tau_y(x) = \|F(x, y)\|^\alpha, \tag{88}$$

is locally Lipschitz and satisfies the $h$-condition, where $h : [0,+\infty) \to [0,+\infty)$ is a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty; \tag{89}$$

(2) for all $(x, y) \in \mathbb{R}^n \times Y$, $\partial_x F(x, y)$ is of maximal rank.

Then there exists a unique function $f : Y \to \mathbb{R}^n$ such that the statements "$(x, y) \in \mathbb{R}^n \times Y$ and $F(x, y) = 0$" and "$x = f(y)$" are equivalent. Moreover, for any finite dimensional subspace $L$ of $Y$, $f$ is locally Lipschitz on $L$.

*Proof.* Let $y \in Y$. We notice that $\tau_y = (2\varphi_y)^{\alpha/2}$, where $\varphi_y$ is defined by (49). Since $\tau_y$ satisfies the $h$-condition, it follows from Lemma 43 and Corollary 44 that $\varphi_y$ satisfies the $h$-condition. Thus, we achieve the proof by using Theorem 31.

But what happens if we replace $\mathbb{R}^n$ by an arbitrary Banach space $X$ in the domain of the function $F$?

**Theorem 39.** Let $X, Y$ be Banach spaces and $Z$ a Euclidean space such that $\dim Z = n < \infty$. Let $F : X \times Y \to Z$ be a locally Lipschitz function. Assume that

(1) for all $y \in Y$, the function $\varphi_y : X \to \mathbb{R}$ defined by

$$\varphi_y(x) = \frac{1}{2}\|F(x, y)\|^2 \tag{90}$$

satisfies the $h$-condition, where $h : [0,+\infty) \to [0,+\infty)$ is a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty; \tag{91}$$

(2) for any finite dimensional subspace $L$ of $X$ with $\dim L = n$ and for all $(x, y) \in L \times Y$, $\partial_x F_L(x, y)$ is of maximal rank.

Then

$$(x, y) \in X \times Y, \quad F(x, y) = 0 \iff x = 0. \tag{92}$$

*Proof.* We use Theorem 31 in order to prove this result. Firstly, we prove that there exists a unique global implicit function $f : Y \to X$ such that "$(x, y) \in X \times Y$ and $F(x, y) = 0$" is equivalent to $x = f(y)$. After that, we will show that $f \equiv 0$ on $Y$.

Let $y \in Y$. Since $\varphi_y$ is bounded from below, locally Lipschitz, and satisfies the $h$-condition, we see by Theorem 22 that $\varphi_y$ has a minimum which is achieved at a critical point $x_y \in X$. Let $L$ be a finite dimensional subspace of $X$ such that $x_y \in L$ and $\dim L = n$. Consider the functions $\tilde{F} : L \times Y \to Z$ and $\tilde{\varphi}_y : L \to \mathbb{R}$ defined, respectively, by

$$\tilde{F}(x, y) = F(x, y), \qquad \tilde{\varphi}_y(x) = \varphi_y(x) = \frac{1}{2}\|F(x, y)\|^2 = \frac{1}{2}\|\tilde{F}(x, y)\|^2. \tag{93}$$

The function $\tilde{F}$ is locally Lipschitz, and $\tilde{\varphi}_y$ is then locally Lipschitz as the composition of $\tilde{F}$ with the $C^1$ function $g$, where

$$g : Z \to \mathbb{R}, \quad x \mapsto \frac{1}{2}\|x\|^2 = \frac{1}{2}\langle x, x \rangle_Z. \tag{94}$$

Likewise, $\tilde{\varphi}_y$ satisfies the $h$-condition. It follows that $\tilde{\varphi}_y$ has a minimum on $L$; since $x_y \in L$, it is obvious that $\min_{x \in L} \tilde{\varphi}_y(x) = \tilde{\varphi}_y(x_y)$. Thus, by Theorem 29, we obtain

$$0 \in \partial \tilde{\varphi}_y(x_y) = \partial_L \varphi_y(x_y) \subset \{\nabla g(\tilde{F}(x_y, y)) \circ x^* : x^* \in \partial_x \tilde{F}(x_y, y)\} = \{\nabla g(F(x_y, y)) \circ x^* : x^* \in \partial_x F_L(x_y, y)\}. \tag{95}$$

This means that there exists $x^* \in \partial_x F_L(x_y, y)$ such that

$$\langle F(x_y, y),\, x^* h \rangle = 0 \quad \forall h \in L. \tag{96}$$

Thus, we conclude by assumption (2) that $\tilde{F}(x_y, y) = F(x_y, y) = 0$. It is clear by Theorem 31 that $x_y$ is the only solution of the equation $F(x, y) = 0$ in $L$. But the question is the uniqueness of this solution in all of $X$; we answer it affirmatively in what follows, arguing by contradiction.

Suppose that there exists $x_1 \neq x_y$ such that $F(x_1, y) = F(x_y, y) = 0$. We then choose another finite dimensional subspace of $X$ (again denoted $L$ to keep the same notation) which contains both $x_1$ and $x_y$, such that $\dim L = n$. We consider the same functions defined in (93), but with the subspace $L$ chosen here. We find that $\min_{x \in L} \tilde{\varphi}_y(x) = \tilde{\varphi}_y(x_y)$, and by [15] (Proposition 6), Theorem 29, and assumption (2), we conclude that $\tilde{F}(x_y, y) = 0$. Considering the function $\tilde{\psi}_y$ defined as in (52) by

$$\tilde{\psi}_y(x) := \tilde{\varphi}_y(x + x_y) = \frac{1}{2}\|F(x + x_y, y)\|^2 \tag{97}$$

and following the same approach as in the proof of Theorem 31, we reach a contradiction. Thus, there exists a unique global implicit function $f : Y \to X$ such that $F(f(y), y) = 0$ for all $y \in Y$.

It remains to be shown that $f \equiv 0$ on $Y$. Indeed, since $X$ is an infinite dimensional Banach space, it is possible to find two $n$-dimensional subspaces $L_1$ and $L_2$ of $X$ such that $L_1 \cap L_2 = \{0_X\}$. Let $F_1 : L_1 \times Y \to Z$ and $F_2 : L_2 \times Y \to Z$ be the functions defined by

$$F_1(x, y) = F(x, y), \qquad F_2(x, y) = F(x, y). \tag{98}$$

By assumptions (1) and (2), both functions $F_1$ and $F_2$ verify the assumptions of Theorem 31. Consequently, there exist two functions $\varphi_1 : Y \to L_1$ and $\varphi_2 : Y \to L_2$ such that $F(\varphi_1(y), y) = F(\varphi_2(y), y) = 0$ for all $y \in Y$. Then, according to the uniqueness of $x_y \in X$ such that $F(x_y, y) = 0$, we have $\varphi_1(y) = f(y) = \varphi_2(y) \in L_1 \cap L_2$. Thus $f(y) = 0$ for all $y \in Y$.

By virtue of Lemma 43, the following theorem is a consequence of Theorem 39.

**Theorem 40.** Let $X, Y$ be Banach spaces and $Z$ a Euclidean space such that $\dim Z = n < \infty$. Let $F : X \times Y \to Z$ be a locally Lipschitz function. Assume that

(1) for every $y \in Y$, there exists $0 < \alpha < 2$ such that the function $\tau_y : X \to \mathbb{R}$ defined by

$$\tau_y(x) = \|F(x, y)\|^\alpha \tag{99}$$

is locally Lipschitz and satisfies the $h$-condition, where $h : [0,+\infty) \to [0,+\infty)$ is a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty; \tag{100}$$

(2) for any finite dimensional subspace $L$ of $X$ and for any $(x, y) \in L \times Y$, $\partial_x F_L(x, y)$ is of maximal rank.

Then

$$(x, y) \in X \times Y, \quad F(x, y) = 0 \iff x = 0. \tag{101}$$

**Remark 41.** If $\alpha > 2$, it is unnecessary to add the local Lipschitz condition to the $h$-condition in the first assumption of Theorems 38 and 40. Indeed, in Theorem 38, for example, since $\varphi_y$ is locally Lipschitz and $\alpha/2 > 1$, it follows from Lemma 43 that $\tau_y = (2\varphi_y)^{\alpha/2}$ is also locally Lipschitz, and that $\varphi_y$ satisfies the $h$-condition if and only if $\tau_y$ does.
**Lemma 42.** Let $g : \mathbb{R}^n \to \mathbb{R}_+$ be a locally Lipschitz function and let $\alpha > 1$. If $g^\alpha$ is locally Lipschitz, then for any $x, v \in \mathbb{R}^n$ with $v \neq 0$, we have

$$(g^\alpha)^0(x; v) = \alpha\, g(x)^{\alpha-1}\, g^0(x; v). \tag{102}$$

*Proof.* Let $(w_m)_m$ be a sequence in $\mathbb{R}^n$ and $(t_m)_m \subset (0, +\infty)$ another sequence such that

$$w_m \to x, \qquad t_m \to 0^+. \tag{103}$$

For fixed $m$, the function $\mu : I_m \to \mathbb{R}$, $t \mapsto t^\alpha$, is differentiable, where

$$I_m := \{\theta g(a_m) + (1 - \theta) g(b_m) : \theta \in [0, 1]\}, \quad \text{with } a_m = w_m + t_m v,\ b_m = w_m. \tag{104}$$

Now, it is known that there exists $c_m = \theta_m g(a_m) + (1 - \theta_m) g(b_m) \in I_m$, with $\theta_m \in (0, 1)$, such that

$$g(a_m)^\alpha - g(b_m)^\alpha = \mu(g(a_m)) - \mu(g(b_m)) = \mu'(c_m)\,\big(g(a_m) - g(b_m)\big) = \alpha\,\big(\theta_m g(a_m) + (1 - \theta_m) g(b_m)\big)^{\alpha-1}\big(g(a_m) - g(b_m)\big) = K_m\,\big(g(w_m + t_m v) - g(w_m)\big), \tag{105}$$

where

$$K_m = \alpha\,\big(\theta_m g(w_m + t_m v) + (1 - \theta_m) g(w_m)\big)^{\alpha-1}. \tag{106}$$

Then we have

$$\frac{g^\alpha(w_m + t_m v) - g^\alpha(w_m)}{t_m} = K_m\, \frac{g(w_m + t_m v) - g(w_m)}{t_m}. \tag{107}$$

Since $g$ is continuous, there exist a neighborhood $V$ of $x$ and $K > 0$ such that

$$g(z) \le K \quad \forall z \in V. \tag{108}$$

It follows from the convergence of $(w_m, t_m)$ to $(x, 0)$, the continuity of $g$, and (108) that

$$\lim_{m \to +\infty} K_m = \alpha\, g(x)^{\alpha-1}. \tag{109}$$

By (107), (109), and the fact that

$$\limsup_{w \to x,\ t \to 0^+} \frac{g(w + tv) - g(w)}{t} = g^0(x; v), \tag{110}$$

we conclude that

$$(g^\alpha)^0(x; v) = \alpha\, g(x)^{\alpha-1}\, g^0(x; v). \tag{111}$$

**Lemma 43.** Let $g : \mathbb{R}^n \to \mathbb{R}_+$ be a locally Lipschitz function, $\alpha > 1$, and $h : [0,+\infty) \to [0,+\infty)$ a continuous nondecreasing function such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty. \tag{112}$$

Then $g_\alpha(x) := g(x)^\alpha$ is a locally Lipschitz function. Moreover, $g$ satisfies the $h$-condition if and only if $g_\alpha$ satisfies the $h$-condition.

*Proof.* Let $\bar{x} \in \mathbb{R}^n$. There exist an open subset $V \ni \bar{x}$ of $\mathbb{R}^n$ and $k > 0$ such that

$$|g(x) - g(y)| \le k \|x - y\| \quad \forall x, y \in V. \tag{113}$$

Let $\rho > 0$ be such that $\bar{B}_\rho(\bar{x}) := \{x \in \mathbb{R}^n : \|\bar{x} - x\| \le \rho\} \subset V$. For $x, y \in \bar{B}_\rho(\bar{x})$, as in the previous Lemma 42, there exists $\theta \in (0, 1)$ such that

$$g_\alpha(x) - g_\alpha(y) = \alpha\,\big(g(x) - g(y)\big)\,\big(\theta g(x) + (1 - \theta) g(y)\big)^{\alpha-1}. \tag{114}$$

$g$ is continuous and $\bar{B}_\rho(\bar{x})$ is compact. Let

$$M = \max_{z \in \bar{B}_\rho(\bar{x})} g(z). \tag{115}$$

Then we have

$$\big(\theta g(x) + (1 - \theta) g(y)\big)^{\alpha-1} \le M^{\alpha-1}. \tag{116}$$

It follows from (113), (114), and (116) that

$$|g_\alpha(x) - g_\alpha(y)| \le \alpha k M^{\alpha-1} \|x - y\| \quad \forall x, y \in B_\rho(\bar{x}), \tag{117}$$

where $B_\rho(\bar{x}) := \{x \in \mathbb{R}^n : \|\bar{x} - x\| < \rho\} \subset V$ is open. Then $g_\alpha$ is locally Lipschitz.

For the second part of Lemma 43, it suffices to check that $(u_m)_{m \ge 0}$ is an $h$-sequence of $g$ if and only if it is an $h$-sequence of $g_\alpha$.

Let $(v_m)_{m \ge 0} \subset \mathbb{R}^n$ be an $h$-sequence of $g$. Then there exist $q > 0$ and $(\tau_m)_m \subset (0, +\infty)$ with $\tau_m \to 0^+$ such that

$$g(v_m) \le q \quad \forall m \ge 0, \tag{118}$$

$$g^0(v_m;\, v - v_m) \ge -\frac{\tau_m}{1 + h(\|v_m\|)}\, \|v - v_m\| \quad \forall v \in \mathbb{R}^n. \tag{119}$$

It follows from (118) that $(g(v_m)^\alpha)_m$ is bounded. From Lemma 42 and inequality (119), we deduce that

$$g_\alpha^0(v_m;\, v - v_m) \ge -\frac{\bar{\tau}_m}{1 + h(\|v_m\|)}\, \|v - v_m\| \quad \forall v \in \mathbb{R}^n, \qquad \bar{\tau}_m := \tau_m\, \alpha\, q^{\alpha-1} \to 0^+. \tag{120}$$

Thus $(v_m)_{m \ge 0}$ is an $h$-sequence of $g_\alpha$.

Conversely, let $(u_m)_{m \ge 0} \subset \mathbb{R}^n$ be an $h$-sequence of $g_\alpha$. Then there exists $p > 0$ such that

$$g(u_m)^\alpha \le p \quad \forall m \ge 0, \tag{121}$$

$$g_\alpha^0(u_m;\, v - u_m) \ge -\frac{\varepsilon_m}{1 + h(\|u_m\|)}\, \|v - u_m\| \quad \forall v \in \mathbb{R}^n, \quad \varepsilon_m \to 0^+. \tag{122}$$

It follows from (121) that there exists $\bar{p} > 0$ such that

$$g(u_m) \le \bar{p} \quad \forall m \ge 0. \tag{123}$$

By Lemma 42, (122), and (123), we have

$$g^0(u_m;\, v - u_m) \ge -\frac{\delta_m}{1 + h(\|u_m\|)}\, \|v - u_m\| \quad \forall v \in \mathbb{R}^n, \qquad \delta_m := \varepsilon_m\, \alpha\, \bar{p}^{\,\alpha-1} \to 0^+. \tag{124}$$

Therefore $(u_m)_{m \ge 0}$ is also an $h$-sequence of $g$. Hence, for any $\alpha > 1$, $g$ satisfies the $h$-condition if and only if $g_\alpha$ satisfies the $h$-condition.

But what about $0 < \alpha < 1$?

**Corollary 44.** Let $g : \mathbb{R}^n \to \mathbb{R}_+$ be a function and $0 < \alpha < 1$ be such that $g^\alpha$ is locally Lipschitz. Then $g$ is locally Lipschitz, and for any continuous nondecreasing function $h : \mathbb{R}_+ \to \mathbb{R}_+$ such that

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty, \tag{125}$$

$g$ satisfies the $h$-condition if and only if $g^\alpha$ satisfies the $h$-condition.

*Proof.* We notice that $g = (g^\alpha)^{1/\alpha}$ and $1/\alpha > 1$. Then we apply Lemma 43.
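A quick sanity check of formula (102) (ours, not in the paper): take $g(x) = |x|$ on $\mathbb{R}$ and $\alpha = 3$, so that $g^\alpha(x) = |x|^3$ is $C^1$ with derivative $3x|x|$. For $x \neq 0$, $g^0(x; v) = \operatorname{sgn}(x)\, v$, and

$$(g^3)^0(x; v) = 3x|x|\, v = 3|x|^2 \operatorname{sgn}(x)\, v = \alpha\, g(x)^{\alpha-1}\, g^0(x; v),$$

while at $x = 0$ both sides vanish, since $(g^3)'(0) = 0$ and $g(0)^{\alpha-1} = 0$; so (102) holds in this example at every point.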
## 4. Example of a Noncoercive Function Satisfying the h-Condition

To illustrate that the compactness condition allowing us to obtain the existence of a global implicit function in our main results is weaker than the coercivity used in Theorem 30, we provide in this section an example of a noncoercive, locally Lipschitz function satisfying the $h$-condition. We follow the idea used by Chen and Tang in [13] (Theorem 3.3).

Let $1 < p < \infty$. Define

$$L^p(0, T; \mathbb{R}^N) = \left\{u \in L^1(0, T; \mathbb{R}^N) : \int_0^T |u(t)|^p\, dt < \infty\right\}, \tag{126}$$

with the norm

$$\|u\|_p = \left(\int_0^T |u|^p\, dt\right)^{1/p}. \tag{127}$$

For $u \in L^1_{loc}(0, T; \mathbb{R}^N)$, $u'$ is said to be the weak derivative of $u$ if $u' \in L^1_{loc}(0, T; \mathbb{R}^N)$ and

$$\int_0^T u' \phi\, dt = -\int_0^T u \phi'\, dt \quad \forall \phi \in C_0^\infty(0, T; \mathbb{R}^N). \tag{128}$$

Let

$$W_0^{1,p}(0, T; \mathbb{R}^N) = \{u \in L^p(0, T; \mathbb{R}^N) : u(0) = u(T),\ u' \in L^p(0, T; \mathbb{R}^N)\}. \tag{129}$$

$W_0^{1,p}(0, T; \mathbb{R}^N)$ is a reflexive Banach space (see [13]) with the norm

$$\|u\|_{W_0^{1,p}(0,T;\mathbb{R}^N)} = \left(\int_0^T (|u|^p + |u'|^p)\, dt\right)^{1/p}. \tag{130}$$

**Remark 45** (see [13]). We have the following direct decomposition of $W_0^{1,p}(0, T; \mathbb{R}^N)$:

$$W_0^{1,p}(0, T; \mathbb{R}^N) = \mathbb{R}^N \oplus V, \tag{131}$$

where $V = \{v \in W_0^{1,p}(0, T; \mathbb{R}^N) : \int_0^T v(t)\, dt = 0\}$.

Consider now the following functional:

$$J(u) = \int_0^T \frac{1}{p} |u'|^p\, dt, \quad u \in W_0^{1,p}(0, T; \mathbb{R}^N). \tag{132}$$

We know (see [26]) that $J \in C^1(W_0^{1,p}(0, T; \mathbb{R}^N), \mathbb{R})$ and the $p$-Laplacian operator $u \mapsto (|u'|^{p-2} u')'$ is the derivative operator of $J$ in the weak sense. That is,

$$A = J' : W_0^{1,p}(0, T; \mathbb{R}^N) \to \big(W_0^{1,p}(0, T; \mathbb{R}^N)\big)^*, \tag{133}$$

$$\langle A(u), v \rangle = \int_0^T \langle |u'(t)|^{p-2} u'(t),\, v'(t) \rangle_{\mathbb{R}^N}\, dt, \quad u, v \in W_0^{1,p}(0, T; \mathbb{R}^N). \tag{134}$$

**Proposition 46** (see [27], Fan and Zhao). $J'$ is a mapping of type $(S_+)$; that is, if

$$u_m \rightharpoonup u, \qquad \limsup_{m \to \infty}\, \langle J'(u_m) - J'(u),\, u_m - u \rangle \le 0, \tag{135}$$

then $(u_m)_m$ has a convergent subsequence in $W_0^{1,p}(0, T; \mathbb{R}^N)$.

For every $u \in W_0^{1,p}(0, T; \mathbb{R}^N)$, set

$$\bar{u} = \frac{1}{T} \int_0^T u(t)\, dt, \tag{136}$$

$$\tilde{u}(t) = u(t) - \bar{u}. \tag{137}$$

We have the following Poincaré-Wirtinger inequality (see [28]):

$$\exists a > 0 \text{ such that } \|\tilde{u}\|_\infty \le a \|u'\|_p \quad \forall u \in W_0^{1,p}(0, T; \mathbb{R}^N). \tag{138}$$

We consider the functional $\phi : W_0^{1,p}(0, T; \mathbb{R}^N) \to \mathbb{R}$ defined by

$$\phi(u) = \int_0^T \frac{1}{p}|u'|^p\, dt - \int_0^T j(t, u)\, dt, \quad u \in W_0^{1,p}(0, T; \mathbb{R}^N), \tag{139}$$

where $j : [0, T] \times \mathbb{R}^N \to \mathbb{R}$ is given by the norm $|\cdot|$ on $\mathbb{R}^N$:

$$j(t, u) = |u| = \left(\sum_{i=1}^N u_i^2\right)^{1/2}, \quad \text{for } u = (u_1, u_2, \ldots, u_N) \in \mathbb{R}^N. \tag{140}$$

Let $h(s) = s$. Following the same approach as Chen and Tang in [13], we will show that the function $\phi$ satisfies the $h$-condition and is noncoercive on all of $W_0^{1,p}(0, T; \mathbb{R}^N)$.

We first show that the function $j$ satisfies the following assumptions:

(1) for all $u \in \mathbb{R}^N$, $t \mapsto j(t, u)$ is measurable;

(2) for almost all $t \in [0, T]$, $u \mapsto j(t, u)$ is locally Lipschitz;

(3) for every $r > 0$, there exists $\alpha_r \in L^1(0, T)$ such that for almost all $t \in [0, T]$, all $|u| \le r$, and all $w \in \partial j(t, u)$, we have $|w| \le \alpha_r(t)$, where $\partial j(t, s)$ is Clarke's generalized gradient of $j$ with respect to the second variable;

(4) there exist $0 < \mu < p$ and $M > 0$ such that for almost all $t \in [0, T]$ and all $|u| \ge M$, we have

$$j^0(t, u; u) < \mu\, j(t, u); \tag{141}$$

(5) $j(t, u) \to +\infty$ uniformly for almost all $t \in [0, T]$ as $|u| \to \infty$.

Obviously, the function $j$ defined in (140) satisfies conditions (1), (2), and (5). In addition, for $(t, u) \in [0, T] \times \mathbb{R}^N$, we have the following: (i) if $u \neq 0$, $\partial j(t, u) = \{u / |u|\}$; (ii) if $u = 0$, $\partial j(t, 0) = \{w \in \mathbb{R}^N : |y| \ge \langle w, y \rangle \text{ for any } y \in \mathbb{R}^N\} = \bar{B}(0, 1)$; that is,

$$\partial j(t, 0) = \bar{B}(0, 1) := \{y \in \mathbb{R}^N : |y| \le 1\}. \tag{142}$$

Indeed, for $t \in [0, T]$, the function $j(t, \cdot)$ is convex; then Clarke's generalized gradient $\partial j(t, u)$ of $j(t, \cdot)$ at a point $u$ coincides with the subdifferential of $j(t, \cdot)$ in the sense of convex analysis (see Proposition 5). We recall also that the norm of a Hilbert space is Fréchet differentiable at any point $u \neq 0$. Thus,

$$|w| \le 1 \quad \text{for all } w \in \partial j(t, u). \tag{143}$$

Consequently, according to (143), for every $r > 0$, taking $\alpha_r(t) = 1$, we have $\alpha_r \in L^1(0, T)$ and

$$|u| \le r \Longrightarrow |w| \le \alpha_r(t) \quad \forall w \in \partial j(t, u). \tag{144}$$

Thus the function $j$ satisfies assumption (3).

On the other hand, since $p > 1$, taking $\mu = (1 + p)/2$, we have

$$0 < 1 < \mu < p. \tag{145}$$

Moreover, for $M > 0$, we have

$$|u| \ge M \Longrightarrow u \neq 0, \tag{146}$$

$$\mu\, j(t, u) = \mu |u|. \tag{147}$$

Then,

$$j^0(t, u; u) = \left\langle \frac{u}{|u|},\, u \right\rangle = |u|. \tag{148}$$

It follows from (145), (146), and (148) that

$$|u| \ge M \Longrightarrow j^0(t, u; u) < \mu\, j(t, u). \tag{149}$$

Thus the function $j$ satisfies assumption (4).

Under the previous assumptions, $\phi$ is locally Lipschitz (see [13] (Theorem 3.3)).

(1) By (130), for any constant function $u \in \mathbb{R}^N$, we have

$$\|u\| = T^{1/p} |u|, \qquad \int_0^T \frac{1}{p}|u'|^p\, dt = 0, \qquad \phi(u) = -\int_0^T |u|\, dt = -T|u|. \tag{150}$$

Then,

$$\lim_{\substack{|u| \to +\infty \\ u \in \mathbb{R}^N}} \phi(u) = \lim_{|u| \to +\infty} (-T|u|) = -\infty. \tag{151}$$

Thus $\phi$ is not coercive.

(2) Let $(u_m)_{m \ge 1}$ be an $h$-sequence of $\phi$; that is, there exists $M_1 > 0$ such that

$$|\phi(u_m)| \le M_1, \tag{152}$$

$$\lim_{m \to +\infty}\, (1 + \|u_m\|)\, \gamma(u_m) = 0, \tag{153}$$

where

$$\gamma(u_m) = \min_{w^* \in \partial \phi(u_m)} \|w^*\|. \tag{154}$$

Without loss of generality, we suppose that $u_m \neq 0$ for all $m \ge 1$. According to Proposition 6, let $u_m^* \in \partial \phi(u_m)$ be such that $\|u_m^*\| = \gamma(u_m)$. By the definition (134) of the operator $A$, we have

$$u_m^* = A(u_m) - w_m, \quad \text{with } w_m \in \partial j(t, u_m). \tag{155}$$

From (153) and the bound $\langle u_m^*, u_m \rangle \le \|u_m^*\|\, \|u_m\| \le (1 + \|u_m\|)\, \gamma(u_m)$, we have

$$\langle u_m^*, u_m \rangle = \int_0^T |u_m'(t)|^p\, dt - \int_0^T \langle w_m(t), u_m(t) \rangle\, dt \le \varepsilon_m, \quad \varepsilon_m \downarrow 0. \tag{156}$$

Thus, it follows from Definition 4 and inequality (156) that

$$\int_0^T |u_m'(t)|^p\, dt - \int_0^T j^0(t, u_m(t); u_m(t))\, dt \le \varepsilon_m. \tag{157}$$

Since $u_m \neq 0$, according to (148), inequality (157) implies

$$\int_0^T |u_m'(t)|^p\, dt - \int_0^T |u_m(t)|\, dt \le \varepsilon_m. \tag{158}$$

From (152), we have

$$-\frac{\mu}{p} \int_0^T |u_m'(t)|^p\, dt + \mu \int_0^T |u_m(t)|\, dt \le \mu M_1. \tag{159}$$

It follows from (158) and (159) that

$$\left(1 - \frac{\mu}{p}\right) \int_0^T |u_m'(t)|^p\, dt + (\mu - 1) \int_0^T |u_m(t)|\, dt \le M_m, \tag{160}$$

with $M_m = \varepsilon_m + \mu M_1$.
By (160) and since $\mu > 1$, we have

$$\left(1 - \frac{\mu}{p}\right) \int_0^T |u_m'(t)|^p\, dt \le M_m, \quad m \ge 1, \quad M_m \to \mu M_1. \tag{161}$$

By (161) and since $1 - \mu/p > 0$, there exists $M_0 > 0$ such that, for all $m \ge 1$,

$$\|u_m'\|_p \le M_0. \tag{162}$$

From (162) and the Poincaré-Wirtinger inequality (138), $(\tilde{u}_m)_m$ is bounded in $W_0^{1,p}(0, T; \mathbb{R}^N)$. Exploiting (152) once again and using (136)-(137) (note that $\tilde{u}_m' = u_m'$), we have

$$\int_0^T |u_m(t)|\, dt - \frac{1}{p} \int_0^T |\tilde{u}_m'|^p\, dt \le M_1, \quad m \ge 1. \tag{163}$$

Since $(\tilde{u}_m)$ is bounded, it follows from (163) that there exists $M_2 > 0$ such that

$$\int_0^T |u_m(t)|\, dt \le M_2 \quad \forall m \ge 1. \tag{164}$$

Thus, combining (138), (162), and (164) (indeed $|u_m(t)| \le |\bar{u}_m| + |\tilde{u}_m(t)| \le M_2/T + a M_0$), there exists $M_3 > 0$ such that for $t \in [0, T]$ and $m \ge 1$,

$$|u_m(t)| \le M_3. \tag{165}$$

By (162) and (165), we infer that $(u_m)_{m \ge 1} \subset W_0^{1,p}(0, T; \mathbb{R}^N)$ is bounded, and so, by passing to a subsequence if necessary, we may assume that

$$u_m \rightharpoonup u \ \text{in } W_0^{1,p}(0, T; \mathbb{R}^N), \qquad u_m \to u \ \text{in } C^0([0, T]; \mathbb{R}^N). \tag{166}$$

Next, we will prove that $u_m \to u$ in $W_0^{1,p}(0, T; \mathbb{R}^N)$. By Proposition 46, it suffices to prove that the following inequality holds:

$$\limsup_{m \to \infty}\, \langle A(u_m) - A(u),\, u_m - u \rangle \le 0. \tag{167}$$

In fact, from the choice of the sequence $(u_m)_{m \ge 1}$ (using (153) as above), we have

$$\langle u_m^*,\, u_m - u \rangle \le \varepsilon_m \downarrow 0. \tag{168}$$

Then, by (155), we have

$$\langle A(u_m),\, u_m - u \rangle - \int_0^T \langle w_m(t),\, u_m(t) - u(t) \rangle_{\mathbb{R}^N}\, dt \le \varepsilon_m \quad \forall m \ge 1. \tag{169}$$

By assumption (3), $(w_m) \subset L^1(0, T)$ is bounded, and since $u_m \to u$ uniformly,

$$\lim_{m \to \infty} \int_0^T \langle w_m(t),\, u_m(t) - u(t) \rangle_{\mathbb{R}^N}\, dt = 0. \tag{170}$$

Then,

$$\limsup_{m \to \infty}\, \langle A(u_m),\, u_m - u \rangle \le 0. \tag{171}$$

Since $u_m \rightharpoonup u$ implies $\langle A(u),\, u_m - u \rangle \to 0$, we obtain

$$\limsup_{m \to \infty}\, \langle A(u_m) - A(u),\, u_m - u \rangle \le 0. \tag{172}$$

By Proposition 46, $(u_m)_m$ then has a convergent subsequence; hence $\phi$ satisfies the $h$-condition while failing to be coercive.

## 5. An Application

Inspired by the example result of Galewski-Rădulescu [6] (Theorem 7), we provide in this section an existence and uniqueness result for the problem

$$Ax = F(x) + \xi, \tag{173}$$

where $\xi \in \mathbb{R}^n$ is fixed; $A$ is an $n \times n$ matrix which need not be positive definite, negative definite, or symmetric; and $F : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitz function.

**Theorem 47.** Let $A$ be an $n \times n$ matrix and let $F : \mathbb{R}^n \to \mathbb{R}^n$ be a locally Lipschitz mapping satisfying the following conditions:

(1) for any $\xi \in \mathbb{R}^n$, there exists a continuous nondecreasing function $h : \mathbb{R}_+ \to \mathbb{R}_+$ with

$$\int_0^\infty \frac{ds}{1+h(s)} = +\infty \tag{174}$$

such that the functional $\varphi_\xi : \mathbb{R}^n \to \mathbb{R}$ defined by

$$\varphi_\xi(x) = \|Ax - F(x) - \xi\| \tag{175}$$

satisfies the $h$-condition;

(2) for any $x \in \mathbb{R}^n$ and every $T \in \partial F(x)$, $A - T$ is invertible.

Then problem (173) has a unique solution for each fixed $\xi \in \mathbb{R}^n$. Moreover, the map that assigns to each $\xi \in \mathbb{R}^n$ the unique solution of problem (173) is locally Lipschitz.

*Proof.* Consider the function $f(x) = Ax - F(x)$ from $\mathbb{R}^n$ to itself. By assumption (1) and Lemma 43, for any $\xi \in \mathbb{R}^n$ the functional

$$x \mapsto \frac{1}{2}\|f(x) - \xi\|^2 = \frac{1}{2}\|Ax - F(x) - \xi\|^2 \tag{176}$$

satisfies the $h$-condition. In addition, according to assumption (2), $\partial f(x) = A - \partial F(x)$ is of maximal rank for any $x \in \mathbb{R}^n$. Then we achieve the proof by applying Theorem 36.
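Before the detailed hand verification below, here is a small numerical sanity check (ours, not part of the paper; numpy assumed) of condition (2) of Theorem 47 for the concrete data (177)-(178). At points with $x \neq 0$ and $y \neq 0$, $F$ is differentiable and the generalized Jacobian reduces to the ordinary Jacobian, so we sample random points and confirm that $\det(J_F(u) - A)$ stays away from zero:

```python
import numpy as np

# A = [[-3, 1], [2, -1]] and F(x, y) = (x^3 + |y|, 2x + |x| + y^3).

A = np.array([[-3.0, 1.0], [2.0, -1.0]])

def JF(u):
    """Jacobian of F at u = (x, y), valid wherever x != 0 and y != 0."""
    x, y = u
    return np.array([[3.0 * x**2, np.sign(y)],
                     [2.0 + np.sign(x), 3.0 * y**2]])

rng = np.random.default_rng(1)
samples = rng.uniform(-5.0, 5.0, size=(100_000, 2))
dets = np.array([np.linalg.det(JF(u) - A) for u in samples])
# The case analysis below shows det(T - A) >= 1 in every case; the
# sampled determinants should respect that bound.
print(dets.min())   # expected: a value >= 1
```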
*Example of a matrix $A$ and a function $F$ satisfying the conditions of Theorem 47.* Let us take the matrix

$$A = \begin{pmatrix} -3 & 1 \\ 2 & -1 \end{pmatrix} \tag{177}$$

and the function $F$ defined from $\mathbb{R}^2$ to $\mathbb{R}^2$ by

$$F(u) = \big(x^3 + |y|,\ 2x + |x| + y^3\big), \quad u = (x, y). \tag{178}$$

Considering the Euclidean norm

$$\|u\| = \sqrt{x^2 + y^2} \quad \text{for all } u \in \mathbb{R}^2, \tag{179}$$

we have

$$\|(x^3, y^3)\|^2 - \frac{1}{4}\|u\|^6 = x^6 + y^6 - \frac{1}{4}(x^2 + y^2)^3 = \frac{3}{4}(x^2 + y^2)(x^2 - y^2)^2 \ge 0. \tag{180}$$

It follows that

$$\|(x^3, y^3)\| \ge \frac{1}{2}\|u\|^3. \tag{181}$$

On the other hand, for $u \in \mathbb{R}^2$,

$$\|Au\| \le \|A\| \cdot \|u\|. \tag{182}$$

From (181) and (182), we have

$$\|F(u) - Au - \xi\| = \|(x^3 + |y|,\ 2x + |x| + y^3) - Au - \xi\| \ge \|(x^3, y^3)\| - \|(0, 2x)\| - \|(|y|, |x|)\| - \|Au\| - \|\xi\| \ge \frac{1}{2}\|u\|^3 - 2\|u\| - \|u\| - \|A\| \cdot \|u\| - \|\xi\| = \|u\|\left(\frac{1}{2}\|u\|^2 - 3 - \|A\|\right) - \|\xi\|. \tag{183}$$

Hence, for fixed $\xi \in \mathbb{R}^2$, the function $\varphi_\xi : \mathbb{R}^2 \to \mathbb{R}$ defined by

$$\varphi_\xi(u) = \|F(u) - Au - \xi\| \tag{184}$$

is coercive. Consequently, the function $\varphi_\xi$ satisfies the $h$-condition.

Let $u = (x, y) \in \mathbb{R}^2$.

(1) If $x \neq 0$ and $y \neq 0$, then $F$ is differentiable at $u$ and $\partial F(u) - A = \{J_F(u) - A\}$ with

$$J_F(u) - A = \begin{pmatrix} 3x^2 + 3 & \operatorname{sgn}(y) - 1 \\ \operatorname{sgn}(x) & 3y^2 + 1 \end{pmatrix}. \tag{185}$$

Thus, $\partial F(u) - A$ will be one of the following matrices:

$$\begin{pmatrix} 3x^2+3 & 0 \\ 1 & 3y^2+1 \end{pmatrix}, \quad \begin{pmatrix} 3x^2+3 & -2 \\ 1 & 3y^2+1 \end{pmatrix}, \quad \begin{pmatrix} 3x^2+3 & -2 \\ -1 & 3y^2+1 \end{pmatrix}, \quad \begin{pmatrix} 3x^2+3 & 0 \\ -1 & 3y^2+1 \end{pmatrix}. \tag{186}$$

In all these cases, we have $\det(\partial F(u) - A) \neq 0$.

(2) If $x < 0$ and $y = 0$, then $\partial F(u)$ is defined by

$$\partial F(u) = \operatorname{co}\left\{\begin{pmatrix} 3x^2 & -1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 3x^2 & 1 \\ 1 & 0 \end{pmatrix}\right\} = \left\{\begin{pmatrix} 3x^2 & s \\ 1 & 0 \end{pmatrix} : -1 \le s \le 1\right\}. \tag{187}$$

It follows that

$$\partial F(u) - A = \left\{\begin{pmatrix} 3x^2+3 & s-1 \\ -1 & 1 \end{pmatrix} : -1 \le s \le 1\right\}. \tag{188}$$

Then, for $T \in \partial F(u)$, there exists $s \in [-1, 1]$ such that

$$\det(T - A) = (3x^2 + 3) + (s - 1) = 3x^2 + s + 2 \ge 3x^2 + 1 > 0. \tag{189}$$

(3) If $x > 0$ and $y = 0$, then $\partial F(u)$ is the following:

$$\partial F(u) = \operatorname{co}\left\{\begin{pmatrix} 3x^2 & 1 \\ 3 & 0 \end{pmatrix}, \begin{pmatrix} 3x^2 & -1 \\ 3 & 0 \end{pmatrix}\right\} = \left\{\begin{pmatrix} 3x^2 & s \\ 3 & 0 \end{pmatrix} : -1 \le s \le 1\right\}. \tag{190}$$

It follows that

$$\partial F(u) - A = \left\{\begin{pmatrix} 3x^2+3 & s-1 \\ 1 & 1 \end{pmatrix} : -1 \le s \le 1\right\}. \tag{191}$$

Then, for $T \in \partial F(u)$, there exists $s \in [-1, 1]$ such that

$$\det(T - A) = (3x^2 + 3) - (s - 1) = 3x^2 + 1 + (3 - s) \ge 3x^2 + 1 > 0. \tag{192}$$

(4) If $x = 0$ and $y < 0$, then $\partial F(u)$ is the following:

$$\partial F(u) = \operatorname{co}\left\{\begin{pmatrix} 0 & -1 \\ 3 & 3y^2 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 1 & 3y^2 \end{pmatrix}\right\} = \left\{\begin{pmatrix} 0 & -1 \\ \lambda & 3y^2 \end{pmatrix} : 1 \le \lambda \le 3\right\}. \tag{193}$$

It follows that

$$\partial F(u) - A = \left\{\begin{pmatrix} 3 & -2 \\ \lambda - 2 & 3y^2+1 \end{pmatrix} : 1 \le \lambda \le 3\right\}. \tag{194}$$

Then, for $T \in \partial F(u)$, there exists $\lambda \in [1, 3]$ such that

$$\det(T - A) = 3(3y^2 + 1) + 2(\lambda - 2) = 9y^2 + 2\lambda - 1 > 0. \tag{195}$$

(5) If $x = 0$ and $y > 0$, then $\partial F(u)$ is the following:

$$\partial F(u) = \operatorname{co}\left\{\begin{pmatrix} 0 & 1 \\ 3 & 3y^2 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 3y^2 \end{pmatrix}\right\} = \left\{\begin{pmatrix} 0 & 1 \\ \lambda & 3y^2 \end{pmatrix} : 1 \le \lambda \le 3\right\}. \tag{196}$$

It follows that

$$\partial F(u) - A = \left\{\begin{pmatrix} 3 & 0 \\ \lambda - 2 & 3y^2+1 \end{pmatrix} : 1 \le \lambda \le 3\right\}. \tag{197}$$

Then, for $T \in \partial F(u)$, there exists $\lambda \in [1, 3]$ such that

$$\det(T - A) = 3(3y^2 + 1) > 0. \tag{198}$$

(6) If $u = (0, 0)$, then $\partial F(u)$ is the following:

$$\partial F(u) = \operatorname{co}\left\{\begin{pmatrix} 0 & 1 \\ 3 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 3 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right\} = \left\{\begin{pmatrix} 0 & \tau \\ s & 0 \end{pmatrix} : (\tau, s) \in [-1, 1] \times [1, 3]\right\}. \tag{199}$$

It follows that

$$\partial F(0, 0) - A = \left\{\begin{pmatrix} 3 & \tau - 1 \\ s - 2 & 1 \end{pmatrix} : (\tau, s) \in [-1, 1] \times [1, 3]\right\}. \tag{200}$$

Then, for $T \in \partial F(u)$, there exists $(\tau, s) \in [-1, 1] \times [1, 3]$ such that

$$\det(T - A) = 3 - (s - 2)(\tau - 1) = (s - 1)(1 - \tau) + \tau + 2 \ge \tau + 2 > 0. \tag{201}$$

## 6. Conclusion

We have provided a general nonsmooth global implicit function theorem that yields Galewski-Rădulescu's nonsmooth global implicit function theorem, together with a series of results on the existence, uniqueness, and possible continuity of global implicit functions for the zeros of locally Lipschitz functions. Our results deal with functions defined on infinite dimensional Banach spaces and thus also generalize the classical Clarke implicit function theorem for functions $F : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$, by replacing $\mathbb{R}^p$ with an arbitrary Banach space $Y$. We have worked in this paper under the $h$-condition, which is weaker than the coercivity required in [6]. Our method is based on a variational approach and a recent nonsmooth version of the Mountain Pass Theorem.

More precisely, we first proved our Theorem 31 on the existence and uniqueness of the global implicit function for the equation $F(x, y) = 0$, where $F : \mathbb{R}^n \times Y \to \mathbb{R}^n$ is a locally Lipschitz function and $Y$ is a Banach space. Secondly, we observed that this extension to infinite dimensions may not guarantee the continuity of the global implicit function; we therefore provided an additional hypothesis on Theorem 31 in order to obtain the continuity of the implicit function $f$. Moreover, our Lemmas 42 and 43 allow us to prove other, more general results on the existence and uniqueness of global implicit functions under the $h$-condition on the function $x \mapsto \|F(x, y)\|^\alpha$ with $0 < \alpha < 2$.

---
*Source: 1021461-2022-07-28.xml*
# Infection-Induced Vulnerability of Perinatal Brain Injury

**Authors:** Carina Mallard; Xiaoyang Wang
**Journal:** Neurology Research International (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102153

---

## Abstract

A growing body of evidence demonstrates that susceptibility and progression of both acute and chronic central nervous system disease in the newborn are closely associated with an innate immune response that can manifest from either direct infection and/or infection-triggered damage. A common feature of many of these diseases is the systemic exposure of the neonate to bacterial infections that elicit brain inflammation. In recent years, the importance of innate immune receptors in newborn brain injury, the so-called Toll-like receptors, has been demonstrated. In this paper we will discuss how neonatal sepsis, with particular emphasis on Escherichia coli, coagulase-negative staphylococci, and group B streptococcal infections in preterm infants, and Toll-like receptor-mediated inflammation can increase the vulnerability of the newborn brain to injury.

---

## Body

## 1. Introduction

Perinatal brain injury represents a significant clinical problem [1]. A growing body of evidence demonstrates that susceptibility and progression of both acute and chronic central nervous system (CNS) disease are closely associated with an innate immune response that can manifest from either direct infection and/or infection-triggered damage [2]. A common feature of these diseases is the systemic activation of inflammatory mediators, which via the blood can disrupt the blood-brain barrier, affect the circumventricular organs in the brain (which lack a blood-brain barrier), or interact with the brain endothelium, thereby eliciting brain inflammation [3]. Furthermore, the presence of activated inflammatory cells derived from systemic circulation or from dormant brain resident populations is a key feature of many CNS diseases. More recently, the importance of innate immune receptors in CNS injury, the so-called Toll-like receptors (TLRs), has also been emphasized. In this paper we will focus on how neonatal sepsis and TLR-mediated inflammation increase the vulnerability of the newborn brain.

## 2. Neonatal Sepsis and Brain Injury

Infants with sepsis have an increased incidence of cerebral palsy [4] and white matter abnormalities [5–11]. In a large study of 6093 extremely low birth weight (<1000 g) infants, those who were infected (including those with early-onset sepsis, suspected sepsis (culture negative), and necrotizing enterocolitis (NEC)) were more likely to have cerebral palsy than children who did not have a neonatal infection [12]. In another recent large sample-size study involving 1155 infants born at 23 to 27 weeks gestation, it was found that children who had both late bacteremia (positive blood culture result after the first postnatal week) and surgical NEC were at increased risk of diparetic cerebral palsy compared with children who had neither [13]. Moreover, by comparing outcomes of 150 infants with periventricular leukomalacia (PVL) with controls matched for gestational age, it was found that infants with bacterial sepsis were twice as likely to develop PVL, and those with meningitis were almost four times as likely to develop white matter disease [14].
Similar findings were noted in a smaller case-control study, where associations between cerebral palsy, clinical chorioamnionitis, and sepsis were demonstrated [15]. Moreover, there was an increased incidence of Gram-negative bacterial and fungal infections in a very low birth weight population, and these infants were at significantly increased risk for moderate to severe cerebral palsy and neurodevelopmental impairment at 18 months of age [16].

### 2.1. Bacterial Pathogens in Neonatal Sepsis

Escherichia coli is one of the main pathogens causing early-onset infections in preterm neonates, accounting for up to 40% of the cases of bacteremia among very low birth weight preterm infants (<1,500 g) [17]. Cerebral white matter injury has been found by MRI following Escherichia coli meningitis in human newborn infants [18]. Furthermore, Escherichia coli induce brain damage in a number of antenatal rabbit and rodent models [19–26]. Also, in a recent study, white matter injury was demonstrated in an animal model of neonatal Escherichia coli sepsis in 5-day-old rat pups [27]. Experimental studies show that early-life Escherichia coli exposure can also have long-term effects, influencing the vulnerability to other factors in adulthood, for example, age-related cognitive decline [28] as well as attenuated glial and cytokine responses to amphetamine challenge [29].

In recent years, coagulase-negative staphylococci (CONS) have emerged as the most prevalent and important neonatal pathogens, responsible for approximately 50% of all episodes of late-onset neonatal sepsis in neonatal intensive care units around the world [30–33]. CONS cause significant morbidity, mortality, and healthcare costs worldwide in preterm newborns, especially in very low birth weight infants [34–38]. The vulnerability of preterm infants to CONS infection has been suggested to be due to the special characteristics of the premature infant’s innate immunity [39]. Although there is no direct evidence of CONS causing perinatal brain injury, the presence of CONS in the chorioamnion space at delivery is associated with increased risk for the development of cerebral palsy in preterm infants [40, 41]. Further, in children with an established diagnosis of cerebral palsy, who are admitted to pediatric intensive care, there is a high rate of carriage of abnormal bacteria, including CONS [42].

In very low birth weight preterm infants with early onset neonatal sepsis, the rate of group B streptococcal (GBS) infections is relatively low in comparison with E. coli infections [17]. There is no direct evidence of GBS sepsis playing a role in cerebral palsy; however, nearly half of all infants who survive an episode of GBS meningitis suffer from long-term neurodevelopmental sequelae [43]. Further, extensive cortical neuronal injury was found in GBS-infected neonatal rats, which was mediated through reactive oxygen intermediates [44, 45].

## 3. Toll-Like Receptor-Mediated Vulnerability of the Immature Brain

### 3.1. Toll-Like Receptors

Toll-like receptors (TLRs) play a central role in primary recognition of infectious and viral pathogens. The presence of all 13 known TLRs has been demonstrated in the brain [46–48]. TLR4 mediates cellular activation in response to LPS derived from Escherichia coli [49], while CONS [39] and GBS infections [50] are, at least partly, believed to be mediated by TLR2. Interestingly, the role of TLRs in nonbacterial-induced brain injury has also recently been highlighted [51]. TLRs signal through the recruitment of intracellular adaptor proteins, followed by activation of protein kinases and transcription factors that induce the production of inflammatory mediators (Figure 1). The adaptor protein MyD88 is used by most TLRs, except TLR3, while the TRIF adaptor protein is used only by TLR3 and TLR4. LPS-induced activation of TLR4 elicits, via both MyD88 and TRIF, a broad inflammatory response in tissues, including the immature brain [52].

Figure 1: Diagram outlining infectious agents, TLRs, and major signaling pathways. Abbreviations: SE: S. epidermidis; GBS: group B streptococcus; LPT: lipopeptides; LPS: lipopolysaccharide; MyD88: myeloid differentiation primary response gene (88); TRIF: TIR domain-containing adaptor inducing interferon-β-mediated transcription factor; NF-κB: nuclear factor-kappaB; IRF: interferon regulatory factor; IP-10: interferon gamma-induced protein 10; IFN: interferon; TNF: tumor necrosis factor; IL-1: interleukin-1.
### 3.2. TLR Expression during Brain Development

There is relatively little information regarding the expression of TLRs in the developing brain. During embryonic life, protein expression of both TLR-3 and -8 has been identified [53, 54], while TLR-2 expression is relatively low before birth and increases during the first two weeks of life [55]. We have shown that mRNA for TLR1-9 is expressed in the neonatal mouse brain [56]. It appears that some of the TLRs may play important roles during normal brain development, as TLR2 inhibits neural progenitor cell proliferation during the embryonic period, and TLR3 deficiency increases proliferation of neural progenitor cells, while TLR8 stimulation inhibits neurite outgrowth [53–55]. In support, TLR2 and TLR4 have been shown to regulate hippocampal neurogenesis in the adult brain [57].

### 3.3. LPS-Induced Brain Injury

We, and others, have shown that systemic administration of LPS results in brain injury in both fetal and newborn animals [58–60]. These injuries appear, both histologically and by MRI analysis, to be very similar to those found in preterm infants [61]. Furthermore, it is now well established that pre-exposure to LPS can increase the vulnerability of the immature brain to hypoxia-ischemia (HI), in both rats [62, 63] and mice [64]. These effects are TLR4 [65] and MyD88 dependent [64, 66]. In a recent study, it was also shown that a very low dose of LPS specifically increased the vulnerability of the immature white matter [67]. Low-dose LPS (0.05 mg/kg) sensitized HI injury in P2 rat pups by selectively reducing myelin basic protein expression and the number of oligodendrocytes while increasing neuroinflammation and blood-brain barrier damage in the white matter. The neuroinflammatory responses to LPS/HI appear to be age dependent [68]. Rat pups subjected to LPS/HI at P1 responded with a weak cytokine response, while there was a prominent upregulation of cytokines in P12 pups subjected to the same insult. Interestingly, IL-1β was upregulated at both ages; IL-1β injections sensitize the newborn brain to excitotoxicity [69], and repeated IL-1β exposure during the neonatal period induces preterm-like brain injury in mice [70].

Although it has clearly been demonstrated that LPS can increase the vulnerability to HI, under certain circumstances LPS can also induce tolerance to brain injury. We have shown that the time interval between LPS exposure and the subsequent HI is imperative to the outcome [71, 72], where a 24 h interval seems to induce a tolerant state that makes the brain less vulnerable. This has been confirmed by others who have implicated several possible mechanisms, including upregulation of corticosterone [73], which is further supported by the fact that administration of dexamethasone prevents learning impairment following LPS/HI in neonatal rats [74]. Furthermore, Akt-mediated eNOS upregulation in neurons and vascular endothelial cells has been implicated in LPS-induced preconditioning [75].

The importance of the time interval between LPS and other insults seems to be a generalized phenomenon. We have recently demonstrated in an in vitro model that conditioned medium from LPS-activated microglia affects the antioxidant Nrf2 system and cell survival in astrocytes in a time-dependent manner.
LPS-induced inflammation had dual, time-dependent, effects on the Nrf2 system in that sustained activation (72 h) of GSK3beta and p38 downregulated the Nrf2 system, possibly via the activation of histone deacetylases, changes that were not observed with a 24 h (tolerance) interval [76, 77]. These studies support our previous report demonstrating that reductions in antioxidants were more pronounced when HI was preceded by LPS injection in 8-day rats 3 days prior to the HI insult [78].

### 3.4. Other TLRs in Perinatal Brain Injury

Compared to TLR4, much less is known about other TLRs in perinatal brain injury. As mentioned above, TLR2, TLR3, and TLR8 can affect normal brain development [53–55]. Activation of TLR2 in neonatal mice decreases volume of cerebral gray matter, white matter in the forebrain, and cerebellar molecular layer [79]. Further, we have recently demonstrated the expression of both TLR1 and TLR2 in the neonatal mouse brain following HI. In these studies, TLR2 deficiency resulted in reduced infarct volume after HI, while TLR-1-deficient mice were not protected [56].

Maternal viral immune activation is believed to increase the risk of psychiatric disorders such as schizophrenia in offspring, and in order to examine this relationship, several authors have investigated the vulnerability of the fetal brain to synthetic double-stranded RNA, polyriboinosinic-polyribocytidilic acid (poly I:C), a TLR3 agonist. Maternal injection with poly I:C towards the end of gestation (≥G15) causes sensorimotor gating deficits in the adult offspring in mice [80] and increased sensitivity to the locomotor-stimulating effects of MK-801 [81]. The effects of poly I:C appear to be gestational age dependent [82]. Maternal poly I:C injection on GD9, but not GD17, significantly impaired sensorimotor gating and reduced prefrontal dopamine D1 receptors in adulthood, whereas prenatal immune activation in late gestation impaired working memory, potentiated the locomotor reaction to an NMDA-receptor antagonist, and reduced hippocampal NMDA-receptor subunit 1 expression. In particular, poly I:C injections early during rodent pregnancy affect structural brain development, such as a transient decrease of myelin basic protein in the neonatal offspring [83] and cerebellar pathology [84].

## 4. Conclusion

E. coli infections are common in preterm neonates, and considerable evidence suggests that E. coli-induced inflammation plays a role in the development of white matter damage in preterm infants. There is much less data available concerning the importance of two other common neonatal pathogens, CONS and GBS, in perinatal brain injury. Furthermore, it is becoming clear that TLRs have important roles during development and may be involved in both pathogen-induced damage as well as so-called "sterile" HI-induced inflammation. In order to better understand the underlying causes of perinatal brain injury, the interaction between common neonatal pathogens and TLRs in the newborn brain deserves further investigation.

---
*Source: 102153-2011-11-02.xml*
102153-2011-11-02_102153-2011-11-02.md
23,963
Infection-Induced Vulnerability of Perinatal Brain Injury
Carina Mallard; Xiaoyang Wang
Neurology Research International (2012)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2012/102153
102153-2011-11-02.xml
--- ## Abstract A growing body of evidence demonstrates that susceptibility and progression of both acute and chronic central nervous system disease in the newborn is closely associated with an innate immune response that can manifest from either direct infection and/or infection-triggered damage. A common feature of many of these diseases is the systemic exposure of the neonate to bacterial infections that elicit brain inflammation. In recent years, the importance of innate immune receptors in newborn brain injury, the so-called Toll-like receptors, has been demonstrated. In this paper we will discuss how neonatal sepsis, with particular emphasis onEscherichia coli, coagulase-negative staphylococci, and group B streptococcal infections in preterm infants, and Toll-like receptor-mediated inflammation can increase the vulnerability of the newborn brain to injury. --- ## Body ## 1. Introduction Perinatal brain injury represents a significant clinical problem [1]. A growing body of evidence demonstrates that susceptibility and progression of both acute and chronic central nervous system (CNS) disease is closely associated with an innate immune response that can manifest from either direct infection and/or infection-triggered damage [2]. A common feature of these diseases is the systemic activation of inflammatory mediators, which via the blood can disrupt the blood-brain barrier, affect the circumventricular organs in the brain (which lack a blood-brain barrier), or interact with the brain endothelium, thereby eliciting brain inflammation [3]. Furthermore, the presence of activated inflammatory cells derived from systemic circulation or from dormant brain resident populations is a key feature of many CNS diseases. More recently, the importance of innate immune receptors in CNS injury, the so-called Toll-like receptors (TLRs), has also been emphasized. In this paper we will focus on how neonatal sepsis and TLR-mediated inflammation increase the vulnerability of the newborn brain. ## 2. Neonatal Sepsis and Brain Injury Infants with sepsis have an increased incidence of cerebral palsy [4] and white matter abnormalities [5–11]. In a large study of 6093 extremely low birth weight (<1000 g) infants, those who were infected (including early-onset sepsis, suspected sepsis (culture negative), and had necrotizing enterocolitis (NEC)) were more likely to have cerebral palsy than children who did not have a neonatal infection [12]. In another recent large sample-size study involving 1155 infants born at 23 to 27 weeks gestation, it was found that children who had both late bacteremia (positive blood culture result after the first postnatal week) and surgical NEC were at increased risk of diparetic cerebral palsy compared with children who had neither [13]. Moreover, by comparing outcomes of 150 infants with periventricular leukomalacia (PVL) with controls matched for gestational age, it was found that infants with bacterial sepsis were twice as likely to develop PVL, and those with meningitis were almost four times as likely to develop white matter disease [14]. Similar findings were noted in a smaller case-control study, where associations between cerebral palsy, clinical chorioamnionitis and sepsis were demonstrated [15]. Moreover, there was an increased incidence of Gram-negative bacterial and fungal infections in a very low birth weight population, and these infants were at significantly increased risk for moderate to severe cerebral palsy and neurodevelopmental impairment at 18 months of age [16]. 
### 2.1. Bacterial Pathogens in Neonatal Sepsis Escherichia coli is one of the main pathogens causing early-onset infections in preterm neonates, accounting for up to 40% of the cases of bacteremia among very low birth weight preterm infants (<1,500 g) [17]. Cerebral white matter injury has been found by MRI following Escherichia coli meningitis in human newborn infants [18]. Furthermore, Escherichia coli induces brain damage in a number of antenatal rabbit and rodent models [19–26]. Also, in a recent study, white matter injury was demonstrated in an animal model of neonatal Escherichia coli sepsis in 5-day-old rat pups [27]. Experimental studies show that early-life Escherichia coli exposure can also have long-term effects, influencing the vulnerability to other factors in adulthood, for example, age-related cognitive decline [28] as well as attenuated glial and cytokine responses to amphetamine challenge [29]. In recent years, coagulase-negative staphylococci (CONS) have emerged as the most prevalent and important neonatal pathogens, responsible for approximately 50% of all episodes of late-onset neonatal sepsis in neonatal intensive care units around the world [30–33]. CONS cause significant morbidity, mortality, and healthcare costs worldwide in preterm newborns, especially in very low birth weight infants [34–38]. The vulnerability of preterm infants to CONS infection has been suggested to be due to the special characteristics of the premature infant's innate immunity [39]. Although there is no direct evidence of CONS causing perinatal brain injury, the presence of CONS in the chorioamnion space at delivery is associated with increased risk for the development of cerebral palsy in preterm infants [40, 41]. Further, in children with an established diagnosis of cerebral palsy, who are admitted to pediatric intensive care, there is a high rate of carriage of abnormal bacteria, including CONS [42]. In very low birth weight preterm infants with early-onset neonatal sepsis, the rate of group B streptococcal (GBS) infections is relatively low in comparison with E. coli infections [17]. There is no direct evidence of GBS sepsis playing a role in cerebral palsy; however, nearly half of all infants who survive an episode of GBS meningitis suffer from long-term neurodevelopmental sequelae [43]. Further, extensive cortical neuronal injury was found in GBS-infected neonatal rats, which was mediated through reactive oxygen intermediates [44, 45]. ## 3. Toll-Like Receptor-Mediated Vulnerability of the Immature Brain ### 3.1. Toll-Like Receptors Toll-like receptors (TLRs) play a central role in primary recognition of infectious and viral pathogens. The presence of all 13 known TLRs has been demonstrated in the brain [46–48]. TLR4 mediates cellular activation in response to LPS derived from Escherichia coli [49], while CONS [39] and GBS infections [50] are, at least partly, believed to be mediated by TLR2. Interestingly, the role of TLRs in nonbacterial-induced brain injury has also recently been highlighted [51]. TLRs signal through the recruitment of intracellular adaptor proteins, followed by activation of protein kinases and transcription factors that induce the production of inflammatory mediators (Figure 1). The adaptor protein MyD88 is used by most TLRs, except TLR3, while the TRIF adaptor protein is used only by TLR3 and TLR4. LPS-induced activation of TLR4 elicits, via both MyD88 and TRIF, a broad inflammatory response in tissues, including the immature brain [52].Figure 1 Diagram outlining infectious agents, TLRs, and major signaling pathways. Abbreviations: SE: S. epidermidis; GBS: group B streptococcus; LPT: lipopeptides; LPS: lipopolysaccharide; MyD88: myeloid differentiation primary response gene (88); TRIF: TIR domain-containing adaptor inducing interferon-β-mediated transcription factor; NF-κB: nuclear factor-κB; IRF: interferon regulatory factor; IP-10: interferon gamma-induced protein 10; IFN: interferon; TNF: tumor necrosis factor; IL-1: interleukin-1. ### 3.2. TLR Expression during Brain Development There is relatively little information regarding the expression of TLRs in the developing brain. 
During embryonic life, protein expression of both TLR3 and TLR8 has been identified [53, 54], while TLR2 expression is relatively low before birth and increases during the first two weeks of life [55]. We have shown that mRNA for TLR1–9 is expressed in the neonatal mouse brain [56]. It appears that some of the TLRs may play important roles during normal brain development, as TLR2 inhibits neural progenitor cell proliferation during the embryonic period, TLR3 deficiency increases proliferation of neural progenitor cells, and TLR8 stimulation inhibits neurite outgrowth [53–55]. In support, TLR2 and TLR4 have been shown to regulate hippocampal neurogenesis in the adult brain [57]. ### 3.3. LPS-Induced Brain Injury We, and others, have shown that systemic administration of LPS results in brain injury in both fetal and newborn animals [58–60]. These injuries appear, both histologically and by MRI analysis, to be very similar to those found in preterm infants [61]. Furthermore, it is now well established that pre-exposure to LPS can increase the vulnerability of the immature brain to hypoxia-ischemia (HI), in both rats [62, 63] and mice [64]. These effects are TLR4 [65] and MyD88 dependent [64, 66]. In a recent study, it was also shown that a very low dose of LPS specifically increased the vulnerability of the immature white matter [67]. Low-dose LPS (0.05 mg/kg) sensitized HI injury in P2 rat pups by selectively reducing myelin basic protein expression and the number of oligodendrocytes while increasing neuroinflammation and blood-brain barrier damage in the white matter. The neuroinflammatory responses to LPS/HI appear to be age dependent [68]. Rat pups subjected to LPS/HI at P1 responded with a weak cytokine response, while there was a prominent upregulation of cytokines in P12 pups subjected to the same insult. Interestingly, IL-1β was upregulated at both ages; IL-1β injections sensitize the newborn brain to excitotoxicity [69], and repeated IL-1β exposure during the neonatal period induces preterm-like brain injury in mice [70]. Although it has clearly been demonstrated that LPS can increase the vulnerability to HI, under certain circumstances LPS can also induce tolerance to brain injury. We have shown that the time interval between LPS exposure and the subsequent HI is decisive for the outcome [71, 72], where a 24 h interval seems to induce a tolerant state that makes the brain less vulnerable. This has been confirmed by others who have implicated several possible mechanisms, including upregulation of corticosterone [73], which is further supported by the fact that administration of dexamethasone prevents learning impairment following LPS/HI in neonatal rats [74]. Furthermore, Akt-mediated eNOS upregulation in neurons and vascular endothelial cells has been implicated in LPS-induced preconditioning [75]. The importance of the time interval between LPS and other insults seems to be a generalized phenomenon. We have recently demonstrated in an in vitro model that conditioned medium from LPS-activated microglia affects the antioxidant Nrf2 system and cell survival in astrocytes in a time-dependent manner. LPS-induced inflammation had dual, time-dependent effects on the Nrf2 system, in that sustained activation (72 h) of GSK3β and p38 downregulated the Nrf2 system, possibly via the activation of histone deacetylases, changes that were not observed with a 24 h (tolerance) interval [76, 77]. 
These studies support our previous report demonstrating that reductions in antioxidants were more pronounced when HI in 8-day-old rats was preceded by an LPS injection 3 days earlier [78]. ### 3.4. Other TLRs in Perinatal Brain Injury Compared to TLR4, much less is known about other TLRs in perinatal brain injury. As mentioned above, TLR2, TLR3, and TLR8 can affect normal brain development [53–55]. Activation of TLR2 in neonatal mice decreases the volume of cerebral gray matter, white matter in the forebrain, and the cerebellar molecular layer [79]. Further, we have recently demonstrated the expression of both TLR1 and TLR2 in the neonatal mouse brain following HI. In these studies, TLR2 deficiency resulted in reduced infarct volume after HI, while TLR1-deficient mice were not protected [56]. Maternal viral immune activation is believed to increase the risk of psychiatric disorders such as schizophrenia in offspring, and in order to examine this relationship, several authors have investigated the vulnerability of the fetal brain to synthetic double-stranded RNA, polyriboinosinic-polyribocytidilic acid (poly I:C), a TLR3 agonist. Maternal injection with poly I:C towards the end of gestation (≥G15) causes sensorimotor gating deficits in the adult offspring of mice [80] and increased sensitivity to the locomotor-stimulating effects of MK-801 [81]. The effects of poly I:C appear to be gestational age dependent [82]. Maternal poly I:C injection on GD9, but not GD17, significantly impaired sensorimotor gating and reduced prefrontal dopamine D1 receptors in adulthood, whereas prenatal immune activation in late gestation impaired working memory, potentiated the locomotor reaction to an NMDA-receptor antagonist, and reduced hippocampal NMDA-receptor subunit 1 expression. In particular, poly I:C injections early during rodent pregnancy affect structural brain development, causing, for example, a transient decrease of myelin basic protein in the neonatal offspring [83] and cerebellar pathology [84]. ## 4. Conclusion E. coli infections are common in preterm neonates, and considerable evidence suggests that E. coli-induced inflammation plays a role in the development of white matter damage in preterm infants. Much less data are available concerning the importance of two other common neonatal pathogens, CONS and GBS, in perinatal brain injury. Furthermore, it is becoming clear that TLRs have important roles during development and may be involved both in pathogen-induced damage and in so-called “sterile” HI-induced inflammation. In order to better understand the underlying causes of perinatal brain injury, the interaction between common neonatal pathogens and TLRs in the newborn brain deserves further investigation. --- *Source: 102153-2011-11-02.xml*
# Internet Impact on the Insertion of Genitourinary Tract Foreign Bodies in Childhood **Authors:** Xenophon Sinopidis; Vasileios Alexopoulos; Antonios Panagidis; Alexandra Ziova; Anastasia Varvarigou; George Georgiou **Journal:** Case Reports in Pediatrics (2012) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2012/102156 --- ## Abstract Foreign body self-insertion into the urethra is an uncommon paraphilia. Variety in object form, motivation, clinical presentation, complications, and treatment options is the rule. In childhood it is very rare and has so far been attributed to curiosity or mental disorders. However, the internet's impact on the daily life of all age groups has created a new category of sexual behavior in childhood and adolescence, the “internet-induced paraphilia.” The case reported here, of an electrical cable inserted into the urethra of a 12-year-old boy, is representative of this kind of impact. --- ## Body ## 1. Introduction Self-insertion of foreign bodies into the male urethra has been studied in detail in adults and less so in children [1, 2]. Relevant considerations on this subject include motivation, foreign object form, clinical presentation, complications, and management procedures [1–5]. The internet has captivated our daily life worldwide during the last two decades. Among other effects, its influence has changed the motivation for, and the age at presentation of, foreign body insertion into the urogenital tract. Childhood proves to be vulnerable to this effect, as this case report demonstrates. ## 2. Case Report A 12-year-old male patient was admitted to the emergency department for urinary retention caused by self-insertion of an electrical television cable into his urethral meatus. The foreign body had remained in the urethra for 48 hours before the patient sought medical help. During this period he was manipulating the cable, resulting in further insertion of the foreign body. When urine flow was blocked completely, he presented with pelvic and penile pain, bladder distention, and inability to void. The distal segment of the cable was protruding through the meatus (Figure 1). Pelvic plain radiography showed the long radiopaque metallic wire of the cable, which was twisted multiple times, forming a coiled refractory structure (Figure 2).Figure 1 Clinical presentation of the inserted cable in the urethral meatus.Figure 2 Pelvic plain radiography: twisted cable forming a coiled refractory structure in the posterior urethra.The symptoms were relieved after placement of a suprapubic bladder catheter. A quantity of 1200 mL of urine was drained from the bladder. Cefuroxime and hyoscine butylbromide were administered intravenously. Percutaneous cystography through the catheter showed that the proximal end of the twisted cable was located in the posterior urethra without entering the bladder. The foreign body was removed through the urethral meatus by patient, gentle traction under general anesthesia. Minor trauma of the urethra may have occurred, as a slight amount of inflammatory tissue was found on the extracted cable (Figure 3). A urethral Foley silicone catheter was placed in the urethra for 10 days. A normal urethrocystography was performed through the suprapubic catheter after removal of the urethral catheter. The suprapubic catheter was removed two days later. 
During a six-month follow-up period, the patient had normal urine flow without late urethral stenosis.Figure 3 The foreign body (television cable) after extraction: small amounts of tissue, characterized as inflammatory reaction, were found at the refractory point.During psychiatric consultancy he revealed that he had read on a website that if he inserted an electrical wire into his urethra and connected it with a battery to create electrical stimuli, he would achieve augmentation of penile length and simultaneously experience erotic satisfaction. There was no mental or family disorder. As the son of an electrician, he had easy access to electrical equipment. ## 3. Discussion Self-insertion of foreign bodies into the male urogenital tract is an uncommon paraphilia studied in detail because of the variety it presents in many aspects [1–5]. There is a great diversity in the kind of objects inserted: sharp objects with a refractory or tearing effect on the urethra (pins, pens, pencils, nails, bolts, toothbrushes, batteries, fishhooks, glasses, paper clips), objects that may coil around themselves (wire, electrical cables, chains, rubber bands), and biologic materials (vegetables, food cells, plant segments, bones) [1–5]. Electrical wire and cable insertion has been reported in adults [6, 7].The variety of objects, the way they are inserted, and the period they remain in the genitourinary tract affect the clinical presentation [1–5]. Poor urinary stream, dysuria, swelling, urethral discharge, and urinary tract infection are the most common clinical signs [1, 3, 6, 7]. Hematuria, abscess formation, calculi, stricture, diverticulum, or erectile dysfunction are complications of late presentation [1, 2, 6]. A urethroperineal fistula has been reported after a golden chain remained in the urethra for 18 months [3]. Direct removal, endoscopy, open bladder surgery, and urethrotomy are the described methods of foreign body removal [3, 4, 6–8]. Combinations of these methods have been described too [8].Motives for self-insertion of objects include autoerotic stimulation, psychiatric disorders (dementia), self-mutilation, intoxication (cocaine), and curiosity [1–8]. Sometimes there is a combination of these causes. Behavior after insertion is particular; there is shame and embarrassment, which often delay medical help, sometimes for years [3]. Identification of the true motivation is very important because recurrent insertions may occur, or in some cases there is an unrecognized mental or organic disease, which may even involve forensic implications [5, 9].The majority of the reported cases concern adults [1–5]. Kenney stated that genitourinary foreign body insertion by males has received “considerable attention in the worldwide urologic literature, little in the psychiatric literature, and none in the pediatric literature” [10]. Reported pediatric cases concerned either normal children motivated mainly by curiosity or children with mental defects [10]. However, during the last ten years the internet has created a new behavioral attitude. Children encounter issues that previously did not exist in television, books, and magazines; the internet offers immediate, inexpensive, round-the-clock, and private access to information and, most importantly, direct interaction with a specific website or with other users. These characteristics upgrade the internet to a new pathogenic factor influencing the behavior of children and adolescents. 
All these characteristics were prominent in the case presented here; the boy was driven to electrical cable insertion after receiving information from a certain internet portal at an age of sexual immaturity. Sexuality is a field of major influence on this kind of behavior, and foreign body self-insertion into the genitourinary tract should no longer be regarded as simple childish curiosity, but as a new causative category named “internet-induced paraphilia.” --- *Source: 102156-2012-09-11.xml*
# Analysis of Vibroacoustic Modulations for Crack Detection: A Time-Frequency Approach Based on Zhao-Atlas-Marks Distribution **Authors:** A. Trochidis; L. Hadjileontiadis; K. Zacharias **Journal:** Shock and Vibration (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102157 --- ## Abstract The vibroacoustic modulation (VAM) technique is probably the most widely used nonlinear method for crack detection. The VAM method is based on the modulation of high-frequency acoustic waves by a low-frequency vibration. The intensity of the modulation is related to the severity of the damage and has so far been used as a damage index. A damage index based simply on the amplitude of the first sidebands in the spectral domain, however, often leads to inconsistent conclusions about the severity of the damage. In this work, the nonlinear characteristics of the vibroacoustic modulation were systematically investigated by employing time-frequency analysis based on the Zhao-Atlas-Marks (ZAM) distribution. The results of the analysis show that the amplitude of the sideband components is modulated by the low-frequency vibration and that the modulation amplitude depends on the size of the crack. Based on the obtained results, a new damage index was defined in relation to the strength of the modulation. The new damage index is more sensitive and robust and correlates better with crack size than the index based on the amplitude of the sidebands. --- ## Body ## 1. Introduction Structures with inhomogeneities or defects exhibit strong nonlinear vibrational and acoustical effects. In particular, strong nonlinear effects have been observed in structures with cracks. These effects include the generation of higher harmonics and the intermodulation of a high-frequency acoustic wave by a low-frequency vibration [1], and they provide the foundation for developing different techniques for nondestructive testing.The vibroacoustic modulation (VAM) method is based on the fact that a high-frequency ultrasound probing wave propagating in a structure is modulated by a low-frequency vibration. The modulation is generated by the nonlinear interaction of waves caused by the presence of the crack. The mechanisms behind these effects are, however, still poorly understood [2–4]. The phenomenon of VAM is usually measured in the frequency domain, where it is manifested as sidebands around the carrier peak of the ultrasound wave at frequencies equal to the sum and difference of the excitation frequencies and their integer multiples. Modulation effects have been observed in several applications. Ekimov et al. [5] employed VAM of high-frequency torsional waves for crack detection in a rod. Zaitsev et al. [6] presented applications of nonlinear modulation for crack detection in structures and discussed possible sources of nonlinearity in damaged structures. Donskoy and Sutin [7] used VAM to investigate the existence of cracks, delaminations, or poor-quality bonding. Further applications of VAM techniques can be found in Zagrai et al. [8], who studied crack detection in aluminum plates. Duffour et al. [9] investigated the sensitivity of the VAM technique and compared the conventional damping test with an impact-based vibroacoustic modulation. The majority of the existing studies are related to the detection of damage in metallic structures. 
More recently, studies of the application of VAM techniques to composite structures [10, 11] and chiral sandwich panels [12] have been reported.When VAM is applied for damage detection, damage indices are defined relating the size of the damage to the intensity of the modulation. These indices rely on the amplitudes of the sidebands relative to the carrier frequency. Despite the successful application of VAM to various damage problems, it appears that the frequency-domain damage indices used so far are not accurate and in many cases provide unreliable results [13].The primary aim of the present work is to investigate vibroacoustic modulation in the time-frequency domain by employing time-frequency analysis based on the Zhao-Atlas-Marks (ZAM) distribution, which has the advantage of significantly reducing cross-terms between signal components through its cone-shaped kernel function. We hypothesized that the characteristics of the modulation responses in the time domain might prove more sensitive than those in the frequency domain, and that the combination of both could lead to damage indices that are more sensitive and robust. Furthermore, it is believed that the time-frequency analysis of the modulation responses can highlight the underlying nonlinear mechanisms and enable more efficient applications of the method for damage detection. ## 2. Methodology ### 2.1. Vibroacoustic Modulation (VAM) Technique In structures with damage (e.g., cracks), strong nonlinear vibrational and acoustical effects occur. Exploitation of these phenomena has led to the formation of the vibroacoustic modulation (VAM) technique, which is probably the most widely used nonlinear, nondestructive testing (NDT) method for crack detection. In particular, the VAM technique involves monitoring the amplitude modulation of a high-frequency ($f_H$) vibration field transmitted through a cracked specimen undergoing an additional low-frequency ($f_L$) structural vibration (typically one of the first structural modes). If the specimen is undamaged and appropriately supported, the two vibration fields do not interact. However, if a crack is present, then the low-frequency structural vibration slowly opens and closes the crack. This periodically modifies the dynamic characteristics of the system, hence modulating the amplitude of the ultrasound transmitted through the cracked specimen. This modulation expresses itself as sidebands ($f_{Sk}^{\pm}$) around the high-frequency component $f_H$ at frequencies equal to the sum and difference of the excitation frequencies and their integer multiples, that is, $$f_{Sk}^{\pm} = f_H \pm k f_L, \quad k = 1, 2, 3, \ldots \tag{1}$$ The intensity of the modulation is related to the severity of the damage and has so far been used as a spectral FFT-based damage index ($DI_{\mathrm{FFT}}$) in the form $$DI_{\mathrm{FFT}} = \frac{\left|\mathrm{FFT}(f_{S1}^{-})\right| + \left|\mathrm{FFT}(f_{S1}^{+})\right|}{2\left|\mathrm{FFT}(f_H)\right|}, \tag{2}$$ where $|\mathrm{FFT}(f_{S1}^{\pm})|$ and $|\mathrm{FFT}(f_H)|$ denote the FFT magnitudes at the first left and right sidebands and at $f_H$, respectively. ### 2.2. Zhao-Atlas-Marks (ZAM) Distribution Time-frequency (TF) analysis provides the means for exploiting the energy-related characteristics of the crack response signals that may vary in both time and frequency. Many TF approaches suffer from the appearance of cross-terms, which deteriorate the discrimination power in the TF domain. To avoid this distortion, the Zhao-Atlas-Marks (ZAM) distribution [7] was adopted as a methodological tool to express the information in a clearer way in the TF domain. 
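Before detailing the ZAM distribution, note that equations (1) and (2) translate directly into a few lines of numerical code. The following Python sketch is our own illustration, not the authors' implementation; the function name, the Hann window, and the nearest-bin lookup are assumptions made for the example:

```python
import numpy as np

def di_fft(x, fs, f_h, f_l, k=1):
    """Spectral VAM damage index of Eq. (2), using the sidebands of Eq. (1).

    x   : measured response signal (1-D array)
    fs  : sampling frequency [Hz]
    f_h : high-frequency probe (carrier) [Hz]
    f_l : low-frequency vibration [Hz]
    k   : sideband order (k = 1 gives the first sidebands)
    """
    # Windowed magnitude spectrum and its frequency axis
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def mag_at(f):
        # Magnitude at the FFT bin closest to frequency f
        return spec[np.argmin(np.abs(freqs - f))]

    # Eq. (2): (|FFT(f_S1-)| + |FFT(f_S1+)|) / (2 |FFT(f_H)|)
    return (mag_at(f_h - k * f_l) + mag_at(f_h + k * f_l)) / (2.0 * mag_at(f_h))
```

With the experimental values reported later in the paper ($f_H = 31.3$ kHz, $f_L = 92$ Hz, sampling at 192 kHz), a call such as `di_fft(x, 192000, 31300, 92)` would return the first-sideband index for a measured response `x`.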
In particular, the ZAM distribution belongs to the category of quadratic time-frequency representations and, especially, to the group of reduced interference distributions (RIDs). RIDs are members of Cohen's class and thus, for a time series $X(t)$, they can be described by the following general expression: $$\mathrm{RID}_X(t, f; \Phi) = \iint_{-\infty}^{+\infty} \Phi(\xi, \tau)\, A_X(\xi, \tau)\, e^{-j 2\pi (f\tau + \xi t)}\, d\xi\, d\tau, \tag{3}$$ where $t$ and $f$ denote time and frequency, respectively, while $\tau$ and $\xi$ denote the delay and the Doppler, respectively, in the ambiguity plane. $A_X(\xi, \tau)$ represents the ambiguity function, which is associated with the Wigner-Ville distribution via a two-dimensional Fourier transform [8]. $\Phi(\xi, \tau)$ is the so-called parameterization or kernel function. The ZAM distribution is derived by choosing the kernel function as follows: $$\Phi(\xi, \tau) = h(\tau)\, |\tau|\, \frac{\sin(\pi \xi \tau)}{\pi \xi \tau}, \tag{4}$$ where $h(\tau)$ is a window function that leads to smoothing along the frequency axis. Thus, the following expression can be obtained that defines the ZAM distribution: $$\mathrm{ZAM}_X(t, f) = \int_{-\infty}^{+\infty} h(\tau) \left[ \int_{t - |\tau|/2}^{t + |\tau|/2} X\!\left(s + \frac{\tau}{2}\right) X^{*}\!\left(s - \frac{\tau}{2}\right) ds \right] e^{-j 2\pi f \tau}\, d\tau. \tag{5}$$ The ZAM distribution was selected among the RIDs due to its advantage of significantly reducing cross-terms between signal components through its cone-shaped kernel function (4) [7]. In the present study, the ZAM-based TF representation was computed under an $N \times N$ TF resolution, where $N$ denotes the number of samples of the signal. Smoothing was performed using Hamming windows of $N/7$ samples and $N/6$ samples for time and frequency, respectively. ### 2.3. ZAM-Based Modulation Effects Analysis Taking the ZAM distribution of the time series $X(t)$ of beam responses to the VAM stimulation, that is, $\mathrm{ZAM}_X(t, f)$, a more detailed exploitation of the modulation effects can be achieved by analyzing the mean amplitude and fluctuation of $\mathrm{ZAM}_X(t, f)$ at the main sidebands ($f_{Sk}^{\pm}$) around the high-frequency component, that is, $\mathrm{mean}/\mathrm{fluct}(|\mathrm{ZAM}_X(t, f_{S1}^{\pm})|)$, along with the mean amplitude and fluctuation of $\mathrm{ZAM}_X(t, f)$ at the $f_H$ excitation frequency, that is, $\mathrm{mean}/\mathrm{fluct}(|\mathrm{ZAM}_X(t, f_H)|)$. From this perspective, changes in the mean value and the dynamic range of the amplitude fluctuation, combined with inspection of the spectral characteristics of this fluctuation, could correlate with the crack depth and provide insight into the way the presence of the crack affects the beam response during VAM stimulation. 
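To make the construction in (4)-(5) concrete, a compact discrete prototype is sketched below. This is our own illustration under stated assumptions (the paper publishes no code, and the exact $N \times N$ grid and $N/7$, $N/6$ windows quoted above are not reproduced): it evaluates the cone-supported local autocorrelation of (5) for even lags, so the half-sample shifts become integer indices, and then Fourier-transforms along the lag axis.

```python
import numpy as np

def zam(x, n_lags=64, n_freq=256):
    """Unoptimized discrete sketch of the ZAM distribution, Eq. (5).

    Restricts to even lags tau = 2*m so that X(s + tau/2) X*(s - tau/2)
    becomes x[s + m] * conj(x[s - m]); the inner sum runs over the cone
    |s - t| <= |m|. h is a Hamming lag window, i.e., the frequency
    smoothing of Eq. (4). Returns |ZAM| with shape (len(x), n_freq).
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    h = np.hamming(2 * n_lags + 1)
    r = np.zeros((n, 2 * n_lags + 1), dtype=complex)  # cone autocorrelation
    for j, m in enumerate(range(-n_lags, n_lags + 1)):
        a = abs(m)
        for t in range(n):
            # Valid s must keep both s + m and s - m inside the signal
            s0, s1 = max(t - a, a), min(t + a, n - 1 - a)
            if s0 <= s1:
                s = np.arange(s0, s1 + 1)
                r[t, j] = h[j] * np.sum(x[s + m] * np.conj(x[s - m]))
    # DFT over the lag axis maps lag to frequency. Since only magnitudes
    # are returned, the off-center lag ordering contributes only phase.
    # With a lag step of two samples, bin k corresponds to k*fs/(2*n_freq) Hz.
    return np.abs(np.fft.fft(r, n=n_freq, axis=1))
```

For excerpts of the size used in the paper (8192 samples) this double loop is slow but serviceable; a production implementation would vectorize the cone sums or use a dedicated time-frequency toolbox.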
## 3. Experiments Tests were performed on Plexiglas beams to obtain nonlinear modulation responses for further signal processing. The beams used in the experiments had dimensions of 2 × 2 × 40 cm and were clamped between two heavy steel jaws. To avoid additional damping and distortion due to couplings, the beam was excited with a force $F$ by using a small voice coil weighing 2 g attached to the beam. The coil was placed in the field of a permanent magnet and was excited by two waveform generators using sine signals. A miniature transducer was used to pick up the vibration response, which was transferred to an acquisition system and stored for further analysis. A very narrow cut was initially introduced to the beam. Next, the beam was subjected to controlled dynamic loading, which caused crack propagation. Due to the structure of Plexiglas, the propagation of the crack could not be accurately controlled, resulting in arbitrary crack depths. A Brüel & Kjær 4393 piezoelectric charge transducer was used for the high-frequency excitation. A schematic representation and a photo of the actual implementation setup are depicted in Figures 1(a) and 1(b), respectively. Initially, a fatigue crack of 7% depth was introduced at $l_c = 10$ mm from the clamped end. Then, its depth was increased to 20% and finally to 45%. During the experiments two continuous sine waves were simultaneously introduced to the beam. The first ($V_{HF}(t)$) was the high-frequency ultrasound probe wave at $f_H = 31.3$ kHz. The second wave ($V_{LF}(t)$) was the low-frequency vibration at a frequency of $f_L = 92$ Hz, equal to the resonance frequency of the intact beam. The sampling frequency used was 192 kHz. Figure 2 depicts an excerpt from the measured response for the uncracked beam and the three different crack depth cases ((a) to (d)), respectively. As can be seen from Figure 2, strong modulation components are present and increase with increasing crack depth.A schematic representation (a) and the actual realization (b) of the experimental setup.The experimental data excerpt (8192 samples, sampling frequency 192 kHz) used in the ZAM analysis for the uncracked, 7%, 20%, and 45% crack depth cases ((a) to (d)). ## 4. Results and Discussion Figure 3 shows the estimated $\mathrm{ZAM}_X(t, f)$ of the data depicted in Figure 2 for the four examined crack depths, that is, 0%, 7%, 20%, and 45%, zoomed into the area around $f_H = 31.3$ kHz.Results from the ZAM analysis of the experimental data of Figure 2 for 0% (a), 7% (c), 20% (b), and 45% (d) crack size, respectively. 
Apparently, from these plots it is clear that a series of $f_{Sk}^{\pm}$ sidebands is evident, with the $f_{S1}^{\pm}$ at $f_{S1}^{-} = 31208$ Hz and $f_{S1}^{+} = 31392$ Hz being the most noticeable compared to the rest. It is noteworthy that as the crack depth is increased towards 45% (Figure 3(d)), a fluctuation at the $f_{S1}^{\pm}$ frequencies is noticed, whereas there is a more concentrated activity at $f_H$ across the time axis (in the form of peaks rather than frequency line ridges), indicating, possibly, the existence of a “breathing-crack” mechanism. It should be noted that the latter behavior is also noticed in the time domain (see Figure 2) as we move from the 0% to the 45% crack depth. More specifically, the periodic behavior of the breathing mechanism is clearly noticed in the modulated amplitude of the time series, with a more profound example being the case of 20% crack depth (Figure 2(c)), where the 92 Hz imposed frequency is driving the breathing effect. Nevertheless, when focusing on the high-frequency area, as the subfigures of Figure 3 do, only the case of 45% crack depth reflects the consequences of the breathing effect at the central and side-lobe frequencies, as previously described (see Figure 3(d)).The amplitude of the estimated $\mathrm{ZAM}_X(t, f)$ of Figure 3 at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c) for each crack size (0%, 7%, 20%, and 45%), respectively, is depicted in Figure 4. From the latter, it is clear that the $\mathrm{ZAM}_X(t, f)$ amplitude is inversely proportional to the crack depth, whereas the amplitude fluctuation $\mathrm{fluct}(|\mathrm{ZAM}_X(t, f_{S1,H}^{\pm})|)$ is highly increased as the crack depth also increases. This might be justified when taking into account the occurrence of nonlinear dissipation effects due to “crack breathing” that are more pronounced as the crack depth increases. Moreover, there is a clear periodicity in the amplitude modulation for the case of $f_H$ (Figure 4(b)); a noticeable, yet not so intense, one is evident in the amplitude modulation of $f_{S1}^{\pm}$. This is further examined in Figure 5, where the spectrum of the amplitude fluctuation of the ZAM transform at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c), for the crack size of 45%, respectively, is illustrated.The amplitude fluctuation of the ZAM transform at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c) for each crack size (0%, 7%, 20%, and 45%), respectively.The spectrum of the amplitude fluctuation of the ZAM transform at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c), for the crack size of 45%, respectively.As is clear from Figure 5, the low excitation frequency $f_L = 92$ Hz modulates the amplitude of $\mathrm{ZAM}_X(t, f_H)$ (Figure 5(b)), whereas mainly the first harmonic of $f_L$, that is, $2 f_L = 184$ Hz, causes the amplitude fluctuation $|\mathrm{ZAM}_X(t, f_{S1}^{\pm})|$ (Figures 5(a) and 5(c), resp.).Focusing on the 45% crack depth case, the frequency modulation (fluctuation of ridges) seen in Figure 3(d) is further examined. In particular, Figure 6 reveals the corresponding spectral characteristics of this modulation at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c).The spectrum of the frequency modulation of the ZAM transform at the corresponding VAM frequencies, that is, 31392 Hz (a), 31300 Hz (b), and 31208 Hz (c), for the crack size of 45%, respectively. 
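The fluctuation spectra behind Figures 5 and 6 amount to tracking a ridge of the TF plane and taking the spectrum of its variation over time. A minimal sketch of that step, again our own illustration (the helper name and the Hann window are assumptions), continuing the Python examples above:

```python
import numpy as np

def ridge_fluctuation_spectrum(zam_mag, freqs, f0, fs_t):
    """Spectrum of the amplitude fluctuation of a |ZAM| ridge (cf. Figure 5).

    zam_mag : |ZAM_X(t, f)| array of shape (n_times, n_freqs)
    freqs   : frequency axis of zam_mag [Hz]
    f0      : ridge frequency to track, e.g. f_H or f_H +/- f_L
    fs_t    : sampling rate of the time axis of zam_mag [Hz]
    """
    ridge = zam_mag[:, np.argmin(np.abs(freqs - f0))]  # ridge amplitude vs time
    fluct = ridge - np.mean(ridge)                     # remove the mean level
    spec = np.abs(np.fft.rfft(fluct * np.hanning(len(fluct))))
    return np.fft.rfftfreq(len(fluct), d=1.0 / fs_t), spec
```

Peaks of such spectra at $f_L = 92$ Hz and $2 f_L = 184$ Hz would correspond to the modulation components reported above for $f_H$ and $f_{S1}^{\pm}$, respectively.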
Similarly to the spectral characteristics of the amplitude modulation of the ZAM transform seen in Figure 5, here the low excitation frequency $f_L = 92$ Hz modulates the frequency fluctuation $|\mathrm{ZAM}_X(t, f_{S1}^{\pm})|$ (Figures 6(a) and 6(c), resp.), whereas a coexistence of $f_L = 92$ Hz and $2 f_L = 184$ Hz modulates the frequency fluctuation of $\mathrm{ZAM}_X(t, f_H)$ (Figure 6(b)). The separate damage indices (sDI) (all normalized to the corresponding value of the 45% crack depth case after bias elimination for the 0% crack depth case) based on the $1/|\mathrm{ZAM}_X(t, f)|$ (first row), the MAX-MIN range (second row), the corresponding standard deviation (third row) of the ZAM transform, and the normalized FFT magnitude (fourth row), at the corresponding VAM frequencies, that is, 31208 Hz (left column), 31300 Hz (middle column), and 31392 Hz (right column), respectively, are shown in Figure 7. From the latter it is deduced that the sensitivity of $1/|\mathrm{ZAM}_X(t, f)|$ to changes in crack depth is significantly higher than that of all other sDI, which mainly capture the transition from 20% to 45% crack depth, exhibiting less efficient performance in tracking smaller cracks.Figure 7 The separate damage indices (sDI) based on the $1/|\mathrm{ZAM}_X(t, f)|$ (first row), the MAX-MIN range (second row), the corresponding standard deviation (third row) of the ZAM transform, and the normalized FFT magnitude (fourth row) at the corresponding VAM frequencies, that is, 31208 Hz (left column), 31300 Hz (middle column), and 31392 Hz (right column), respectively. Note that, for the FFT-based analysis, only the 31208 Hz (left column) and 31392 Hz (right column) were considered, since the FFT amplitude at the central high frequency (31300 Hz) was used as a normalization factor. Moreover, in the ZAM-based analysis, all values were estimated for the time span of 0.006–0.036 sec to avoid edge effects, while all data samples acquired (92001) were used in the FFT-based analysis to increase its frequency resolution.Consequently, the mean value of the sDI for the case of the $1/|\mathrm{ZAM}_X(t, f)|$ (Figure 7, first row) could be defined as the most efficient ZAM-based DI, namely, $DI_{\mathrm{ZAM}}$. Figure 8 depicts the $DI_{\mathrm{ZAM}}$ along with the $DI_{\mathrm{FFT}}$ defined in (2). Apparently, the $DI_{\mathrm{ZAM}}$ surpasses $DI_{\mathrm{FFT}}$ in terms of higher sensitivity to changes in crack depth, as it better captures crack changes, even at small crack depths.Figure 8 The damage index (DI) derived as the mean value of the sDI for the case of the $1/|\mathrm{ZAM}_X(t, f)|$ (Figure 7, first row), $DI_{\mathrm{ZAM}}$, and the $|\mathrm{FFT}(f)|$ (Figure 7, fourth row), $DI_{\mathrm{FFT}}$. The increase in the sensitivity of the $DI_{\mathrm{ZAM}}$ over the $DI_{\mathrm{FFT}}$ is evident.When comparing the presented work with that of Zaitsev et al. [6], a similar behavior in the crack detection can be identified. Both works conclude that the damage index based on the amplitude modulation is better than the one based on the frequency modulation. Nevertheless, the latter damage index of [6] unexpectedly behaves nonmonotonically as the severity of the crack increases; here, as derived from Figure 7 (second and third rows), the damage indices based on the frequency fluctuation (range and std) of the ZAM distribution increase monotonically with the crack depth, exhibiting, though, reduced sensitivity in the identification of small cracks. This, in turn, is compensated by the damage index based on the inverse of the ZAM amplitude. 
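A sketch of how such a $DI_{\mathrm{ZAM}}$ could be computed from the TF magnitude follows; this is illustrative only (the paper's normalization to the 45% case and the bias elimination for the 0% case, as described for Figure 7, are deliberately omitted here):

```python
import numpy as np

def di_zam(zam_mag, freqs, f_h, f_l, t_sel=slice(None)):
    """Mean over the three VAM frequencies (f_S1-, f_H, f_S1+) of the
    inverse time-averaged |ZAM| ridge amplitude -- a sketch of the
    1/|ZAM_X(t, f)| separate damage indices of Figure 7 (first row).

    t_sel can restrict the analysed time span (the paper uses
    0.006-0.036 s to avoid edge effects).
    """
    rows = zam_mag[t_sel]
    sdi = [1.0 / np.mean(rows[:, np.argmin(np.abs(freqs - f))])
           for f in (f_h - f_l, f_h, f_h + f_l)]
    return float(np.mean(sdi))
```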
When comparing the presented work with that of Zaitsev et al. [6], a similar behavior in crack detection can be identified. Both works conclude that the damage index based on the amplitude modulation is better than the one based on the frequency modulation. Nevertheless, the latter damage index of [6] unexpectedly increases nonmonotonically as the severity of the crack increases; here, as derived from Figure 7 (second and third rows), the damage indices based on the frequency fluctuation (range and std) of the ZAM distribution increase monotonically with the crack depth, albeit with reduced sensitivity in identifying small cracks. This, in turn, is compensated by the damage index based on the inverse of the ZAM amplitude. Moreover, the analysis in [6] is prone to the mode-mixing effect; that is, a single intrinsic mode function (IMF) derived from the Empirical Mode Decomposition employed in [6] consists either of signals of widely disparate scales or of a signal of a similar scale residing in different IMF components. Mode-mixing is often a consequence of signal intermittency, which can not only cause serious aliasing in the time-frequency distribution but also make the physical meaning of the individual IMFs unclear [9]. The mode-mixing effect could perhaps be the reason for this unexpected behavior of the Zaitsev et al. [6] damage index based on the frequency modulation. The analysis proposed here does not produce any mode-mixing effect, as clearly shown in the time-frequency distributions of Figure 3, making the relevant damage indices more robust to signal intermittencies.

## 5. Conclusion

In this work, the vibroacoustic modulation of a cracked beam is investigated in the time-frequency domain, using time-frequency analysis based on the Zhao-Atlas-Marks (ZAM) distribution. The ZAM distribution's efficient time-frequency representation of the vibrational information, with cross-terms between signal components reduced through its cone-shaped kernel function, allowed detailed monitoring of the effects of VAM on the beam behavior due to the existence of a crack. The hypothesis adopted here, namely, that the characteristics of the modulation responses in the time domain might prove more sensitive than those in the frequency domain and that the combination of both could lead to damage indices that are more sensitive and robust, was proven valid. This was justified by the experimental results obtained when applying VAM to Plexiglas beams with crack depths of 0%, 7%, 20%, and 45%. Considering the responses in the ZAM domain and, especially, the reduction of the mean ZAM amplitude at the sidebands and at the high excitation frequency as the crack depth increases, a new damage index, $DI_{\mathrm{ZAM}}$, was formed. The latter gave a more sensitive response than the index based on the spectral characteristics of the beam response, $DI_{\mathrm{FFT}}$, better capturing crack changes even at small crack depths. The promising results presented here enable more efficient use of the proposed method in nondestructive damage detection applications.

---

*Source: 102157-2014-06-12.xml*
# Diabetes and the Brain: Oxidative Stress, Inflammation, and Autophagy

**Authors:** María Muriach; Miguel Flores-Bellver; Francisco J. Romero; Jorge M. Barcia

**Journal:** Oxidative Medicine and Cellular Longevity (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102158

---

## Abstract

Diabetes mellitus is a common metabolic disorder associated with chronic complications, including a state of mild to moderate cognitive impairment, in particular psychomotor slowing and reduced mental flexibility, not attributable to other causes; it shares many symptoms that are best described as accelerated brain ageing. A common theory for aging and for the pathogenesis of this cerebral dysfunction in diabetes relates cell death to oxidative stress in strong association with inflammation, and in fact nuclear factor κB (NFκB), a master regulator of inflammation and also a sensor of oxidative stress, has a strategic position at the crossroad between oxidative stress and inflammation. Moreover, metabolic inflammation is, in turn, related to the induction of various intracellular stresses such as mitochondrial oxidative stress, endoplasmic reticulum (ER) stress, and autophagy defect. In parallel, blockade of autophagy can relate to proinflammatory signaling via the oxidative stress pathway and NFκB-mediated inflammation.

---

## Body

## 1. Introduction

Diabetes mellitus is a common metabolic disorder which is associated with chronic complications such as nephropathy, angiopathy, retinopathy, and peripheral neuropathy. However, as early as 1922 it was recognised that diabetes can also lead to cognitive dysfunction [1]. Since then, studies in experimental models and in patients have revealed alterations in neurotransmission, electrophysiological and structural abnormalities, and neurobehavioral alterations, in particular cognitive dysfunction and an increased risk of depression [2]. Moreover, the observed cerebral manifestations of diabetes appear to develop insidiously, largely independent of diabetes-associated acute metabolic and vascular disturbances (such as severe hypo- and hyperglycemic episodes and stroke). Although the magnitude of these cognitive deficits appears to be mild to moderate, they can significantly hamper daily functioning, adversely affecting quality of life [3].

In spite of this, the concept of central neuropathy has been controversial for more than 80 years now; in an attempt to describe cognitive impairment in diabetes as a complication of the disease, the term “diabetic encephalopathy” was introduced in 1950 [4]. However, the term “encephalopathy” has not been widely accepted, probably, among other reasons, because it does not seem to match the mild cognitive problems usually seen in (nondemented) diabetic patients. More recently it has been suggested that the term “diabetes-associated cognitive decline” (DACD) describes a state of mild to moderate cognitive impairment, in particular psychomotor slowing and reduced mental flexibility, not attributable to other causes [5]. In addition, it is now clear that diabetes increases the risk of Alzheimer’s disease, vascular dementia, and other types of dementia [6, 7].
## 2. Pathophysiological Mechanisms Involved in Brain Damage in Diabetes

Long-term effects of diabetes on the brain are manifested at the structural, neurophysiological, and neuropsychological levels, and multiple pathogenic factors appear to be involved in the pathogenesis of cerebral dysfunction in diabetes, such as hypoglycemic episodes, cerebrovascular alterations, the role of insulin in the brain, and the mechanisms of hyperglycemia-induced damage [8]. Moreover, the emerging view is that the diabetic brain features many symptoms that are best described as accelerated brain ageing [9].

A common theory for aging and for the pathogenesis of this cerebral dysfunction in diabetes relates cell death to oxidative stress mediated by free radicals [10]. Thus, hyperglycemia reduces antioxidant levels and concomitantly increases the production of free radicals. These effects contribute to tissue damage in diabetes mellitus, leading to alterations in the redox potential of the cell with subsequent activation of redox-sensitive genes [11].

The brain is especially vulnerable to oxidative damage as a result of its high oxygen consumption rate, abundant lipid content, and relative paucity of antioxidant enzymes compared to other tissues. Neuronal cells are particularly sensitive to oxidative insults, and therefore reactive oxygen species (ROS) are involved in many neurodegenerative processes, including those of diabetes [12–14]. Although under normal physiological conditions a balance exists between the production of ROS and the antioxidant mechanisms, it has been shown that in aging tissues oxidative stress increases due to, among other factors, the decreased activity of antioxidant enzymes [15]. Earlier work and ample evidence have shown that peroxidative damage to lipids and proteins occurs with the aging process and that the products of these reactions accumulate in the brain with age [16–19].

Similarly, the activities of the superoxide dismutase, catalase, and glutathione peroxidase enzymes, involved in the antioxidant defense of the diabetic brain, are decreased [20–23]. However, possible sources of oxidative stress in brain injury also include autoxidation of glucose, lipid peroxidation, and decreased tissue concentrations of low molecular weight antioxidants such as reduced glutathione (GSH) [24–27]. This alteration of glutathione levels may be related to an increased polyol pathway activity [28], as this leads to a depletion of the NADPH that is necessary for the enzymatic reduction of oxidized glutathione.

Moreover, in these pathological conditions, cellular stress triggers mitochondrial oxidative damage, which may result in apoptosis and/or necrosis [29], and apoptosis induced by oxidative stress has been related to neurogenesis inhibition [30]. Thus, it has been described that diabetes mellitus leads to alterations in the mitochondrial electron transport chain; ROS formation, mitochondrial energy metabolism dysfunction, and oxidative stress are thus recognized as the main players in diabetes-related complications [31]. In this sense, Cardoso et al. have shown that hippocampal mitochondria of streptozotocin- (STZ-) induced diabetic rats presented higher levels of malondialdehyde (MDA) together with an increased glutathione disulfide reductase activity and a lower manganese superoxide dismutase (MnSOD) activity and glutathione-to-glutathione disulfide (GSH/GSSG) ratio. They also showed an impaired oxidative phosphorylation system characterized by a decreased mitochondrial energization potential, decreased ATP levels, and a longer repolarization lag phase [32].
On the other hand, although insulin is best known for its involvement in the regulation of glucose metabolism in peripheral tissues, this hormone also affects numerous brain functions including cognition, memory, and synaptic plasticity through complex insulin/insulin receptor (IR) signaling pathways [33]. Therefore, considering the important role of insulin in many aspects of neuronal function in both the peripheral and the central nervous system, it is possible that perturbation of insulin signaling (both insulin deficiency in T1 diabetes and hyperinsulinemia in T2 diabetes) is involved in the pathogenesis of neurological diseases [34] and results in neurodegeneration.

Until recently, the study of insulin resistance was mainly focused on metabolic tissues such as muscle and adipose tissue; recent data, however, suggest that insulin resistance also develops in the nervous system. Although neurons are not insulin-dependent, they are insulin-responsive [35]. Insulin receptors are widely expressed in the brain, including the olfactory bulb, cerebral cortex, hippocampus, hypothalamus, and amygdala. Insulin resistance in sensory neurons makes cells respond inappropriately to growth factor signals, and this impairment may contribute to the development of neurodegeneration and subsequent diabetic neuropathy. Moreover, insulin regulates mitochondrial metabolism and oxidative capacity through PI3K/Akt signaling [36, 37]; therefore, decreased Akt signaling caused by hyperinsulinemia-mediated insulin resistance may have profound effects on mitochondrial function in neurons and result in subsequently increased oxidative stress [38]. In fact, two of the leading theories that have emerged to explain insulin resistance center on mitochondrial function/dysfunction, although, interestingly, with opposite views. In one theory, inherited or acquired mitochondrial dysfunction is thought to cause an accumulation of intramyocellular lipids that leads to insulin resistance; this view implies that strategies to accelerate flux through β-oxidation should improve insulin sensitivity [39]. In the second theory, the impact of cellular metabolic imbalance is viewed in the context of cellular and mitochondrial bioenergetics, positing that excess fuel relative to demand increases mitochondrial oxidant production and emission, ultimately leading to the development of insulin resistance. In this case, elevated flux via β-oxidation in the absence of added demand is viewed as an underlying cause of the disease. Therefore, mitochondrial-derived oxidative stress is fairly well established as an underlying mechanism responsible for the pathological complications associated with diabetes [40], but it also has a role as a primary factor in the development of insulin resistance (and subsequent overt diabetes), since strong experimental evidence from various animal models utilizing mitochondria-targeted approaches has established a link between mitochondrial-derived ROS and insulin resistance in vivo [41, 42].

In conclusion, convincing evidence is now available from previous studies to prove the role of oxidative stress in the development of neuronal injury in the diabetic brain and the beneficial effects of antioxidants. More concretely, our group has studied the beneficial effects of lutein and DHA in the brain of diabetic animals and the way these substances ameliorate the oxidative stress present in diabetes [27, 43].
However, we must take into account that there are also studies reporting a lack of effect of antioxidants on diabetic complications. Thus, Je et al. [44] reported that vitamin C supplementation alone shows limited therapeutic benefit in type 1 diabetes and is more commonly used in combination with vitamin E or other agents. Moreover, most of the evidence favoring increased oxidative stress in diabetes comes from studies in experimental models of diabetes in which the degree of hyperglycemia is excessive. Supportive evidence is also available from studies of human subjects with diabetes; however, interventional studies using select antioxidant supplements have failed to show significant benefits of supplementation, as reviewed by Hasanain and Mooradian [45]. The completion of some of the ongoing large clinical trials will shed additional light on the clinical merit of antioxidant supplementation.

## 3. Inflammation in Diabetes

Inflammation is a fundamental biological process that lies at the foreground of a large number of acute and chronic pathological conditions; it occurs in response to any alteration of tissue integrity in order to restore tissue homeostasis through the induction of various repair mechanisms. Proper regulation of these mechanisms is essential to prevent uncontrolled amplification of the initial inflammatory response and a shift from tissue repair towards collateral damage and disease development [46].

Appropriate recognition of danger by the host is essential for the elaboration of proper adaptive responses. Sensing of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) is ensured by a complex set of pattern-recognition receptors (PRRs), which include, among others, the receptor for advanced glycation end-products (RAGE). PRR activation triggers a wealth of intracellular signaling pathways, including kinases (e.g., MAP kinases, PI3 kinase), adaptors, transcription factors (mainly nuclear factor-κB (NFκB)), and activator protein-1. Such signaling cascades foster the expression of cytokines, chemokines, enzymes, growth factors, and additional molecules that are required for tissue repair [47] and homeostasis restoration. However, there are situations in which such restoration may not adequately occur, resulting in persistent cellular stress that perpetuates and amplifies the inflammatory response. In these conditions, the process leads to significant alterations of tissue functions, with systemic and persistent derangements of homeostasis [48]. Diabetes and neurodegenerative diseases are typical examples of pathological processes associated with such chronic inflammatory changes [49].

The release of reactive oxygen species has long been recognized as a typical consequence of immune cell stimulation [50, 51], and both acute and chronic inflammatory states are coupled with significant alterations of redox equilibrium, due to the associated enhancement of oxidant generation [49, 52–54]. Accordingly, mitigating oxidative stress by the use of antioxidants has been evaluated as a potentially useful anti-inflammatory strategy in such conditions, as recently reviewed [55]. Overall, the results of innumerable studies have clearly pointed out the strong association between oxidative stress and inflammation.
Responses triggered by Toll-like receptors (TLRs) are conveyed primarily by the activation of NFκB, a master regulator of inflammation that controls the expression of hundreds of genes implicated in innate immune responses and is also a redox-sensitive nuclear factor involved in the control of a large number of normal cellular and tissue processes; NFκB therefore has a strategic position at the crossroad between oxidative stress and inflammation.

NFκB transcription factors are ubiquitously expressed in mammalian cells. These proteins are highly conserved across species, and in mammals the NFκB family (also known as the Rel family) consists of five members: p50, p52, p65 (also known as RelA), c-Rel, and RelB. Rel family members function as dimers, and the five subunits can homodimerize or heterodimerize. All family members share a Rel homology domain, which contains the crucial functional regions for DNA binding, dimerization, nuclear localization, and interactions with the IκB inhibitory proteins. NFκB dimers exist in a latent form in the cytoplasm, bound by the IκB inhibitory proteins; in the canonical NFκB activation pathway, NFκB-inducing stimuli activate the IκB kinase complex that phosphorylates IκB, leading to its ubiquitination and subsequent degradation. IκB degradation exposes the DNA-binding domain and nuclear localization sequence of NFκB and permits its stable translocation to the nucleus and the regulation of target genes [56]. Thus, activated NFκB enters the nucleus to induce transcription of a myriad of genes that mediate diverse cellular processes such as immunity, inflammation, proliferation, apoptosis, and cellular senescence [57].

Together with the evidence relating oxidative stress and inflammation to the pathophysiology of diabetes, studies performed in a variety of cell- and animal-based experimental systems also suggest that NFκB activation is a key event early in the pathobiology of this disease and its complications [27, 58, 59]. In fact, several studies have highlighted the activation of NFκB by hyperglycemia and its relationship with diabetic complications, as reviewed by Patel and Santani in 2009 [59]; thus, hyperglycemia triggers a number of mechanisms that are thought to underlie diabetic neuropathy. Studies in different experimental models have established that neuronal dysfunction is closely associated with the activation of NFκB and the expression of proinflammatory cytokines [60, 61]. Moreover, the NFκB pathway has been revealed as a key molecular system involved in pathological brain inflammation [62], and experimental studies [52] have also suggested that neuronal apoptosis, which is related to NFκB activation, may play an important role in neuronal loss and impaired cognitive function. Additionally, in the hippocampus of streptozotocin-treated rats, not only a strong increase in reactive oxygen species but also a persistent activation of NFκB is observed [23, 27]. Activated NFκB can induce cytotoxic products that exacerbate inflammation and oxidative stress and promote apoptosis [63], leading to oxidative stress-induced cell dysfunction or cell death [64].
However, it should not be forgotten that, although NFκB is widely known for its ubiquitous roles in inflammation and immune responses and in the control of cell division and apoptosis (and these roles are apparent in the nervous system), neurons and their neighboring cells employ the NFκB pathway for distinctive functions as well, ranging from development to the coordination of cellular responses to injury of the nervous system, and to brain-specific processes such as the synaptic signaling that underlies learning and memory [60]. Therefore, understanding the function of NFκB transcription factors in the nervous system is now a new frontier for the general field of NFκB research, for the investigation of transcriptional regulation in complex neuronal systems, and for the understanding of the pathological mechanisms of neurodegenerative diseases.

On the other hand, we cannot forget that type 2 diabetes (T2D) is an overnutrition-related disease which is usually preceded by the metabolic syndrome, a common metabolic disorder that results from the increasing prevalence of obesity and includes several interconnected abnormalities such as insulin resistance, impaired glucose tolerance, dyslipidemia, and high blood pressure [65]. Moreover, overnutrition is considered an independent environmental factor that is targeted by the innate immune system to trigger an atypical form of inflammation, which leads to metabolic dysfunctions, among others, in the central nervous system (CNS) and particularly in the hypothalamus [62, 66–69], which is known to govern several metabolic functions of the body including appetite control, energy expenditure, carbohydrate and lipid metabolism, and blood pressure homeostasis [70, 71].

Delving into the mechanisms that lead to this metabolic dysfunction, which also affects the CNS, it has recently been demonstrated that the activation of IKKβ/NFκB, and consequently of the proinflammatory pathway, is a relevant feature of different metabolic disorders related to overnutrition [72–74]. The effects of NFκB-mediated metabolic inflammation are deleterious and can give rise to impairments of normal intracellular signaling and disruptions of metabolic physiology [62] that have also been reported in the CNS, particularly in the hypothalamus, and could primarily account for the development of overnutrition-induced metabolic syndrome and related disorders such as obesity, insulin resistance, T2D, and obesity-related hypertension [68, 75, 76]. Moreover, intracellular oxidative stress and mitochondrial dysfunction seem to be upstream events that mediate hypothalamic NFκB activation under overnutrition, and in turn such metabolic inflammation is reciprocally related to the induction of various intracellular stresses such as mitochondrial oxidative stress and endoplasmic reticulum (ER) stress [62]. Thus, intracellular oxidative stress seems to contribute to the metabolic syndrome and related diseases, including T2D [39, 77, 78], and also to neurodegenerative diseases [79, 80]. In fact, when ROS homeostasis is disrupted, excessive ROS accumulate in the mitochondria and cytoplasm and can cause oxidative damage to cells [81]. Regarding the ER, existing evidence also suggests that ER stress is a key link to obesity, insulin resistance, and type 2 diabetes [82], since ER stress can also activate cellular inflammatory pathways which, in turn, impair cellular functions and lead to metabolic disorders [83] and neurodegenerative diseases [84, 85].
Indeed, unresolved ER stress can induce mitochondrial changes and finally cell apoptosis [86]. Moreover, brain ER stress is known to promote NFκB activation in the development of central metabolic dysregulations associated with inflammatory pathways, since intraventricular infusion of an ER stress inhibitor suppressed the activation of hypothalamic NFκB by high-fat diet feeding [68]. In addition, ER stress also appears to depend on IKKβ/NFκB pathway activity, because neither high-fat diet feeding nor central administration of a chemical ER stress inducer is able to induce hypothalamic ER stress in mice with central inhibition of the IKKβ/NFκB pathway [68, 87]. Finally, ER stress also causes cellular accumulation of ROS associated with oxidative stress [88], which in turn can reciprocally promote ER stress (see Figure 1).

Figure 1: Scheme summarizing the involvement of oxidative stress (mitochondrial dysfunction and ER stress), inflammation, and autophagy in the diabetic brain. GSH: reduced glutathione; GSSG: glutathione disulfide; SOD: superoxide dismutase; NADP+: nicotinamide adenine dinucleotide phosphate, oxidized; NADPH: nicotinamide adenine dinucleotide phosphate, reduced; NAD+: nicotinamide adenine dinucleotide, oxidized; NADH: nicotinamide adenine dinucleotide, reduced; CAT: catalase; IκBα: nuclear factor of kappa light polypeptide gene enhancer in B cells inhibitor, alpha; NFκB: nuclear factor kappa-light-chain-enhancer of activated B cells; ER: endoplasmic reticulum; GLU: glucose; INS: insulin; P: phosphate; MDA: malondialdehyde; ATP: adenosine triphosphate; ETC: electron transport chain; ROS: reactive oxygen species; MnSOD: manganese superoxide dismutase; GSR: glutathione reductase; CHOP: C/EBP homology protein; TNFα: tumor necrosis factor alpha; NOS: nitric oxide synthases.

In the case of ER stress, exposure to high glucose could induce ER stress through the generation of free radicals, aberrant protein glycosylation, or increased membrane and protein turnover. Zhang et al. have also reported that the expression of C/EBP homology protein (CHOP), the prominent mediator of ER stress-induced apoptosis, was markedly increased in the hippocampus of diabetic rats, and they have suggested that this CHOP/ER stress-mediated apoptosis may be involved in the hyperglycemia-induced impairment of hippocampal synapses and neurons and may promote diabetic cognitive impairment [89].

## 4. Autophagy and Diabetes

Autophagy plays a role in maintaining the function of organelles such as the mitochondria or the ER [90, 91]. In order to maintain a healthy and functional intracellular environment, cells must constantly clean up defective proteins (e.g., misfolded proteins overflowing from ER stress) or damaged organelles (e.g., mitochondria or ER rendered dysfunctional by prolonged oxidative stress). Although autophagy is known primarily as a prosurvival mechanism for cells facing stress conditions, accumulating evidence indicates that autophagy can contribute to cell death processes under pathological conditions [92, 93]. Thus, among others, autophagy defect has been linked to the development of metabolic syndrome, diabetes, alcoholism, and lipid abnormalities [94–96], and in the majority of these cases the underlying pathogenesis is related to the failure of the autophagy machinery to efficiently remove defective proteins or damaged organelles from the cytosol.
In fact, chronic intracellular stresses such as mitochondrial or ER stress seem to be the critical upstream events, since animal studies have shown that, in early stages, ER stress or oxidative stress induces adaptive autophagy upregulation, helping to restore intracellular homeostasis by disposing of a number of harmful molecules such as unfolded or misfolded proteins in the ER lumen, cytosolic proteins damaged by ROS, or even dysfunctional ER and mitochondria [97, 98]. However, when intracellular stresses remain unresolved, prolonged autophagy upregulation progresses into autophagy defect [62]; in fact, the decreased efficiency of the autophagic system with age has gained renewed attention as a result of the increasing number of reports supporting a role for defective autophagy in the pathogenesis of different age-related diseases, including diabetes [99]. In parallel, the autophagy pathway can relate to proinflammatory signaling via the oxidative stress pathway [100], since mitophagy/autophagy blockade leads to the accumulation of damaged, ROS-generating mitochondria, which in turn activates the NLRP3 inflammasome (a molecular platform activated upon signs of cellular “danger” to trigger innate immune defenses through the maturation of proinflammatory cytokines). Moreover, autophagy defect can induce NFκB-mediated inflammation [101, 102], even in the CNS, since Meng and Cai reported that defective hypothalamic autophagy led to hypothalamic inflammation, including the activation of the proinflammatory IκB kinase β pathway [103].

Although it is clear that diabetes affects both the mitochondria and the ER, the role of autophagy in diabetes or metabolism is still far from clear, and therefore the role of autophagy in the pathogenesis of diabetic complications is currently under intensive investigation.

As described by Hoffman et al. [104], specific candidates for the induction and stimulation of autophagy include insulin deficiency/resistance [105, 106]; deficiency of insulin growth factor-1 (IGF-1) and of the insulin growth factor-1 receptor (IGF-1R) [104, 107]; hyperglucagonemia [106]; and hyperglycemia [107]. Other candidates for the perturbation of autophagy include alteration of protein synthesis and degradation [108] due to the oxidative stress of RNA [109, 110], protein damage, and altered lipid metabolism [94, 111]; increased production of ketones and aldehydes [112, 113]; and lipid peroxidation [110, 114]. Furthermore, the accumulation of oxidized and glycated proteins, common protein modifications associated with diabetes, could be in part attributed to defective autophagy [115].

It is noteworthy that Hoffman et al. have reported that autophagy is increased in the brains of young T1D patients with chronic poor metabolic control and increased oxidative stress [116]. Moreover, the finding of significant expression of autophagic markers in both white and gray matter is in keeping with the structural deficits in young patients with T1D [117, 118] and the white matter atrophy in the frontal and temporal regions in these diabetic ketoacidosis cases [104]. However, there are still few studies focusing on the role of autophagy in the brains of T1D patients, and therefore further research is needed on the relationship between autophagy and the pathogenesis of early-onset diabetic encephalopathy in T1D.

---

*Source: 102158-2014-08-24.xml*
102158-2014-08-24_102158-2014-08-24.md
27,151
Diabetes and the Brain: Oxidative Stress, Inflammation, and Autophagy
María Muriach; Miguel Flores-Bellver; Francisco J. Romero; Jorge M. Barcia
Oxidative Medicine and Cellular Longevity (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102158
102158-2014-08-24.xml
--- ## Abstract Diabetes mellitus is a common metabolic disorder associated with chronic complications including a state of mild to moderate cognitive impairment, in particular psychomotor slowing and reduced mental flexibility, not attributable to other causes, and shares many symptoms that are best described as accelerated brain ageing. A common theory for aging and for the pathogenesis of this cerebral dysfunctioning in diabetes relates cell death to oxidative stress in strong association to inflammation, and in fact nuclear factorκB (NFκB), a master regulator of inflammation and also a sensor of oxidative stress, has a strategic position at the crossroad between oxidative stress and inflammation. Moreover, metabolic inflammation is, in turn, related to the induction of various intracellular stresses such as mitochondrial oxidative stress, endoplasmic reticulum (ER) stress, and autophagy defect. In parallel, blockade of autophagy can relate to proinflammatory signaling via oxidative stress pathway and NFκB-mediated inflammation. --- ## Body ## 1. Introduction Diabetes mellitus is a common metabolic disorder which is associated with chronic complications such as nephropathy, angiopathy, retinopathy, and peripheral neuropathy. However, as early as 1922 it was recognised that diabetes also can lead to cognitive dysfunction [1]. Since then, studies in experimental models and in patients observed alterations in neurotransmission, electrophysiological and structural abnormalities, and neurobehavioral alterations, in particular cognitive dysfunction and increased risk of depression [2]. Moreover, the observed cerebral manifestations of diabetes appear to develop insidiously, largely independent of diabetes-associated acute metabolic and vascular disturbances (such as severe hypo- and hyperglycemic episodes and stroke). Although the magnitude of these cognitive deficits appears to be mild to moderate, they can significantly hamper daily functioning, adversely affecting quality of life [3].In spite of this, the concept of central neuropathy has been controversial for more than 80 years now, but while trying to describe cognitive impairment in diabetes as a complication of the disease, the term “diabetic encephalopathy” was introduced in 1950 [4]. However, this term “encephalopathy” has not been widely accepted, probably among other reasons, because it does not seem to match with the mild cognitive problems usually seen in (nondemented) diabetic patients. More recently it has been suggested that the term “diabetes-associated cognitive decline” (DACD) describes a state of mild to moderate cognitive impairment, in particular psychomotor slowing and reduced mental flexibility, not attributable to other causes [5]. In addition, it is now clear that diabetes increases the risk of Alzheimer’s disease, vascular dementia, and any other type of dementia [6, 7]. ## 2. Pathophysiological Mechanisms Involved in Brain Damage in Diabetes Long-term effects of diabetes on the brain are manifested at structural, neurophysiological, and neuropsychological level, and multiple pathogenic factors appear to be involved in the pathogenesis of the cerebral dysfunctioning in diabetes, such as the hypoglycemic episodes, cerebrovascular alterations, the role of insulin in the brain, and the mechanisms of hyperglycemia induced damage [8]. 
Moreover, the emerging view is that the diabetic brain features many symptoms that are best described as accelerated brain ageing [9]. A common theory for aging and for the pathogenesis of this cerebral dysfunction in diabetes relates cell death to oxidative stress mediated by free radicals [10]. Thus, hyperglycemia reduces antioxidant levels and concomitantly increases the production of free radicals. These effects contribute to tissue damage in diabetes mellitus, leading to alterations in the redox potential of the cell with subsequent activation of redox-sensitive genes [11]. The brain is especially vulnerable to oxidative damage as a result of its high oxygen consumption rate, abundant lipid content, and relative paucity of antioxidant enzymes as compared to other tissues. Neuronal cells are particularly sensitive to oxidative insults, and therefore reactive oxygen species (ROS) are involved in many neurodegenerative processes, including those seen in diabetes [12–14]. Although under normal physiological conditions a balance exists between the production of ROS and the antioxidant mechanisms, it has been shown that in aging tissues oxidative stress increases due to, among other factors, decreased activity of antioxidant enzymes [15]. Earlier work and ample evidence have shown that peroxidative damage to lipids and proteins occurs with the aging process and that the products of these reactions accumulate in the brain with age [16–19]. Similarly, the activities of superoxide dismutase, catalase, and glutathione peroxidase, enzymes involved in the antioxidant defense of the diabetic brain, are decreased [20–23]. Other possible sources of oxidative stress in brain injury include autoxidation of glucose, lipid peroxidation, and decreased tissue concentrations of low molecular weight antioxidants such as reduced glutathione (GSH) [24–27]. This alteration of glutathione levels may be related to increased polyol pathway activity [28], as this leads to a depletion of NADPH, which is necessary for the enzymatic reduction of oxidized glutathione. Moreover, in these pathological conditions, cellular stress triggers mitochondrial oxidative damage, which may result in apoptosis and/or necrosis [29], and apoptosis induced by oxidative stress has been related to inhibition of neurogenesis [30]. It has been described that DM leads to alterations in the mitochondrial electron transport chain; ROS formation, mitochondrial energy metabolism dysfunction, and oxidative stress are thus recognized as the main players in diabetes-related complications [31]. In this sense, Cardoso et al. have shown that hippocampal mitochondria of streptozotocin (STZ)-induced diabetic rats presented higher levels of malondialdehyde (MDA) together with increased glutathione disulfide reductase activity and lower manganese superoxide dismutase (MnSOD) activity and glutathione-to-glutathione disulfide (GSH/GSSG) ratio. These mitochondria also showed an impaired oxidative phosphorylation system, characterized by decreased mitochondrial energization potential and ATP levels and a longer repolarization lag phase [32]. On the other hand, although insulin is best known for its involvement in the regulation of glucose metabolism in peripheral tissues, this hormone also affects numerous brain functions including cognition, memory, and synaptic plasticity through complex insulin/insulin receptor (IR) signaling pathways [33]. 
Therefore, considering the important role of insulin in many aspects of neuronal function in both the peripheral nervous system and the central nervous system, it is possible that perturbation of insulin signaling (both insulin deficiency in type 1 diabetes and hyperinsulinemia in type 2 diabetes) is involved in the pathogenesis of neurological diseases [34] and results in neurodegeneration. Until recently, the study of insulin resistance was mainly focused on metabolic tissues such as muscle and adipose tissue; recent data, however, suggest that insulin resistance also develops in the nervous system. Although neurons are not insulin-dependent, they are insulin-responsive [35]. Insulin receptors are widely expressed in the brain, including the olfactory bulb, cerebral cortex, hippocampus, hypothalamus, and amygdala. Insulin resistance in sensory neurons makes cells respond inappropriately to growth factor signals, and this impairment may contribute to the development of neurodegeneration and subsequent diabetic neuropathy. Moreover, insulin regulates mitochondrial metabolism and oxidative capacity through PI3K/Akt signaling [36, 37]; therefore, decreased Akt signaling due to hyperinsulinemia-mediated insulin resistance may have profound effects on mitochondrial function in neurons and result in subsequently increased oxidative stress [38]. In fact, two of the leading theories that have emerged to explain insulin resistance center on mitochondrial function/dysfunction, although interestingly with opposite views. In one theory, inherited or acquired mitochondrial dysfunction is thought to cause an accumulation of intramyocellular lipids that leads to insulin resistance, implying that strategies to accelerate flux through β-oxidation should improve insulin sensitivity [39]. In the second theory, the impact of cellular metabolic imbalance is viewed in the context of cellular and mitochondrial bioenergetics, positing that excess fuel relative to demand increases mitochondrial oxidant production and emission, ultimately leading to the development of insulin resistance. In this case, elevated flux via β-oxidation in the absence of added demand is viewed as an underlying cause of the disease. Therefore, mitochondrial-derived oxidative stress is fairly well established as an underlying mechanism responsible for the pathological complications associated with diabetes [40], but it also has a role as a primary factor in the development of insulin resistance (and subsequent overt diabetes), since strong experimental evidence from various animal models utilizing mitochondrially targeted approaches has established a link between mitochondrial-derived ROS and insulin resistance in vivo [41, 42]. In conclusion, convincing evidence is now available from previous studies to support the role of oxidative stress in the development of neuronal injury in the diabetic brain and the beneficial effects of antioxidants. More concretely, our group has studied the beneficial effect of lutein and DHA in the brain of diabetic animals and the way these substances ameliorate the oxidative stress present in diabetes [27, 43]. However, we must take into account that there are also studies reporting a lack of effect of antioxidants on diabetic complications. Thus, Je et al. reported that vitamin C supplementation alone shows limited therapeutic benefit in type 1 diabetes and is more commonly used in combination with vitamin E or other agents [44]. 
Moreover, most of the evidence favoring increased oxidative stress in diabetes comes from studies in experimental models of diabetes in which the degree of hyperglycemia is excessive. Supportive evidence is also available from studies of human subjects with diabetes; however, interventional studies using select antioxidant supplements have failed to show significant benefits of supplementation, as reviewed by Hasanain and Mooradian [45]. The completion of some of the ongoing large clinical trials will shed additional light on the clinical merit of antioxidant supplementation. ## 3. Inflammation in Diabetes Inflammation is a fundamental biological process which underlies a large number of acute and chronic pathological conditions; it occurs in response to any alteration of tissue integrity in order to restore tissue homeostasis through the induction of various repair mechanisms. Proper regulation of these mechanisms is essential to prevent uncontrolled amplification of the initial inflammatory response and a shift from tissue repair towards collateral damage and disease development [46]. The appropriate recognition of danger by the host is essential for the elaboration of proper adaptive responses. Sensing of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) is ensured by a complex set of pattern-recognition receptors (PRRs), which include, among others, the receptor for advanced glycation end-products (RAGE). PRR activation triggers a wealth of intracellular signaling pathways, including kinases (e.g., MAP kinases, PI3 kinase), adaptors, transcription factors (mainly nuclear factor-κB (NFκB)), and activator protein-1. Such signaling cascades foster the expression of cytokines, chemokines, enzymes, growth factors, and additional molecules that are required for tissue repair [47] and homeostasis restoration. However, there are situations in which such restoration may not adequately occur, resulting in persistent cellular stress that perpetuates and amplifies the inflammatory response. In these conditions, the process leads to significant alterations of tissue functions, with systemic and persistent derangements of homeostasis [48]. Diabetes and neurodegenerative diseases are typical examples of pathological processes associated with such chronic inflammatory changes [49]. The release of reactive oxygen species has long been recognized as a typical consequence of immune cell stimulation [50, 51], and both acute and chronic inflammatory states are coupled with significant alterations of redox equilibrium, due to the associated enhancement of oxidant generation [49, 52–54]. Accordingly, mitigating oxidative stress by the use of antioxidants has been evaluated as a potentially useful anti-inflammatory strategy in such conditions, as recently reviewed [55]. Overall, the results of innumerable studies have clearly pointed out the strong association between oxidative stress and inflammation. Responses triggered by Toll-like receptors (TLRs) are conveyed primarily by the activation of NFκB, which is a master regulator of inflammation, controlling the expression of hundreds of genes implicated in innate immune responses, and also a redox-sensitive nuclear factor involved in the control of a large number of normal cellular and tissue processes; NFκB therefore has a strategic position at the crossroad between oxidative stress and inflammation. NFκB transcription factors are ubiquitously expressed in mammalian cells. 
These proteins are highly conserved across species, and in mammals the NFκB family (also known as the Rel family) consists of five members: p50, p52, p65 (also known as RelA), c-Rel, and RelB. Rel family members function as dimers, and the five subunits can homodimerize or heterodimerize. All family members share a Rel homology domain, which contains the crucial functional regions for DNA binding, dimerization, nuclear localization, and interactions with the IκB inhibitory proteins. NFκB dimers exist in a latent form in the cytoplasm bound by the IκB inhibitory proteins; when NFκB-inducing stimuli activate the IκB kinase complex that phosphorylates IκB, this leads to its ubiquitination and subsequent degradation in the canonical NFκB activation pathway. IκB degradation exposes the DNA-binding domain and nuclear localization sequence of NFκB and permits its stable translocation to the nucleus and the regulation of target genes [56]. Thus, activated NFκB enters the nucleus to induce transcription of a myriad of genes that mediate diverse cellular processes such as immunity, inflammation, proliferation, apoptosis, and cellular senescence [57]. Together with the evidence relating oxidative stress and inflammation to the pathophysiology of diabetes, studies performed in a variety of cell- and animal-based experimental systems also suggest that NFκB activation is a key event early in the pathobiology of this disease and its complications [27, 58, 59]. In fact, several studies have highlighted the activation of NFκB by hyperglycemia and its relationship with diabetic complications, as reviewed by Patel and Santani in 2009 [59]; thus, hyperglycemia triggers a number of mechanisms that are thought to underlie diabetic neuropathy. Studies in different experimental models have established that neuronal dysfunction is closely associated with the activation of NFκB and the expression of proinflammatory cytokines [60, 61]. Moreover, the NFκB pathway has been revealed as a key molecular system involved in pathological brain inflammation [62], and experimental studies [52] have suggested that neuronal apoptosis, which is related to NFκB activation, may play an important role in neuronal loss and impaired cognitive function. Additionally, in the hippocampus of streptozotocin-treated rats, not only a strong increase in reactive oxygen species but also a persistent activation of NFκB is observed [23, 27]. Activated NFκB can induce cytotoxic products that exacerbate inflammation and oxidative stress and promote apoptosis [63], leading to oxidative stress-induced cell dysfunction or cell death [64]. However, it should not be forgotten that, although NFκB is widely known for its ubiquitous roles in inflammation and immune responses and in the control of cell division and apoptosis (and these roles are apparent in the nervous system), neurons and their neighboring cells employ the NFκB pathway for distinctive functions as well, ranging from development to the coordination of cellular responses to injury of the nervous system and to brain-specific processes such as the synaptic signaling that underlies learning and memory [60]. 
Therefore, understanding the function of NFκB transcription factors in the nervous system is now a new frontier for the general field of NFκB research, for the investigation of transcriptional regulation in complex neuronal systems, and for the understanding of the pathological mechanisms of neurodegenerative diseases. On the other hand, we cannot forget that type 2 diabetes (T2D) is an overnutrition-related disease which is usually preceded by the metabolic syndrome, a common metabolic disorder that results from the increasing prevalence of obesity and includes several interconnected abnormalities such as insulin resistance, impaired glucose tolerance, dyslipidemia, and high blood pressure [65]. Moreover, overnutrition is considered an independent environmental factor that is targeted by the innate immune system to trigger an atypical form of inflammation, which leads to metabolic dysfunctions, among other sites in the central nervous system (CNS) and particularly in the hypothalamus [62, 66–69], which is known to govern several metabolic functions of the body including appetite control, energy expenditure, carbohydrate and lipid metabolism, and blood pressure homeostasis [70, 71]. Delving into the mechanisms that lead to this metabolic dysfunction, which also affects the CNS, it has recently been demonstrated that activation of the IKKβ/NFκB proinflammatory pathway is a relevant feature in different metabolic disorders related to overnutrition [72–74]. The effects of NFκB-mediated metabolic inflammation are deleterious and can give rise to impairments of normal intracellular signaling and disruptions of metabolic physiology [62]; these have also been reported in the CNS, particularly in the hypothalamus, and could primarily account for the development of overnutrition-induced metabolic syndrome and related disorders such as obesity, insulin resistance, T2D, and obesity-related hypertension [68, 75, 76]. Moreover, intracellular oxidative stress and mitochondrial dysfunction seem to be upstream events that mediate hypothalamic NFκB activation under overnutrition, and in turn such metabolic inflammation is reciprocally related to the induction of various intracellular stresses such as mitochondrial oxidative stress and endoplasmic reticulum (ER) stress [62]. Thus, intracellular oxidative stress seems to contribute to metabolic syndrome and related diseases, including T2D [39, 77, 78], and also to neurodegenerative diseases [79, 80]. In fact, when ROS homeostasis is disrupted, excessive ROS accumulate in the mitochondria and cytoplasm and can cause oxidative damage to cells [81]. Regarding the ER, existing evidence also suggests that ER stress is a key link to obesity, insulin resistance, and type 2 diabetes [82], since ER stress can also activate cellular inflammatory pathways which, in turn, impair cellular functions and lead to metabolic disorders [83] and neurodegenerative diseases [84, 85]. Indeed, unresolved ER stress can induce mitochondrial changes and ultimately cell apoptosis [86]. Moreover, brain ER stress is known to promote NFκB activation in the development of central metabolic dysregulation associated with inflammatory pathways, since intraventricular infusion of an ER stress inhibitor suppressed the activation of hypothalamic NFκB by high-fat diet feeding [68]. 
In addition, ER stress also appears to depend on IKKβ/NFκB pathway activity, because neither high-fat diet feeding nor central administration of a chemical ER stress inducer is able to induce hypothalamic ER stress in mice with central inhibition of the IKKβ/NFκB pathway [68, 87]. Finally, ER stress also causes cellular accumulation of ROS associated with oxidative stress [88], which in turn can reciprocally promote ER stress (see Figure 1). Figure 1: Scheme summarizing the involvement of oxidative stress (mitochondrial dysfunction and ER stress), inflammation, and autophagy in the diabetic brain. GSH: reduced glutathione; GSSG: glutathione disulfide; SOD: superoxide dismutase; NADP+: nicotinamide adenine dinucleotide phosphate, oxidized; NADPH: nicotinamide adenine dinucleotide phosphate, reduced; NAD+: nicotinamide adenine dinucleotide, oxidized; NADH: nicotinamide adenine dinucleotide, reduced; CAT: catalase; IκBα: nuclear factor of kappa light polypeptide gene enhancer in B cells inhibitor, alpha; NFκB: nuclear factor kappa-light-chain-enhancer of activated B cells; ER: endoplasmic reticulum; GLU: glucose; INS: insulin; P: phosphate; MDA: malondialdehyde; ATP: adenosine triphosphate; ETC: electron transport chain; ROS: reactive oxygen species; MnSOD: manganese superoxide dismutase; GSR: glutathione reductase; CHOP: C/EBP homology protein; TNFα: tumor necrosis factor alpha; NOS: nitric oxide synthases. In the case of ER stress, exposure to high glucose could induce ER stress through the generation of free radicals, aberrant protein glycosylation, or increased membrane and protein turnover. Zhang et al. have also reported that the expression of C/EBP homology protein (CHOP), the prominent mediator of ER stress-induced apoptosis, was markedly increased in the hippocampus of diabetic rats and have suggested that this CHOP/ER stress-mediated apoptosis may be involved in hyperglycemia-induced impairment of hippocampal synapses and neurons and may promote diabetic cognitive impairment [89]. ## 4. Autophagy and Diabetes Autophagy plays a role in the maintenance of the function of organelles such as the mitochondria or the ER [90, 91]: in order to maintain a healthy and functional intracellular environment, cells must constantly clean up defective proteins (e.g., misfolded proteins overflowing from ER stress) or damaged organelles (e.g., dysfunctional mitochondria or ER from prolonged oxidative stress). Although autophagy is known primarily as a prosurvival mechanism for cells facing stress conditions, accumulating evidence indicates that autophagy can contribute to cell death processes under pathological conditions [92, 93]. Thus, among others, autophagy defect has been linked to the development of metabolic syndrome, diabetes, alcoholism, and lipid abnormalities [94–96], and in the majority of these cases the underlying pathogenesis is related to the failure of the autophagy machinery to efficiently remove defective proteins or damaged organelles from the cytosol. In fact, chronic intracellular stress such as mitochondrial or ER stress seems to be the critical upstream event, since animal studies have shown that in early stages ER stress or oxidative stress induces adaptive autophagy upregulation, helping to restore intracellular homeostasis by disposing of a number of harmful molecules such as unfolded or misfolded proteins in the ER lumen, cytosolic proteins damaged by ROS, or even dysfunctional ER and mitochondria [97, 98]. 
However, when intracellular stresses remain unresolved, prolonged autophagy upregulation progresses into autophagy defect [62]; in fact, the decreased efficiency of the autophagic system with age has gained renewed attention as a result of the increasing number of reports supporting a role for defective autophagy in the pathogenesis of different age-related diseases, including diabetes among others [99]. In parallel, the autophagy pathway can be linked to proinflammatory signaling via the oxidative stress pathway [100], since mitophagy/autophagy blockade leads to the accumulation of damaged, ROS-generating mitochondria, and this in turn activates the NLRP3 inflammasome (a molecular platform activated upon signs of cellular “danger” to trigger innate immune defenses through the maturation of proinflammatory cytokines). Moreover, autophagy defect can induce NFκB-mediated inflammation [101, 102], even in the CNS, since Meng and Cai reported that defective hypothalamic autophagy led to hypothalamic inflammation, including the activation of the proinflammatory IκB kinase β pathway [103]. Although it is clear that diabetes affects both the mitochondria and the ER, the role of autophagy in diabetes or metabolism is as yet far from clear, and therefore the role of autophagy in the pathogenesis of diabetic complications is currently under intensive investigation. As described by Hoffman et al. [104], specific candidates for induction and stimulation of autophagy include insulin deficiency/resistance [105, 106]; deficiency of insulin growth factor-1 (IGF-1) and insulin growth factor-1 receptor (IGF-1R) [104, 107]; hyperglucagonemia [106]; and hyperglycemia [107]. Other candidates for perturbation of autophagy include alteration of protein synthesis and degradation [108] due to oxidative damage to RNA [109, 110], protein damage, and altered lipid metabolism [94, 111]; increased production of ketones and aldehydes [112, 113]; and lipid peroxidation [110, 114]. Furthermore, the accumulation of oxidized and glycated proteins, common protein modifications associated with diabetes, could be in part attributed to defective autophagy [115]. It is noteworthy that Hoffman et al. have reported that autophagy is increased in the brains of young T1D patients with chronic poor metabolic control and increased oxidative stress [116]. Moreover, the finding of significant expression of autophagic markers in both white and gray matter is in keeping with the structural deficits in young patients with T1D [117, 118] and the white matter atrophy in the frontal and temporal regions in these diabetic ketoacidosis cases [104]. However, there are still few studies focusing on the role of autophagy in the brains of T1D patients, and therefore further research is needed on the relationship between autophagy and the pathogenesis of early-onset diabetic encephalopathy in T1D. --- *Source: 102158-2014-08-24.xml*
# 5-Lipoxygenase-Dependent Recruitment of Neutrophils and Macrophages by Eotaxin-Stimulated Murine Eosinophils **Authors:** Ricardo Alves Luz; Pedro Xavier-Elsas; Bianca de Luca; Daniela Masid-de-Brito; Priscila Soares Cauduro; Luiz Carlos Gondar Arcanjo; Ana Carolina Cordeiro Faria dos Santos; Ivi Cristina Maria de Oliveira; Maria Ignez Capella Gaspar-Elsas **Journal:** Mediators of Inflammation (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102160 --- ## Abstract The roles of eosinophils in antimicrobial defense remain incompletely understood. In ovalbumin-sensitized mice, eosinophils are selectively recruited to the peritoneal cavity by antigen, eotaxin, or leukotriene (LT) B4, a 5-lipoxygenase (5-LO) metabolite. 5-LO blockade prevents responses to both antigen and eotaxin. We examined responses to eotaxin in the absence of sensitization and their dependence on 5-LO. BALB/c or PAS mice and their mutants (5-LO-deficient ALOX; eosinophil-deficient GATA-1) were injected i.p. with eotaxin, eosinophils, or both, and leukocyte accumulation was quantified up to 24 h. Significant recruitment of eosinophils by eotaxin in BALB/c, up to 24 h, was accompanied by much larger numbers of recruited neutrophils and monocytes/macrophages. These effects were abolished by eotaxin neutralization and by the 5-LO-activating protein inhibitor MK886. In ALOX (but not PAS) mice, eotaxin recruitment was abolished for eosinophils and halved for neutrophils. In GATA-1 mutants, eotaxin recruited neither neutrophils nor macrophages. Transfer of eosinophils cultured from bone marrow of BALB/c donors, or from ALOX donors, into GATA-1 mutant recipients, i.p., restored eotaxin recruitment of neutrophils and showed that the critical step dependent on 5-LO is the initial recruitment of eosinophils by eotaxin, not the secondary neutrophil accumulation. Eosinophil-dependent recruitment of neutrophils in naive BALB/c mice was associated with increased binding of bacteria. --- ## Body ## 1. Introduction Eosinophils are a minority granulocyte population, which contributes to the pathophysiology of allergic inflammation, hypereosinophilic syndromes, and some malignancies [1–4]. A role for eosinophils in resistance to multicellular (helminth) parasites has long been proposed, based on the strong association of blood and tissue eosinophilia with worm infections and on the evidence that eosinophils can damage or kill helminths in specific experimental conditions [5, 6]. Nevertheless, a generally protective in vivo role for eosinophils against worm infections remains elusive [4], partly because host responses to multicellular parasites represent a compromise between the competing needs to reduce parasite burden and to limit immune-mediated tissue damage, to which eosinophils significantly contribute [7, 8]. Alternatively, mechanisms through which eosinophils may directly fight infection by various classes of microbial (bacterial, fungal, protozoal, or viral) pathogens include secretion of antimicrobial defensin-like proteins [9]; release of sticky cellular contents that capture pathogens, closely resembling neutrophil extracellular traps [10]; secretion of halogen microbicidal derivatives [11]; release of enzymes with antiviral activity and other roles in innate immunity [12, 13]; and secretion of a wide array of immunoregulatory cytokines [14]. 
While the contribution of eosinophils to immunity as directly antimicrobial effector cells is likely limited by their scarcity, they could be helpful in conditions in which neutrophil access or macrophage function is reduced: for neutrophils, tissue entry is restricted in normal conditions [15]; for macrophages, microbicidal effector function is highly dependent on appropriate activating signals, including cytokines [16]. By contrast, eosinophils are far more numerous in normal tissues than in blood and home to mucosal interfaces with the environment [2–4], which represent potential gateways for microbial infection. They are a source of numerous immunoregulatory cytokines [13] and lipid mediators [17], which might play a role in recruitment/activation of other leukocyte subtypes. Because of the scarcity of eosinophils, many important observations were made in conditions in which their numbers are already increased, due to allergic sensitization or experimental helminth infection, such as the discovery of eotaxin (CCL11), a chemoattractant that induces eosinophil accumulation in the skin of sensitized (i.e., eosinophilic) guinea pigs [2, 3, 18]. While other potent eosinophil chemoattractants, such as PGD2 [19, 20] and oxo-ETE [21], have also been characterized, many factors reinforce the current understanding of eotaxin as a specialized chemoattractant which acts primarily on granulocyte subtypes relevant to allergy and worm infections [2–4]. These factors include the reported selectivity of eotaxin for the eosinophil [4, 17, 22, 23] and basophil [20, 24, 25] lineages and its interaction with hematopoietic cytokines, such as IL-5 [26, 27] and GM-CSF [27], which promotes eosinophil production in bone marrow [27] and extramedullary sites [26], ultimately inducing blood and tissue eosinophilia [4]. The alternative view, namely, that eotaxin is part of a broader regulatory network comprising multiple cell populations in addition to eosinophils and basophils, is also suggested by observations of a wide variety of eotaxin effects, including its ability to attract neutrophils and macrophages [28] and smooth muscle cells [29]. Eotaxin, which is also produced by fibroblasts [30, 31], has been associated with fibrotic processes in several settings [32, 33]. Within this wider framework, we have reexamined whether, in a nonsensitized host, eotaxin would recruit other leukocyte populations besides eosinophils and basophils and further examined whether its effects were dependent on 5-lipoxygenase (5-LO), the key enzyme in leukotriene production by eosinophils [17, 27]. The evaluation of both aspects was prompted by observations in mice which develop eosinophilia in response to subcutaneously implanted insoluble antigen pellets [34]. While i.p. challenge of implant recipients with soluble allergen selectively recruited eosinophils to the peritoneal cavity, this effect was blocked by the 5-LO-activating protein inhibitor, MK886, and duplicated by the 5-LO product, LTB4, neither of which is eosinophil-selective. Importantly, eotaxin, which duplicated the effects of allergen, was equally blocked by MK886. Equally unexpected was the failure of LTB4, a potent neutrophil chemoattractant, to recruit neutrophils, while it effectively attracted eosinophils in this allergic model. These observations raised the possibility that the eosinophil-selective effect of both chemoattractants (eotaxin and LTB4) observed in vivo was dependent on the host being sensitized. 
We tested this hypothesis first for eotaxin, by examining its effects in a naive host, as well as the effect of 5-LO blockade on the effectiveness of eotaxin. We report that eotaxin recruits a mixed leukocyte population to the peritoneal cavity of naive mice and provide evidence of essential roles for both 5-LO and eosinophils in the accumulation and functional activation of neutrophils in this model. ## 2. Materials and Methods ### 2.1. Reagents RPMI 1640 medium (SH30011.01) and fetal calf serum (SH30088.03) were from Hyclone (Logan, UT); penicillin 100 U/mL (PEN-B), streptomycin 100 mg/mL (S9137), ovalbumin (grade II and grade IV), isotonic Percoll, and Histopaque density 1.083 solution were from Sigma-Aldrich (St. Louis, MO); recombinant murine eotaxin (250-01) was from PeproTech (Rocky Hill, NJ); and recombinant murine IL-5 was from R&D. MK-886 (475889; Cayman Chemicals, Ann Arbor, MI), dissolved in 0.1% methylcellulose, was given at 1 mg/kg as an intragastric bolus in a 0.2 mL volume [33]. Rat anti-murine eotaxin monoclonal neutralizing antibody (clone 42285) and rat anti-murine IgG2a control monoclonal antibody of matched isotype (clone 54447) were from R&D (Minneapolis, MN). ### 2.2. Animals and Animal Handling Inbred mice, male and female, aged 8–10 weeks, provided in SPF condition by CECAL-FIOCRUZ (Rio de Janeiro), were of the following strains: BALB/c; ALOX (5-LO-deficient) and PAS-129 (wild-type control of the same background) [27]; and BALB/c mutants lacking an enhancer element in the promoter region of the gene coding for the GATA-1 transcription factor [35], required for eosinophil lineage determination (GATA-1 mice, for short). Animal housing, care, and handling followed institutionally approved (CEUA number L-010/04, CEUA number L-002/09) protocols. Naive animals received eotaxin i.p., in 0.2 mL of RPMI 1640 medium with penicillin/streptomycin. Controls received medium (RPMI). After the indicated times, animals were killed in a CO2 chamber, and peritoneal lavage was carried out with 10 mL chilled RPMI. For sensitized animals, see Section 2.6. ### 2.3. Neutralization of Eotaxin Activity Eotaxin (50 ng) was incubated with 5 μg anti-eotaxin neutralizing antibody or 5 μg isotype-matched anti-IgG2a antibody, in a final volume of 200 μL, for 30 minutes, before injection into each BALB/c recipient. Four hours later, peritoneal lavage fluid was collected from the injected mice and handled as detailed above. ### 2.4. Collection, Enumeration, and Staining of Peritoneal Leukocytes Peritoneal lavage cells were washed at 500 × g and resuspended in 2 mL RPMI. Total counts were carried out in a hemocytometer after a 1:10 dilution in Turk’s solution. Differential counts were done on Giemsa-stained (ice-cold methanol-fixed, air-dried, and Giemsa-stained for 5 minutes) cytocentrifuge smears (500 rpm, 8 minutes in a Cytospin 3, Thermo Scientific, Waltham, MA), by counting at least 300 cells at 1000× magnification under oil. ### 2.5. Bacteria and Phagocytosis Assay We used nonpathogenic Escherichia coli bacteria (clone DH5, provided by Dr. Z. Vasconcelos, from INCA and FIOCRUZ, Rio de Janeiro) genetically altered to constitutively express the gene for green fluorescent protein (GFP), grown in LB broth. The cells obtained in the peritoneal lavage of BALB/c mice, induced by eotaxin or RPMI, were subjected to total cell counts as well as differential neutrophil counts as described above. Then 5 × 10^5 neutrophils were incubated for 30 minutes, in the dark at room temperature, with the bacteria in a 1:400 proportion. 
The cells were then washed and the resulting cell suspension was run in a FACScalibur flow cytometer (Becton Dickinson Immunocytometry Systems, San Jose, CA), with the acquisition of at least 50,000 events, and analyzed with the help of Summit 4.3 software (Dako Cytomation, UK). ### 2.6. Eosinophil Procedures For eosinophil transfer studies, where indicated, BALB/c, ALOX, or PAS mice were sensitized (100 μg ovalbumin grade IV and 1.6 mg alum in a final volume of 400 μL saline per animal, two s.c. injections in the dorsum, at days 0 and 7) and challenged (ovalbumin grade IV, 1 μg in 400 μL saline i.p. at day 14) according to Ebihara and colleagues [36]. Bone marrow was collected 48 h after i.p. challenge, examined, and cultured as previously described elsewhere [37]. Briefly, bone-marrow cultures were established for 5 days at 37°C in 95% air/5% CO2, in RPMI 1640 with 10% FBS and 5 ng/mL IL-5, at a culture density of 1 × 10^6 cells/mL. The nonadherent cells were then collected and loaded on top of 3 mL of a Histopaque-1083 solution, followed by centrifugation at 400 × g, 20°C, 35 minutes, without brakes. The mononuclear cell ring and the supernatant were discarded; the granulocyte-rich pellet was collected, washed and resuspended in 3 mL RPMI, and used for total and differential counts as above. The suspension contained ≥80% eosinophils, with no neutrophils, and the minor contaminant population consisted of macrophages alone, which do not interfere with the interpretation of transfer experiments. Where indicated, naive GATA-1, ALOX, or PAS recipient mice were injected i.p. with 1 × 10^6 eosinophils from the appropriate donors (see below), followed by eotaxin 50 ng/mL, and leukocyte accumulation was monitored in the peritoneal lavage fluid 4 h after eotaxin injection, as above. For flow cytometric studies of CCR3 expression, the following modification of this protocol was adopted, for it yielded eosinophils of higher purity: sensitized mice were challenged twice, initially by aerosol exposure (1 h, ovalbumin grade II, 2.5%, w/v, at day 14) and 7 h later with soluble ovalbumin i.p. (grade IV, 1 μg in 400 μL saline). Bone marrow was collected 24 h after aerosol challenge and cultured as above, after separation on a Percoll gradient (75%/60%/45% isotonic Percoll, 100 × g, 20 min, room temperature). The hematopoietic cells from the 45%/60% interface [38] were cultured at a lower IL-5 concentration (2.5 ng/mL) for twice as long (10 days), yielding a population containing at least 95% eosinophils, with mature morphology. Contaminants at day 10 were degenerating (nonviable) mononuclear and stromal cells. ### 2.7. Statistical Analyses All data were analyzed with Systat for Windows 5.04 (Systat, Inc., Evanston, IL, USA), using the two-tailed t-test for pairwise comparisons. Where indicated, ANOVA was also used for multiple comparisons, with the Tukey HSD correction and the Bonferroni correction for groups of equal and unequal size, respectively. 
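For readers who want to reproduce this analysis pattern outside Systat, the following is a minimal sketch in Python (SciPy and statsmodels), assuming hypothetical leukocyte counts; the group values, group sizes, and variable names are illustrative placeholders and are not taken from the paper.

```python
# Illustrative sketch of the Section 2.7 analysis pattern: a two-tailed
# t-test for a pairwise comparison, then one-way ANOVA followed by a
# Tukey HSD post hoc test for multiple comparisons (equal group sizes).
# All counts below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rpmi    = np.array([1.9, 2.3, 2.1, 2.5, 2.0])   # hypothetical counts (x10^6/cavity)
eotaxin = np.array([4.8, 5.6, 5.1, 6.0, 5.4])
mk886   = np.array([2.2, 2.0, 2.6, 2.4, 2.1])

# Pairwise comparison: two-tailed t-test
t_stat, p_two_tailed = stats.ttest_ind(eotaxin, rpmi)
print(f"eotaxin vs. RPMI: t = {t_stat:.2f}, P = {p_two_tailed:.4f}")

# Multiple comparisons: one-way ANOVA, then Tukey HSD
f_stat, p_anova = stats.f_oneway(rpmi, eotaxin, mk886)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
values = np.concatenate([rpmi, eotaxin, mk886])
labels = ["RPMI"] * 5 + ["eotaxin"] * 5 + ["MK886"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
# For groups of unequal size the paper instead applies a Bonferroni
# correction, e.g. statsmodels.stats.multitest.multipletests(pvals,
# method="bonferroni").
```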
## 3. Results ### 3.1. Mixed Leukocyte Migration Induced by Eotaxin We initially examined whether i.p. injection of eotaxin at various doses would recruit eosinophils in a relatively short period (4 h) and whether eosinophil accumulation would be selective, as previously observed in sensitized mice, or accompanied by migration of other leukocyte populations. As shown in Figure 1(a), leukocytes accumulated in response to 50 and 100 ng/cavity eotaxin, in amounts that were significantly different from the RPMI controls (0 ng/cavity) as well as from lower doses of eotaxin (10 and 25 ng/cavity). These leukocytes included variable numbers of eosinophils (Figure 1(b)), monocytes/macrophages [39, 40] (Figure 1(c)), and neutrophils (Figure 1(d)). The morphology of all three leukocyte populations was recognizable without ambiguity, as shown in a representative photomicrograph (supplementary Figure 1 available online at http://dx.doi.org/10.1155/2014/102160). Lymphocyte and basophil migration was not significant at any of these doses (not shown). Importantly, neutrophils and macrophages greatly outnumbered eosinophils, with counts, respectively, 8.2- and 9.9-fold greater in the experiment shown. For all three leukocyte populations, the dose-response relationships were identical, and in subsequent experiments 50 ng/mL was used as the standard stimulus, since no improvement was observed at a higher dose. Accumulation of different leukocyte subtypes induced by eotaxin in naive mice: dose-response relationship. 
BALB/c mice were injected with the indicated doses of eotaxin (black bars), and the peritoneal lavage fluid collected 4 h later was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the negative (RPMI) control (0 ng/mL eotaxin, open bars). (a) Data from 3–18 experiments. (b)–(d) Data from 6–11 experiments. Despite the heterogeneity of the recruited leukocyte population, neutralization of eotaxin with a specific monoclonal antibody brought leukocyte accumulation to negative control levels (Figure 2; compare with Figure 1(a) for the 0–25 ng eotaxin dose range), while control antibody of the same isotype with irrelevant specificity had no effect. This confirms that the stimulus for recruitment of all three leukocyte populations is eotaxin itself, not an unidentified contaminant, which by definition would not be neutralized by specific antibody. Figure 2: Accumulation of different leukocyte subtypes induced by eotaxin: effect of specific antibody neutralization. Eotaxin was preincubated with specific neutralizing monoclonal antibody (hatched bar), or with irrelevant isotype-matched monoclonal antibody (open bar), before i.p. injection in BALB/c mice. Controls (black bar) received eotaxin but no antibody. Peritoneal lavage fluid, collected 4 h later, was used for total leukocyte quantitation. Data are mean ± SEM. **, P ≤ 0.01, for the differences relative to the positive (eotaxin) and specificity (irrelevant antibody) controls. Data from 5–18 experiments. The kinetics of recruitment of this mixed leukocyte population by eotaxin in naive BALB/c mice shows significant accumulation as early as 2 h, with a maximum at 4 h, thereafter decreasing but remaining significant at 12 and 24 h (Figure 3(a)). Very early arrival of eosinophils can be observed (significant from 2 h and remaining so at 12 and 24 h, Figure 3(b)). By contrast, accumulation of both monocytes/macrophages (Figure 3(c)) and neutrophils (Figure 3(d)) became significant only at 4 h. Significant accumulation was also observed at 12 and 24 h for monocytes/macrophages and at 12 h for neutrophils. Hence, monocyte/macrophage and neutrophil accumulation followed eosinophil entry. Eosinophils outlasted neutrophils, but not monocytes/macrophages, in the observation period. For subsequent experiments, the 4 h observation time was chosen, because it showed significant accumulation of eosinophils, monocytes/macrophages, and neutrophils in naive BALB/c mice. Accumulation of different leukocyte subtypes induced by eotaxin: kinetics. BALB/c mice were injected with 50 ng eotaxin i.p. (black bars), and the peritoneal lavage fluid collected after the indicated periods was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the respective negative (RPMI) controls (open bars). Data from 3–11 experiments. ### 3.2. Relationship to 5-LO We first evaluated the effect of eotaxin in naive BALB/c mice pretreated with the FLAP inhibitor MK886 or vehicle. MK886 abolished mixed leukocyte recruitment by eotaxin (Figure 4(a)). By contrast, vehicle-pretreated control animals showed significant leukocyte recruitment. MK886 was very effective in preventing eosinophil accumulation (Figure 4(b)). 
BALB/c mice responded to eotaxin with significant monocyte/macrophage accumulation by 4 h, which was abolished by MK886 (Figure 4(c)). MK886-pretreated BALB/c mice showed no neutrophil migration in response to eotaxin, while migration was significant in vehicle-treated controls (Figure 4(d)). Accumulation of different leukocyte types induced by eotaxin: effect of MK886. BALB/c mice were pretreated with vehicle (methylcellulose) or MK886 and injected with RPMI medium (negative control, open bars) or eotaxin, 50 ng/cavity (black bars). Peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the respective negative control in each group. Data from 6 experiments. Next, we evaluated the effect of eotaxin in naive ALOX mice, which lack 5-LO, and wild-type PAS controls. In ALOX mice, eotaxin had no significant effect on total leukocyte numbers. By contrast, significant recruitment was observed in PAS controls (Figure 5(a)). Importantly, ALOX mice, unlike PAS controls, showed no significant eosinophil recruitment (Figure 5(b)). In this genetic background, unlike BALB/c, no significant monocyte/macrophage recruitment by eotaxin was observed at this time point (4 h; Figure 5(c)), regardless of whether mice were 5-LO-deficient or wild-type; furthermore, monocyte/macrophage numbers were higher in ALOX than in PAS mice. By contrast, neutrophil recruitment was significant in PAS controls and inhibited by ≈55% in ALOX mice, although residual neutrophil recruitment remained significant (Figure 5(d)). Together, these observations show that, in this genetic background, eosinophils and neutrophils differ in their requirements for 5-LO to migrate in response to eotaxin: the requirement is total for the former but only partial for the latter. Mixed leukocyte accumulation induced by eotaxin in ALOX and PAS mice. 5-LO-deficient mutant (ALOX) and wild-type (WT) control PAS mice were injected with RPMI (open bars) or eotaxin, 50 ng/cavity (black bars). The peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean + SEM. *, P ≤ 0.05; **, P ≤ 0.01. Data from 10 experiments. ### 3.3. Eosinophil-Dependent Neutrophil and Monocyte/Macrophage Migration The kinetics of mixed leukocyte recruitment in naive BALB/c mice raised the issue of whether eotaxin-stimulated eosinophils recruit other leukocyte types. If so, neutrophil and/or monocyte/macrophage migration in response to eotaxin would be decreased in the absence of eosinophils. Since naive mice carrying a mutation in the high-affinity GATA-1 binding site of the promoter of the gene coding for the GATA-1 transcription factor lack eosinophils [4], we evaluated the effect of eotaxin on leukocyte numbers 4 h after i.p. injection in GATA-1 mutant mice and BALB/c wild-type controls. In GATA-1 mice, unlike BALB/c controls, leukocyte numbers in the peritoneal cavity were not significantly increased by eotaxin (Figure 6(a)). As expected, eosinophils were undetectable in GATA-1 mice, and effectively recruited by eotaxin in BALB/c controls (Figure 6(b)). 
In both RPMI-treated and eotaxin-treated GATA-1 mice, monocytes/macrophages (which were the predominant resident leukocyte population) were about twice as numerous as in RPMI-treated BALB/c controls (Figure 6(c)), reaching counts comparable to those in eotaxin-treated BALB/c. Importantly, neutrophil numbers were not significantly increased by eotaxin (Figure 6(d)) in GATA-1 mice, unlike BALB/c controls, suggesting that neutrophil recruitment by eotaxin is eosinophil-dependent. To rule out the possibility that neutrophil migration is somehow defective in this strain, separate control GATA-1 mice were injected with thioglycollate broth, which induces an intense neutrophil accumulation within a 4 h period. GATA-1 and BALB/c mice responded equally well to thioglycollate (not shown), indicating that the failure of neutrophil recruitment in GATA-1 mice is a feature of their eotaxin response, not evidence of a general defect in neutrophil migration. Effect of eosinophil transfer into GATA-1 recipients on neutrophil accumulation. (a)–(d) Eosinophil-deficient GATA-1 mice and wild-type (WT) controls (BALB/c) were injected with RPMI (open bars) or eotaxin, 50 ng/cavity (black bars). The peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). (e)–(h) GATA-1 mice received eotaxin (black bars), BALB/c eosinophils (stippled bars), or BALB/c eosinophils followed by eotaxin administration 30 minutes later (hatched bars). Peritoneal lavage fluid collected 4 h after eotaxin administration was used for quantitation of total leukocytes (e), eosinophils (f), macrophages (g), and neutrophils (h). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the indicated differences. Data from 3–11 experiments. (i) Intensity of CCR3 expression in granulocytes. Cells were collected 4 h after eotaxin injection from the peritoneal cavity of GATA-1 and BALB/c donors and stained for CCR3. Representative MFI profiles for the granulocyte gate are shown. Dotted line, GATA-1. Thin line, BALB/c. Thick line, GATA-1 sample to which purified BALB/c eosinophils were added in vitro (up to 20% of total cells). We further explored this issue by reconstituting a peritoneal eosinophil population in GATA-1 mice by transfer of purified (90%) BALB/c eosinophils, devoid of neutrophil contamination. Total leukocyte counts were not significantly different between GATA-1 mice given eotaxin alone, eosinophils alone, or eotaxin plus eosinophils (Figure 6(e)), and this was closely paralleled by monocyte/macrophage counts, which account for most leukocytes in all groups (Figure 6(f)). As expected, eosinophils could be recovered from GATA-1 recipients of eosinophils, and eotaxin did not significantly increase their numbers, as the recipients produce no eosinophils of their own (Figure 6(g)). Importantly, neutrophil numbers were significantly increased by eosinophil transfer and further significantly increased by the combination of eosinophil transfer and eotaxin (Figure 6(h)). Together, these data suggest that in naive mice eosinophils mediate the accumulation of neutrophils induced by eotaxin. If, as suggested by the preceding results, neutrophils and monocytes/macrophages accumulate in GATA-1 mice as a result of eosinophil activation, not of direct exposure to eotaxin, one should expect the leukocytes harvested from the peritoneal cavity of GATA-1 mice to show little or no expression of CCR3, unlike eosinophils. 
We have therefore compared the expression of CCR3 in peritoneal lavage leukocytes from BALB/c and GATA-1 mice collected 4 h after eotaxin injection (Figure 6(i)). Mean fluorescence intensity was monitored in the granulocyte region, since our transfer protocol reconstitutes migration of neutrophils, not monocytes/macrophages (see above). No eotaxin-induced recruitment of CCR3+ granulocytes was observed in GATA-1 mice (dotted line), unlike BALB/c mice (thin line). To make sure that CCR3+ cells would be detectable if present in a suspension of GATA-1 granulocytes, we also added purified BALB/c eosinophils to GATA-1 leukocytes as a control (Figure 6(i), thick line). Exogenously added CCR3+ cells were easily detectable in these conditions. We took advantage of the effectiveness of eosinophil transfer to examine the relationship of 5-LO to the migration of eosinophils, as well as to the secondary recruitment of neutrophils and monocytes/macrophages. A mixed leukocyte population accumulated in the peritoneal cavity of ALOX recipients of PAS eosinophils (Figure 7(a)), 4 h following administration of eotaxin. No significant improvement was observed in PAS recipients of PAS eosinophils in the same conditions, showing that recruitment is as effective in ALOX recipients as in wild-type recipients. The recruited leukocyte population from ALOX recipients included eosinophils (Figure 7(b)), comprising both the transferred eosinophils and those recruited by eotaxin administration to the recipients, again reaching levels comparable to those of PAS recipients of PAS eosinophils. Secondary recruitment was observed for both macrophages (Figure 7(c)) and neutrophils (Figure 7(d)), with effectiveness similar to that of the PAS-into-PAS transfers. Effect of transfer of PAS or ALOX eosinophils on neutrophil accumulation in the peritoneal cavity of ALOX, PAS, and GATA-1 recipients. (a)–(d) ALOX mice received RPMI (open bars), PAS eosinophils (stippled bars), or PAS eosinophils followed by eotaxin, 50 ng/cavity, 30 minutes later (hatched bars). As positive controls, PAS mice received PAS eosinophils followed by eotaxin (gray bars). Peritoneal lavage fluid collected 4 h after eotaxin injection was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). (e) GATA-1 mice received eotaxin (black bars) or ALOX eosinophils followed by eotaxin 30 minutes later (hatched bars). Peritoneal lavage fluid collected 4 h after eotaxin administration was used for quantitation of neutrophils (Neuts), eosinophils (Eos), and macrophages (Mϕ). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the indicated differences. Data from 3–5 experiments. We next examined whether the critical step requiring 5-LO in this model is the initial eosinophil accumulation, rather than the secondary recruitment of neutrophils by eosinophils. If so, one would predict that direct transfer of ALOX eosinophils into eosinophil-deficient GATA-1 recipients should restore neutrophil accumulation in response to eotaxin. When purified eosinophils from ALOX bone-marrow cultures were transferred to GATA-1 recipients (Figure 7(e)), recruitment of neutrophils was very effective. This rules out the possibility that the step critically dependent on 5-LO is the generation by eosinophils of a neutrophil chemoattractant. On the other hand, as shown above for BALB/c eosinophil transfer into GATA-1 recipients, monocytes/macrophages were not increased by ALOX eosinophil transfer at this time point. 
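Since these transfer experiments depend on delivering a fixed dose of 1 × 10^6 eosinophils per recipient, the dosing arithmetic implied by Sections 2.4 and 2.6 can be made explicit. The sketch below uses hypothetical counts; the 10^4 conversion factor (standard for a Neubauer hemocytometer) and the counted-cell value are assumptions, not numbers from the paper.

```python
# Illustrative dosing arithmetic for the eosinophil transfers.
# Only the 1:10 Turk's dilution, the >=80% purity, and the 1e6-cell
# dose come from the text; the counted value and the Neubauer-chamber
# factor are assumptions.
TURK_DILUTION = 10            # total counts after a 1:10 dilution in Turk's
CHAMBER_FACTOR = 1e4          # standard hemocytometer cells/mL conversion

counted_per_square = 42       # hypothetical mean cells per large square
cells_per_ml = counted_per_square * TURK_DILUTION * CHAMBER_FACTOR

purity = 0.80                 # suspension contained >=80% eosinophils
eos_per_ml = cells_per_ml * purity

target_dose = 1e6             # eosinophils injected i.p. per recipient
volume_ul = target_dose / eos_per_ml * 1000
print(f"{cells_per_ml:.2e} cells/mL; {eos_per_ml:.2e} eosinophils/mL")
print(f"inject ~{volume_ul:.0f} uL per recipient for a 1e6-eosinophil dose")
```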
### 3.4. Impact on Granulocyte Interaction with Bacteria

We further examined whether eosinophil-mediated responses to eotaxin in this model affected the interaction of the secondarily recruited neutrophils with their bacterial targets. To do so, mixed leukocyte populations induced by eotaxin (RPMI in controls) were collected from naive BALB/c mice 4 h after injection, counted, and mixed for 30 minutes with GFP-expressing E. coli at a bacteria/neutrophil ratio adjusted to 400 : 1, before analysis by flow cytometry. Cells gated in the granulocyte region on the basis of size and complexity were examined for green fluorescence, resulting from both binding and internalization of bacteria. Figure 8(a) shows that eotaxin-stimulated granulocytes bind/internalize fluorescent E. coli more effectively than those collected from RPMI-injected control mice. This increase in effectiveness is detectable both as an increased fraction of granulocytes binding bacteria (Figure 8(b)) and as an increased mean fluorescence intensity (Figure 8(c)). This suggests that eosinophil-mediated recruitment of neutrophils is accompanied by an increased capacity to bind and/or ingest bacteria.

Figure 8. Flow cytometric analyses of peritoneal lavage leukocytes. (a)–(c) Interaction between fluorescent bacteria and neutrophils from eotaxin-injected and control mice. BALB/c mice were injected with eotaxin, 50 ng/cavity i.p. Controls received RPMI. After 4 h, peritoneal lavage fluid was collected from both groups. After counting neutrophils, fluorescent, viable E. coli were mixed with leukocytes at a 400 : 1 bacteria/neutrophil ratio and incubated for a further 30 min before washing to eliminate unbound bacteria and analysis of neutrophil-associated fluorescence by flow cytometry. (a) Representative profiles of eotaxin-induced (thick, continuous line) and RPMI-induced (thin, interrupted line) neutrophil-associated fluorescence. (b) Fraction of the neutrophils positive for fluorescent bacteria in RPMI-induced (open bar) and eotaxin-induced (black bar) peritoneal leukocyte populations (mean ± SEM). (c) Mean fluorescence intensity of neutrophils in the same samples (mean ± SEM). Data from 4–5 experiments.
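The two readouts reported in Figures 8(b) and 8(c) amount to simple statistics over the gated events. A minimal sketch follows, assuming a positivity threshold set from the RPMI control (here, its 99th percentile); the threshold convention and the simulated fluorescence values are illustrative assumptions, not the authors' stated gating strategy.

```python
# Fraction of GFP-positive granulocytes and their MFI, with the positivity
# cutoff derived from the control sample. All data below are made up.
import numpy as np

def binding_readouts(gfp_eotaxin, gfp_rpmi):
    threshold = np.percentile(gfp_rpmi, 99)      # control-based cutoff
    frac_pos = (gfp_eotaxin > threshold).mean()  # Figure 8(b)-style value
    mfi = gfp_eotaxin.mean()                     # Figure 8(c)-style value
    return frac_pos, mfi

rng = np.random.default_rng(1)
gfp_rpmi = rng.lognormal(2.0, 0.5, 20_000)       # control granulocytes
gfp_eotaxin = rng.lognormal(2.6, 0.7, 20_000)    # eotaxin-recruited cells

frac, mfi = binding_readouts(gfp_eotaxin, gfp_rpmi)
print(f"fraction GFP-positive: {frac:.2%}, MFI: {mfi:.1f}")
```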
## 4. Discussion

We describe here a mixed leukocyte accumulation occurring in the peritoneal cavity of naive mice injected with eotaxin. This is, to our knowledge, the first experimental evidence that recruitment of neutrophils and macrophages by eotaxin in nonsensitized animals is mediated by eosinophils. For neutrophils, recruitment was associated with an increased ability to bind/ingest bacteria and might therefore have an impact on antimicrobial defenses in specific conditions. Because the ability of eosinophils to act as antimicrobial effectors is limited by their scarcity, these findings also highlight conditions in which, by recruiting much larger numbers of cells with well-characterized microbicidal function, eosinophils actually overcome this theoretical disadvantage. Below we address a number of specific points that are important for putting our observations in proper perspective.

### 4.1. Roles of Eotaxin, Eotaxin Receptors, and Eosinophils

Migration of all three leukocyte types in BALB/c mice was induced by eotaxin, as shown by identical dose-response relationships and overlapping kinetics, as well as by identical effects of neutralizing eotaxin with specific antibodies. The relationship of this migration to the expression of CCR3, by contrast, is more complex.
Lymphocytes, some of which have been shown by others to express CCR3 [41, 42], were not attracted by eotaxin to the peritoneal cavity of naive mice in significant numbers. On the other hand, despite the commonly held view that CCR3 expression is restricted to eosinophils [4, 18, 20, 22, 23], basophils [24, 25], eosinophil and basophil progenitors/precursors [26, 27], T cell subsets [41, 42], and smooth muscle cells [29], several studies have suggested that human and murine neutrophils and macrophages can also express CCR3, at least in specific experimental settings [28, 32], including studies of neutrophils [43]. This would imply that all three leukocyte populations recruited in our study in wild-type (BALB/c, PAS) mice could simply be responding to eotaxin binding to CCR3 at the individual cell level, with no contribution from cellular interactions involving eosinophils. If so, eliminating eosinophils should not decrease neutrophil or macrophage accumulation, and neutrophils and macrophages should express CCR3 at significant levels even in eosinophil-deficient GATA-1 mice. This possibility, however, has been directly ruled out by the demonstration that eotaxin recruits neither neutrophils nor macrophages in GATA-1 mutant mice. The evidence for cellular interactions in the neutrophil response to eotaxin is reinforced by experiments using the same strain, which show neutrophil recruitment following transfer of highly purified BALB/c eosinophils. Finally, we observed no significant accumulation of CCR3+ granulocytes in eotaxin-injected GATA-1 mice.

By contrast, GATA-1 mice had constitutively increased macrophage numbers in the peritoneal cavity, which were unaffected by 4 h of eotaxin administration, both with and without eosinophil transfer. It is possible that the GATA-1 mutation affects the cellular function, tissue distribution, and/or turnover of monocytes/macrophages so as to prevent responses to eotaxin, regardless of whether these are mediated by eosinophils. Therefore, we cannot conclude from our present observations in GATA-1 mice alone that eosinophils also recruit monocytes/macrophages. Direct evidence for eosinophil recruitment of monocytes/macrophages was, however, obtained through transfer of eosinophils from PAS donors into ALOX mice. Importantly, in the absence of eosinophil transfer, ALOX mice showed neither eosinophil nor monocyte/macrophage recruitment by eotaxin. Interestingly, although in the direct stimulation protocol ALOX mice resembled GATA-1 mice in the absence of monocyte/macrophage accumulation by 4 h, it is likely that different mechanisms underlie these similar outcomes, since (a) a similar failure to respond to eotaxin with monocyte/macrophage accumulation was observed in wild-type PAS controls and therefore cannot be ascribed to the absence of active 5-LO, and (b) transfer experiments show that eosinophil transfer from PAS donors allows significant recruitment of monocytes/macrophages by eotaxin in ALOX recipients. These observations suggest that active 5-LO is not required for monocytes/macrophages (or neutrophils) to respond to eotaxin, provided eosinophils are present.

Overall, the data indicate that eotaxin recruits a mixed leukocyte population in naive mice through a mechanism dependent on eosinophils.
Evidence that eosinophils play an active role is provided by the observation that the full effect in eosinophil transfer experiments requires both eosinophils and eotaxin, which would not be expected if eosinophils played a merely passive or permissive role. On the other hand, in transfer experiments only about 10% of the transferred eosinophils were recovered after 4 h of eotaxin stimulation of the recipients. This raises the issue of whether the remaining transferred eosinophils underwent changes such as degranulation [4] or release of extracellular traps [10], which might represent a significant difference relative to the direct (nontransfer) protocol used in the initial experiments.

### 4.2. Role of 5-LO

Mixed leukocyte recruitment by eotaxin in naive mice shows the same dependence on 5-LO that was observed for selective eosinophil recruitment in sensitized mice. Hence, it is likely that eosinophil accumulation itself, the shared feature of both models, is the 5-LO-dependent step. This is consistent with the observation that ALOX eosinophils, when directly transferred to the peritoneal cavity of GATA-1 recipients (which are unable to respond to eotaxin with accumulation of neutrophils), are able to mediate the neutrophil recruitment induced by eotaxin. Importantly, further recruitment of eosinophils occurs in eotaxin-stimulated ALOX recipients of PAS eosinophils, in which the only cells bearing a functional 5-LO are the transferred eosinophils. This suggests that eosinophils can be a source as well as a target of a 5-LO pathway product, such as LTB4. LTB4 was previously shown to selectively attract eosinophils in a model in which eotaxin duplicated the effect of antigen in a 5-LO-dependent manner [33]. Furthermore, there is significant evidence that recruitment involves interactions between cytokines and lipid mediators [44]. In neutrophil migration, LTB4 acts as a signaling relay, raising the possibility that it acts similarly in eosinophils [45]. Whatever mechanism is involved, eosinophil generation of a 5-LO-derived neutrophil chemoattractant is not required for the eosinophil-dependent secondary recruitment of neutrophils in eotaxin-injected naive mice. While in previous studies of sensitized mice LTB4, like antigen and eotaxin, selectively recruited eosinophils [34], it remains to be determined whether it accounts for the rapid eosinophil recruitment to the peritoneal cavity of the eotaxin-injected nonsensitized mice in the present study. A related issue for further investigation is whether 5-LO is required for the increased effectiveness of bacterial binding detected in BALB/c leukocytes, as LTB4 is known to activate as well as attract neutrophils [45].

### 4.3. Relationship to Innate and Acquired Immunity

Despite a common requirement for 5-LO, leukocyte recruitment by eotaxin differs in several respects between naive and sensitized mice, most notably in the lack of eosinophil selectivity in the former. In sensitized mice, selective eosinophil recruitment was observed with widely different chemical stimuli (allergen, eotaxin, or LTB4). It is unlikely, therefore, that such selectivity reflects features of eotaxin signaling, and even less likely that it reflects features of LTB4 signaling (which should be very effective in mice having normal neutrophil numbers).
Alternatively, the failure of eotaxin and LTB4 to recruit neutrophils and monocytes/macrophages in sensitized mice could involve changes in the expression of adhesion proteins at endothelial surfaces, which would prevent their emigration from blood vessels to the peritoneal cavity, regardless of whether the chemoattractant is LTB4 or eotaxin. We have not examined this possibility, since our current observations, which center on responses of nonsensitized animals, do not depend on clarifying mechanisms that do not apply to the present conditions.

We view our findings as manifestations of innate immunity because of (a) the very fast kinetics of eosinophil and neutrophil accumulation; (b) the recruitment of neutrophils and macrophages in the absence of significant lymphocyte accumulation; and (c) the detectable increase in granulocyte binding of live extracellular bacteria in the absence of antibodies. On the other hand, the fast recruitment of monocytes/macrophages by eotaxin-exposed eosinophils raises the issue of whether eosinophils could also enhance protection from more specialized pathogens, such as the intracellular mycobacteria and protozoa that cause chronic infections, which are usually handled by monocytes/macrophages. Relatively little attention has been paid to the possibility that eosinophils help fight microbial pathogens with the assistance of other leukocyte types. Our observations suggest that small numbers of eosinophils might recruit a large neutrophil and/or macrophage infiltrate.

While this would make eosinophils surprisingly effective players in innate immunity, it might paradoxically obscure their contribution if that contribution were taken as commensurate with their numbers in inflammatory infiltrates, where they would often amount to no more than one-tenth of total leukocytes. It is therefore fortunate that, in transfer experiments of wild-type and mutant eosinophils into eosinophil-null GATA-1 mice, eosinophil recruitment of neutrophils can be unequivocally demonstrated. In view of the differences between naive and sensitized models in this respect, it will be of interest to determine whether this eosinophil functional capacity is modified by allergen sensitization of the host and whether such a change in innate immune functions can be duplicated by passively or actively sensitizing the host.

### 4.4. Possible Cellular Mechanisms Underlying the Effect of Eosinophils

Several, but not all, of the observations reported here are consistent with previous studies carried out by other groups in different experimental models. Eotaxin recruitment of a mixed leukocyte population, including neutrophils and macrophages, was described in human subjects [28]. Das and colleagues [46] reported that eotaxin was effective when injected in the peritoneal cavity of mice but not in a dorsal air pouch, drawing attention to the important differences between challenge sites responding to the same chemically defined stimulus. Responses in the air pouch occurred after local inoculation of mast cell-containing peritoneal cell populations, but allergen sensitization was essential for local responses to eotaxin in this transfer model. In addition, neutrophil migration accompanied recruitment of eosinophils in specific conditions. Harris and colleagues [47] confirmed that mast cells were important for full responses to eotaxin and further showed that eotaxin responses were blocked by 5-LO inhibitors.
Together, these studies suggest that eotaxin effectiveness is constrained in vivo by several factors, including mast cells and 5-LO, that may be absent from in vitro (e.g., migration chamber or flow cytometric) studies. None of these published studies, however, evaluated the contribution of the recruited eosinophils themselves.

We suggest that a cytokine, rather than a 5-LO derivative, is released by eosinophils in the peritoneal cavity once they have been recruited by eotaxin in the presence of an active 5-LO or, alternatively, directly inoculated into the cavity through a transfer protocol. Candidate cytokines would include TNF-α and TGF-β1, both potent neutrophil chemoattractants. One hypothesis that could reconcile our observations with those of Das, Harris, and their colleagues [46, 47] would involve amplification of the role of eosinophils through interactions with resident peritoneal mast cells, since mast cells are an important source of neutrophil chemoattractants, including TNF-α [48].
---

*Source: 102160-2014-02-25.xml*
# 5-Lipoxygenase-Dependent Recruitment of Neutrophils and Macrophages by Eotaxin-Stimulated Murine Eosinophils

**Authors:** Ricardo Alves Luz; Pedro Xavier-Elsas; Bianca de Luca; Daniela Masid-de-Brito; Priscila Soares Cauduro; Luiz Carlos Gondar Arcanjo; Ana Carolina Cordeiro Faria dos Santos; Ivi Cristina Maria de Oliveira; Maria Ignez Capella Gaspar-Elsas

**Journal:** Mediators of Inflammation (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102160
---

## Abstract

The roles of eosinophils in antimicrobial defense remain incompletely understood. In ovalbumin-sensitized mice, eosinophils are selectively recruited to the peritoneal cavity by antigen, eotaxin, or leukotriene (LT) B4, a 5-lipoxygenase (5-LO) metabolite. 5-LO blockade prevents responses to both antigen and eotaxin. We examined responses to eotaxin in the absence of sensitization and their dependence on 5-LO. BALB/c or PAS mice and their mutants (5-LO-deficient ALOX; eosinophil-deficient GATA-1) were injected i.p. with eotaxin, eosinophils, or both, and leukocyte accumulation was quantified up to 24 h. Significant recruitment of eosinophils by eotaxin in BALB/c mice, up to 24 h, was accompanied by much larger numbers of recruited neutrophils and monocytes/macrophages. These effects were abolished by eotaxin neutralization and by the 5-LO-activating protein inhibitor MK886. In ALOX (but not PAS) mice, eotaxin recruitment was abolished for eosinophils and halved for neutrophils. In GATA-1 mutants, eotaxin recruited neither neutrophils nor macrophages. Transfer of eosinophils cultured from bone marrow of BALB/c or ALOX donors into GATA-1 mutant recipients, i.p., restored eotaxin recruitment of neutrophils and showed that the critical step dependent on 5-LO is the initial recruitment of eosinophils by eotaxin, not the secondary neutrophil accumulation. Eosinophil-dependent recruitment of neutrophils in naive BALB/c mice was associated with increased binding of bacteria.

---

## Body

## 1. Introduction

Eosinophils are a minority granulocyte population, which contributes to the pathophysiology of allergic inflammation, hypereosinophilic syndromes, and some malignancies [1–4]. A role for eosinophils in resistance to multicellular (helminth) parasites has long been proposed, based on the strong association of blood and tissue eosinophilia with worm infections and on the evidence that eosinophils can damage or kill helminths under specific experimental conditions [5, 6]. Nevertheless, a generally protective in vivo role for eosinophils against worm infections remains elusive [4], partly because host responses to multicellular parasites represent a compromise between the competing needs to reduce parasite burden and to limit immune-mediated tissue damage, to which eosinophils significantly contribute [7, 8].

Alternatively, mechanisms through which eosinophils may directly fight infection by various classes of microbial (bacterial, fungal, protozoal, or viral) pathogens include secretion of antimicrobial defensin-like proteins [9]; release of sticky cellular contents that capture pathogens, closely resembling neutrophil extracellular traps [10]; secretion of halogenated microbicidal derivatives [11]; release of enzymes with antiviral activity and other roles in innate immunity [12, 13]; and secretion of a wide array of immunoregulatory cytokines [14]. While the contribution of eosinophils to immunity as directly antimicrobial effector cells is likely limited by their scarcity, they could be helpful in conditions in which neutrophil access or macrophage function is reduced: for neutrophils, tissue entry is restricted under normal conditions [15]; for macrophages, microbicidal effector function is highly dependent on appropriate activating signals, including cytokines [16]. By contrast, eosinophils are far more numerous in normal tissues than in blood and home to mucosal interfaces with the environment [2–4], which represent potential gateways for microbial infection.
They are a source of numerous immunoregulatory cytokines [13] and lipid mediators [17], which might play a role in the recruitment/activation of other leukocyte subtypes.

Because of the scarcity of eosinophils, many important observations were made in conditions in which their numbers are already increased due to allergic sensitization or experimental helminth infection, such as the discovery of eotaxin (CCL11), a chemoattractant that induces eosinophil accumulation in the skin of sensitized (i.e., eosinophilic) guinea pigs [2, 3, 18]. While other potent eosinophil chemoattractants, such as PGD2 [19, 20] and oxo-ETE [21], have also been characterized, many factors reinforce the current understanding of eotaxin as a specialized chemoattractant which acts primarily on granulocyte subtypes relevant to allergy and worm infections [2–4]. These factors include the reported selectivity of eotaxin for the eosinophil [4, 17, 22, 23] and basophil [20, 24, 25] lineages, and its interaction with hematopoietic cytokines, such as IL-5 [26, 27] and GM-CSF [27], which promotes eosinophil production in bone marrow [27] and extramedullary sites [26], ultimately inducing blood and tissue eosinophilia [4]. The alternative view, namely, that eotaxin is part of a broader regulatory network comprising multiple cell populations in addition to eosinophils and basophils, is also suggested by observations of a wide variety of eotaxin effects, including its ability to attract neutrophils and macrophages [28] and smooth muscle cells [29]. Eotaxin, also produced by fibroblasts [30, 31], has been associated with fibrotic processes in several settings [32, 33].

Within this wider framework, we have reexamined whether, in a nonsensitized host, eotaxin would recruit other leukocyte populations besides eosinophils and basophils, and further examined whether its effects were dependent on 5-lipoxygenase (5-LO), the key enzyme in leukotriene production by eosinophils [17, 27]. The evaluation of both aspects was prompted by observations in mice which develop eosinophilia in response to subcutaneously implanted insoluble antigen pellets [34]. While i.p. challenge of implant recipients with soluble allergen selectively recruited eosinophils to the peritoneal cavity, this effect was blocked by the 5-LO-activating protein inhibitor MK886 and duplicated by the 5-LO product LTB4, neither of which is eosinophil-selective. Importantly, eotaxin, which duplicated the effects of allergen, was equally blocked by MK886. Equally unexpected was the failure of LTB4, a potent neutrophil chemoattractant, to recruit neutrophils, while it effectively attracted eosinophils in this allergic model. These observations raised the possibility that the eosinophil-selective effect of both chemoattractants (eotaxin and LTB4) observed in vivo was dependent on the host being sensitized. We tested this hypothesis for eotaxin first, by examining its effects in a naive host, as well as the effect of 5-LO blockade on the effectiveness of eotaxin. We report that eotaxin recruits a mixed leukocyte population to the peritoneal cavity of naive mice and provide evidence of essential roles for both 5-LO and eosinophils in the accumulation and functional activation of neutrophils in this model.

## 2. Materials and Methods

### 2.1. Reagents
RPMI 1640 medium (SH30011.01) and fetal calf serum (SH30088.03) were from Hyclone (Logan, UT); penicillin 100 U/mL (PEN-B), streptomycin 100 mg/mL (S9137), ovalbumin (grades II and IV), isotonic Percoll, and Histopaque density 1.083 solution were from Sigma-Aldrich (St. Louis, MO); recombinant murine eotaxin (250-01) was from PeproTech (Rocky Hill, NJ); and recombinant murine IL-5 was from R&D. MK-886 (475889), 1 mg/kg, from Cayman Chemicals (Ann Arbor, MI), dissolved in 0.1% methylcellulose, was given as an intragastric bolus in a 0.2 mL volume [33]. Rat anti-murine eotaxin monoclonal neutralizing antibody (clone 42285) and rat anti-murine IgG2a control monoclonal antibody of matched isotype (clone 54447) were from R&D (Minneapolis, MN).

### 2.2. Animals and Animal Handling

Inbred mice, male and female, aged 8–10 weeks, provided in SPF condition by CECAL-FIOCRUZ (Rio de Janeiro), were of the following strains: BALB/c; ALOX (5-LO-deficient) and PAS-129 (wild-type control of the same background) [27]; and BALB/c mutants lacking an enhancer element in the promoter region of the gene coding for the GATA-1 transcription factor [35], required for eosinophil lineage determination (GATA-1 mice, for short). Animal housing, care, and handling followed institutionally approved (CEUA number L-010/04, CEUA number L-002/09) protocols. Naive animals received eotaxin i.p., in 0.2 mL of RPMI 1640 medium with penicillin/streptomycin. Controls received medium (RPMI). After the indicated times, animals were killed in a CO2 chamber, and peritoneal lavage was carried out with 10 mL chilled RPMI. For sensitized animals, see Section 2.6.

### 2.3. Neutralization of Eotaxin Activity

50 ng eotaxin was incubated with 5 μg anti-eotaxin neutralizing antibody or 5 μg isotype-matched anti-IgG2a antibody, in a final volume of 200 μL, for 30 minutes before injection into each BALB/c recipient. 4 h later, peritoneal lavage fluid was collected from the injected mice and handled as detailed above.

### 2.4. Collection, Enumeration, and Staining of Peritoneal Leukocytes

Peritoneal lavage cells were washed at 500 ×g and resuspended in 2 mL RPMI. Total counts were carried out in a hemocytometer after a 1 : 10 dilution in Turk's solution. Differential counts were done on Giemsa-stained (ice-cold methanol-fixed, air-dried, and Giemsa-stained for 5 minutes) cytocentrifuge smears (500 rpm, 8 minutes in a Cytospin 3, Thermo Scientific, Waltham, MA), by counting at least 300 cells at 1000x magnification under oil.

### 2.5. Bacteria and Phagocytosis Assay

We used nonpathogenic Escherichia coli bacteria (clone DH5, provided by Dr. Z. Vasconcelos, from INCA and FIOCRUZ, Rio de Janeiro), genetically altered to constitutively express the gene for green fluorescent protein (GFP) and grown in LB broth. Cells obtained in the peritoneal lavage of BALB/c mice, induced by eotaxin or RPMI, were subjected to total cell counts as well as differential neutrophil counts as described above. Then 5 × 10^5 neutrophils were incubated for 30 minutes, in the dark at room temperature, with the bacteria in a 1 : 400 proportion. The cells were then washed, and the resulting cell suspension was run on a FACSCalibur flow cytometer (Becton Dickinson Immunocytometry Systems, San Jose, CA), with acquisition of at least 50,000 events, and analyzed with Summit 4.3 software (Dako Cytomation, UK).
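The counting conventions in Sections 2.4 and 2.5 translate into simple arithmetic. The sketch below assumes the standard Neubauer hemocytometer factor of 10^4 cells/mL per large square, a textbook convention not stated above, and the example counts are invented for illustration.

```python
# Back-of-the-envelope helpers for the counting and phagocytosis setup.

def cells_per_ml(cells_counted, squares_counted, dilution=10):
    """Hemocytometer concentration: mean count per large square x dilution x 1e4."""
    return (cells_counted / squares_counted) * dilution * 1e4

def bacteria_needed(neutrophils=5e5, ratio=400):
    """Bacteria required for the 400:1 bacteria/neutrophil incubation."""
    return neutrophils * ratio

# Example: 180 leukocytes counted over 4 squares of a 1:10 Turk's dilution.
conc = cells_per_ml(180, 4)   # 4.5e6 cells/mL
total = conc * 2              # suspension was resuspended in 2 mL RPMI
print(f"{conc:.2e} cells/mL, {total:.2e} cells total")
print(f"bacteria for assay: {bacteria_needed():.0e}")  # 2e8 bacteria
```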
### 2.6. Eosinophil Procedures

For eosinophil transfer studies, where indicated, BALB/c, ALOX, or PAS mice were sensitized (100 μg ovalbumin grade IV and 1.6 mg alum in a final volume of 400 μL saline per animal, two s.c. injections in the dorsum, at days 0 and 7) and challenged (ovalbumin grade IV, 1 μg in 400 μL saline i.p. at day 14) according to Ebihara and colleagues [36]. Bone marrow was collected 48 h after i.p. challenge, examined, and cultured as described elsewhere [37]. Briefly, bone-marrow cultures were established for 5 days at 37°C in 95% air/5% CO2, in RPMI 1640 with 10% FBS and 5 ng/mL IL-5, at a culture density of 1 × 10^6 cells/mL. The nonadherent cells were then collected and loaded on top of 3 mL of Histopaque-1083 solution, followed by centrifugation at 400 ×g, 20°C, for 35 minutes, without brakes. The mononuclear cell ring and the supernatant were discarded; the granulocyte-rich pellet was collected, washed, resuspended in 3 mL RPMI, and used for total and differential counts as above. The suspension contained ≥80% eosinophils, with no neutrophils; the minor contaminant population consisted of macrophages alone, which do not interfere with the interpretation of transfer experiments. Where indicated, naive GATA-1, ALOX, or PAS recipient mice were injected i.p. with 1 × 10^6 eosinophils from the appropriate donors (see below), followed by eotaxin, 50 ng/mL, and leukocyte accumulation was monitored in the peritoneal lavage fluid 4 h after eotaxin injection, as above.

For flow cytometric studies of CCR3 expression, the following modification of this protocol was adopted, as it yielded eosinophils of higher purity: sensitized mice were challenged twice, initially by aerosol exposure (1 h, ovalbumin grade II, 2.5% w/v, at day 14) and 7 h later with soluble ovalbumin i.p. (grade IV, 1 μg in 400 μL saline). Bone marrow was collected 24 h after aerosol challenge and cultured as above, after separation on a Percoll gradient (75%/60%/45% isotonic Percoll, 100 ×g, 20 min, room temperature). The hematopoietic cells from the 45%/60% interface [38] were cultured at a lower IL-5 concentration (2.5 ng/mL) for twice as long (10 days), yielding a population containing at least 95% eosinophils with mature morphology. Contaminants at day 10 were degenerating (nonviable) mononuclear and stromal cells.

### 2.7. Statistical Analyses

All data were analyzed with Systat for Windows 5.04 (Systat, Inc., Evanston, IL, USA), using the two-tailed t-test for pairwise comparisons. Where indicated, ANOVA was also used for multiple comparisons, with the Tukey HSD correction for groups of equal size and the Bonferroni correction for groups of unequal size.
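For readers without Systat, the analyses in Section 2.7 map onto open-source equivalents. The sketch below uses SciPy and statsmodels as illustrative substitutes, with made-up counts in place of the study's data; Tukey HSD is shown for the equal-group-size case, as in the original protocol.

```python
# Two-tailed t-test for pairwise comparisons and one-way ANOVA with
# Tukey's HSD for multiple groups. All values are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rpmi    = np.array([1.1, 0.9, 1.3, 1.0, 1.2])  # leukocytes x 1e6, control
eotaxin = np.array([2.4, 2.9, 2.2, 3.1, 2.6])  # leukocytes x 1e6, 50 ng

# Pairwise comparison (two-sided by default).
t, p = stats.ttest_ind(eotaxin, rpmi)
print(f"t = {t:.2f}, P = {p:.4f}")

# Multiple-group comparison: one-way ANOVA, then Tukey HSD (equal group sizes).
mk886 = np.array([1.2, 1.0, 1.4, 1.1, 0.9])
values = np.concatenate([rpmi, eotaxin, mk886])
labels = ["RPMI"] * 5 + ["eotaxin"] * 5 + ["MK886"] * 5
print(stats.f_oneway(rpmi, eotaxin, mk886))
print(pairwise_tukeyhsd(values, labels))
```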
## 3. Results

### 3.1. Mixed Leukocyte Migration Induced by Eotaxin

We initially examined whether i.p. injection of eotaxin at various doses would recruit eosinophils within a relatively short period (4 h) and whether eosinophil accumulation would be selective, as previously observed in sensitized mice, or accompanied by migration of other leukocyte populations. As shown in Figure 1(a), leukocytes accumulated in response to 50 and 100 ng/cavity eotaxin, in amounts that were significantly different from the RPMI controls (0 ng/cavity) as well as from lower doses of eotaxin (10 and 25 ng/cavity). These leukocytes included variable numbers of eosinophils (Figure 1(b)), monocytes/macrophages [39, 40] (Figure 1(c)), and neutrophils (Figure 1(d)). The morphology of all three leukocyte populations was recognizable without ambiguity, as shown in a representative photomicrograph (supplementary Figure 1 available online at http://dx.doi.org/10.1155/2014/102160). Lymphocyte and basophil migration was not significant at any of these doses (not shown). Importantly, neutrophils and macrophages greatly outnumbered eosinophils, with counts, respectively, 8.2- and 9.9-fold greater in the experiment shown. For all three leukocyte populations, the dose-response relationships were identical, and in subsequent experiments 50 ng/cavity was used as the standard stimulus, since no improvement was observed at a higher dose.

Figure 1: Accumulation of different leukocyte subtypes induced by eotaxin in naive mice: dose-response relationship.
BALB/c mice were injected with the indicated doses of eotaxin (black bars), and the peritoneal lavage fluid collected 4 h later was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the negative (RPMI) control (0 ng/cavity eotaxin, open bars). (a) Data from 3–18 experiments. (b)–(d) Data from 6–11 experiments.

Despite the heterogeneity of the recruited leukocyte population, neutralization of eotaxin with specific monoclonal antibody brought leukocyte accumulation down to negative control levels (Figure 2; compare with Figure 1(a) for the 0–25 ng eotaxin dose range), while control antibody of the same isotype with irrelevant specificity had no effect. This confirms that the stimulus for recruitment of all three leukocyte populations is eotaxin itself, not an unidentified contaminant, which by definition would not be neutralized by specific antibody.

Figure 2: Accumulation of different leukocyte subtypes induced by eotaxin: effect of specific antibody neutralization. Eotaxin was preincubated with specific neutralizing monoclonal antibody (hatched bar) or with irrelevant isotype-matched monoclonal antibody (open bar) before i.p. injection into BALB/c mice. Controls (black bar) received eotaxin but no antibody. Peritoneal lavage fluid, collected 4 h later, was used for total leukocyte quantitation. Data are mean ± SEM. **, P ≤ 0.01, for the differences relative to the positive (eotaxin) and specificity (irrelevant antibody) controls. Data from 5–18 experiments.

The kinetics of recruitment of this mixed leukocyte population by eotaxin in naive BALB/c mice shows significant accumulation as early as 2 h, with a maximum at 4 h, thereafter decreasing but remaining significant at 12 and 24 h (Figure 3(a)). Eosinophils arrived very early (their accumulation was significant from 2 h and remained so at 12 and 24 h, Figure 3(b)). By contrast, accumulation of both monocytes/macrophages (Figure 3(c)) and neutrophils (Figure 3(d)) became significant only at 4 h. Significant accumulation was also observed at 12 and 24 h for monocytes/macrophages and at 12 h for neutrophils. Hence, monocyte/macrophage and neutrophil accumulation followed eosinophil entry. Eosinophils outlasted neutrophils, but not monocytes/macrophages, over the observation period. For subsequent experiments, the 4 h observation time was chosen, because it showed significant accumulation of eosinophils, monocytes/macrophages, and neutrophils in naive BALB/c mice.

Figure 3: Accumulation of different leukocyte subtypes induced by eotaxin: kinetics. BALB/c mice were injected with 50 ng eotaxin i.p. (black bars), and the peritoneal lavage fluid collected after the indicated periods was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the respective negative (RPMI) controls (open bars). Data from 3–11 experiments.

### 3.2. Relationship to 5-LO

We first evaluated the effect of eotaxin in naive BALB/c mice pretreated with the FLAP inhibitor MK-886 or vehicle. MK-886 abolished mixed leukocyte recruitment by eotaxin (Figure 4(a)). By contrast, vehicle-pretreated control animals showed significant leukocyte recruitment. MK-886 was very effective in preventing eosinophil accumulation (Figure 4(b)).
BALB/c mice responded to eotaxin with significant monocyte/macrophage accumulation by 4 h, which was abolished by MK-886 (Figure 4(c)). MK-886-pretreated BALB/c mice showed no neutrophil migration in response to eotaxin, while migration was significant in vehicle-treated controls (Figure 4(d)).

Figure 4: Accumulation of different leukocyte types induced by eotaxin: effect of MK-886. BALB/c mice were pretreated with vehicle (methylcellulose) or MK-886 and injected with RPMI medium (negative control, open bars) or eotaxin, 50 ng/cavity (black bars). Peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the differences relative to the respective negative control in each group. Data from 6 experiments.

Next, we evaluated the effect of eotaxin in naive ALOX mice, which lack 5-LO, and in wild-type PAS controls. In ALOX mice, eotaxin had no significant effect on total leukocyte numbers. By contrast, significant recruitment was observed in PAS controls (Figure 5(a)). Importantly, ALOX mice, unlike PAS controls, showed no significant eosinophil recruitment (Figure 5(b)). In this genetic background, unlike BALB/c, no significant monocyte/macrophage recruitment by eotaxin was observed at this time point (4 h; Figure 5(c)), regardless of whether mice were 5-LO-deficient or wild-type; furthermore, monocyte/macrophage numbers were higher in ALOX than in PAS mice. By contrast, neutrophil recruitment was significant in PAS controls and inhibited by ≈55% in ALOX mice, although residual neutrophil recruitment remained significant (Figure 5(d)). Together, these observations show that, in this genetic background, eosinophils and neutrophils differ in their requirement for 5-LO to migrate in response to eotaxin: the requirement is absolute for the former but only partial for the latter.

Figure 5: Mixed leukocyte accumulation induced by eotaxin in ALOX and PAS mice. 5-LO-deficient mutant (ALOX) and wild-type (WT) control PAS mice were injected with RPMI (open bars) or eotaxin, 50 ng/cavity (black bars). The peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01. Data from 10 experiments.

### 3.3. Eosinophil-Dependent Neutrophil and Monocyte/Macrophage Migration

The kinetics of mixed leukocyte recruitment in naive BALB/c mice raised the issue of whether eotaxin-stimulated eosinophils recruit other leukocyte types. If so, neutrophil and/or monocyte/macrophage migration in response to eotaxin would be decreased in the absence of eosinophils. Since naive mice carrying a mutation in the high-affinity GATA-1 binding site of the promoter of the gene coding for the GATA-1 transcription factor lack eosinophils [4], we evaluated the effect of eotaxin on leukocyte numbers 4 h after i.p. injection in GATA-1 mutant mice and BALB/c wild-type controls. In GATA-1 mice, unlike BALB/c controls, leukocyte numbers in the peritoneal cavity were not significantly increased by eotaxin (Figure 6(a)). As expected, eosinophils were undetectable in GATA-1 mice and effectively recruited by eotaxin in BALB/c controls (Figure 6(b)).
In both RPMI-treated and eotaxin-treated GATA-1 mice, monocytes/macrophages (the predominant resident leukocyte population) were about twice as numerous as in RPMI-treated BALB/c controls (Figure 6(c)), reaching counts comparable to those in eotaxin-treated BALB/c. Importantly, neutrophil numbers were not significantly increased by eotaxin (Figure 6(d)) in GATA-1 mice, unlike BALB/c controls, suggesting that neutrophil recruitment by eotaxin is eosinophil-dependent. To rule out the possibility that neutrophil migration is somehow defective in this strain, separate control GATA-1 mice were injected with thioglycollate broth, which induces an intense neutrophil accumulation within a 4 h period. GATA-1 and BALB/c mice responded equally well to thioglycollate (not shown), indicating that the failure of neutrophil recruitment in GATA-1 mice is a feature of their eotaxin response, not evidence of a general defect in neutrophil migration.

Figure 6: Effect of eosinophil transfer into GATA-1 recipients on neutrophil accumulation. (a)–(d) Eosinophil-deficient GATA-1 mice and wild-type (WT) controls (BALB/c) were injected with RPMI (open bars) or eotaxin, 50 ng/cavity (black bars). The peritoneal lavage fluid collected after 4 h was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). (e)–(h) GATA-1 mice received eotaxin (black bars), BALB/c eosinophils (stippled bars), or BALB/c eosinophils followed by eotaxin administration 30 minutes later (hatched bars). Peritoneal lavage fluid collected 4 h after eotaxin administration was used for quantitation of total leukocytes (e), eosinophils (f), macrophages (g), and neutrophils (h). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the indicated differences. Data from 3–11 experiments. (i) Intensity of CCR3 expression in granulocytes. Cells were collected 4 h after eotaxin injection from the peritoneal cavity of GATA-1 and BALB/c donors and stained for CCR3. Representative MFI profiles for the granulocyte gate are shown. Dotted line, GATA-1. Thin line, BALB/c. Thick line, GATA-1 sample to which purified BALB/c eosinophils were added in vitro (up to 20% of total cells).

We further explored this issue by reconstituting a peritoneal eosinophil population in GATA-1 mice by transfer of purified (90%) BALB/c eosinophils, devoid of neutrophil contamination. Total leukocyte counts were not significantly different between GATA-1 mice given eotaxin alone, eosinophils alone, or eotaxin plus eosinophils (Figure 6(e)), and this was closely paralleled by monocyte/macrophage counts, which account for most leukocytes in all groups (Figure 6(f)). As expected, eosinophils could be recovered from GATA-1 recipients of eosinophils, and eotaxin did not significantly increase their numbers, as the recipients produce no eosinophils of their own (Figure 6(g)). Importantly, neutrophil numbers were significantly increased by eosinophil transfer and further significantly increased by the combination of eosinophil transfer and eotaxin (Figure 6(h)). Together, these data suggest that, in naive mice, eosinophils mediate the accumulation of neutrophils induced by eotaxin.

If, as suggested by the preceding results, neutrophils and monocytes/macrophages accumulate in GATA-1 mice as a result of eosinophil activation, not of direct exposure to eotaxin, one should expect the leukocytes harvested from the peritoneal cavity of GATA-1 mice to show little or no expression of CCR3, unlike eosinophils.
We therefore compared the expression of CCR3 in peritoneal lavage leukocytes from BALB/c and GATA-1 mice collected 4 h after eotaxin injection (Figure 6(i)). Mean fluorescence intensity was monitored in the granulocyte region, since our transfer protocol reconstitutes migration of neutrophils, not monocytes/macrophages (see above). No eotaxin-induced recruitment of CCR3+ granulocytes was observed in GATA-1 mice (dotted line), unlike BALB/c mice (thin line). To make sure that CCR3+ cells would be detectable if present in a suspension of GATA-1 granulocytes, we also added purified BALB/c eosinophils to GATA-1 leukocytes as a control (Figure 6(i), thick line). Exogenously added CCR3+ cells were easily detectable under these conditions.

We took advantage of the effectiveness of eosinophil transfer to examine the relationship of 5-LO to the migration of eosinophils, as well as to the secondary recruitment of neutrophils and monocytes/macrophages. A mixed leukocyte population accumulated in the peritoneal cavity of ALOX recipients of PAS eosinophils (Figure 7(a)) 4 h after administration of eotaxin. No significant difference was observed relative to PAS recipients of PAS eosinophils under the same conditions, showing that recruitment is as effective in ALOX recipients as in wild-type recipients. The recruited leukocyte population in ALOX recipients included eosinophils (Figure 7(b)), comprising both the transferred eosinophils and those recruited by eotaxin administration to the recipients, again reaching levels comparable to those of PAS recipients of PAS eosinophils. Secondary recruitment was observed for both macrophages (Figure 7(c)) and neutrophils (Figure 7(d)), with effectiveness similar to that of the PAS-into-PAS transfers.

Figure 7: Effect of transfer of PAS or ALOX eosinophils on neutrophil accumulation in the peritoneal cavity of ALOX, PAS, and GATA-1 recipients. (a)–(d) ALOX mice received RPMI (open bars), PAS eosinophils (stippled bars), or PAS eosinophils followed by eotaxin, 50 ng/cavity, 30 minutes later (hatched bars). As positive controls, PAS mice received PAS eosinophils followed by eotaxin (gray bars). Peritoneal lavage fluid collected 4 h after eotaxin injection was used for quantitation of total leukocytes (a), eosinophils (b), macrophages (c), and neutrophils (d). (e) GATA-1 mice received eotaxin (black bars) or ALOX eosinophils followed by eotaxin 30 minutes later (hatched bars). Peritoneal lavage fluid collected 4 h after eotaxin administration was used for quantitation of neutrophils (Neuts), eosinophils (Eos), and macrophages (Mϕ). Data are mean ± SEM. *, P ≤ 0.05; **, P ≤ 0.01, for the indicated differences. Data from 3–5 experiments.

We next examined whether the critical step requiring 5-LO in this model is the initial eosinophil accumulation, rather than the secondary recruitment of neutrophils by eosinophils. If so, one would predict that direct transfer of ALOX eosinophils into eosinophil-deficient GATA-1 recipients should restore neutrophil accumulation in response to eotaxin. When purified eosinophils from ALOX bone-marrow cultures were transferred to GATA-1 recipients (Figure 7(e)), recruitment of neutrophils was very effective. This rules out the possibility that the step critically dependent on 5-LO is the generation by eosinophils of a neutrophil chemoattractant. On the other hand, as shown above for BALB/c eosinophil transfer into GATA-1 recipients, monocytes/macrophages were not increased by ALOX eosinophil transfer at this time point.
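As an aside for readers analyzing exported event data, the sketch below illustrates the kind of check underlying Figure 6(i): comparing CCR3 fluorescence within a granulocyte gate and verifying that a spiked-in CCR3+ population (here simulated at roughly 20% of events, matching the in vitro control) would be detectable. The threshold, distributions, and variable names are illustrative assumptions, not values from the study.

```python
# Illustrative check for a spiked CCR3+ subpopulation within a granulocyte gate.
# Synthetic log-normal intensities stand in for exported FCS channel data.
import numpy as np

rng = np.random.default_rng(0)

gata1 = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)      # CCR3-negative events
balbc_eos = rng.lognormal(mean=3.0, sigma=0.4, size=2_000)   # CCR3-positive events
spiked = np.concatenate([gata1[:8_000], balbc_eos])          # ~20% positives

# Assumed positivity gate: 99th percentile of the negative (GATA-1) distribution.
threshold = np.quantile(gata1, 0.99)

for name, sample in [("GATA-1", gata1), ("GATA-1 + BALB/c eos", spiked)]:
    frac_pos = np.mean(sample > threshold)   # fraction of CCR3+ events
    mfi = sample.mean()                      # mean fluorescence intensity
    print(f"{name}: {frac_pos:.1%} CCR3+, MFI = {mfi:.2f}")
```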
### 3.4. Impact on Granulocyte Interaction with Bacteria

We further examined whether eosinophil-mediated responses to eotaxin in this model affected the interaction of the secondarily recruited neutrophils with their bacterial targets. To do so, mixed leukocyte populations induced by eotaxin (RPMI in controls) were collected from naive BALB/c mice 4 h after injection, counted, and mixed for 30 minutes with GFP-expressing E. coli at a bacteria/neutrophil ratio adjusted to 400:1, before analysis by flow cytometry. Cells gated in the granulocyte region on the basis of size and complexity were examined for green fluorescence, resulting from both binding and internalization of bacteria. Figure 8(a) shows that eotaxin-stimulated granulocytes bind/internalize fluorescent E. coli bacteria more effectively than those collected from RPMI-injected control mice. This increase in effectiveness is detectable as an increased fraction of granulocytes binding bacteria (Figure 8(b)) and an increased mean fluorescence intensity (Figure 8(c)). This suggests that eosinophil-mediated recruitment of neutrophils is accompanied by an increased capacity to bind and/or ingest bacteria.

Figure 8: Flow cytometric analyses of peritoneal lavage leukocytes. (a)–(c) Interaction between fluorescent bacteria and neutrophils from eotaxin-injected and control mice. BALB/c mice were injected with eotaxin, 50 ng/cavity, i.p. Controls received RPMI. After 4 h, peritoneal lavage fluid was collected from both groups. After neutrophils were counted, fluorescent, viable E. coli bacteria were mixed with the leukocytes at a 400:1 bacteria/neutrophil ratio and incubated for a further 30 min before washing to eliminate unbound bacteria and analysis of neutrophil-associated fluorescence by flow cytometry. (a) Representative profiles of eotaxin-induced (thick, continuous line) and RPMI-induced (thin, interrupted line) neutrophil-associated fluorescence. (b) Fraction of neutrophils positive for fluorescent bacteria in RPMI-induced (open bar) and eotaxin-induced (black bar) peritoneal leukocyte populations (mean ± SEM). (c) Mean fluorescence intensity of neutrophils in the same samples (mean ± SEM). Data from 4–5 experiments.
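The two summary statistics reported in Figure 8 are straightforward to compute from event-level data; a minimal sketch follows, assuming fluorescence values have already been exported for granulocyte-gated events. The data shown are synthetic placeholders, and the positivity gate (set from the control sample) is an assumption of this sketch, not the study's gating strategy.

```python
# Computing the Figure 8 readouts from granulocyte-gated fluorescence values.
# Synthetic placeholder data; a real analysis would read exported FCS events.
import numpy as np

rng = np.random.default_rng(1)
rpmi_events    = rng.lognormal(mean=1.5, sigma=0.6, size=50_000)
eotaxin_events = rng.lognormal(mean=2.0, sigma=0.6, size=50_000)

# Assumed positivity gate: 95th percentile of the control (RPMI) distribution.
gate = np.quantile(rpmi_events, 0.95)

def summarize(events: np.ndarray) -> tuple[float, float]:
    """Return (fraction of events above the gate, mean fluorescence intensity)."""
    return float(np.mean(events > gate)), float(events.mean())

for label, ev in [("RPMI", rpmi_events), ("eotaxin", eotaxin_events)]:
    frac, mfi = summarize(ev)
    print(f"{label}: {frac:.1%} bacteria-positive granulocytes, MFI = {mfi:.2f}")
```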
## 4. Discussion

We describe here a mixed leukocyte accumulation occurring in the peritoneal cavity of naive mice injected with eotaxin. This is, to our knowledge, the first experimental evidence that recruitment of neutrophils and macrophages by eotaxin in nonsensitized animals is mediated by eosinophils. For neutrophils, recruitment was associated with an increased ability to bind/ingest bacteria and might therefore have an impact on antimicrobial defenses in specific conditions. Because the ability of eosinophils to act as effective antimicrobial defenses is limited by their scarcity, these findings also highlight conditions in which, by recruiting much larger numbers of cells with well-characterized microbicidal function, eosinophils actually overcome this theoretical disadvantage. Below we address a number of specific points that are important for putting our observations in a proper perspective.

### 4.1. Roles of Eotaxin, Eotaxin Receptors, and Eosinophils

Migration of all three leukocyte types in BALB/c mice was induced by eotaxin, as shown by identical dose-response relationships and overlapping kinetics, as well as by identical effects of neutralizing eotaxin with specific antibodies. The relationship of this migration to the expression of CCR3, by contrast, is more complex.
Lymphocytes, some of which have been shown by others to express CCR3 [41, 42], were not attracted by eotaxin to the peritoneal cavity of naive mice in significant numbers. On the other hand, despite the commonly held view that CCR3 expression is restricted to eosinophils [4, 18, 20, 22, 23], basophils [24, 25], eosinophil and basophil progenitors/precursors [26, 27], T cell subsets [41, 42], and smooth muscle cells [29], several studies have suggested that human and murine neutrophils and macrophages can also express CCR3, at least in specific experimental settings [28, 32], as also suggested by studies in neutrophils [43]. This would imply that all three leukocyte populations shown to be recruited in our study in wild-type (BALB/c, PAS) mice could simply be responding to eotaxin binding to CCR3 at the individual cell level, with no contribution from cellular interactions involving eosinophils. If so, eliminating eosinophils should cause no decrease in neutrophil or macrophage accumulation, and neutrophils and macrophages should express CCR3 at significant levels even in eosinophil-deficient GATA-1 mice. This possibility, however, has been directly ruled out by the demonstration that eotaxin in GATA-1 mutant mice recruits neither neutrophils nor macrophages. The evidence for cellular interactions in the neutrophil response to eotaxin is reinforced by experiments using the same strain, which show neutrophil recruitment following transfer of highly purified BALB/c eosinophils. Finally, we observed no significant accumulation of CCR3+ granulocytes in eotaxin-injected GATA-1 mice.

By contrast, GATA-1 mice had constitutively increased macrophage numbers in the peritoneal cavity, which were unaffected by 4 h of eotaxin administration, both with and without eosinophil transfer. It is possible that the GATA-1 mutation affects the cellular function, tissue distribution, and/or turnover of monocytes/macrophages so as to prevent responses to eotaxin, regardless of whether these are mediated by eosinophils. Therefore, we cannot conclude from our present observations in GATA-1 mice alone that eosinophils also recruit monocytes/macrophages. Direct evidence for eosinophil recruitment of monocytes/macrophages was, however, obtained through transfer of eosinophils from PAS donors into ALOX mice. Importantly, in the absence of eosinophil transfer, ALOX mice showed neither eosinophil nor monocyte/macrophage recruitment by eotaxin. Interestingly, although in the direct stimulation protocol ALOX mice resembled GATA-1 mice in their absence of monocyte/macrophage accumulation by 4 h, it is likely that different mechanisms underlie these similar outcomes, since (a) a similar failure to respond to eotaxin with monocyte/macrophage accumulation was observed in wild-type PAS controls and cannot therefore be ascribed to the absence of active 5-LO, and (b) transfer experiments show that eosinophil transfer from PAS donors allows a significant recruitment of monocytes/macrophages by eotaxin in ALOX recipients. These observations suggest that an active 5-LO is not required for monocytes/macrophages (or neutrophils) to respond to eotaxin, provided eosinophils are present.

Overall, the data indicate that eotaxin recruits a mixed leukocyte population in naive mice through a mechanism dependent on eosinophils.
Evidence that eosinophils play an active role is provided by the observation that the full effect in eosinophil transfer experiments requires both eosinophils and eotaxin, which would not be expected if eosinophils played a merely passive or permissive role. On the other hand, in transfer experiments about 10% of the transferred eosinophils were recovered at 4 h after eotaxin stimulation of the recipients. This raises the issue of whether the remaining transferred eosinophils underwent changes such as degranulation [4] or release of extracellular traps [10], which might represent a significant difference relative to the direct (nontransfer) protocol used in the initial experiments.

### 4.2. Role of 5-LO

Mixed leukocyte recruitment by eotaxin in naive mice shows the same dependence on 5-LO that was observed for selective eosinophil recruitment in sensitized mice. Hence, it is likely that eosinophil accumulation itself, the feature shared by both models, is the 5-LO-dependent step. This is consistent with the observation that ALOX eosinophils, when directly transplanted into the peritoneal cavity of GATA-1 recipients (which are otherwise unable to respond to eotaxin by accumulation of neutrophils), are able to mediate the neutrophil recruitment induced by eotaxin. Importantly, further recruitment of eosinophils occurs in eotaxin-stimulated ALOX recipients of PAS eosinophils, where the only cells bearing a functional 5-LO are the transferred eosinophils. This suggests that eosinophils can be a source as well as a target for a 5-LO pathway product, such as LTB4. LTB4 was previously shown to selectively attract eosinophils, in a model in which eotaxin duplicated the effect of antigen in a 5-LO-dependent manner [33]. Furthermore, there is significant evidence that recruitment involves interactions between cytokines and lipid mediators [44]. In neutrophil migration, LTB4 acts as a signaling relay, raising the possibility that it acts similarly in eosinophils [45]. Whatever mechanism is involved, eosinophil generation of a 5-LO-derived neutrophil chemoattractant is not required for the eosinophil-dependent secondary recruitment of neutrophils in eotaxin-injected naive mice. While in previous studies of sensitized mice LTB4, like antigen and eotaxin, selectively recruited eosinophils [34], it remains to be determined whether LTB4 accounts for the rapid eosinophil recruitment to the peritoneal cavity of the eotaxin-injected nonsensitized mice in the present study. A related issue for further investigation is whether 5-LO is required for the increased effectiveness of bacterial binding that was detected in BALB/c leukocytes, as LTB4 is known to activate as well as attract neutrophils [45].

### 4.3. Relationship to Innate and Acquired Immunity

Despite a common requirement for 5-LO, leukocyte recruitment by eotaxin differs in several respects between naive and sensitized mice, most notably in the lack of eosinophil selectivity in the former, as opposed to the latter. In sensitized mice, selective eosinophil recruitment was observed with widely different chemical stimuli (allergen, eotaxin, or LTB4). It is unlikely, therefore, that such selectivity reflects some feature of eotaxin signaling, and even less likely that it reflects one of LTB4 signaling (which should be very effective in mice having normal neutrophil numbers).
Alternatively, the failure of eotaxin and LTB4 to recruit neutrophils and monocytes/macrophages in sensitized mice could involve changes in the expression of adhesion proteins at endothelial surfaces, which would prevent their emigration from blood vessels into the peritoneal cavity, regardless of whether the chemoattractant is LTB4 or eotaxin. We have not examined this possibility, since our current observations, which center on responses of nonsensitized animals, do not depend on clarifying mechanisms that do not apply under the present conditions.

We view our findings as manifestations of innate immunity, because of (a) the very fast kinetics of eosinophil and neutrophil accumulation; (b) the recruitment of neutrophils and macrophages in the absence of significant lymphocyte accumulation; and (c) the detectable increase in granulocyte binding of live extracellular bacteria in the absence of antibodies. On the other hand, the fast recruitment of monocytes/macrophages by eotaxin-exposed eosinophils raises the issue of whether eosinophils could also enhance protection from more specialized pathogens, such as the intracellular mycobacteria and protozoa that cause chronic infections, which are usually handled by monocytes/macrophages. Relatively little attention has been paid to the possibility that eosinophils play a role in fighting microbial pathogens with the help of other leukocyte types. Our observations suggest that small numbers of eosinophils might recruit a large neutrophil and/or macrophage infiltrate.

While this would make eosinophils surprisingly effective players in innate immunity, it might paradoxically obscure their contribution, if that contribution were taken to be commensurate with their numbers in inflammatory infiltrates, where they would often amount to no more than one-tenth of total leukocytes. It is therefore fortunate that, in transfer experiments of wild-type and mutant eosinophils into eosinophil-null GATA-1 mice, eosinophil recruitment of neutrophils can be unequivocally demonstrated. In view of the differences between the naive and sensitized models in this respect, it will be of interest to determine whether this eosinophil functional capacity is modified by allergen sensitization of the host and whether such a change in innate immune functions can be duplicated by passively or actively sensitizing the host.

### 4.4. Possible Cellular Mechanisms Underlying the Effect of Eosinophils

Several, but not all, of the observations reported here are consistent with those of previous studies carried out by other groups in different experimental models. Eotaxin recruitment of a mixed leukocyte population, including neutrophils and macrophages, was described in human subjects [28]; Das and colleagues [46] reported that eotaxin was effective when injected into the peritoneal cavity of mice but not into a dorsal air pouch, drawing attention to the important differences between challenge sites responding to the same chemically defined stimulus. Responses in the air pouch occurred after local inoculation of mast cell-containing peritoneal cell populations, but allergen sensitization was essential for local responses to eotaxin in this transfer model. In addition, neutrophil migration accompanied recruitment of eosinophils in specific conditions. Harris and colleagues [47] confirmed that mast cells were important for full responses to eotaxin and further showed that eotaxin responses were blocked by 5-LO inhibitors.
Together, these studies suggest that eotaxin effectiveness is constrained in vivo by several factors that may be absent from in vitro (e.g., migration chamber or flow cytometric) studies. These constraints include mast cells and 5-LO. None of these published studies, however, evaluated the contribution of the recruited eosinophils themselves.

We suggest that a cytokine, rather than a 5-LO derivative, is released by eosinophils in the peritoneal cavity once they have been recruited by eotaxin in the presence of an active 5-LO or, alternatively, directly inoculated into the cavity through a transfer protocol. Candidate cytokines would include TNF-α and TGF-β1, both potent neutrophil chemoattractants. One hypothesis that could reconcile our observations with those of Das and Harris and their colleagues [46, 47] would involve amplification of the role of eosinophils through interactions with resident peritoneal mast cells, since mast cells are an important source of neutrophil chemoattractants, including TNF-α [48].
Therefore, we cannot conclude from our present observations in GATA-1 mice alone that eosinophils also recruit monocytes/macrophages. Direct evidence for eosinophil recruitment of monocytes/macrophages was, however, obtained through transfer of eosinophils from PAS donors into ALOX mice. Importantly, in the absence of eosinophil transfer, ALOX mice showed neither eosinophil nor monocyte/macrophage recruitment by eotaxin. Interestingly, although in the direct stimulation protocol ALOX mice resembled GATA-1 mice, in their absence of monocyte/macrophage accumulation by 4 h, it is likely that different mechanisms underlie these similar outcomes, since (a) a similar failure to respond to eotaxin with monocyte/macrophage accumulation was observed in wild-type PAS controls and cannot therefore be ascribed to the absence of active 5-LO; (b) transfer experiments show that eosinophil transfer from PAS donors allows a significant recruitment of monocytes/macrophages by eotaxin in ALOX recipients. These observations suggest that an active 5-LO is not required for monocytes/macrophages (or neutrophils) to respond to eotaxin, provided eosinophils are present.Overall, the data indicate that eotaxin recruits a mixed leukocyte population in naive mice through a mechanism dependent on eosinophils. Evidence that eosinophils play an active role is provided by the observation that full effect in eosinophil transfer experiments requires both eosinophils and eotaxin, which would not be expected if eosinophils played a merely passive or permissive role. On the other hand, in transfer experiments about 10% of the transferred eosinophils were recovered by 4 h eotaxin stimulation of the recipients. This raises the issue of whether the remaining transferred eosinophils underwent changes such as degranulation [4] or release of extracellular traps [10], which might represent a significant difference relative to the direct (nontransfer) protocol used in the initial experiments. ## 4.2. Role of 5-LO Mixed leukocyte recruitment by eotaxin in naive mice shows the same dependence on 5-LO that was observed for selective eosinophil recruitment in sensitized mice. Hence, it is likely that eosinophil accumulation itself, the shared feature in both models, is the 5-LO-dependent step. This is consistent with the observation that ALOX eosinophils, when directly transplanted to the peritoneal cavity of GATA-1 recipients (which are unable to respond to eotaxin by accumulation of neutrophils), are able to mediate the neutrophil recruitment induced by eotaxin. Importantly, further recruitment of eosinophils occurs in eotaxin-stimulated ALOX recipients of PAS eosinophils, where the only cells bearing a functional 5-LO are the transferred eosinophils. This suggests that eosinophils can be a source as well as a target for a 5-LO pathway product, such as LTB4. LTB4 was previously shown to selectively attract eosinophils, in a model in which eotaxin duplicated the effect of antigen in a 5-LO-dependent manner [33]. Furthermore, there is significant evidence that recruitment involves interactions between cytokines and lipid mediators [44]. In neutrophil migration, LTB4 represents a signaling relay, raising the possibility that it acts similarly in eosinophils [45]. Whatever mechanism is involved, eosinophil generation of a 5-LO-derived neutrophil chemoattractant is not required for the eosinophil-dependent secondary recruitment of neutrophils in eotaxin-injected naïve mice. 
Although LTB4, like antigen and eotaxin, selectively recruited eosinophils in previous studies of sensitized mice [34], it remains to be determined whether it accounts for the rapid eosinophil recruitment to the peritoneal cavity of the eotaxin-injected nonsensitized mice in the present study. A related issue for further investigation is whether 5-LO is required for the increased effectiveness of bacterial binding that was detected in BALB/c leukocytes, as LTB4 is known to activate as well as attract neutrophils [45].

## 4.3. Relationship to Innate and Acquired Immunity

Despite a common requirement for 5-LO, leukocyte recruitment by eotaxin differs in several respects between naive and sensitized mice, especially the lack of eosinophil selectivity in the former, as opposed to the latter. In sensitized mice, selective eosinophil recruitment was observed with widely different chemical stimuli (allergen, eotaxin, or LTB4). It is therefore unlikely that such selectivity reflects features of eotaxin signaling, still less of LTB4 signaling (which should be very effective in mice having normal neutrophil numbers). Alternatively, the failure of eotaxin and LTB4 to recruit neutrophils and monocytes/macrophages in sensitized mice could involve changes in the expression of adhesion proteins at endothelial surfaces, which would prevent their emigration from blood vessels to the peritoneal cavity, regardless of whether the chemoattractant is LTB4 or eotaxin. We have not examined this possibility, since our current observations, which are centered on responses from nonsensitized animals, do not depend on clarifying mechanisms that were not applicable to the present conditions.

We view our findings as manifestations of innate immunity, because of (a) the very fast kinetics of eosinophil and neutrophil accumulation; (b) the recruitment of neutrophils and macrophages in the absence of significant lymphocyte accumulation; and (c) the detectable increase in granulocyte binding of live extracellular bacteria in the absence of antibodies. On the other hand, the fast recruitment of monocytes/macrophages by eotaxin-exposed eosinophils raises the issue of whether eosinophils could also enhance protection from more specialized pathogens, such as the intracellular mycobacteria and protozoa that cause chronic infections, which are usually handled by monocytes/macrophages. Relatively little attention has been paid to the possibility that eosinophils play a role in fighting microbial pathogens with the help of other leukocyte types. Our observations suggest that small numbers of eosinophils might recruit a large neutrophil and/or macrophage infiltrate.

While this would make eosinophils surprisingly effective players in innate immunity, it might paradoxically obscure their contribution, if that contribution were taken as commensurate with their numbers in inflammatory infiltrates, where they would often amount to no more than one-tenth of total leukocytes. It is therefore fortunate that, in transfer experiments of wild-type and mutant eosinophils into eosinophil-null GATA-1 mice, eosinophil recruitment of neutrophils can be unequivocally demonstrated. In view of the differences between naive and sensitized models in this respect, it is of interest to determine whether this eosinophil functional capacity is modified by allergen sensitization of the host and whether such a change in innate immune functions can be duplicated by passively or actively sensitizing the host.
## 4.4. Possible Cellular Mechanisms Underlying the Effect of Eosinophils

Several, but not all, of the observations reported here are consistent with those of previous studies, carried out by other groups in different experimental models. Eotaxin recruitment of a mixed leukocyte population, including neutrophils and macrophages, was described in human subjects [28]. Das and colleagues [46] reported that eotaxin was effective when injected in the peritoneal cavity of mice but not in a dorsal air pouch, drawing attention to the important differences between challenge sites responding to the same chemically defined stimulus. Responses in the air pouch occurred after local inoculation of mast cell-containing peritoneal cell populations, but allergen sensitization was essential to local responses to eotaxin in this transfer model. In addition, neutrophil migration accompanied recruitment of eosinophils in specific conditions. Harris and colleagues [47] confirmed that mast cells were important for full responses to eotaxin and further showed that eotaxin responses were blocked by 5-LO inhibitors. Together, these studies suggest that eotaxin effectiveness is constrained in vivo by several factors, including mast cells and 5-LO, that may be absent from in vitro (e.g., migration chamber or flow cytometric) studies. None of these published studies, however, evaluated the contribution of the recruited eosinophils themselves.

We suggest that a cytokine, rather than a 5-LO derivative, is released by eosinophils in the peritoneal cavity, once they have been recruited by eotaxin in the presence of an active 5-LO or, alternatively, directly inoculated into the cavity through a transfer protocol. Candidate cytokines would include TNF-α and TGF-β1, both potent neutrophil chemoattractants. One hypothesis that could reconcile our observations with those of Das and Harris and their colleagues [46, 47] would involve amplification of the role of eosinophils through interactions with resident peritoneal mast cells, since mast cells are an important source of neutrophil chemoattractants, including TNF-α [48].

---
*Source: 102160-2014-02-25.xml*
2014
# A Distribution-Free Approach to Stochastic Efficiency Measurement with Inclusion of Expert Knowledge

**Authors:** Kerry Khoo-Fazari; Zijiang Yang; Joseph C. Paradi
**Journal:** Journal of Applied Mathematics (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102163

---

## Abstract

This paper proposes a new efficiency benchmarking methodology that is capable of incorporating probability while still preserving the advantages of a distribution-free and nonparametric modeling technique. This new technique developed in this paper will be known as the DEA-Chebyshev model. The foundation of the DEA-Chebyshev model is the model pioneered by Charnes, Cooper, and Rhodes in 1978, known as Data Envelopment Analysis (DEA). The combination of normal DEA with the DEA-Chebyshev frontier (DCF) can successfully provide a good framework for evaluation based on quantitative data and qualitative intellectual management knowledge. The model was tested on a simulated dataset. It has been statistically shown that this model is effective in predicting a new frontier, whereby DEA-efficient units can be further differentiated and ranked. It is an improvement over other methods, as it is easily applied, practical, not computationally intensive, and easy to implement.

---

## Body

## 1. Introduction

There has been a substantial amount of research conducted in the area of stochastic evaluation of efficiency, such as the stochastic frontier approach (SFA) [1, 2], stochastic data envelopment analysis (DEA) [3, 4], chance-constrained programming (CCP) efficiency evaluation [5–8], and statistical inference to deal with variations in data. The problems associated with these methodologies range from the requirement for specification of some functional form or parameterization to the requirement of a substantial amount of (time series) data. Relying on past and present data alone to provide a good estimation of the efficient frontier may not be suitable today due to the rapid evolution of such "nuisance" parameters. Hence, management's expert opinion cannot be excluded from efficiency analyses.

This paper proposes to develop a new efficiency benchmarking methodology that is capable of incorporating probability while still preserving the advantages of a function-free and nonparametric modeling technique. This new technique developed in this paper will be known as the DEA-Chebyshev model. The objectives are, first, to distinguish amongst top performers and, second, to define a probable feasible target for the empirically efficient units (as they are found from the usual DEA models) with respect to the DEA-Chebyshev frontier (DCF). This can be achieved by incorporating management's expertise (qualitative component) along with the available data (quantitative component) to infer this new frontier. The foundation of the DEA-Chebyshev model is the model pioneered by Charnes et al. in 1978 [10], known as DEA. It is a deterministic approach, which requires no distributional assumptions or functional forms with predefined parameters. The main drawback of deterministic approaches is that they make no allowance for random variations in the data.
The DEA methodology has been chosen as a foundation for this research because of the following advantages:
(i) it is nonparametric and does not require a priori assumptions regarding the distribution of the data;
(ii) it has the ability to simultaneously handle multiple inputs and outputs without making prior judgments of their relative importance (i.e., it is function-free);
(iii) it can provide a single measurement of performance based upon multiple inputs and outputs.

DEA ensures that the production units being evaluated will only be compared with others from the same "cultural" environment, provided, of course, that they operate under the same environmental conditions.

The rest of the paper is organized as follows. Section 2 gives a brief literature review. Section 3 describes some possible causes of data discrepancies that may or may not be observable and their effects on the variables. Section 4 discusses the assumptions and mathematical formulation of the DEA-Chebyshev model. Section 5 provides the simulation and comparison with other efficiency evaluation techniques. Finally, our conclusions are presented in Section 6.

## 2. Literature Review

This section reviews past and present research on stochastic models and weight-restricted models designed for performance measurement. It shows the relevance of well-known methodologies used for estimating efficiency scores and constructing the approximated frontier in order to account, as well as possible, for noise, which can have diverse effects on efficiency evaluation of human performance-dependent entities.

### 2.1. Stochastic Frontier Approach

Aigner et al. [1] and Meeusen and van den Broeck [2] independently and simultaneously proposed a stochastic frontier model known as the Stochastic Frontier Approach (SFA) for performance evaluation. SFA uses econometric methods for estimating the efficient frontier. The problems associated with SFA are that, first, weights (or parameters) have to be predefined to determine its functional form, which requires parameterization. Second, a distributional form must be specified in order to estimate random errors. Third, multiple outputs are not easy to incorporate into the model. Finally, samples have to be large enough to allow inferring the distributional form of the random errors.

### 2.2. Stochastic DEA

Stochastic DEA is a DEA method that attempts to account for and filter out noise by incorporating stochastic variations of inputs and outputs while still maintaining the advantages of DEA [4]. The method relies on the theory that there will always exist an optimal solution for industrial efficiency. The variability in outputs is dealt with using the risk-averse efficiency model by Land et al. [11] with a risk preference function. Kneip and Simar [3] proposed a nonparametric estimation of each decision-making unit (DMU)'s production function using panel data over T time periods. This filters the noise from the outputs. The fitted values of the outputs, along with the inputs, are then evaluated using DEA. In this instance, efficiency is determined by the distance of the estimated frontier to the observed DMUs. The drawback of this method is that a reasonable estimate of efficiency can be obtained only when T and q (the number of DMUs) are sufficiently large.
### 2.3. Chance-Constrained DEA

Chance-constrained programming was first developed by Charnes and Cooper [5] and Kall [7] as an operational research approach for optimizing under uncertainty when some coefficients are random variables distributed according to some laws of probability. The CCP DEA models in the past generally assumed that variations observed in the outputs follow a normal distribution. Variations in inputs are assumed to be the cause of inefficiency [12], while random noise occurs in outputs. Since the distribution of inefficiency is uncertain (although theoretically assumed to be half-normal or gamma), the chance-constraint formulation is not applied to input constraints (inputs are held deterministic, while outputs are stochastic). Olesen and Petersen [9] state that the hypothesis concerning the amount of noise in the data cannot be tested. Using panel data, variations in the data can be dichotomized into noise and inefficiency. Another variation of CCP DEA was introduced by Cooper et al. [6] utilizing the "satisficing concepts." The concept is used to interpret managerial policies and rules in order to determine the optimizing and satisficing actions, which are distinguished from inefficiencies. Optimizing and satisficing can be regarded as mutually exclusive events. The former represents physical possibilities or endurance limits, and the latter represents aspiration levels.

All these CCP formulations have considered normal distributions for the probability of staying within the constraints. This method is effective when qualitative data is not available. However, expert opinion from management cannot be discounted with regard to data dispersion from the expected or correct values. Unfortunately, current CCP is strictly a quantitative analysis based on empirical data whose variations are assumed to follow a predefined distributional form.

### 2.4. Assurance Region and Cone-Ratio Models

In an "unrestricted" DEA model, the weights are assigned to each DMU such that it appears as favourable as possible, which is an inherent characteristic of DEA. Hence, there is a concern that largely different weights may be assigned to the same inputs and outputs in the LP solutions for different DMUs. This motivated the development of weight-restricted models such as the "assurance region" (AR) [13, 14], the "cone-ratio" (CR) [15], and other variations of these models.

The motivation behind weight-restricted models is to redefine the DEA frontier so as to make it as practical as possible; that is, the inherent tendency of DEA to assign unrealistically small or large weights to certain inputs or outputs is curbed. By contrast, the stochastic frontier models redefine the frontier in the presence of noise or data disparity. Stochastic approaches are designed to evaluate DMUs based on the understanding that constraints may, realistically, not always hold due to noise. Weight restrictions are also applicable in stochastic approaches.

Weight-restriction models deal directly with the model's inconsistencies in a practical sense using qualitative information, whereas stochastic models deal with data discrepancies and inconsistencies using quantitative approaches to infer the degree of data disparity. Although the motivations of these two methods are similar, the underlying objectives for their developments are not the same. Both are valid extensions of the normal DEA model in attempting to correct the frontier.

The Assurance Region (AR) model was developed by Thompson et al.
[13] to analyze six sites for the location of a physics lab. This approach imposes additional constraints in the DEA model with respect to the magnitude of the weights. The AR is defined to be the subset of $W$, the weight space that denotes the vectors of multipliers consisting of $v$ and $u$, such that any region outside the AR does not contain reasonable input and output multipliers. An additional constraint for the ratio of input weights [14] can be defined as

(1) $l_{1,i} \le \frac{v_i}{v_1} \le u_{1,i}$, equivalently $v_1 l_{1,i} \le v_i \le v_1 u_{1,i}$, for $i=1,\dots,m$,

where $m$ denotes the number of inputs, $v_1$ and $v_i$ are the weights for input 1 and input $i$, respectively, and $l_{1,i}$ and $u_{1,i}$ are the lower and upper bounds for the ratio of multipliers.

The cone-ratio (CR) method was developed by Charnes et al. [15], which allows for closed convex cones for the virtual multipliers. It is a more general approach than the AR. In the AR model, there can only be two admissible nonnegative vectors, one for the lower bound and the other for the upper bound of the ratio of virtual weights. In the CR case, however, there can be $k$ admissible nonnegative vectors for input weights and $l$ admissible nonnegative vectors for output weights; that is, the feasible region for the weights is a polyhedral convex cone spanned by $k$ and $l$ admissible nonnegative direction vectors for inputs and outputs, respectively:

(2) $v=\sum_{h=1}^{k}\alpha_h \vec{a}_h, \qquad u=\sum_{s=1}^{l}\beta_s \vec{b}_s$,

where the $\vec{a}_h$ denote the direction vectors and $\alpha_h \ge 0$ (for all $h$) are the weights applied to select the best nonnegative vector. The AR method is thus equivalent to selecting only two admissible vectors under the CR method. The lower and upper bounds are denoted as vectors in the two-input case by

(3) $\vec{a}_1=(1,\, l_{1,2},\, 0,\, \dots,\, 0), \qquad \vec{a}_2=(1,\, u_{1,2},\, 0,\, \dots,\, 0)$,

respectively. A small numerical sketch of this AR-to-CR correspondence follows below.
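To make the correspondence concrete, here is a minimal numpy sketch (the bounds $l_{1,2}=0.5$ and $u_{1,2}=2$ are purely illustrative, not taken from [13–15]) that builds the two admissible direction vectors of (3) and checks whether a candidate weight vector lies in the cone of (2) that they span.

```python
import numpy as np

# Hypothetical assurance-region bounds on the ratio v2/v1 (illustrative only).
l_12, u_12 = 0.5, 2.0

# Equation (3): the two admissible direction vectors spanning the cone
# for the two-input case (any remaining inputs would get zero entries).
a1 = np.array([1.0, l_12])   # lower-bound direction
a2 = np.array([1.0, u_12])   # upper-bound direction

def in_cone(v, a1, a2):
    """Check whether v = alpha1*a1 + alpha2*a2 for some alpha1, alpha2 >= 0,
    i.e., whether v lies in the polyhedral convex cone of equation (2)."""
    A = np.column_stack([a1, a2])
    alpha, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ alpha, v) and np.all(alpha >= -1e-12)

v_ok = np.array([1.0, 1.2])     # candidate weights: 0.5 <= v2/v1 = 1.2 <= 2.0
v_bad = np.array([1.0, 3.0])    # violates v2/v1 <= u_12
print(in_cone(v_ok, a1, a2))    # True
print(in_cone(v_bad, a1, a2))   # False
```

Membership in the cone is exactly the ratio constraint (1), which is why the AR model is the special case of CR with two admissible vectors.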
## 3. Data Variations

### 3.1. Two Error Sources of Data Disparity Affecting Productivity Analysis

Before we begin to make modifications to incorporate probability into the basic DEA model, it is crucial that the types of errors that are sources of data disparity be identified. These can be segregated into two categories: systematic and nonsystematic errors. Nonsystematic errors are typically defined to be statistical noise, which is random normal $N(0,\sigma^2)$ and independent and identically distributed (i.i.d.); such errors eventually average to zero. For systematic errors, "the degree to which the measured variable reflects the underlying phenomenon depend[s] on its bias and variance relative to the true or more appropriate measure" [16]. Systematic errors, or measurement errors, are deemed to have the most damaging effects because they introduce bias into the model. They may be caused by a lack of information.

The design of the new DEA model is intended to take into account the possibility of data disparity that affects productivity analysis, while preserving the advantages that DEA offers, in order to estimate the true level of efficiency. Due to data disparity, normal DEA results may contain two components of the error term. The first is statistical noise, which follows a normal distribution, while the second is technical inefficiency, which is said to follow a truncated normal or a half-normal distribution. This can be accommodated by relaxing the LP constraints to allow for these variations, which may provide a better approximation of the level of efficiency.

The following general linear programming model illustrates the mathematical form of systematic and nonsystematic errors as defined previously.
Variation in the variable $X$ of the objective function will result in different values for the optimized coefficient $\beta$:

(4) $\min_{\beta} g = X'\beta$, subject to $X'\beta \ge y$, $\beta \ge 0$.

If the variation in $X$ is stochastic, then $X=\bar{x}+\varepsilon$ with $\varepsilon \sim N(0,\sigma^2)$ by the Central Limit Theorem; one can characterize how closely the vector $X$ is scattered around its mean $\bar{x}$ by the distance function $D^2=D^2(X;\bar{x},V_\varepsilon)=(X-\bar{x})'V_\varepsilon^{-1}(X-\bar{x})$, where $V_\varepsilon$ denotes the variance-covariance matrix of $\varepsilon$ [4].

Four scenarios that describe sources of data disparity are illustrated below. The notation is as follows:
$x_{ir}$: observed input $i$, $i=1,\dots,m$, for DMU $r$;
$y_{jr}$: observed output $j$, $j=1,\dots,n$, for DMU $r$;
$\mu_r^x$: expected value of input for DMU $r$;
$\mu_r^y$: expected value of output for DMU $r$;
$b_r^x$: bias of $x_r$; $\hat{b}_r^x$: estimate of $b_r^x$;
$b_r^y$: bias of $y_r$; $\hat{b}_r^y$: estimate of $b_r^y$.

The following equations define the relationship between the observed and true (expected) values for both inputs and outputs in a productivity analysis such as SFA, where measurement errors and/or random noise and inefficiencies are a concern in parametric estimation:

(5) $x_{ir}=\mu_{ir}^x+b_{ir}^x$, for some input $i$ for unit $r$;
(6) $y_{jr}=\mu_{jr}^y+b_{jr}^y$, for some output $j$ for unit $r$ (considered for cases in which there may be some bias in output levels);
(7) $\mu_{ir}^x,\mu_{jr}^y \ge 0$; $b_{ir}^x$, $b_{jr}^y$ unrestricted in sign.

The following four scenarios illustrate the impact of different errors and were constructed using the notation given previously. These scenarios follow the definition by Tomlinson [16]; a small simulation of all four follows after this list.

Scenario I. Consider the following:
(8) $E(b_{ir}^x)=0$, $\mathrm{Var}(b_{ir}^x)=0$, $E(x_{ir})=\mu_{ir}^x$, $\mathrm{Var}(x_{ir})=0$.
With zero bias and variance, the observed input value is the true value: $E(x_{ir})=\mu_{ir}^x=x_{ir}$. This implies that the data is 100% accurate; the expected value is exactly the same as the observed value. In reality, it is rare to have data with such accuracy.

Scenario II. Consider the following:
(9) $E(b_{ir}^x)=\hat{b}_{ir}^x \neq 0$, $\mathrm{Var}(b_{ir}^x)=0$, $E(x_{ir})=\mu_{ir}^x+\hat{b}_{ir}^x$, $\mathrm{Var}(x_{ir})=0$.
The bias is nonzero with zero variance; hence, the errors are systematic, and the observed value is a biased estimate of the true value $\mu_{ir}^x$. In this case, systematic errors are a problem where inputs are concerned; when measurement errors exist, the observed values are biased, which in turn biases DEA results. Empirical methods, such as DEA, make no allowance for this error and evaluate DMUs based strictly on the observed values. However, expectations of the observed values can be determined qualitatively and incorporated into the LP.

Scenario III. Consider the following:
(10) $E(b_{ir}^x)=0$, $\mathrm{Var}(b_{ir}^x)=\sigma_{b_{ir}}^2>0$, $E(x_{ir})=\mu_{ir}^x$, $\mathrm{Var}(x_{ir})=\sigma_{b_{ir}}^2$.
The expected value of a constant is the constant itself, and the variance of a constant is zero; hence $\mathrm{Var}(x_{ir})=\mathrm{Var}(\mu_{ir}^x+b_{ir}^x)=0+\mathrm{Var}(b_{ir}^x)=\sigma_{b_{ir}}^2$. The bias is zero but the variance is nonzero; variations are therefore due to statistical noise. A DMU that appears efficient may in fact be utilizing an input-output production mix that is less than optimal; its seeming efficiency is caused by a variation in its favour. Results obtained using empirical models are prone to inaccuracy of this nature. However, in the absence of bias the expected value will converge over time to the true value.

Scenario IV. Consider the following:
(11) $E(b_{ir}^x)=\hat{b}_{ir}^x \neq 0$, $\mathrm{Var}(b_{ir}^x)=\sigma_{b_{ir}}^2>0$, $E(x_{ir})=\mu_{ir}^x+\hat{b}_{ir}^x$, $\mathrm{Var}(x_{ir})=\sigma_{b_{ir}}^2$.
Both bias and variance are nonzero, which implies that both systematic and nonsystematic errors exist in the data. The variance corresponds to some input $i$; the variable $x_{ir}$ is affected by some random amount and some bias $b_{ir}^x$. Hence, the observed value is not an unbiased estimate of the true value. This scenario corresponds to the drawback of empirical frontiers.
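The four scenarios can be made tangible with a short simulation; all numeric values (true level, bias, noise scale) are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 10.0, 100_000          # hypothetical true input level and sample size

# Scenario I: zero bias, zero variance -> observed equals true value.
x1 = np.full(n, mu)

# Scenario II: systematic error only (nonzero bias, zero variance).
bias = 1.5                     # hypothetical measurement bias
x2 = np.full(n, mu + bias)

# Scenario III: statistical noise only (zero bias, nonzero variance).
sigma = 0.8
x3 = mu + rng.normal(0.0, sigma, n)

# Scenario IV: both systematic and nonsystematic errors (bias + noise).
x4 = mu + bias + rng.normal(0.0, sigma, n)

for label, x in [("I", x1), ("II", x2), ("III", x3), ("IV", x4)]:
    print(f"Scenario {label}: mean = {x.mean():7.3f}   var = {x.var():5.3f}")
# Only Scenarios I and III have sample means near mu; II and IV are biased,
# which is exactly the case an empirical frontier cannot average away.
```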
The term "measurement error" does not simply imply that data had been misread or collected erroneously. According to Tomlinson [16], it may also not be constant over time. The inaccuracy of the collected data may be due to a lack of implicit information, which may or may not be quantifiable but is deemed to have the most damaging effect because it introduces bias into the model.

### 3.2. Chance-Constrained Programming and DEA

Deterministic methods such as DEA are not designed to handle cases in which, due to uncertainty, constraints may occasionally be violated. Various methods have been employed to transform the basic DEA approach to include stochastic components. Two of the more popular methods are chance-constrained programming (CCP) and stochastic DEA. An extensive literature survey has revealed that CCP DEA has always assumed a normal distribution. The objective of this research is to redefine the probabilities employed in CCP productivity analysis so as to accommodate problems emanating from various scenarios where errors are independent but convoluted, without assuming any distributional form. The independence and convolution of the error terms make it difficult to distinguish between them; hence, a distribution-free approach will be employed.

The advantage of using CCP is that it maintains the nonparametric form of DEA. It allows modeling of multiple inputs and outputs with ease. There is no ambiguity in defining a distribution or in interpreting the results, as had been demonstrated in the Normal-Gamma parametric SFA model [17]. CCP typically states that constraints need not hold "almost surely" but instead hold with some probability level. Uncertainty is represented in terms of outcomes denoted by $\omega$; the elements $\omega$ are used to describe scenarios or outcomes, and all random variables jointly depend on these outcomes. These outcomes may be combined into subsets of $\Omega$ called events: $A$ represents an event and $\mathcal{A}$ represents the collection of events. Examples of events may include political situations or trade conditions, which would allow us to describe random variables such as costs and interest rates. Each event is associated with a probability $P(A)$. The triplet $(\Omega,\mathcal{A},P)$ is known as a probability space. This situation is often found in strategic models where knowledge of all possible future outcomes is acquired through expert opinion. Hence, in general form, CCP can be written as

(12) $P\{A_i x(\omega) \ge h_i(\omega)\} \ge \alpha_i$,

where $0<\alpha_i<1$ and $i=1,\dots,I$ indexes the constraints that must hold jointly. The previous probabilistic constraint can be written in its expectational form (or deterministic equivalent), where $f_i$ is an indicator of $\{\omega \mid A_i x(\omega)\ge h_i(\omega)\}$:

(13) $E_\omega(f_i(\omega,x(\omega))) \ge \alpha_i$.

The focus of this paper is the further development of DEA coupled with CCP. The benefit of applying CCP to DEA is that the multidimensional and nonparametric form of DEA is maintained. To drop the a priori assumption discussed in [9, 11, 18] regarding the distributional form used to account for possible data disparity, a distribution-free method is introduced. A minimal Monte Carlo illustration of the chance constraint (12) follows below.
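As a sketch of the general form (12) and its expectational form (13), the following checks a candidate decision vector against a chance constraint by Monte Carlo sampling of outcomes $\omega$; the coefficient and right-hand-side distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chance constraint of the form (12): P{ A(w) @ x >= h(w) } >= alpha,
# where the coefficient vector A and right-hand side h depend on a random outcome w.
alpha = 0.90
x = np.array([2.0, 3.0])                    # candidate decision vector

n_outcomes = 50_000                         # sampled outcomes w from Omega
A = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(n_outcomes, 2))
h = rng.normal(loc=4.5, scale=0.3, size=n_outcomes)

# Expectational form (13): the mean of the indicator of {w | A(w) @ x >= h(w)}.
prob = (A @ x >= h).mean()
print(f"Estimated P(Ax >= h) = {prob:.3f}  (required: >= {alpha})")
print("feasible" if prob >= alpha else "infeasible")
```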
In [11, 18], the CCP DEA input-oriented model is formulated on the basis that discrepancies in outputs are due to statistical noise, while those in inputs are caused by inefficiency:

(14) $\min z_0=\theta$, subject to $P(Y\lambda-y_0 \ge 0)\ge 1-\alpha$, $X\lambda-\theta x_0 \le 0$, $\mathbf{1}^\top\lambda=1$, $\lambda \ge 0$.

The CCP formulation shown in (14) is designed to minimize the radial input contraction factor $\theta$, subject to the constraints specified. CCP DEA models in the past generally assume that the normal distribution suffices. For example, under the assumption that the variation shown previously is normal, formulation (14) can be written in the following vector deterministic form:

(15) $\min z_0=\theta$, subject to $E(Y\lambda-y_0)-1.645\,\sigma \ge 0$, $X\lambda-\theta x_0 \le 0$, $\mathbf{1}^\top\lambda=1$, $\lambda \ge 0$,

where $X$ and $Y$ denote the vectors of inputs and outputs, respectively. Assuming that each DMU is independent of the others, the covariances equal zero. $\sigma$ denotes the standard deviation of $Y\lambda-y_0$, whose variance is

(16) $\mathrm{Var}(Y\lambda-y_0)=\mathrm{Var}(y_1\lambda_1+y_2\lambda_2+\cdots+y_q\lambda_q-y_0)$,

where the subscript $q$ denotes the number of DMUs. If the DMU under evaluation is DMU 1, then $y_0 \equiv y_1$; hence, (16) yields

(17) $\sigma=\sqrt{(\lambda_1-1)^2\mathrm{Var}(y_1)+\lambda_2^2\mathrm{Var}(y_2)+\cdots+\lambda_q^2\mathrm{Var}(y_q)}$.

If $\lambda_1=1$ and $\lambda_r=0$ for all $r \neq 1$, then the standard deviation vanishes and the efficiency score calculated by CCP will be the same as that of DEA. This does not imply that all DEA scores will coincide with the CCP ones (except for DMU 1's score).

The first constraint in (15) states that there is only a slight chance (here $\alpha=0.05$) that the outputs of the observed unit exceed those of the best-practice units. $E(Y\lambda-y_0)$ is determined based on the assumption that the observed values are representative of their mathematical expectation. The second constraint is strictly deterministic and states that the best performers cannot employ more than $\theta x_0$ of inputs; if they do, they cannot be efficient and will not be included in the reference set of best performers. A small numerical sketch of evaluating constraint (15) with (16)–(17) follows below.
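The following sketch evaluates the first constraint of (15) for a given intensity vector $\lambda$, using the variance formula (16)–(17); the single-output data, variances, and $\lambda$ are hypothetical.

```python
import numpy as np

# Hypothetical single-output data for q = 4 DMUs: observed outputs and
# their (assumed known) variances; DMU 1 is the unit under evaluation.
y = np.array([10.0, 12.0, 9.0, 11.0])
var_y = np.array([0.4, 0.6, 0.5, 0.3])
lam = np.array([0.2, 0.5, 0.0, 0.3])      # candidate intensities, summing to 1

r0 = 0                                    # index of the evaluated DMU (y0 = y[r0])

# Equation (17): the evaluated DMU's own weight enters as (lambda_1 - 1),
# because y0 is itself one of the random outputs in the sum.
coeff = lam.copy()
coeff[r0] -= 1.0
sigma = np.sqrt(np.sum(coeff**2 * var_y))

# First constraint of (15): E(Y @ lam - y0) - 1.645 * sigma >= 0.
expected_gap = y @ lam - y[r0]
lhs = expected_gap - 1.645 * sigma
print(f"E(Y lam - y0) = {expected_gap:.3f}, sigma = {sigma:.3f}, LHS = {lhs:.3f}")
print("constraint holds" if lhs >= 0 else "constraint violated")
```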
Using the same mathematical formulation shown in (14) and (15), and by incorporating a distribution-free approach, the DCF is established.

## 4. DEA-Chebyshev Model

The advantages of using the DEA-Chebyshev model as an efficiency evaluation tool are that it provides an approximation of performance given that random errors and inefficiencies do exist, and that these deviations are considered either through expert opinion or through data inference. Nevertheless, the results should always be subject to management scrutiny. This method also provides for ranking efficient DMUs.

### 4.1. Chebyshev's Theorem

In simplified terms, the Chebyshev theorem states that the fraction of a dataset lying within $\tau$ standard deviations of the mean is at least $1-1/\tau^2$, where $\tau>1$.

The DEA-Chebyshev model developed in this paper will not be restricted to any one distribution but will instead assume an unknown distribution. A distribution-free approach will be used to represent the stochastic nature of the data. This approach is applied to the basic DEA model using chance-constrained programming. The distribution-free device used is the Chebyshev inequality, which states that

(18a) $P(|\bar{x}-\mu| \ge \tau\sigma) \le \frac{1}{\tau^2}$,

or equivalently

(18b) $P(|\bar{x}-\mu| \ge \tau) \le \frac{\sigma^2}{\tau^2}$.

Let a random variable $x$ have some probability distribution of which we know only the variance $\sigma^2$ and the mean $\mu$ [19]. The inequality implies that the probability of the sample mean $\bar{x}$ falling outside the interval $[\mu \pm \tau\sigma]$ is at most $1/\tau^2$, where $\tau$ refers to the number of standard deviations away from the mean, using the notation in [19]. A quick empirical check of (18a) follows below.
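As a numerical check of (18a), the following uses a deliberately skewed (exponential) sample to emphasize that no normality is assumed; the sample size and thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical check of (18a) for a skewed, non-normal distribution:
# P(|x - mu| >= tau*sigma) <= 1/tau**2 must hold for any distribution.
x = rng.exponential(scale=1.0, size=1_000_000)   # Exp(1): mu = sigma = 1
mu, sigma = x.mean(), x.std()

for tau in [1.5, 2.0, 3.0]:
    emp = np.mean(np.abs(x - mu) >= tau * sigma)
    print(f"tau = {tau}: empirical = {emp:.4f}   Chebyshev bound = {1/tau**2:.4f}")
# The empirical tail probabilities sit well below 1/tau^2, illustrating
# why a Chebyshev-based frontier estimate is conservative.
```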
The one-sided Chebyshev inequality can be written as

(19) $P(\bar{x}-\mu \ge \tau) \le \frac{\sigma^2}{\sigma^2+\tau^2}$,

as shown in [20].

Other methods considered for defining the probabilities in the DEA-Chebyshev model were the distribution-free linear constraint set (or linear approximation), the unit sphere method, and the quantile method. These methods were tested to determine which of them would provide the best estimate of the true boundary mentioned in [21]. The true boundary (called set $S$) is defined to be a two-dimensional boundary generated using some parametric function defined as the chance-constrained set

(20) $S=\{X=(x_1,\dots,x_m) \mid P[AX-b \le 0] \ge \alpha;\; X \ge 0\}$,

where $b$ and the vector $A=(a_1,a_2,\dots,a_m)$ are random variables. Let the function $L(X)$ be defined as $L(X)=AX-b$, and let $E[L(X)]$ and $\sigma[L(X)]$ denote the expected value and the standard deviation of $L(X)$, respectively. In this example $m=2$, and twenty-nine samples were generated.

The distribution-free approaches tested were the Chebyshev extended lemma (24), the quantile method (21), the linear approximation (23), and the unit sphere (22). The deterministic equivalents of these methods can be written in the following mathematical forms, using the notation of [21]:

(21) Quantile method: $S_Q(\alpha)=\{X \mid E[L(X)]+K_\alpha\,\sigma[L(X)] \le 0;\; X \ge 0\}$,

where $K_\alpha$ is the quantile of order $\alpha$ of the standardized variate of $L(X)$. If the random variable $X$ belongs to a class of stable distributions, the quantile method can be applied successfully. All such distributions share the property of being specified by parameters $U$ and $V$ of the general functional form $F[(x-U_1)/V_1],\dots,F[(x-U_l)/V_l]$, and when convoluted they again give $F[(x-U)/V]$; examples of distributions with this convolution property are the Binomial, Poisson, Chi-squared, and Normal [NOLA99].

(22) Unit sphere: $S_S(\alpha)=\left\{X \;\middle|\; \|X\|_2 \le \frac{1}{\sqrt{\max_h(a_{1,h})^2+\max_h(a_{2,h})^2}}\right\}$,

(23) Linear approximation: $S_L(\alpha)=\{X \mid A^{*}X \le 1\}$,

where $a_{g,h}$ is an element amongst the 29 simulated samples of $a_g=(a_{g,1},\dots,a_{g,H})$, $g=1,\dots,m$ ($g=2$ in this example), and $H=\text{sample size}=29$; the vector $A^{*}$ is defined as $A^{*}=(\max_h(a_{1,h}),\max_h(a_{2,h}))$.

(24) Chebyshev: $S_T(\alpha)=\left\{X \;\middle|\; E[L(X)]+\sqrt{\frac{\alpha}{1-\alpha}}\cdot\sigma[L(X)] \le 0;\; X>0\right\}$.

Allen et al. [21] have proven that the quantile method is the least conservative, while the Chebyshev method is the most conservative. When a method of estimation provides relatively large confidence limits, the method is said to be "conservative." The advantage of those two methods is that they both tend to follow the shape of the true (real) boundary more closely than the other two methods, that is, the unit sphere and the linear approximation [21]. Given that Chebyshev provides the most conservative point of view and tends to follow the shape of the true boundary with no regard to distributional form, this method was chosen as the estimation for CCP DEA; a numerical comparison of the two multipliers follows below. Although the error-free frontier (EFF) is unknown, we can, at best, estimate its location or its shape with respect to the DEA frontier. The EFF represents the frontier on which measurement errors and random errors are not present, but it does not imply absolute efficiency; there can be room for improvement even for the DMUs on the EFF. The theoretical frontier represents the absolutely attainable production possibility set where there can no longer be any improvement in the absence of statistical noise and measurement errors. It is undefined because human performance limits are themselves still undefined at the present time.
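To quantify the conservativeness ordering reported in [21], the following snippet compares the Chebyshev multiplier $\sqrt{\alpha/(1-\alpha)}$ from (24) with the normal quantile $K_\alpha$ used by the quantile method (21) when the underlying variate happens to be normal.

```python
import numpy as np
from scipy.stats import norm

# Compare the one-sided Chebyshev multiplier tau_alpha = sqrt(alpha/(1-alpha))
# of (24) with the normal quantile K_alpha used by the quantile method (21).
for alpha in [0.50, 0.60, 0.75, 0.90, 0.95]:
    tau = np.sqrt(alpha / (1.0 - alpha))
    k = norm.ppf(alpha)
    print(f"alpha = {alpha:.2f}:  Chebyshev tau = {tau:6.3f}   normal K = {k:6.3f}")
# The Chebyshev multiplier exceeds the normal quantile at every alpha, which
# is why it yields the widest (most conservative) confidence limits of the
# four methods while remaining valid for any distribution.
```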
Since we do not want to place an a priori assumption on which stable distribution best describes the random variables in DEA, the Chebyshev theorem will be used. The deterministic equivalent of (20) by Chebyshev's extended lemma is shown as (24).

Derivation of $\sqrt{\alpha/(1-\alpha)}\cdot\sigma[L(X)]$ in (24). We use the one-sided Chebyshev inequality and the notation of [21]:

(25) $P(L(X)-E[L(X)] \ge \tau) \le \frac{\sigma^2}{\sigma^2+\tau^2}$,

which states that the probability that $L(X)$ takes a value more than $\tau$ above its mean $E[L(X)]$ is at most $\sigma^2/(\sigma^2+\tau^2)$. In chance-constrained programming, $\alpha$ can be expressed in the general form $P(L(X)-E[L(X)] \le 0) \ge \alpha$. Hence,

(26) $1-\alpha=\frac{\sigma^2}{\sigma^2+\tau^2} \;\Longrightarrow\; \tau=\sigma\sqrt{\frac{\alpha}{1-\alpha}}$.

Note that from here onwards, as we discuss the DCF model, for simplification and clarity we will denote $\tau_\alpha=\tau/\sigma$.

The term "$k$-flexibility function" is coined because $\alpha$ is a value that may be defined by the user (where $k$ denotes the user's certainty of the estimate) or inferred from industry data. The unique property of $\alpha$ is its ability to define $\tau_\alpha$ such that it mimics the normal distribution given that random noise is present, or to include management concerns and expectations with regard to their perceived or expected performance levels. This can overcome the problem of what economists call "nuisance parameters": parameters capturing difficult-to-observe or unquantifiable factors such as worker effort or worker quality. When firms can identify and exploit opportunities in their environment, organizational constraints may be violated [22]. Because the DCF allows for management input, the flexibility function can approximate these constraint violations. The mathematical formulation, implications for management, and practical definition of $\alpha$ will be explained later.

### 4.2. Assumptions in DEA-Chebyshev Model

Two general assumptions have been made in constructing the model. First, nuisance parameters (including confounding variables) will affect efficiency scores, causing them to differ from the true performance level if they are not accounted for in the productivity analysis. Second, variations in the observed variables can arise from both statistical noise and measurement errors and are convoluted.

In the simulation to follow, as an extension of the general assumptions mentioned previously, we will assume that variations in outputs are negligible and will average out to zero [11, 18]. The variations in inputs are assumed to arise from statistical noise and inefficiency (inefficient use of inputs). Both of these errors contribute to the possible technical inefficiencies in DEA-efficient units. These possible inefficiencies are not observed in DEA, since it is an empirical extreme-point method. Using the same characteristics defined in SFA, statistical noise and measurement errors are said to be normally distributed, $v \sim N(\mu,\sigma^2)$, and inefficiency is said to be half-normally distributed, $u \sim N^{+}(\mu,\sigma^2)$. Thus, the relationship between the expected input $\mu_{ir}$ and the observed input $x_{ir}^{\mathrm{obs}}$ can be written as

(27) $x_{ir}^{\mathrm{obs}}=\mu_{ir}+(v+u)_{ir}$,

where $(v+u)_{ir}$ denotes the convoluted error terms of input $i$ for DMU $r$; a generative sketch of (27) follows below.

The assumption regarding the disparity between the observed and expected inputs is made to illustrate the input-oriented DEA-Chebyshev model. In input-oriented models, the outputs are not adjusted for efficiency; the inputs are, based on the weights applied to those DMUs that are efficient. This assumption regarding errors can be reversed between inputs and outputs depending on expert opinion and the objective of the analysis (i.e., input- versus output-oriented models).
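A generative sketch of assumption (27), with zero-mean noise and a half-normal inefficiency term; the number of DMUs, locations, and scales are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generate observed inputs per equation (27): x_obs = mu + (v + u),
# with noise v ~ N(0, sigma_v^2) and inefficiency u ~ |N(0, sigma_u^2)|
# (a half-normal draw). All numeric values here are hypothetical.
q = 8                                          # number of DMUs
mu = rng.uniform(5.0, 15.0, size=q)            # expected (true) input levels
sigma_v, sigma_u = 0.3, 0.8

v = rng.normal(0.0, sigma_v, size=q)           # symmetric statistical noise
u = np.abs(rng.normal(0.0, sigma_u, size=q))   # one-sided inefficiency term
x_obs = mu + v + u

print("true mu :", np.round(mu, 2))
print("observed:", np.round(x_obs, 2))
# Because u >= 0, observed inputs are biased upward on average; a purely
# empirical frontier treats x_obs as exact and absorbs (v + u) into the score.
```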
As an extension of Land et al. [11] and Forrester and Anderson [18], the DEA-Chebyshev model relaxes the distributional assumption. In doing so, the convolution of errors can be accommodated without having to specify a distributional form for both components. This method of approximating the radial contraction of inputs or expansion of outputs is generally less computationally intensive than the bootstrap method, as CCP can be incorporated directly into the LP and solved in a similar fashion to the standard DEA technique. The bootstrap method introduced by Simar and Wilson [23] is more complex in that it requires certain assumptions regarding the data-generating process (DGP), on which the properties of the frontier and the estimators depend. This bootstrapping method is nonetheless nonparametric, since it requires no parametric assumptions except those needed to establish consistency and the rate of convergence of the estimators.

Theoretically, the DEA algorithm allows the evaluation of models containing strictly outputs with no inputs, and vice versa. In doing so, it neglects the fact that inputs are crucial for the production of outputs; the properties of a production process are such that it must consume inputs in order to produce outputs. Let the theoretically attainable production possibility set, which characterizes the (unknown) absolute efficient frontier, be denoted by $\Psi=\{(X,Y)\in\mathbb{R}^{m+n} \mid X \text{ can produce } Y\}$. Given that the set $\Psi$ is not presently bounded, the inclusion $\Psi_{\mathrm{EFF}},\Psi_{\mathrm{DEA}},\Psi_{\mathrm{DCF}} \subset \Psi$ always holds, where $\Psi_{\mathrm{EFF}}$, $\Psi_{\mathrm{DEA}}$, and $\Psi_{\mathrm{DCF}}$ denote the attainable sets of the Error-Free Frontier (EFF), DEA, and the DEA-Chebyshev frontier, respectively. It is certain that a DMU cannot produce outputs without inputs, although the relationship between them may not be clear. The following postulates express the relationship between the three frontiers.

Postulate 1. The DEA frontier converges to the EFF, $\Psi_{\mathrm{DEA}} \to \Psi_{\mathrm{EFF}}$ as $q \to \infty$, according to the central limit theorem [24]; Appendices A, B, and C provide the details. However, both DEA and DCF will exhibit a very slow rate of convergence to the theoretical frontier as the number of dimensions increases or when the sample size is small. This is known as the curse of dimensionality [25].

Postulate 2. The production possibility set of DEA is contained in that of the DCF: $\Psi_{\mathrm{DEA}} \subset \Psi_{\mathrm{DCF}}$. The DEA and the corrected frontier may well overlap the EFF, depending on the degree of data variation observed and estimated.

### 4.3. Mathematical Formulation

An input-oriented BCC model will be used to illustrate this work. Here, $\theta$ is defined as the radial input contraction factor and $\lambda$ as the column vector corresponding to the "best practice" units, which form the projection onto the frontier for an inefficient unit:

(28) $\theta=\min\{\theta \mid y_{j0} \le \sum_{r=1}^{q} y_{jr}\lambda_r,\; \theta x_{i0} \ge \sum_{r=1}^{q} x_{ir}\lambda_r,\; \sum_{r=1}^{q}\lambda_r=1,\; \lambda_r \ge 0\}$.

Consider the following chance-constrained set as defined by Allen et al. [21]:

(29) $S=\{\breve{X}=(x_1,x_2,\dots,x_m) \mid P(\sum_{r=1}^{q}\lambda_r x_{ir}-\theta x_{i0} \le 0) \ge \alpha;\; \theta \ge 0,\; x_{ir} \ge 0\;\; \forall r=1,\dots,q\}$,

where $\alpha$ is a real number such that $0 \le \alpha \le 1$, for all $j=1,\dots,n$ and all $i=1,\dots,m$.

Since it is difficult to establish a specific distributional form from empirical data, due to the convolution of different types of errors, a distribution-free approach is taken; before adding the chance constraint, a minimal sketch of solving the deterministic model (28) with a standard LP solver is given below.
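This sketch solves the input-oriented BCC program (28) with scipy's LP solver; the four-DMU dataset is hypothetical, and the decision vector stacks $\theta$ ahead of $\lambda$.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, r0):
    """Input-oriented BCC score (28) for DMU r0.
    X: (m, q) inputs, Y: (n, q) outputs; decision vars are [theta, lambda_1..q]."""
    m, q = X.shape
    n = Y.shape[0]
    c = np.zeros(1 + q); c[0] = 1.0                  # minimize theta
    # Input constraints: X @ lam - theta * x0 <= 0
    A_in = np.hstack([-X[:, [r0]], X])
    # Output constraints: -(Y @ lam) <= -y0, i.e., Y @ lam >= y0
    A_out = np.hstack([np.zeros((n, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, r0]])
    A_eq = np.hstack([[0.0], np.ones(q)]).reshape(1, -1)   # convexity: sum = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (1 + q))
    return res.x[0]

# Hypothetical 2-input, 1-output data for 4 DMUs (columns).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 4.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for r in range(4):
    print(f"DMU {r+1}: theta = {bcc_input_efficiency(X, Y, r):.3f}")
```

With equal outputs, DMUs 1 and 2 define the frontier (score 1), while DMUs 3 and 4 receive contraction factors below 1.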
In this case, the Chebyshev one-sided inequality [21] will be applied to convert (29). A deterministic equivalent can be approximated to (30) for the ith input of  DMUr: (30)SC(α)={X˘∣E(∑rxirλr-θxi0)±σiτα≥0,θ≥0,xir≥0∀r(∑rxirλr-θxi0)}, where σi=var(∑rλrxir-θxi0)=λ12var(xi1)+⋯+λq2var(xiq)+θ2var(xi0) and 0<α≤1, with strict inequality on the left hand side. For example, if r=1, then xi0=xi1; hence, σi is calculated as σi=(λ1-θ)2var(xi1)+⋯+λq2var(xiq). Based on the assumption that DMUs are independent of each other, then var(xir)=c, for all r=1,…,q where c denotes some constant and cov(xir,xil≠ir)=0,forallr,l. The value for τα  can be defined as (31)Letτα=α1-α, where α denotes the probability of staying within the tolerance region defined using the one-sided Chebyshev's inequality. As  α  increases, τα and the standard deviation will also increase; hence, it becomes more likely that the EFF will be within the confidence limits.The value ofα can be defined such that the τα will be equal to or less than 1.645 so that DCF can provide a less conservative estimate of the upper and lower limits of the frontier when compared to z0.05=1.645. The standard normal distribution value z0.05=1.645 has been used in the previous CCP efficiency evaluation methodology in [11, 26]. The reasoning behind wanting a less conservative estimation is because data collected will more likely be accurate than inaccurate. When α≥0.99, then τα increases exponentially into infinity. For 0.7<α<0.75, note that τ0.7<z0.05<τ0.75; α can be defined such that DEA-Chebyshev model provides less conservative estimates. Taking a glance at the CCP DEA developed by Land et al. [11], the results obtained, when assuming a normal distribution, can be shown to be drastically different from that of the expected frontier depending on the level of data disparity.The deterministic LP formulation for DEA-Chebyshev model can be written in the following mathematical form:(32)Minλθ^SubjecttoE(∑r=1qxirλr-θ^xi0)±σiτ^α≤0,SubjecttoE∑r=1qyjrλr-yj0≥0,SubjecttoE∑r=1qλr=1,SubjecttoEλr≥0∀r,SubjecttoEθ≥0. Let τ^α be an estimate for τα which is defined as (33)τ^α=α1-α, where α is a value based on management's expectations or is inferred from a time series of data which has been transformed into a single value. The model shown in (32) can also be modified such that only discretionary inputs are considered for stochastic treatment [27].The value ofα can be defined such that its values are restricted between 0.5 (the point of inflection) and 0.6 if no qualitative information regarding expectations is available, but we are almost certain that the data obtained is accurate. The value of τα is then approximated as 1≤τ^α≤1.2247. In this case, the results produced will be less conservative than that of the normal distribution at α=0.05 (i.e., z0.05=1.645). For α<0.5, a deterministic model will suffice since the DEA-Chebyshev model will provide the same results as that of the DEA. ### 4.4. The “k-Flexibility Function”τα: Unique Management Interpretation It may not be sufficient to develop a model that technically sounds with appropriate theoretical proofs. We cannot discount the fact that management expertise can play an important role in defining the corrected frontier nor should we cause the user to reject the model. Hence, DEA-Chebyshev model is designed to incorporate management input, which can become a crucial factor in the modeling process. One of the major advantages of this model is its flexibility as compared to models that require a distributional form. 
It can provide crucial information to management based upon their expertise and experience in their own field of specialization thereby redefining the efficient frontier.In DEA-Chebyshev model,α has a unique management interpretation and implication. It can be defined as the management’s opinion of the expected degree of competence with regard to either input or output usage. In other words, it is the estimated degree of deviation from the observed level of performance. The smaller the value of α is, the more certain that the data is accurate and that little improvements can be made ceteris paribus or that expectations have been approximately met. When α=0, then DCF=DEA, implying that management is certain that the data they have obtained is accurate (no need to account for deviation or random effects or inefficiency) or that present expectations have been met. If α~1, then it implies that the data obtained is extremely erroneous or that expectations are not met.The value forα is an aggregate of two factors (or two events). First, the certainty of inaccuracy is denoted by P(E), and second, the translatedpercentage of inaccuracy is denoted by P(D). Let P(E) denote the true/false existence of errors. When P(E)=1, it implies that the data is inaccurate. If P(E)=1, then 0.5<P(D)<1; otherwise, P(D)=0. In other words, event E implies D; when the data is 100% accurate, then there is no deviation. Therefore, α can be defined:(34a)α=P(DE)=P(D∩E)P(E)=P(D)P(E).Proof. P ( D ) = P ( D ∩ E ) + P ( D ∩ E ′ ), since P(D∩E′)=0, then P(D)=P(D∩E). Hence, forP(E)=1, α can be approximated as (34b)α~P(D)P(E)+k=P(D)+k. The constant, k≥0, represents the degree of (the expert's) uncertainty.When deviation due to errors is negligible, then %deviation  from  observed~0. Hence α will be at most 0.5. P(error)=0 implying that the data is error-free, thus % deviation  from  observed=0. In this case, α=0 and DCF=DEA. Based on (31), the value for τ^α should be restricted to not be less than 1, and therefore, α≥0.5. Otherwise, the confidence limits become too small, which implies that DCF≅DEA. We do not want this to occur because DCF should only equal DEA when there is absolute certainty that the data is error-free. Hence, P(D) must be defined such that 0.5≤α<1 (34b) for P(E)=1 and zero otherwise. ### 4.5. Approximating the Error-Free Frontier: Development of the DCF Unlike the straightforward method in which DEA scores are calculated, DEA-Chebyshev model efficiency scores are slightly more complicated to obtain. There are five stages to the determination of the best efficiency rating for a DMU.Stage I. Determining the DEA efficient units.Stage II. Establishing the upper and lower limits for efficiency scores using DEA-Chebyshev model where the value of α is defined to reflect management concerns.Stage III. Establishing the corrected frontier from the upper and lower limits calculated in stage II for DEA efficient units. The upper and lower limits of efficiency scores established by DEA-Chebyshev model for each of the DEA-efficient units form the confidence bounds for the error-free efficiency scores. These limits determine the most likely location of the EFF. The following are characteristic of DEA-Chebyshev model efficiency scores. 
### 4.1. Chebyshev's Theorem

In simplified terms, the Chebyshev theorem states that the fraction of a data set lying within $\tau$ standard deviations of the mean is at least $1-(1/\tau^{2})$, where $\tau>1$. The DEA-Chebyshev model developed in this paper is not restricted to any one distribution; instead, it assumes an unknown distribution. A distribution-free approach is used to represent the stochastic nature of the data, applied to the basic DEA model through chance-constrained programming. The distribution-free method is the Chebyshev inequality, which states that
$$ (18a)\quad P\left(|\bar{x}-\mu|\ge\tau\sigma\right)\le\frac{1}{\tau^{2}}, $$
or, equivalently,
$$ (18b)\quad P\left(|\bar{x}-\mu|\ge\tau\right)\le\frac{\sigma^{2}}{\tau^{2}}. $$
Let a random variable $x$ have some probability distribution of which we know only the variance $\sigma^{2}$ and the mean $\mu$ [19]. This inequality implies that the probability of the sample mean, $\bar{x}$, falling outside the interval $[\mu\pm\tau\sigma]$ is at most $1/\tau^{2}$, where $\tau$ denotes the number of standard deviations away from the mean, using the notation in [19]. The one-sided Chebyshev inequality can be written as
$$ (19)\quad P\left(\bar{x}-\mu\ge\tau\right)\le\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}, $$
as shown in [20].

Other methods considered to define the probabilities for the DEA-Chebyshev model were the distribution-free linear constraint set (or linear approximation), the unit sphere method, and the quantile method. These methods were tested to determine which of them would provide the best estimate of the true boundary mentioned in [21]. The true boundary (called set $S$) is a two-dimensional boundary generated using some parametric function, defined as the chance-constrained set
$$ (20)\quad S=\left\{X=(x_{1},\dots,x_{m})\mid P\left[AX-b\le 0\right]\ge\alpha;\ X\ge 0\right\}, $$
where $b$ and the vector $A=(a_{1},a_{2},\dots,a_{m})$ are random variables. Let the function $L(X)$ be defined as $L(X)=AX-b$, and let $E[L(X)]$ and $\sigma[L(X)]$ denote the expected value and the standard deviation of $L(X)$, respectively. In this example $m=2$, and twenty-nine samples were generated.

The distribution-free approaches tested were the Chebyshev extended lemma (24), the quantile method (21), the linear approximation (23), and the unit sphere (22). The deterministic equivalents of these methods can be written in the following mathematical forms, using the notation of [21]:
$$ (21)\quad \text{Quantile method:}\quad S_{Q}(\alpha)=\left\{X\mid E[L(X)]+K_{\alpha}\,\sigma[L(X)]\le 0;\ X\ge 0\right\}. $$
$K_{\alpha}$ is known as the quantile of order $\alpha$ of the standardized variate of $L(X)$. If the random variable $X$ belongs to a class of stable distributions, then the quantile method can be applied successfully. All stable distributions share the common property of being specified by the parameters $U$ and $V$ of the general functional form $F[(x-U_{1})/V_{1}],\dots,F[(x-U_{l})/V_{l}]$, and, when convoluted, will again give us $F[(x-U)/V]$. Examples of stable distributions are the Binomial, Poisson, Chi-squared, and Normal [NOLA99].
$$ (22)\quad \text{Unit sphere:}\quad S_{S}(\alpha)=\left\{X\ \middle|\ \|X\|_{2}\le\frac{1}{\sqrt{\max_{h}(a_{1,h})^{2}+\max_{h}(a_{2,h})^{2}}}\right\}. $$
$$ (23)\quad \text{Linear approximation:}\quad S_{L}(\alpha)=\left\{X\mid A^{*}X\le 1\right\}, $$
where $a_{g,h}$ is an element amongst the 29 simulated samples of $a_{g}=(a_{g,1},\dots,a_{g,H})$; $g=1,\dots,m$ ($g=2$ in this example); and $H=\text{sample size}=29$. The vector $A^{*}$ is defined as $A^{*}=\left(\max_{h}(a_{1,h}),\max_{h}(a_{2,h})\right)$.
$$ (24)\quad \text{Chebyshev:}\quad S_{T}(\alpha)=\left\{X\ \middle|\ E[L(X)]+\sqrt{\frac{\alpha}{1-\alpha}}\cdot\sigma[L(X)]\le 0;\ X>0\right\}. $$

Allen et al. have proven in their paper [21] that the quantile method was the least conservative, while the Chebyshev was the most conservative. When a method of estimation provides relatively large confidence limits, the method is said to be "conservative." The advantage of these two methods is that they both tend to follow the shape of the true (real) boundary more closely than the other two methods, that is, the unit sphere and the linear approximation [21]. Given that the Chebyshev method provides the most conservative point of view and tends to follow the shape of the true boundary with no regard to distributional form, it was chosen as the estimation method for CCP DEA. Although the error-free frontier (EFF) is unknown, we can, at best, estimate its location or its shape with respect to the DEA frontier. The EFF represents the frontier where measurement errors and random errors are not present, but it does not imply absolute efficiency; there can be room for improvement even for the DMUs on the EFF. The theoretical frontier represents the absolute attainable production possibility set, where there can no longer be any improvement in the absence of statistical noise and measurement errors. It is undefined because human performance limits are themselves still undefined at the present time.

Since we do not want to place an a priori assumption regarding which stable distribution best describes the random variables in DEA, the Chebyshev theorem will be used. The deterministic equivalent of (20) by Chebyshev's extended lemma is shown in (24).

Derivation of $\sqrt{\alpha/(1-\alpha)}\cdot\sigma[L(X)]$ in (24). We use the one-sided Chebyshev inequality and the notation of [21]:
$$ (25)\quad P\left(L(X)-E[L(X)]\ge\tau\right)\le\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}, $$
which states that the probability that $L(X)$ exceeds its mean, $E[L(X)]$, by at least $\tau$ is at most $\sigma^{2}/(\sigma^{2}+\tau^{2})$; equivalently, the probability of exceeding the mean by $\tau$ standard deviations is at most $1/(1+\tau^{2})$. In chance-constrained programming, $\alpha$ can be expressed in the general form $P\left(L(X)\le 0\right)\ge\alpha$. Hence,
$$ (26)\quad 1-\alpha=\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}\ \Longrightarrow\ \tau=\sigma\sqrt{\frac{\alpha}{1-\alpha}}. $$
Note that from here onwards, as we discuss the DCF model, for simplification and clarity we will denote $\tau_{\alpha}=\tau/\sigma$.

The term "k-flexibility function" is coined because $\alpha$ is a value that may be defined by the user (where $k$ denotes the user's certainty of the estimate) or inferred from industry data. The unique property of $\alpha$ is its ability to define $\tau_{\alpha}$ so that it mimics the normal distribution when random noise is present, or to incorporate management concerns and expectations with regard to perceived or expected performance levels. This can overcome the problem of what economists call "nuisance parameters." These parameters can be problems of controlling difficult-to-observe or unquantifiable factors such as worker effort or worker quality. When firms can identify and exploit opportunities in their environment, organizational constraints may be violated [22]. Because DCF allows for management input, the flexibility function can approximate these constraint violations. The mathematical formulation, implications for management, and practical definition of $\alpha$ will be explained later.
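Because the bound in (25) is distribution-free, the relationship between $\alpha$ and $\tau_{\alpha}$ in (26) can be checked numerically against any distribution. The following minimal Python sketch is an illustration, not part of the original study; the exponential distribution and the sample size are arbitrary choices. It verifies that the empirical one-sided tail probability never exceeds the Chebyshev bound $1-\alpha$:

```python
import numpy as np

def tau_alpha(alpha):
    """k-flexibility value from (26)/(31): tau_alpha = tau / sigma = sqrt(alpha / (1 - alpha))."""
    return np.sqrt(alpha / (1.0 - alpha))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # any distribution works: the bound is distribution-free
mu, sigma = x.mean(), x.std()

for alpha in (0.5, 0.6, 0.7, 0.75, 0.9, 0.99):
    t = sigma * tau_alpha(alpha)                # tau = sigma * sqrt(alpha / (1 - alpha)), from (26)
    empirical = np.mean(x - mu >= t)            # P(L(X) - E[L(X)] >= tau), estimated by simulation
    bound = sigma**2 / (sigma**2 + t**2)        # one-sided Chebyshev bound, which equals 1 - alpha
    print(f"alpha={alpha:5.2f}  tau_alpha={tau_alpha(alpha):6.3f}  "
          f"empirical={empirical:.4f} <= bound={bound:.4f}")
```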
### 4.2. Assumptions in the DEA-Chebyshev Model

Two general assumptions have been made in constructing the model. First, nuisance parameters (including confounding variables) will affect efficiency scores, causing them to differ from the true performance level if they are not accounted for in the productivity analysis. Second, variations in the observed variables can arise from both statistical noise and measurement errors, and these are convoluted.

In the simulation to follow, as an extension of the general assumptions above, we will assume that variations in outputs are negligible and average out to zero [11, 18]. The variations in inputs are assumed to arise from statistical noise and inefficiency (inefficient use of inputs). Both of these errors contribute to the possible technical inefficiencies in DEA-efficient units. These possible inefficiencies are not observed in DEA, since it is an empirical extreme-point method. Using the same characteristics defined in SFA, statistical noise and measurement errors are taken to be normally distributed, $v\sim N(\mu,\sigma^{2})$, and inefficiency is taken to be half-normally distributed, $u\sim N^{+}(\mu,\sigma^{2})$. Thus, the relationship between the expected input, $\mu_{ir}$, and the observed input, $x_{ir}^{\mathrm{obs}}$, can be written as
$$ (27)\quad x_{ir}^{\mathrm{obs}}=\mu_{ir}+(v+u)_{ir}, $$
where $(v+u)_{ir}$ denotes the convoluted error terms of input $i$ for DMU $r$.

The assumption regarding the disparity between the observed and expected inputs serves to illustrate the input-oriented DEA-Chebyshev model. In input-oriented models, the outputs are not adjusted for efficiency; the inputs are, based on the weights applied to those DMUs that are efficient. This assumption regarding errors can be reversed between inputs and outputs depending on expert opinion and the objective of the analysis (i.e., input- versus output-oriented models).

As an extension of Land et al. [11] and Forrester and Anderson [18], the DEA-Chebyshev model relaxes the distributional assumption. In doing so, the convolution of errors can be accommodated without having to specify a distributional form for either component. This method of approximating the radial contraction of inputs or expansion of outputs is generally less computationally intensive than the bootstrap method, as CCP can be incorporated directly into the LP and solved in much the same fashion as the standard DEA technique. The bootstrap method introduced by Simar and Wilson [23] is more complex in that it requires certain assumptions regarding the data generating process (DGP), on which the properties of the frontier and the estimators will depend. However, that bootstrapping method is nonparametric, since it requires no parametric assumptions except those needed to establish consistency and the rate of convergence of the estimators.

Theoretically, the DEA algorithm allows the evaluation of models containing strictly outputs with no inputs, and vice versa. In doing so, it neglects the fact that inputs are crucial for the production of outputs; the properties of a production process are such that it must consume inputs in order to produce outputs. Let the theoretically attainable production possibility set, which characterizes the (unknown) absolute efficient frontier, be denoted by $\Psi=\{(X,Y)\in\mathbb{R}^{m+n}\mid X\ \text{can produce}\ Y\}$. Thus, given that the set $\Psi$ is not presently bounded, the inclusion $\Psi_{\mathrm{EFF}},\Psi_{\mathrm{DEA}},\Psi_{\mathrm{DCF}}\subset\Psi$ is always true, where $\Psi_{\mathrm{EFF}}$, $\Psi_{\mathrm{DEA}}$, and $\Psi_{\mathrm{DCF}}$ denote the attainable sets of the Error-Free Frontier (EFF), DEA, and the DEA-Chebyshev frontier, respectively.
It is certain that a DMU cannot produce outputs without inputs, although the relationship between them may not be clear. The following postulates describe the relationship between the three frontiers.

Postulate 1. The DEA frontier converges to the EFF, $\Psi_{\mathrm{DEA}}\xrightarrow[q\to\infty]{}\Psi_{\mathrm{EFF}}$, according to the central limit theorem [24]. Appendices A, B, and C provide the details. However, both DEA and DCF will exhibit a very slow rate of convergence to the theoretical frontier as the number of dimensions increases or when the sample size is small. This is known as the curse of dimensionality [25].

Postulate 2. The production possibility set of DEA is contained in that of DCF: $\Psi_{\mathrm{DEA}}\subset\Psi_{\mathrm{DCF}}$. The DEA and the corrected frontier may well overlap the EFF, depending on the degree of data variation observed and estimated.

### 4.3. Mathematical Formulation

An input-oriented BCC model will be used to illustrate this work. Here, $\theta$ is defined as the radial input contraction factor, and $\lambda$ is the column vector corresponding to the "best practice" units, which form the projection onto the frontier for an inefficient unit:
$$ (28)\quad \theta=\min\left\{\theta\ \middle|\ y_{j0}\le\sum_{r=1}^{q}y_{jr}\lambda_{r},\ \ \theta x_{i0}\ge\sum_{r=1}^{q}x_{ir}\lambda_{r},\ \ \sum_{r=1}^{q}\lambda_{r}=1,\ \ \lambda_{r}\ge 0\right\}. $$
Consider the following chance-constrained set, as defined by Allen et al. [21]:
$$ (29)\quad S=\left\{\breve{X}=(x_{1},x_{2},\dots,x_{m})\ \middle|\ P\left(\sum_{r=1}^{q}\lambda_{r}x_{ir}-\theta x_{i0}\le 0\right)\ge\alpha;\ \theta\ge 0,\ x_{ir}\ge 0\ \forall r=1,\dots,q\right\}, $$
where $\alpha$ is a real number such that $0\le\alpha\le 1$, for all $j=1,\dots,n$ and all $i=1,\dots,m$.

Since it is difficult to establish a specific distributional form from empirical data, owing to the convolution of different types of errors, a distribution-free approach is taken. In this case, the one-sided Chebyshev inequality [21] is applied to convert (29). A deterministic equivalent can be approximated by (30) for the $i$th input of DMU $r$:
$$ (30)\quad S_{C}(\alpha)=\left\{\breve{X}\ \middle|\ E\left(\sum_{r}x_{ir}\lambda_{r}-\theta x_{i0}\right)\pm\sigma_{i}\tau_{\alpha}\le 0,\ \theta\ge 0,\ x_{ir}\ge 0\ \forall r\right\}, $$
where $\sigma_{i}^{2}=\mathrm{var}\left(\sum_{r}\lambda_{r}x_{ir}-\theta x_{i0}\right)=\lambda_{1}^{2}\mathrm{var}(x_{i1})+\dots+\lambda_{q}^{2}\mathrm{var}(x_{iq})+\theta^{2}\mathrm{var}(x_{i0})$ and $0<\alpha\le 1$, with strict inequality on the left-hand side. For example, if $r=1$, then $x_{i0}=x_{i1}$; hence $\sigma_{i}$ is calculated from $\sigma_{i}^{2}=(\lambda_{1}-\theta)^{2}\mathrm{var}(x_{i1})+\dots+\lambda_{q}^{2}\mathrm{var}(x_{iq})$. Based on the assumption that DMUs are independent of each other, $\mathrm{var}(x_{ir})=c$ for all $r=1,\dots,q$, where $c$ denotes some constant, and $\mathrm{cov}(x_{ir},x_{il})=0$ for all $r\ne l$. The value of $\tau_{\alpha}$ is defined as
$$ (31)\quad \tau_{\alpha}=\sqrt{\frac{\alpha}{1-\alpha}}, $$
where $\alpha$ denotes the probability of staying within the tolerance region defined using the one-sided Chebyshev inequality. As $\alpha$ increases, $\tau_{\alpha}$, and with it the width of the tolerance region, also increases; hence it becomes more likely that the EFF will lie within the confidence limits.

The value of $\alpha$ can be chosen so that $\tau_{\alpha}$ is at most 1.645, so that DCF provides a less conservative estimate of the upper and lower limits of the frontier than the standard normal value $z_{0.05}=1.645$, which has been used in previous CCP efficiency evaluation methodology [11, 26]. The reasoning behind wanting a less conservative estimate is that the data collected are more likely to be accurate than inaccurate. When $\alpha\ge 0.99$, $\tau_{\alpha}$ grows without bound as $\alpha\to 1$. For $0.7<\alpha<0.75$, note that $\tau_{0.7}<z_{0.05}<\tau_{0.75}$; thus $\alpha$ can be chosen so that the DEA-Chebyshev model provides less conservative estimates.
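As a concrete illustration of the composite standard deviation $\sigma_{i}$ in (30) and the resulting tolerance band, the sketch below evaluates one stochastic input constraint. All numbers (the weights $\lambda$, the contraction factor $\theta$, the inputs, and the common variance $c$) are illustrative assumptions, not values from the paper's data sets:

```python
import numpy as np

# Illustrative check of one stochastic input constraint in (30)/(32); the weights,
# contraction factor, inputs, and common variance c are assumptions, not paper data.
lam = np.array([0.0, 0.5, 0.5])       # intensity weights lambda_r (sum to 1), q = 3
theta = 0.6                           # radial input contraction factor
x_i = np.array([9.0, 4.0, 5.0])       # observed input i; the evaluated DMU is r = 1, so x_i0 = x_i[0]
var_x = np.full(3, 0.25)              # var(x_ir) = c under the independence assumption

# sigma_i^2 = (lambda_1 - theta)^2 var(x_i1) + sum_{r>1} lambda_r^2 var(x_ir)
coeff = lam.copy()
coeff[0] -= theta
sigma_i = np.sqrt(np.sum(coeff**2 * var_x))

alpha = 0.75
tau_a = np.sqrt(alpha / (1.0 - alpha))      # tau_0.75 = 1.732; note tau_0.7 = 1.528 < z_0.05 = 1.645
mean_L = lam @ x_i - theta * x_i[0]         # E(sum_r lambda_r x_ir - theta x_i0)
print(f"sigma_i = {sigma_i:.4f}")
print(f"tight bound: {mean_L + tau_a * sigma_i:.4f} <= 0, loose bound: {mean_L - tau_a * sigma_i:.4f} <= 0")
```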
Taking a glance at the CCP DEA developed by Land et al. [11], the results obtained when assuming a normal distribution can be shown to be drastically different from the expected frontier, depending on the level of data disparity.

The deterministic LP formulation for the DEA-Chebyshev model can be written in the following mathematical form:
$$ (32)\quad \begin{aligned} \min_{\lambda}\ & \hat{\theta} \\ \text{subject to}\ & E\left(\sum_{r=1}^{q}x_{ir}\lambda_{r}-\hat{\theta}x_{i0}\right)\pm\sigma_{i}\hat{\tau}_{\alpha}\le 0, \\ & \sum_{r=1}^{q}y_{jr}\lambda_{r}-y_{j0}\ge 0, \\ & \sum_{r=1}^{q}\lambda_{r}=1, \\ & \lambda_{r}\ge 0\quad\forall r, \\ & \theta\ge 0. \end{aligned} $$
Let $\hat{\tau}_{\alpha}$ be an estimate of $\tau_{\alpha}$, defined as
$$ (33)\quad \hat{\tau}_{\alpha}=\sqrt{\frac{\alpha}{1-\alpha}}, $$
where $\alpha$ is a value based on management's expectations or is inferred from a time series of data that has been transformed into a single value. The model in (32) can also be modified so that only discretionary inputs receive stochastic treatment [27].

The value of $\alpha$ can be restricted to lie between 0.5 (the point of inflection) and 0.6 if no qualitative information regarding expectations is available but we are almost certain that the data obtained are accurate. The value of $\hat{\tau}_{\alpha}$ is then bounded as $1\le\hat{\tau}_{\alpha}\le 1.2247$. In this case, the results produced will be less conservative than those of the normal distribution at the 0.05 level (i.e., $z_{0.05}=1.645$). For $\alpha<0.5$, a deterministic model suffices, since the DEA-Chebyshev model will then provide the same results as DEA.

### 4.4. The "k-Flexibility Function" $\tau_{\alpha}$: Unique Management Interpretation

It may not be sufficient to develop a model that is technically sound, with appropriate theoretical proofs. We cannot discount the fact that management expertise can play an important role in defining the corrected frontier, nor should we cause the user to reject the model. Hence, the DEA-Chebyshev model is designed to incorporate management input, which can become a crucial factor in the modeling process. One of the major advantages of this model is its flexibility compared with models that require a distributional form. It can provide crucial information to management, based upon their expertise and experience in their own field of specialization, thereby redefining the efficient frontier.

In the DEA-Chebyshev model, $\alpha$ has a unique management interpretation and implication. It can be defined as management's opinion of the expected degree of competence with regard to either input or output usage; in other words, it is the estimated degree of deviation from the observed level of performance. The smaller the value of $\alpha$, the more certain we are that the data are accurate and that little improvement can be made, ceteris paribus, or that expectations have been approximately met. When $\alpha=0$, then DCF = DEA, implying that management is certain that the data obtained are accurate (there is no need to account for deviations, random effects, or inefficiency) or that present expectations have been met. If $\alpha\sim 1$, then the data obtained are extremely erroneous or expectations have not been met.

The value of $\alpha$ is an aggregate of two factors (or two events): first, the certainty of inaccuracy, denoted by $P(E)$, and second, the translated percentage of inaccuracy, denoted by $P(D)$. Let $P(E)$ denote the true/false existence of errors. When $P(E)=1$, the data are inaccurate. If $P(E)=1$, then $0.5<P(D)<1$; otherwise $P(D)=0$. In other words, event $E$ implies $D$; when the data are 100% accurate, there is no deviation. Therefore, $\alpha$ can be defined as
$$ (34a)\quad \alpha=P(D\mid E)=\frac{P(D\cap E)}{P(E)}=P(D)\,P(E). $$

Proof. $P(D)=P(D\cap E)+P(D\cap E')$; since $P(D\cap E')=0$, we have $P(D)=P(D\cap E)$. Hence, for $P(E)=1$, $\alpha$ can be approximated as
$$ (34b)\quad \alpha\sim P(D)\,P(E)+k=P(D)+k. $$
The constant $k\ge 0$ represents the degree of the expert's uncertainty.

When deviation due to errors is negligible, the percentage deviation from the observed values is approximately 0, and $\alpha$ will be at most 0.5. $P(\text{error})=0$ implies that the data are error-free, so the percentage deviation from the observed values is 0; in this case $\alpha=0$ and DCF = DEA. Based on (31), the value of $\hat{\tau}_{\alpha}$ should be restricted to be no less than 1, and therefore $\alpha\ge 0.5$; otherwise the confidence limits become too small, which implies DCF ≅ DEA. We do not want this to occur, because DCF should equal DEA only when there is absolute certainty that the data are error-free. Hence, $P(D)$ must be defined such that $0.5\le\alpha<1$ in (34b) for $P(E)=1$, and $\alpha=0$ otherwise.
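The rule in (34a)/(34b), together with the restriction $0.5\le\alpha<1$ derived above, can be captured in a small helper. This is a hedged sketch of the paper's definition; the function and argument names are hypothetical:

```python
import math

def alpha_from_expert(errors_exist, p_deviation, k=0.0):
    """Sketch of (34a)/(34b): alpha = P(D | E) ~ P(D) + k when P(E) = 1, else 0."""
    if not errors_exist:          # P(E) = 0: error-free data, so alpha = 0 and DCF = DEA
        return 0.0
    alpha = p_deviation + k       # (34b): alpha ~ P(D) + k, with k >= 0 the expert's uncertainty
    if not 0.5 <= alpha < 1.0:
        raise ValueError("choose P(D) and k so that 0.5 <= alpha < 1")
    return alpha

alpha = alpha_from_expert(True, p_deviation=0.70, k=0.05)   # alpha = 0.75
tau_hat = math.sqrt(alpha / (1.0 - alpha))                  # 1.732, the value used for simulation 1 below
print(alpha, tau_hat)
```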
### 4.5. Approximating the Error-Free Frontier: Development of the DCF

Unlike the straightforward manner in which DEA scores are calculated, DEA-Chebyshev efficiency scores are slightly more complicated to obtain. There are five stages in the determination of the best efficiency rating for a DMU.

Stage I. Determine the DEA-efficient units.

Stage II. Establish the upper and lower limits for efficiency scores using the DEA-Chebyshev model, where the value of $\alpha$ is defined to reflect management concerns.

Stage III. Establish the corrected frontier from the upper and lower limits calculated in Stage II for the DEA-efficient units. The upper and lower limits of the efficiency scores established by the DEA-Chebyshev model for each of the DEA-efficient units form the confidence bounds for the error-free efficiency scores. These limits determine the most likely location of the EFF. The following are characteristics of DEA-Chebyshev efficiency scores.

(1) An efficient DMU with a smaller standard deviation implies a smaller confidence region in which the EFF resides; hence, this particular DMU is considered more robustly efficient, since it is closer to the EFF.

(2) It can be conjectured that, for DEA-efficient DMUs, $\theta^{U}\le 1$ and $\theta^{L}\ge 1$ will always hold (this is not so for the inefficient units).

(3) When $\theta^{L}\ge c$, where $c$ is a very large constant, it may be an indication that the DMU is likely an outlier.

(4) In general, the mean efficiency score in the DEA-Chebyshev model is such that $\bar{\theta}=(\theta^{U}+\theta^{L})/2\approx\theta_{\mathrm{DEA}}$, unless the third characteristic above is observed.

## 5. Simulation

Five data sets, each containing 15 DMUs in a two-input, one-output scenario, were generated in order to illustrate the approximation of the EFF using the DEA-Chebyshev model. This demonstrates the proximity of the DCF to the EFF. A comparison is drawn between the results provided by the DCF, DEA, and CCP input-oriented VRS models, each measured against the EFF.

### 5.1. Step I: Simulation: The Data Generating Process

The first data set, shown in Table 1, is known as the control group. It contains two inputs and one output generated using a logarithmic production function of the form
$$ (35)\quad y=\beta_{0}+\beta_{1}\ln x_{1}^{2}+\beta_{2}\ln x_{2}^{2}, $$
where $\beta_{0}$ is some constant and $\beta_{1}$ and $\beta_{2}$ are arbitrary weights or coefficients assigned to the inputs. Input 1 ($x_{1}$) has been chosen arbitrarily, and input 2 ($x_{2}$) is a function of $x_{1}$: $x_{2}=c\,(1/x_{1})$, where $c$ is some arbitrary constant, in this case $c=24$. This ensures that the frontier generated by the control group contains only efficient units and is convex. The linear convex combination in the EFF consists of discrete production possibility sets defined for every individual DMU.
Output ($y$) is then calculated using (35) from the discrete set of inputs, where $\beta_{0}$, $\beta_{1}$, and $\beta_{2}$ have been arbitrarily defined and are fixed for all groups (control and experimental). The control group contains no measurement or statistical errors and no inefficient DMUs; it is the construct of the EFF.

Table 1. Control group: the error-free production units.

| DMU | Output | Input 1 | Input 2 |
|---|---|---|---|
| 1 | 12.55 | 2 | 12 |
| 2 | 10.43 | 3 | 8 |
| 3 | 9.68 | 4 | 6 |
| 4 | 9.53 | 5 | 4.8 |
| 5 | 9.68 | 6 | 4 |
| 6 | 10.01 | 7 | 3.43 |
| 7 | 10.43 | 8 | 3 |
| 8 | 11.45 | 10 | 2.4 |
| 9 | 11.99 | 11 | 2.18 |
| 10 | 12.55 | 12 | 2 |
| 11 | 13.12 | 13 | 1.85 |
| 12 | 14.25 | 15 | 1.6 |
| 13 | 15.36 | 17 | 1.41 |
| 14 | 16.46 | 19 | 1.26 |
| 15 | 16.99 | 20 | 1.2 |

The experimental groups are generated from the control group by adding the error components. Their outputs are the same as the control group's and are held deterministic, while the inputs are stochastic, containing confounded measurement errors distributed as half-normal nonzero inefficiency $N^{+}(\mu,\sigma^{2})$ and statistical noise $N(0,1)$:
$$ (36a)\quad y\sim\beta_{0}+\beta_{1}\ln\hat{x}_{1}^{2}+\beta_{2}\ln\hat{x}_{2}^{2}. $$
In (36a), the inputs are confounded with random errors and inefficiency:
$$ (36b)\quad \hat{x}_{i}=x_{i}+\varepsilon_{i},\quad\text{where }\varepsilon=v+u. $$
Variability in the inputs across simulations is produced by different, arbitrarily chosen $\mu$ and $\sigma$ for the inefficiency component, which is distributed half-normally, $u\sim N^{+}(\mu,\sigma^{2})$, for each simulation. Table 2 shows the details.

Table 2. Four experimental groups with variations and inefficiencies introduced to both inputs while keeping outputs constant.

| DMU | Output | Grp 1 Input 1 | Grp 1 Input 2 | Grp 2 Input 1 | Grp 2 Input 2 | Grp 3 Input 1 | Grp 3 Input 2 | Grp 4 Input 1 | Grp 4 Input 2 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 12.55 | 3.16 | 12.5 | 2.34 | 12.85 | 2.91 | 12.6 | 2.68 | 13.92 |
| 2 | 10.43 | 3.69 | 9.08 | 1.6 | 10.07 | 2.34 | 8.23 | 3.32 | 8.34 |
| 3 | 9.68 | 4.88 | 8.41 | 3.58 | 5.97 | 6.1 | 6.43 | 4.25 | 6.53 |
| 4 | 9.53 | 5.27 | 5.31 | 7.28 | 9.43 | 7.84 | 3.96 | 6.44 | 4.25 |
| 5 | 9.68 | 8.39 | 7.43 | 6.98 | 5.9 | 7.64 | 2.96 | 9.93 | 3.55 |
| 6 | 10.01 | 9.17 | 3.8 | 7.04 | 5.57 | 9.6 | 4.01 | 10.46 | 4.98 |
| 7 | 10.43 | 10.92 | 3.11 | 9.6 | 3.26 | 7.71 | 2.9 | 6.29 | 2.95 |
| 8 | 11.45 | 13.14 | 3.95 | 11.41 | 1.88 | 10.38 | 3.14 | 11.71 | 3.05 |
| 9 | 11.99 | 9.33 | 2.85 | 11.53 | 4.75 | 13.88 | 0.59 | 13.25 | 2.47 |
| 10 | 12.55 | 10.38 | 7.43 | 13.94 | 2.46 | 12.55 | 4.44 | 12.19 | 3.73 |
| 11 | 13.12 | 12.67 | 1.69 | 12.46 | 4.79 | 13.53 | 1.1 | 13.24 | 1.1 |
| 12 | 14.25 | 17.59 | 4.8 | 15.71 | 2.09 | 16.57 | 2.27 | 14.14 | 2.08 |
| 13 | 15.36 | 17.35 | 4.23 | 17.33 | 4.44 | 15.35 | 1.38 | 15.47 | 2.25 |
| 14 | 16.46 | 19.13 | 1.4 | 20.33 | 3.49 | 19.11 | 0.06 | 18.67 | 0.57 |
| 15 | 16.99 | 19.98 | 2.51 | 19.31 | 4.85 | 20.57 | 1.21 | 19.32 | 2.59 |
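The following sketch mirrors the data generating process in (35)-(36b). The control inputs and $c=24$ come from the text above; the coefficients $\beta_{0},\beta_{1},\beta_{2}$ and the error parameters are placeholders, since the paper states only that they were chosen arbitrarily and held fixed:

```python
import numpy as np

# Sketch of the DGP in (35)-(36b). beta_0, beta_1, beta_2 and the inefficiency
# parameters (mu, sigma) are illustrative assumptions, not the paper's values.
rng = np.random.default_rng(7)
b0, b1, b2, c = 5.0, 0.8, 0.6, 24.0

x1 = np.array([2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 15, 17, 19, 20], dtype=float)
x2 = c / x1                                          # x2 = c * (1/x1) keeps the control frontier convex
y = b0 + b1 * np.log(x1**2) + b2 * np.log(x2**2)     # error-free outputs, eq. (35)

v = rng.normal(0.0, 1.0, size=x1.shape)              # statistical noise, N(0, 1)
u = np.abs(rng.normal(0.5, 1.0, size=x1.shape))      # half-normal (folded normal) inefficiency, N+(mu, sigma^2)
x1_obs = x1 + v + u                                  # observed inputs per (36b): x_hat = x + v + u
print(np.round(x1_obs, 2))
```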
### 5.2. Step II: Establishing Efficiency Scores: DEA, DEA-Chebyshev Model, and CCP Efficiency Evaluation

The DEA results were calculated using ProDEA, while the CCP and DEA-Chebyshev results were calculated using MathCad. The CCP LP formulation follows that of [11, 18]; the upper and lower bounds for the CCP frontier are
$$ (37a)\quad \theta_{\mathrm{CCP}}^{U}=\min\left\{\theta^{U}\ \middle|\ y_{j0}\le\sum_{r=1}^{q}y_{jr}\lambda_{r},\ E\left(\theta^{U}x_{i0}-\sum_{r=1}^{q}x_{ir}\lambda_{r}\right)-1.645\,\sigma\ge 0,\ \sum_{r=1}^{q}\lambda_{r}=1,\ \lambda_{r}\ge 0\right\}, $$
$$ (37b)\quad \theta_{\mathrm{CCP}}^{L}=\min\left\{\theta^{L}\ \middle|\ y_{j0}\le\sum_{r=1}^{q}y_{jr}\lambda_{r},\ E\left(\theta^{L}x_{i0}-\sum_{r=1}^{q}x_{ir}\lambda_{r}\right)+1.645\,\sigma\ge 0,\ \sum_{r=1}^{q}\lambda_{r}=1,\ \lambda_{r}\ge 0\right\}. $$

Table 3 shows the results of the efficiency analysis for the DEA and CCP models. The $\lambda$-conditions that CCP must satisfy are the same as for the DCF, and the value $\sum_{R=1}^{q}\lambda_{r,R}$ for CCP is approximately the same as for the DCF. Although DMU11 is DEA-efficient, it is not CCP-efficient, given that it has violated one of the two $\lambda$-conditions. Note that $\sum_{R=1}^{q}\bar{\lambda}_{r,R}=\left(\sum_{R=1}^{q}\lambda_{r,R}^{U}+\sum_{R=1}^{q}\lambda_{r,R}^{L}\right)/2$ in Tables 3, 4, 5, and 6.

Table 3. DEA and CCP efficiency evaluation for simulation 1.

| | DEA $\theta$ | $\sum\lambda_{r,R}$ | CCP (U) $\theta^{U}_{\mathrm{CCP}}$ | CCP (L) $\theta^{L}_{\mathrm{CCP}}$ | Average $\bar{\theta}_{\mathrm{CCP}}$ | CCP $\hat{\theta}_{\mathrm{CCP}}$ | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ |
|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1.674 | 0.795 | 1.52 | 1.158 | 1 | 1.834 (8) | 2.124 (6) | 1.979 |
| DMU2 | 1 | 1.58 | 0.762 | 1.259 | 1.011 | 1 | 1.31 (3) | 1.18 (5) | 1.245 |
| DMU3 | 0.892 | 0 | 0.694 | 1.074 | 0.884 | 0.884 | 0 | 0.323 | 0.162 |
| DMU4 | 1 | 2.56 | 0.69 | 1.277 | 0.984 | 0.984 | 3.458 (7) | 1.785 (8) | 2.621 |
| DMU5 | 0.679 | 0 | 0.481 | 0.852 | 0.666 | 0.666 | 0 | 0 | 0 |
| DMU6 | 0.909 | 0 | 0.706 | 1.089 | 0.898 | 0.898 | 0 | 0.5463 | 0.273 |
| DMU7 | 0.882 | 0 | 0.653 | 1.094 | 0.873 | 0.873 | 0 | 0.5199 | 0.26 |
| DMU8 | 0.715 | 0 | 0.538 | 0.885 | 0.711 | 0.711 | 0 | 0 | 0 |
| DMU9 | 1 | 4.778 | 0.777 | 1.238 | 1.008 | 1 | 4.876 (10) | 2.415 (9) | 3.645 |
| DMU10 | 0.787 | 0 | 0.665 | 0.894 | 0.779 | 0.779 | 0 | 0 | 0 |
| DMU11 | 1 | 1.105 | 0.82 | 1.593 | 1.206 | 0.91 | 0.0996 (3) | 2.37 (9) | 1.235 |
| DMU12 | 0.749 | 0 | 0.666 | 0.819 | 0.743 | 0.743 | 0 | 0 | 0 |
| DMU13 | 0.879 | 0 | 0.772 | 0.962 | 0.867 | 0.867 | 0 | 0 | 0 |
| DMU14 | 1 | 2.302 | 0.912 | 2.154 | 1.533 | 1 | 1.532 (4) | 2.134 (6) | 1.833 |
| DMU15 | 1 | 1 | 0.924 | 2.906 | 1.915 | 1 | 1.892 (2) | 1.601 (5) | 1.747 |

Table 4. DEA and CCP efficiency evaluation for simulation 2.

| | DEA $\theta$ | $\sum\lambda_{r,R}$ | CCP (U) $\theta^{U}_{\mathrm{CCP}}$ | CCP (L) $\theta^{L}_{\mathrm{CCP}}$ | Average $\bar{\theta}_{\mathrm{CCP}}$ | CCP $\hat{\theta}_{\mathrm{CCP}}$ | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ |
|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1.222 | 0.803 | 1.702 | 1.252 | 1 | 1.61 (6) | 1.449 (5) | 1.53 |
| DMU2 | 1 | 1 | 0.759 | 1.924 | 1.341 | 0.879 | 0.875 (2) | 1.117 (6) | 0.996 |
| DMU3 | 1 | 4.377 | 0.699 | 1.329 | 1.014 | 1 | 4.205 (8) | 2.998 (7) | 3.602 |
| DMU4 | 0.593 | 0 | 0.425 | 0.764 | 0.595 | 0.595 | 0 | 0 | 0 |
| DMU5 | 0.822 | 0 | 0.615 | 1.012 | 0.814 | 0.814 | 0 | 0.0678 | 0.034 |
| DMU6 | 0.848 | 0 | 0.639 | 1.038 | 0.839 | 0.839 | 0 | 0.3006 | 0.15 |
| DMU7 | 0.948 | 0 | 0.73 | 1.164 | 0.947 | 0.947 | 0 | 0.7558 | 0.378 |
| DMU8 | 1 | 2.872 | 0.78 | 1.629 | 1.204 | 1 | 3.263 (10) | 2.305 (10) | 2.784 |
| DMU9 | 0.843 | 0 | 0.727 | 0.963 | 0.845 | 0.845 | 0 | 0 | 0 |
| DMU10 | 0.915 | 0 | 0.779 | 1.243 | 1.011 | 0.889 | 0 | 0.7534 | 0.377 |
| DMU11 | 0.917 | 0 | 0.789 | 1.026 | 0.907 | 0.907 | 0 | 0.0958 | 0.048 |
| DMU12 | 1 | 3.074 | 0.847 | 1.64 | 1.243 | 1 | 2.603 (6) | 2.132 (8) | 2.367 |
| DMU13 | 0.941 | 0 | 0.856 | 1.033 | 0.944 | 0.944 | 0 | 0.1427 | 0.071 |
| DMU14 | 1 | 1 | 0.888 | 1.439 | 1.163 | 0.944 | 0.259 (2) | 1.264 (4) | 0.761 |
| DMU15 | 1 | 1.455 | 0.922 | 1.514 | 1.218 | 1 | 2.186 (5) | 1.62 (5) | 1.903 |

Table 5. DEA and CCP efficiency evaluation for simulation 3. If the data contain small nonsystematic errors, the DEA model outperforms the CCP; CCP works well under conditions where inefficiency has not been partially offset by noise.

| | DEA $\theta$ | $\sum\lambda_{r,R}$ | CCP (U) $\theta^{U}_{\mathrm{CCP}}$ | CCP (L) $\theta^{L}_{\mathrm{CCP}}$ | Average $\bar{\theta}_{\mathrm{CCP}}$ | CCP $\hat{\theta}_{\mathrm{CCP}}$ | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ |
|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1 | 0.794 | 1.566 | 1.18 | 1 | 1.136 (4) | 1.283 (2) | 1.20945 |
| DMU2 | 1 | 1.901 | 0.731 | 1.603 | 1.167 | 1 | 3.148 (11) | 1.305 (4) | 2.22655 |
| DMU3 | 0.845 | 0 | 0.659 | 1.003 | 0.831 | 0.831 | 0 | 0 | 0 |
| DMU4 | 0.898 | 0 | 0.67 | 1.079 | 0.874 | 0.874 | 0 | 0 | 0 |
| DMU5 | 1 | 2.986 | 0.728 | 1.235 | 0.982 | 0.982 | 0 (0) | 2.137 (7) | 1.0685 |
| DMU6 | 0.779 | 0 | 0.571 | 0.954 | 0.762 | 0.762 | 0 | 0 | 0 |
| DMU7 | 1 | 2.704 | 0.725 | 1.24 | 0.982 | 1 | 5.681 (10) | 2.598 (7) | 4.13975 |
| DMU8 | 0.877 | 0 | 0.705 | 1.028 | 0.867 | 0.867 | 0 | 0 | 0 |
| DMU9 | 1 | 1 | 0.791 | 2.408 | 1.599 | 0.896 | 0 (0) | 1.963 (10) | 0.98141 |
| DMU10 | 0.779 | 0 | 0.664 | 0.88 | 0.772 | 0.772 | 0 | 0 | 0 |
| DMU11 | 1 | 1 | 0.799 | 1.298 | 1.048 | 0.899 | 0 | 0.6928 | 0.3464 |
| DMU12 | 0.814 | 0 | 0.674 | 0.926 | 0.8 | 0.8 | 0 | 0 | 0 |
| DMU13 | 1 | 2.409 | 0.893 | 1.161 | 1.027 | 0.947 | 2.634 (8) | 0.451 (3) | 1.54245 |
| DMU14 | 1 | 1 | 0.936 | 29.92 | 15.43 | 0.968 | 0.585 (2) | 3.528 (6) | 2.05655 |
| DMU15 | 1 | 1 | 0.926 | 2.77 | 1.848 | 1 | 1.816 (2) | 1.041 (2) | 1.42865 |
Table 6. DEA and CCP efficiency evaluation for simulation 4.

| | DEA $\theta$ | $\sum\lambda_{r,R}$ | CCP (U) $\theta^{U}_{\mathrm{CCP}}$ | CCP (L) $\theta^{L}_{\mathrm{CCP}}$ | Average $\bar{\theta}_{\mathrm{CCP}}$ | CCP $\hat{\theta}_{\mathrm{CCP}}$ | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ |
|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1.036 | 0.797 | 1.613 | 1.205 | 1 | 1.182 (7) | 1.383 (3) | 1.283 |
| DMU2 | 1 | 1 | 0.726 | 1.294 | 1.01 | 0.863 | 1.954 (6) | 0.911 (3) | 1.432 |
| DMU3 | 1 | 1.255 | 0.773 | 1.207 | 0.99 | 0.99 | 0 (0) | 0.715 (4) | 0.358 |
| DMU4 | 0.899 | 0 | 0.667 | 1.129 | 0.898 | 0.898 | 0 | 0.939 | 0.469 |
| DMU5 | 0.747 | 0 | 0.462 | 0.99 | 0.726 | 0.726 | 0 | 0 | 0 |
| DMU6 | 0.6 | 0 | 0.428 | 0.815 | 0.622 | 0.622 | 0 | 0 | 0 |
| DMU7 | 1 | 5.52 | 0.712 | 1.367 | 1.039 | 1 | 7.079 (13) | 3.819 (10) | 5.449 |
| DMU8 | 0.754 | 0 | 0.57 | 0.981 | 0.775 | 0.775 | 0 | 0 | 0 |
| DMU9 | 0.774 | 0 | 0.601 | 1.013 | 0.807 | 0.807 | 0 | 0.018 | 0.009 |
| DMU10 | 0.818 | 0 | 0.696 | 0.929 | 0.812 | 0.812 | 0 | 0 | 0 |
| DMU11 | 1 | 2.009 | 0.797 | 1.781 | 1.289 | 0.899 | 0 (0) | 2.338 (9) | 1.169 |
| DMU12 | 0.969 | 0 | 0.829 | 1.079 | 0.954 | 0.954 | 0 | 0.518 | 0.259 |
| DMU13 | 1 | 1.87 | 0.935 | 1.098 | 1.017 | 0.968 | 1.455 (3) | 0.506 (3) | 0.981 |
| DMU14 | 1 | 1.31 | 0.912 | 3.899 | 2.406 | 1 | 1.303 (5) | 2.734 (7) | 2.018 |
| DMU15 | 1 | 1 | 0.922 | 2.743 | 1.832 | 1 | 2.028 (3) | 1.119 (2) | 1.573 |

In this simulation, because we expect the data collected to be reasonably reliable, a less conservative model is the better choice. Conservative models tend to provide results with greater standard deviation and therefore produce estimates with less accuracy. The four simulations were designed to test CCP, DEA, and the DEA-Chebyshev model, to determine the accuracy of the results obtained in comparison with the EFF. The results for DEA, CCP, and DCF for all four simulations, using the stated values of $\alpha$, can be found in Tables 3, 4, 5, 6, 8, 9, 10, and 11. The upper (38a) and lower (38b) bounds for the constraints in the DCF formulation are
$$ (38a)\quad E\left(\theta^{U}x_{i0}-\sum_{r=1}^{q}x_{ir}\lambda_{r}\right)-\hat{\tau}_{\alpha}\sigma\ge 0, $$
$$ (38b)\quad E\left(\theta^{L}x_{i0}-\sum_{r=1}^{q}x_{ir}\lambda_{r}\right)+\hat{\tau}_{\alpha}\sigma\ge 0. $$
When $\alpha$ increases, $\hat{\tau}_{\alpha}\sigma$ also increases, and so does the spread between the upper and lower bounds of $\theta$.

When the degree of deviation from observed performance levels is available, the results generated by the DEA-Chebyshev model are generally a more precise approximation of the EFF than CCP, which assumes the normal distribution. The simulations show that $\alpha$ values based on the deviation from the observed level of performance consistently produce the best approximations. The estimated degree of deviation due to inefficiency from the observed level of performance is formulated as
$$ (39)\quad \alpha\sim P(D)\,P(E)+k=P(D)+k=\frac{1+P(\text{deviation})}{2}+k, $$
where $\alpha$ denotes management- or expert-defined values of data deviation (if available) and $k$ denotes a constant correction factor. In other words, it is a reflection of the user's confidence in their own expectations, where $k$ is always greater than or equal to 0. $P(\text{deviation})$ is defined to be the perceived excess of inputs relative to the observed inputs. The numerical calculations using (39) are shown in Table 7.

Table 7. Qualitative information: determining the value for $\alpha$.

| Simulation | Note | $\alpha$ | $\hat{\tau}_{\alpha}$ |
|---|---|---|---|
| 1 | Largest % deviation from the expected performance level of the 4 simulations | $\alpha\sim\frac{1+(0.112+0.282)}{2}+k\sim 0.75$ | 1.732 |
| 2 | | $\alpha\sim\frac{1+(0.067+0.312)}{2}+k\sim 0.74$ | 1.687 |
| 3 | Smallest % deviation from the expected performance level of the 4 simulations | $\alpha\sim\frac{1+(0.118+0.132)}{2}+k\sim 0.675$ | 1.441 |
| 4 | | $\alpha\sim\frac{1+(0.092+0.23)}{2}+k\sim 0.72$ | 1.604 |

Note that in the simulations the correction factor is set to $k\sim 0.05$, which implies that the user may have underestimated by 5%; the value of $k$ can also be zero. The deviation values are calculated as the perceived inefficiency divided by the observed values.
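Table 7's figures follow directly from (39) and (33). A quick sketch reproducing them (the computed $\alpha$ values round to the 0.75, 0.74, 0.675, and 0.72 reported above):

```python
import numpy as np

# Reproduce Table 7: alpha ~ (1 + P(deviation)) / 2 + k per (39), then tau_hat per (33).
# The per-input deviation pairs are those printed in Table 7; k = 0.05 as stated there.
deviations = {
    "Simulation 1": (0.112, 0.282),
    "Simulation 2": (0.067, 0.312),
    "Simulation 3": (0.118, 0.132),
    "Simulation 4": (0.092, 0.230),
}
k = 0.05
for name, (d1, d2) in deviations.items():
    alpha = (1 + (d1 + d2)) / 2 + k          # the paper rounds these to 0.75, 0.74, 0.675, 0.72
    tau_hat = np.sqrt(alpha / (1 - alpha))
    print(f"{name}: alpha ~ {alpha:.3f}, tau_hat = {tau_hat:.3f}")
```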
Table 8. DEA-Chebyshev model efficiency analysis for simulation 1 at $\alpha=0.75$.

| | $\hat{\theta}^{U}$ (upper bound) | $\hat{\theta}^{L}$ (lower bound) | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ (P) | St. dev ($\hat{\theta}$) | $\hat{\theta}$ |
|---|---|---|---|---|---|---|---|
| DMU1 | 0.786 | 1.548 | 1.85 (8) | 2.127 (6) | 1.988 (0.63) | 0.539 | 1 |
| DMU2 | 0.751 | 1.272 | 1.297 (3) | 1.184 (5) | 1.24 (0.83) | 0.368 | 1 |
| DMU3 | 0.683 | 1.082 | 0 | 0.357 | 0.179 | 0.282 | 0.883 |
| DMU4 | 0.673 | 1.287 | 3.491 (7) | 1.72 (8) | 2.605 (0.02) | 0.434 | 0.98 |
| DMU5 | 0.47 | 0.858 | 0 | 0 | 0 | 0.275 | 0.664 |
| DMU6 | 0.696 | 1.096 | 0 | 0.591 | 0.295 | 0.283 | 0.896 |
| DMU7 | 0.643 | 1.104 | 0 | 0.547 | 0.274 | 0.326 | 0.874 |
| DMU8 | 0.531 | 0.892 | 0 | 0 | 0 | 0.255 | 0.712 |
| DMU9 | 0.767 | 1.249 | 4.839 (10) | 2.363 (9) | 3.601 (0.03) | 0.341 | 1 |
| DMU10 | 0.659 | 0.898 | 0 | 0 | 0 | 0.169 | 0.779 |
| DMU11 | 0.813 | 1.628 | 0.101 (3) | 2.328 (9) | 1.214 (0.006) | 0.577 | 0.906 |
| DMU12 | 0.662 | 0.822 | 0 | 0 | 0 | 0.113 | 0.742 |
| DMU13 | 0.768 | 0.965 | 0 | 0 | 0 | 0.14 | 0.867 |
| DMU14 | 0.906 | 2.225 | 1.53 (4) | 2.193 (8) | 1.862 (0.55) | 0.932 | 1 |
| DMU15 | 0.92 | 3.232 | 1.892 (2) | 1.592 (5) | 1.742 (0.75) | 1.635 | 1 |

Table 9. DEA-Chebyshev model efficiency analysis for simulation 2 at $\alpha=0.74$.

| | $\hat{\theta}^{U}$ (upper bound) | $\hat{\theta}^{L}$ (lower bound) | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ (P) | St. dev ($\hat{\theta}$) | $\hat{\theta}$ |
|---|---|---|---|---|---|---|---|
| DMU1 | 0.793 | 1.739 | 1.787 (8) | 1.45 (5) | 1.619 (0.5) | 0.669 | 1 |
| DMU2 | 0.748 | 1.964 | 0.809 (1) | 1.178 (7) | 0.994 (0.25) | 0.86 | 0.874 |
| DMU3 | 0.684 | 1.342 | 4.027 (8) | 2.832 (7) | 3.429 (0.05) | 0.465 | 1 |
| DMU4 | 0.417 | 0.771 | 0 | 0 | 0 | 0.25 | 0.594 |
| DMU5 | 0.604 | 1.02 | 0 | 0.115 | 0.058 | 0.294 | 0.812 |
| DMU6 | 0.628 | 1.047 | 0 | 0.337 | 0.169 | 0.296 | 0.837 |
| DMU7 | 0.719 | 1.174 | 0 | 0.764 | 0.382 | 0.322 | 0.947 |
| DMU8 | 0.769 | 1.657 | 3.568 (10) | 2.269 (10) | 2.918 (0.08) | 0.627 | 1 |
| DMU9 | 0.719 | 0.967 | 0 | 0 | 0 | 0.176 | 0.843 |
| DMU10 | 0.78 | 1.264 | 0 | 0.794 | 0.397 | 0.342 | 0.89 |
| DMU11 | 0.782 | 1.03 | 0 | 0.115 | 0.057 | 0.175 | 0.906 |
| DMU12 | 0.84 | 1.664 | 2.26 (5) | 2.103 (8) | 2.182 (0.83) | 0.582 | 1 |
| DMU13 | 0.852 | 1.037 | 0 | 0.152 | 0.076 | 0.131 | 0.944 |
| DMU14 | 0.887 | 1.46 | 0.646 (1) | 1.273 (4) | 0.959 (0.08) | 0.405 | 0.943 |
| DMU15 | 0.918 | 1.556 | 1.904 (5) | 1.617 (5) | 1.761 (0.29) | 0.452 | 1 |

Table 10. DEA-Chebyshev model efficiency analysis for simulation 3 at $\alpha=0.675$.

| | $\hat{\theta}^{U}$ (upper bound) | $\hat{\theta}^{L}$ (lower bound) | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ (P) | St. dev ($\hat{\theta}$) | $\hat{\theta}$ |
|---|---|---|---|---|---|---|---|
| DMU1 | 0.794 | 1.566 | 1.073 (3) | 1.356 (2) | 1.214 (0.48) | 0.4796 | 1 |
| DMU2 | 0.731 | 1.603 | 2.528 (9) | 1.47 (7) | 1.999 (0.006) | 0.5503 | 1 |
| DMU3 | 0.659 | 1.003 | 0 | 0 | 0 | 0.213 | 0.833 |
| DMU4 | 0.67 | 1.079 | 0 | 0.461 | 0.23 | 0.255 | 0.877 |
| DMU5 | 0.728 | 1.235 | 0.377 (1) | 2.111 (8) | 1.244 (0.06) | 0.3195 | 0.985 |
| DMU6 | 0.571 | 0.954 | 0 | 0 | 0 | 0.24 | 0.765 |
| DMU7 | 0.725 | 1.24 | 5.206 (10) | 2.573 (8) | 3.889 (0.008) | 0.326 | 1 |
| DMU8 | 0.705 | 1.028 | 0 | 0.027 | 0.014 | 0.204 | 0.87 |
| DMU9 | 0.791 | 2.408 | 0.805 (1) | 1.061 (8) | 0.933 (0.005) | 0.9715 | 0.905 |
| DMU10 | 0.664 | 0.88 | 0 | 0 | 0 | 0.1347 | 0.774 |
| DMU11 | 0.799 | 1.298 | 0 | 1.077 | 0.538 | 0.2745 | 0.921 |
| DMU12 | 0.674 | 0.926 | 0 | 0 | 0 | 0.157 | 0.803 |
| DMU13 | 0.893 | 1.161 | 2.855 (7) | 1.113 (5) | 1.984 (0.03) | 0.1655 | 1 |
| DMU14 | 0.936 | 29.92 | 1 (1) | 2.718 (6) | 1.859 (0.04) | 17.971 | 1 |
| DMU15 | 0.926 | 2.77 | 1.156 (2) | 1.034 (4) | 1.095 (0.37) | 1.2342 | 1 |
Table 11. DEA-Chebyshev model efficiency analysis for simulation 4 at $\alpha=0.725$.

| | $\hat{\theta}^{U}$ (upper bound) | $\hat{\theta}^{L}$ (lower bound) | $\sum\lambda^{U}_{r,R}$ | $\sum\lambda^{L}_{r,R}$ | $\sum\bar{\lambda}_{r,R}$ (P) | St. dev ($\hat{\theta}$) | $\hat{\theta}$ |
|---|---|---|---|---|---|---|---|
| DMU1 | 0.8 | 1.605 | 1.207 (7) | 1.377 (3) | 1.292 (0.68) | 0.57 | 1 |
| DMU2 | 0.729 | 1.291 | 1.951 (6) | 0.918 (3) | 1.4347 (0.05) | 0.398 | 1 |
| DMU3 | 0.776 | 1.204 | 0 (0) | 0.719 (4) | 0.359 (0.1) | 0.303 | 0.99 |
| DMU4 | 0.67 | 1.126 | 0 | 0.92 | 0.46 | 0.322 | 0.898 |
| DMU5 | 0.464 | 0.987 | 0 | 0 | 0 | 0.37 | 0.726 |
| DMU6 | 0.43 | 0.813 | 0 | 0 | 0 | 0.271 | 0.622 |
| DMU7 | 0.716 | 1.363 | 6.874 (12) | 3.849 (10) | 5.361 (0.00) | 0.458 | 1 |
| DMU8 | 0.572 | 0.979 | 0 | 0 | 0 | 0.288 | 0.775 |
| DMU9 | 0.603 | 1.01 | 0 | 0.015 | 0.007 | 0.288 | 0.807 |
| DMU10 | 0.697 | 0.928 | 0 | 0 | 0 | 0.163 | 0.812 |
| DMU11 | 0.799 | 1.767 | 0 (0) | 2.327 (9) | 1.164 (0.002) | 0.685 | 0.9 |
| DMU12 | 0.831 | 1.077 | 0 | 0.512 | 0.256 | 0.174 | 0.954 |
| DMU13 | 0.884 | 1.097 | 2.217 (4) | 0.514 (3) | 1.366 (0.06) | 0.15 | 0.991 |
| DMU14 | 0.913 | 3.862 | 1.316 (5) | 2.731 (7) | 2.023 (0.03) | 2.085 | 1 |
| DMU15 | 0.923 | 2.682 | 1.435 (3) | 1.119 (2) | 1.277 (0.29) | 1.244 | 1 |

Note: in Tables 8-11, the values shown in brackets in columns 4 and 5 represent the frequency with which a DEA-efficient DMU is used as a reference unit in DCF; those in column 6 represent the P values for the upper and lower limits of the lambdas for the DEA-efficient units. Tables 8-11 show the efficiency scores determined under the DEA-Chebyshev model, based on the $\alpha$-values shown in Table 7.

### 5.3. Step III: Hypothesis Testing: Frontiers Compared

All the efficiency evaluation tools are measured against the control group to determine which of them provides the best approximation. Both CCP and DEA-Chebyshev efficiency scores are defined in the same manner: the upper and lower bounds of the frontier determine the region where the EFF is likely to lie, and this region is approximated by the DCF efficiency score, $\hat{\theta}$.

Using the results obtained in Step II, the four simulated experimental groups are adjusted using their respective efficiency scores. The virtual DMUs are the DMUs from the four experimental groups whose inputs have been reduced according to their efficiency scores from Step II: the contraction factor is $\theta$ for DEA, $\hat{\theta}_{\mathrm{CCP}}$ for CCP, and $\hat{\theta}$ for DCF.

In this step, in order to test the hypothesis, the 12 data sets of virtual DMUs are each aggregated with the control group, forming a sample of 30 DMUs per evaluation. "DMU#" denotes the control group (or "sample one") and "V.DMU#" denotes the efficient virtual units derived from the experimental group (or "sample two") using the efficiency scores generated by DEA, CCP, and the DEA-Chebyshev model, respectively. There are 12 data sets in total: three for each simulation (three input contraction factors per DMU, from DEA, CCP (normal), and the DEA-Chebyshev model). The inputs for the virtual DMUs calculated by these three methodologies for the same experimental group will differ. The sample of 30 DMUs in each of the 12 sets results from combining the 15 error-free DMUs with the 15 virtual DMUs. These 30 DMUs are then evaluated using the ProDEA software. It is logical to use DEA for this final analysis to scrutinize the different methods, since DEA is deterministic and works perfectly in an error-free situation. The DEA results for the four simulations are given in Table 12.
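The virtual DMUs are obtained mechanically: each observed input vector is radially contracted by the unit's Step II efficiency score while outputs are left unchanged. A minimal sketch, using the inputs of DMU3 and DMU5 of experimental group 1 (Table 2) and their simulation-1 DEA scores from Table 3:

```python
import numpy as np

# Step III sketch: inputs of DMU3 and DMU5 of experimental group 1 (Table 2),
# radially contracted by their simulation-1 DEA scores from Table 3.
inputs = np.array([[4.88, 8.41],    # DMU3
                   [8.39, 7.43]])   # DMU5
theta = np.array([0.892, 0.679])    # contraction factors (theta for DEA; theta_hat for CCP/DCF)
virtual_inputs = theta[:, None] * inputs   # outputs are kept at their observed values
print(virtual_inputs)               # inputs of the virtual units V.DMU3 and V.DMU5
```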
Table 12. Deterministic efficiency results for all four simulations with an aggregate of 30 DMUs: 15 from the control group and another 15 virtual units calculated according to DEA, CCP, and DCF, respectively.

| | Sim 1 DEA | Sim 1 CCP | Sim 1 DCF | Sim 2 DEA | Sim 2 CCP | Sim 2 DCF | Sim 3 DEA | Sim 3 CCP | Sim 3 DCF | Sim 4 DEA | Sim 4 CCP | Sim 4 DCF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| DMU2 | 1 | 1 | 1 | 0.986 | 0.946 | 0.942 | 0.962 | 0.962 | 0.962 | 1 | 0.937 | 1 |
| DMU3 | 1 | 1 | 1 | 0.96 | 0.96 | 0.96 | 1 | 1 | 1 | 1 | 0.981 | 1 |
| DMU4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.989 | 0.977 | 0.989 |
| DMU5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.945 | 0.94 | 0.945 |
| DMU6 | 1 | 1 | 1 | 1 | 1 | 1 | 0.991 | 0.987 | 0.988 | 0.888 | 0.888 | 0.888 |
| DMU7 | 1 | 1 | 1 | 1 | 1 | 1 | 0.965 | 0.965 | 0.965 | 0.901 | 0.885 | 0.885 |
| DMU8 | 1 | 0.965 | 0.963 | 1 | 1 | 1 | 0.971 | 0.933 | 0.943 | 0.914 | 0.872 | 0.872 |
| DMU9 | 0.991 | 0.937 | 0.935 | 1 | 1 | 1 | 0.968 | 0.917 | 0.931 | 0.918 | 0.863 | 0.863 |
| DMU10 | 0.978 | 0.906 | 0.903 | 1 | 1 | 1 | 0.962 | 0.901 | 0.919 | 0.926 | 0.87 | 0.871 |
| DMU11 | 0.966 | 0.882 | 0.878 | 1 | 1 | 1 | 0.954 | 0.893 | 0.913 | 0.932 | 0.876 | 0.877 |
| DMU12 | 0.985 | 0.931 | 0.929 | 1 | 1 | 1 | 0.934 | 0.903 | 0.911 | 0.939 | 0.906 | 0.914 |
| DMU13 | 0.996 | 0.966 | 0.965 | 1 | 1 | 1 | 0.914 | 0.909 | 0.912 | 0.949 | 0.932 | 0.939 |
| DMU14 | 1 | 0.991 | 0.991 | 1 | 1 | 1 | 0.973 | 0.957 | 0.973 | 0.967 | 0.967 | 0.967 |
| DMU15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU1 | 0.889 | 0.885 | 0.884 | 0.921 | 0.921 | 0.921 | 0.898 | 0.999 | 0.898 | 0.841 | 0.84 | 0.84 |
| V.DMU2 | 0.86 | 0.86 | 0.86 | 1 | 1 | 1 | 1 | 0.987 | 0.993 | 0.938 | 1 | 0.938 |
| V.DMU3 | 0.864 | 0.872 | 0.873 | 1 | 1 | 1 | 1 | 1 | 1 | 0.931 | 0.92 | 0.941 |
| V.DMU4 | 0.929 | 0.944 | 0.948 | 0.976 | 0.972 | 0.974 | 1 | 0.971 | 0.986 | 0.982 | 0.979 | 0.984 |
| V.DMU5 | 0.926 | 0.943 | 0.946 | 0.934 | 0.943 | 0.945 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU6 | 0.915 | 0.927 | 0.928 | 0.955 | 0.966 | 0.968 | 1 | 0.998 | 1 | 0.999 | 0.963 | 0.964 |
| V.DMU7 | 0.959 | 0.947 | 0.946 | 0.926 | 0.926 | 0.927 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU8 | 0.977 | 0.954 | 0.951 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.929 | 0.93 |
| V.DMU9 | 1 | 0.99 | 0.987 | 0.956 | 0.946 | 0.946 | 0.989 | 0.989 | 0.989 | 1 | 0.898 | 0.899 |
| V.DMU10 | 0.959 | 0.938 | 0.936 | 0.933 | 0.959 | 0.958 | 1 | 1 | 1 | 0.989 | 0.958 | 0.959 |
| V.DMU11 | 1 | 1 | 1 | 0.939 | 0.94 | 0.94 | 0.938 | 0.954 | 0.952 | 1 | 1 | 1 |
| V.DMU12 | 0.977 | 0.953 | 0.952 | 0.933 | 0.932 | 0.932 | 0.972 | 0.989 | 0.988 | 0.996 | 0.976 | 0.987 |
| V.DMU13 | 0.971 | 0.975 | 0.975 | 0.903 | 0.899 | 0.899 | 0.998 | 1 | 1 | 0.992 | 1 | 0.995 |
| V.DMU14 | 0.986 | 0.98 | 0.979 | 0.872 | 0.924 | 0.924 | 0.99 | 1 | 1 | 1 | 1 | 1 |
| V.DMU15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

In order to determine whether the frontiers created by these models are substantially different from that of the control group (the error-free units), the rank-sum test and a statistical hypothesis test for mean differences were used.

The DEA-Chebyshev model is scrutinized using several statistical methods, which show that there is a strong relationship between the DCF and the EFF. All the statistical tools used to test the DCF against the EFF produce the consistent conclusion that the corrected frontier is a good approximation of the EFF. The statistical methods used are the Wilcoxon-Mann-Whitney test (the rank-sum test) and the t-test for differences in the mean values of $\theta$, shown in Table 13. The rank-sum test is used to determine whether the virtual DMUs established by the DCF come from the same population as the DMUs in the control group; if they do, then the difference in efficiency scores between the two groups is not statistically significant. This does not imply that the EFF and the corrected frontier are exactly the same, but rather that the latter is a good approximation of the former. Its results are better than those of the CCP performance evaluation method developed by Land et al. [11] and Forrester and Anderson [18].
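The two tests in Table 13 can be reproduced with standard tools. A sketch using SciPy on the simulation 1 DEA columns of Table 12 (the paper's exact test conventions may differ slightly, so the outputs should be read as indicative):

```python
import numpy as np
from scipy import stats

# Simulation 1, DEA columns of Table 12: control-group vs. virtual-group scores.
control = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0.991, 0.978,
                    0.966, 0.985, 0.996, 1, 1])
virtual = np.array([0.889, 0.860, 0.864, 0.929, 0.926, 0.915, 0.959, 0.977,
                    1, 0.959, 1, 0.977, 0.971, 0.986, 1])

rank_sum = stats.mannwhitneyu(control, virtual, alternative="two-sided")  # Wilcoxon-Mann-Whitney
paired_t = stats.ttest_rel(control, virtual)                              # t-test on mean differences, df = 14
print(f"rank-sum p = {rank_sum.pvalue:.4f}")
print(f"t = {paired_t.statistic:.3f}, p = {paired_t.pvalue:.5f}")
```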
Table 13. Hypothesis tests for mean differences of efficiency scores. Sample 1 is the control group; sample 2 is the virtual group. Mean and variance entries are given as control / virtual.

DEA:

| | Sim 1 | Sim 2 | Sim 3 | Sim 4 |
|---|---|---|---|---|
| Mean | 0.999 / 0.943 | 0.996 / 0.95 | 0.973 / 0.986 | 0.951 / 0.978 |
| Variance | 0.00001 / 0.00187 | 0.00011 / 0.00153 | 0.0007 / 0.0009 | 0.0015 / 0.0019 |
| Observations | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.7117 | 0.1166 | -0.5253 | -0.1409 |
| Hypothesized mean difference | 0 | 0 | 0 | 0 |
| df | 14 | 14 | 14 | 14 |
| Rank-sum test | -3.09 | -3.2146 | 1.3688 | 1.7213 |
| t stat | 5.2614 | 4.5917 | -1.0167 | -1.6501 |
| P(T ≤ t) two-tail | 0.00012 | 0.00042 | 0.3266 | 0.1212 |
| t critical two-tail | 2.145 | 2.145 | 2.145 | 2.145 |

CCP efficiency evaluation:

| | Sim 1 | Sim 2 | Sim 3 | Sim 4 |
|---|---|---|---|---|
| Mean | 0.972 / 0.944 | 0.994 / 0.955 | 0.955 / 0.992 | 0.926 / 0.964 |
| Variance | 0.0016 / 0.0019 | 0.00028 / 0.0011 | 0.00176 / 0.00018 | 0.0025 / 0.00231 |
| Observations | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.35661 | -0.5373 | -0.14 | -0.5035 |
| Hypothesized mean difference | 0 | 0 | 0 | 0 |
| df | 14 | 14 | 14 | 14 |
| Rank-sum test | -1.8873 | -3.0072 | 2.136 | 2.0117 |
| t stat | 2.2373 | 3.334 | -3.1453 | -1.7383 |
| P(T ≤ t) two-tail | 0.042 | 0.005 | 0.0072 | 0.1041 |
| t critical two-tail | 2.145 | 2.145 | 2.145 | 2.145 |

DCF:

| | Sim 1 | Sim 2 | Sim 3 | Sim 4 |
|---|---|---|---|---|
| Mean | 0.971 / 0.944 | 0.993 / 0.956 | 0.961 / 0.987 | 0.934 / 0.962 |
| Variance | 0.00168 / 0.0019 | 0.0003 / 0.0011 | 0.0013 / 0.0008 | 0.00304 / 0.00217 |
| Observations | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.3296 | -0.5296 | -0.4235 | -0.0966 |
| Hypothesized mean difference | 0 | 0 | 0 | 0 |
| df | 14 | 14 | 14 | 14 |
| Rank-sum test | -1.8873 | -2.9657 | 1.8665 | 1.2236 |
| t stat | 2.1038 | 3.2401 | -1.8448 | -1.4533 |
| P(T ≤ t) two-tail | 0.05396 | 0.0059 | 0.08633 | 0.1682 |
| t critical two-tail | 2.145 | 2.145 | 2.145 | 2.145 |

The rank-sum test shown above is used to determine whether the two samples being tested come from the same population. If they do, then we can conclude that the two frontiers either are one and the same or consistently overlap one another, and they can thus be taken to be the same surface.

### 5.4. Step IV: Efficiency Scores: DEA versus the DEA-Chebyshev Model and Ranking of DEA-Efficient Units

There is more than one way to rank efficient units. In the simplest (or naïve) case, empirically efficient DMUs can be ranked according to the score $\bar{\theta}$, calculated as the average of the upper and lower limits from the DEA-Chebyshev model.

#### 5.4.1. Naïve Ranking

Table 14 illustrates the ranking of all DMUs. The figures in bold denote the DEA-Chebyshev efficiency scores of the DEA-efficient units. All production units are ranked in descending order of efficiency according to the average of the upper and lower limits, $\bar{\theta}$. An anomaly in DMU14 of simulation 3 is caused by an extremely small value of Input 2. Because the LP formulations for DEA, the DEA-Chebyshev model, and CCP (normal) apply the greatest weight to the input or output that makes a DMU appear as favourable as possible, Input 2 is in this case weighted heavily. In DEA, the mathematical algorithm does not allow the efficiency score to exceed 1.00, so this problem goes undetected. In the DEA-Chebyshev model and CCP, because efficiency scores are not restricted to 1.00, the problem shows up, indicating a possible outlier; it would be advisable to remove this DMU from the analysis. In this simulation, because the errors are generated randomly, the error value for this DMU lies in the tail of the distribution, hence creating the outlier.

Table 14. "Naïve" ranking of empirically efficient DMUs in order of declining efficiency, by $\bar{\theta}$. Entries in bold correspond to DEA-efficient units with a score of 1.
| Rank | Simulation 1 | Simulation 2 | Simulation 3 | Simulation 4 |
|---|---|---|---|---|
| 1 | **DMU15** 2.076 | **DMU2** 1.356 | **DMU14** 13.632 | **DMU14** 2.388 |
| 2 | **DMU14** 1.566 | **DMU1** 1.266 | **DMU15** 1.807 | **DMU15** 1.802 |
| 3 | **DMU11** 1.22 | **DMU12** 1.252 | **DMU9** 1.498 | **DMU11** 1.283 |
| 4 | **DMU1** 1.167 | **DMU15** 1.237 | **DMU1** 1.157 | **DMU1** 1.202 |
| 5 | **DMU2** 1.012 | **DMU8** 1.213 | **DMU2** 1.15 | **DMU7** 1.039 |
| 6 | **DMU9** 1.008 | **DMU14** 1.173 | **DMU11** 1.036 | **DMU2** 1.01 |
| 7 | **DMU4** 0.98 | DMU10 1.022 | **DMU13** 1.024 | **DMU13** 0.991 |
| 8 | DMU6 0.896 | **DMU3** 1.013 | **DMU7** 0.986 | **DMU3** 0.99 |
| 9 | DMU3 0.883 | DMU7 0.947 | **DMU5** 0.985 | DMU12 0.954 |
| 10 | DMU7 0.874 | DMU13 0.944 | DMU4 0.877 | DMU4 0.898 |
| 11 | DMU13 0.867 | DMU11 0.906 | DMU8 0.87 | DMU10 0.812 |
| 12 | DMU10 0.779 | DMU9 0.843 | DMU3 0.833 | DMU9 0.807 |
| 13 | DMU12 0.742 | DMU6 0.837 | DMU12 0.803 | DMU8 0.775 |
| 14 | DMU8 0.712 | DMU5 0.812 | DMU10 0.774 | DMU5 0.726 |
| 15 | DMU5 0.664 | DMU4 0.594 | DMU6 0.765 | DMU6 0.622 |

This method of ranking is naïve because it ignores the standard deviation, which indicates the robustness of a DMU's efficiency score to possible errors and unobserved inefficiency. It also does not distinguish between possible outliers and legitimate units.

#### 5.4.2. Ranking by Robustness of DEA-Chebyshev Model Efficiency Scores

Ranking in order of robustness begins with the efficiency score $\hat{\theta}$. Units with $\hat{\theta}=1$ are ranked from the most robust to the least robust (from the smallest standard deviation to the largest); the standard deviation is determined from the upper and lower bounds of the efficiency scores. The remaining empirically efficient units are then ranked by their respective $\hat{\theta}$ (using their standard deviations would yield the same ranking for these units). Once all the empirically efficient units have been ranked, the remainder are ordered by their stochastic efficiency scores, from most to least efficient; the ranking of these inefficient units is very similar to that of the empirical frontier.

Ranking from the most efficient down, DMUs with a DEA-Chebyshev score of $\hat{\theta}=1$ (in the input-oriented case) can fall into either of two categories, hyper-efficient or efficient/mildly efficient, depending on how robust they are (based on their standard deviations). DMUs not printed in bold in Table 15 are DEA-inefficient, and hence are ranked below those deemed empirically efficient. DEA-efficient DMUs that fail to satisfy the conditions for $\hat{\theta}=1$ are given efficiency scores of at most 1.00.
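This ordering can be expressed as a single sort key: $\hat{\theta}=1$ units first, by ascending standard deviation, then the remaining units by descending $\hat{\theta}$. A sketch using a few (DMU, $\hat{\theta}$, std) triples from Table 8:

```python
# Robustness-ranking sketch with (name, theta_hat, std) triples taken from Table 8 (simulation 1).
scores = [("DMU9", 1.0, 0.341), ("DMU2", 1.0, 0.368), ("DMU1", 1.0, 0.539),
          ("DMU14", 1.0, 0.932), ("DMU15", 1.0, 1.635), ("DMU4", 0.98, 0.434),
          ("DMU11", 0.906, 0.577)]

# theta_hat = 1 units sort by ascending std (most robust first); the rest by descending theta_hat.
ranked = sorted(scores, key=lambda s: (0, s[2]) if s[1] >= 1.0 else (1, -s[1]))
for name, theta_hat, sd in ranked:
    print(f"{name}: theta_hat = {theta_hat}, std = {sd}")
```

Running this reproduces the simulation 1 ordering of Table 15: DMU9, DMU2, DMU1, DMU14, DMU15, DMU4, DMU11.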
Table 15. Ranking of efficient DMUs according to robustness, based on their standard deviations. The DMUs in bold denote the empirically efficient DMUs.

| Rank | Sim 1 ($\alpha=0.75$) | $\hat{\theta}$ | Std. dev. | Sim 2 ($\alpha=0.74$) | $\hat{\theta}$ | Std. dev. | Sim 3 ($\alpha=0.675$) | $\hat{\theta}$ | Std. dev. | Sim 4 ($\alpha=0.725$) | $\hat{\theta}$ | Std. dev. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | **DMU9** | 1 | 0.34111 | **DMU15** | 1 | 0.45149 | **DMU13** | 1 | 0.16546 | **DMU13** | 1 | 0.15033 |
| 2 | **DMU2** | 1 | 0.36819 | **DMU3** | 1 | 0.46499 | **DMU7** | 1 | 0.32591 | **DMU3** | 1 | 0.30285 |
| 3 | **DMU1** | 1 | 0.53889 | **DMU12** | 1 | 0.5823 | **DMU1** | 1 | 0.47956 | **DMU2** | 1 | 0.39775 |
| 4 | **DMU14** | 1 | 0.93225 | **DMU8** | 1 | 0.62735 | **DMU2** | 1 | 0.55027 | **DMU7** | 1 | 0.45771 |
| 5 | **DMU15** | 1 | 1.63455 | **DMU1** | 1 | 0.66871 | **DMU15** | 1 | 1.23418 | **DMU1** | 1 | 0.56972 |
| 6 | **DMU4** | 0.98 | 0.43388 | DMU7 | 0.947 | 0.3218 | **DMU14** | 1 | 17.971 | **DMU11** | 1 | 0.68462 |
| 7 | **DMU11** | 0.906 | 0.57657 | DMU13 | 0.944 | 0.13124 | **DMU5** | 0.985 | 0.31947 | **DMU15** | 1 | 1.24437 |
| 8 | DMU6 | 0.896 | 0.28298 | **DMU14** | 0.943 | 0.4051 | **DMU11** | 0.921 | 0.2745 | **DMU14** | 1 | 2.0849 |
| 9 | DMU3 | 0.883 | 0.28164 | DMU11 | 0.906 | 0.17515 | **DMU9** | 0.905 | 0.97149 | DMU12 | 0.969 | 0.17444 |
| 10 | DMU7 | 0.874 | 0.32605 | DMU10 | 0.89 | 0.34217 | DMU4 | 0.877 | 0.25512 | DMU4 | 0.899 | 0.32244 |
| 11 | DMU13 | 0.867 | 0.13958 | **DMU2** | 0.874 | 0.85998 | DMU8 | 0.87 | 0.20386 | DMU10 | 0.818 | 0.16313 |
| 12 | DMU10 | 0.779 | 0.16935 | DMU9 | 0.843 | 0.17572 | DMU3 | 0.833 | 0.21305 | DMU9 | 0.774 | 0.28765 |
| 13 | DMU12 | 0.742 | 0.11335 | DMU6 | 0.837 | 0.29614 | DMU12 | 0.803 | 0.15726 | DMU8 | 0.754 | 0.28786 |
| 14 | DMU8 | 0.712 | 0.25534 | DMU5 | 0.812 | 0.2938 | DMU10 | 0.774 | 0.1347 | DMU5 | 0.747 | 0.3701 |
| 15 | DMU5 | 0.664 | 0.2745 | DMU4 | 0.594 | 0.25039 | DMU6 | 0.765 | 0.24013 | DMU6 | 0.6 | 0.27103 |

### 5.5. Further Analysis

Additional analyses were conducted by taking the observed DMUs in each simulation and evaluating them against the EFF, DEA, CCP, and DEA-Chebyshev results. If DCF is a good approximation of the EFF, then the efficiency scores of the observed DMUs should not differ substantially from the efficiency scores generated by the EFF; the same holds for CCP.

#### 5.5.1. Observed DMUs Evaluated against the EFF, CCP, and DCF

The efficiency scores of the observed DMUs from the experimental groups determined by the EFF (denoted "exp.grp+EFF") provide a benchmark for evaluating the DEA frontier ("exp.grp+DEA"), the CCP (normal) frontier ("exp.grp+CCP"), and the corrected frontier ("exp.grp+DCF"). A comparison is drawn between the efficiency scores of the experimental groups generated by the four frontiers.

The hypothesis is that the means of the efficiency scores for the 15 observed units in the "exp.grp+EFF" group and the "exp.grp+DCF" group should be approximately the same (i.e., the difference is not statistically significant). From Table 16, based on the rank-sum test and the t-test at $\alpha=0.05$, the difference is not statistically significant in simulations 3 and 4; hence the corrected frontier is a good approximation of the EFF. Although the hypothesis tests for simulations 1 and 2 indicate some level of significance, the results generated by the DCF model are still superior to those of the CCP and the DEA.
Table 16. Statistical analysis for frontier comparisons. Observed DMUs are evaluated against the three different frontiers to determine their efficiency scores, which are calculated using the normal DEA model, and to determine whether the efficiency scores for each group differ substantially when comparing EFF to DEA, EFF to DCF, and EFF to CCP. Mean and variance entries are given as EFF group / comparison group.

Simulation 1:

| | exp.grp+EFF vs exp.grp+DEA | exp.grp+EFF vs exp.grp+DCF | exp.grp+EFF vs exp.grp+CCP |
|---|---|---|---|
| Mean | 0.852 / 0.899 | 0.852 / 0.885 | 0.852 / 0.887 |
| Variance | 0.01396 / 0.01355 | 0.01396 / 0.01389 | 0.01396 / 0.01376 |
| Observations | 15 / 15 | 15 / 15 | 15 / 15 |
| Pearson correlation | 0.922 | 0.87986 | 0.881 |
| Hypothesized mean difference | 0 | 0 | 0 |
| df | 14 | 14 | 14 |
| Rank-sum test | 1.2858 | 0.9125 | 0.9125 |
| t stat | -3.9644 | -2.2335 | -2.3537 |
| P(T ≤ t) two-tail | 0.0014 | 0.04235 | 0.03372 |
| t critical two-tail | 2.145 | 2.145 | 2.145 |

Simulation 2:

| | exp.grp+EFF vs exp.grp+DEA | exp.grp+EFF vs exp.grp+DCF | exp.grp+EFF vs exp.grp+CCP |
|---|---|---|---|
| Mean | 0.875 / 0.922 | 0.875 / 0.908 | 0.875 / 0.908 |
| Variance | 0.01272 / 0.01242 | 0.0127 / 0.0115 | 0.0127 / 0.0115 |
| Observations | 15 / 15 | 15 / 15 | 15 / 15 |
| Pearson correlation | 0.94071 | 0.8875 | 0.8918 |
| Hypothesized mean difference | 0 | 0 | 0 |
| df | 14 | 14 | 14 |
| Rank-sum test | 1.3066 | 0.9747 | 1.0162 |
| t stat | -4.6604 | -2.3984 | -2.487 |
| P(T ≤ t) two-tail | 0.00037 | 0.031 | 0.02611 |
| t critical two-tail | 2.145 | 2.145 | 2.145 |

Simulation 3:

| | exp.grp+EFF vs exp.grp+DEA | exp.grp+EFF vs exp.grp+DCF | exp.grp+EFF vs exp.grp+CCP |
|---|---|---|---|
| Mean | 0.92 / 0.933 | 0.92 / 0.916 | 0.92 / 0.902 |
| Variance | 0.00879 / 0.00815 | 0.00879 / 0.00806 | 0.00879 / 0.0077 |
| Observations | 15 / 15 | 15 / 15 | 15 / 15 |
| Pearson correlation | 0.95301 | 0.8804 | 0.9082 |
| Hypothesized mean difference | 0 | 0 | 0 |
| df | 14 | 14 | 14 |
| Rank-sum test | 0.7259 | -0.0622 | -0.6014 |
| t stat | -1.8125 | 0.29423 | 1.68719 |
| P(T ≤ t) two-tail | 0.0914 | 0.7729 | 0.1137 |
| t critical two-tail | 2.145 | 2.145 | 2.145 |

Simulation 4:

| | exp.grp+EFF vs exp.grp+DEA | exp.grp+EFF vs exp.grp+DCF | exp.grp+EFF vs exp.grp+CCP |
|---|---|---|---|
| Mean | 0.882 / 0.904 | 0.882 / 0.887 | 0.882 / 0.868 |
| Variance | 0.0153 / 0.0173 | 0.0153 / 0.0184 | 0.0153 / 0.0162 |
| Observations | 15 / 15 | 15 / 15 | 15 / 15 |
| Pearson correlation | 0.9425 | 0.905 | 0.8996 |
| Hypothesized mean difference | 0 | 0 | 0 |
| df | 14 | 14 | 14 |
| Rank-sum test | 0.8503 | 0.1452 | -0.394 |
| t stat | -1.9248 | -0.312 | 1.0043 |
| P(T ≤ t) two-tail | 0.0748 | 0.7599 | 0.332 |
| t critical two-tail | 2.145 | 2.145 | 2.145 |

Table 16 shows the statistical tests used to compare DEA, CCP, and DCF against the EFF. The Pearson correlation, which ranges from -1 to 1 inclusive, reflects the extent of the linear relationship between two sets of data. The P values, the rank-sum tests, and the Pearson correlations observed for all four simulations indicate that, in general, the DCF outperforms DEA and CCP (which assumed the normal distribution).

Outliers tend to exhibit large standard deviations, which translate into large confidence limits. Consequently, part of the reason for establishing DCF and CCP scores is to reduce the likelihood of a virtual unit becoming an outlier. The results generated by the stochastic models (as opposed to deterministic ones), such as DCF and CCP, can also be greatly affected, because their efficiency scores are generally not restricted to 1.00. In practice, outliers are not always easily detected, and if the data set contains outliers, the stochastic models may not perform well; DMU14 in simulation 3 is an example of this problem. It can be addressed either by removing the outliers or by imposing weight restrictions; however, weight restrictions are not within the scope of this paper.
Output (y) is then calculated from the discrete set of inputs using equation (35), where β0, β1, and β2 have been arbitrarily defined and are fixed for all groups (control and experimental). The control group contains no measurement or statistical errors and no inefficient DMUs. It is the construct of the EFF.

Table 1 Control group: the error-free production units.

| DMU | Output | Input 1 | Input 2 |
|---|---|---|---|
| 1 | 12.55 | 2 | 12 |
| 2 | 10.43 | 3 | 8 |
| 3 | 9.68 | 4 | 6 |
| 4 | 9.53 | 5 | 4.8 |
| 5 | 9.68 | 6 | 4 |
| 6 | 10.01 | 7 | 3.43 |
| 7 | 10.43 | 8 | 3 |
| 8 | 11.45 | 10 | 2.4 |
| 9 | 11.99 | 11 | 2.18 |
| 10 | 12.55 | 12 | 2 |
| 11 | 13.12 | 13 | 1.85 |
| 12 | 14.25 | 15 | 1.6 |
| 13 | 15.36 | 17 | 1.41 |
| 14 | 16.46 | 19 | 1.26 |
| 15 | 16.99 | 20 | 1.2 |

The experimental groups are generated from the control group by adding the error components. Their outputs are the same as the control group's and are held deterministic, while the inputs are stochastic, containing confounded measurement errors: a half-normal, nonzero inefficiency N⁺(μ, σ²) and statistical noise N(0, 1):

(36a) y ≈ β0 + β1(ln x̂1)² + β2(ln x̂2)².

In (36a), the inputs are confounded with random errors and inefficiency:

(36b) x̂i = xi + εi, where εi = vi + ui.

Variability in the inputs across simulations is produced by different, arbitrarily chosen μ and σ for the inefficiency component, which is distributed half-normally, u ~ N⁺(μ, σ²), for each simulation. Table 2 shows the details.

Table 2 Four experimental groups with variations and inefficiencies introduced to both inputs while keeping outputs constant.

| DMU | Output | Grp 1 Input 1 | Grp 1 Input 2 | Grp 2 Input 1 | Grp 2 Input 2 | Grp 3 Input 1 | Grp 3 Input 2 | Grp 4 Input 1 | Grp 4 Input 2 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 12.55 | 3.16 | 12.5 | 2.34 | 12.85 | 2.91 | 12.6 | 2.68 | 13.92 |
| 2 | 10.43 | 3.69 | 9.08 | 1.6 | 10.07 | 2.34 | 8.23 | 3.32 | 8.34 |
| 3 | 9.68 | 4.88 | 8.41 | 3.58 | 5.97 | 6.1 | 6.43 | 4.25 | 6.53 |
| 4 | 9.53 | 5.27 | 5.31 | 7.28 | 9.43 | 7.84 | 3.96 | 6.44 | 4.25 |
| 5 | 9.68 | 8.39 | 7.43 | 6.98 | 5.9 | 7.64 | 2.96 | 9.93 | 3.55 |
| 6 | 10.01 | 9.17 | 3.8 | 7.04 | 5.57 | 9.6 | 4.01 | 10.46 | 4.98 |
| 7 | 10.43 | 10.92 | 3.11 | 9.6 | 3.26 | 7.71 | 2.9 | 6.29 | 2.95 |
| 8 | 11.45 | 13.14 | 3.95 | 11.41 | 1.88 | 10.38 | 3.14 | 11.71 | 3.05 |
| 9 | 11.99 | 9.33 | 2.85 | 11.53 | 4.75 | 13.88 | 0.59 | 13.25 | 2.47 |
| 10 | 12.55 | 10.38 | 7.43 | 13.94 | 2.46 | 12.55 | 4.44 | 12.19 | 3.73 |
| 11 | 13.12 | 12.67 | 1.69 | 12.46 | 4.79 | 13.53 | 1.1 | 13.24 | 1.1 |
| 12 | 14.25 | 17.59 | 4.8 | 15.71 | 2.09 | 16.57 | 2.27 | 14.14 | 2.08 |
| 13 | 15.36 | 17.35 | 4.23 | 17.33 | 4.44 | 15.35 | 1.38 | 15.47 | 2.25 |
| 14 | 16.46 | 19.13 | 1.4 | 20.33 | 3.49 | 19.11 | 0.06 | 18.67 | 0.57 |
| 15 | 16.99 | 19.98 | 2.51 | 19.31 | 4.85 | 20.57 | 1.21 | 19.32 | 2.59 |

## 5.2. Step II: Establishing Efficiency Scores: DEA, DEA-Chebyshev Model, and CCP Efficiency Evaluation

The DEA results were calculated using ProDEA, while the CCP and DEA-Chebyshev model results were calculated using MathCad. The CCP LP formulation follows that of [11, 18]; the upper and lower bounds for the CCP frontier are

(37a) θ_CCP^U = min{ θ^U : y_j0 ≤ ∑_{r=1}^{q} y_jr λ_r, E(θ^U x_i0 − ∑_{r=1}^{q} x_ir λ_r) − 1.645σ ≥ 0, ∑_{r=1}^{q} λ_r = 1, λ_r ≥ 0 },

(37b) θ_CCP^L = min{ θ^L : y_j0 ≤ ∑_{r=1}^{q} y_jr λ_r, E(θ^L x_i0 − ∑_{r=1}^{q} x_ir λ_r) + 1.645σ ≥ 0, ∑_{r=1}^{q} λ_r = 1, λ_r ≥ 0 },

where 1.645 is the one-sided 95% standard normal quantile. Table 3 shows the results of the efficiency analysis for the DEA and CCP models. The λ-conditions which CCP must satisfy are the same for the DCF, and the value ∑λ_r for CCP is approximately the same as that for the DCF. Although DMU11 is DEA efficient, it is not CCP efficient, given that it has violated one of the two λ-conditions. Note that ∑λ̄_r = (∑λ_r^U + ∑λ_r^L)/2, as shown in Tables 3, 4, 5, and 6.

Table 3 DEA and CCP efficiency evaluation for simulation 1.
DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.674 0.795 1.52 1.158 1 1.834 (8) 2.124 (6) 1.979 DMU2 1 1.58 0.762 1.259 1.011 1 1.31 (3) 1.18 (5) 1.245 DMU3 0.892 0 0.694 1.074 0.884 0.884 0 0.323 0.162 DMU4 1 2.56 0.69 1.277 0.984 0.984 3.458 (7) 1.785 (8) 2.621 DMU5 0.679 0 0.481 0.852 0.666 0.666 0 0 0 DMU6 0.909 0 0.706 1.089 0.898 0.898 0 0.5463 0.273 DMU7 0.882 0 0.653 1.094 0.873 0.873 0 0.5199 0.26 DMU8 0.715 0 0.538 0.885 0.711 0.711 0 0 0 DMU9 1 4.778 0.777 1.238 1.008 1 4.876 (10) 2.415 (9) 3.645 DMU10 0.787 0 0.665 0.894 0.779 0.779 0 0 0 DMU11 1 1.105 0.82 1.593 1.206 0.91 0.0996 (3) 2.37 (9) 1.235 DMU12 0.749 0 0.666 0.819 0.743 0.743 0 0 0 DMU13 0.879 0 0.772 0.962 0.867 0.867 0 0 0 DMU14 1 2.302 0.912 2.154 1.533 1 1.532 (4) 2.134 (6) 1.833 DMU15 1 1 0.924 2.906 1.915 1 1.892 (2) 1.601 (5) 1.747Table 4 DEA and CCP efficiency evaluation for simulation 2. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.222 0.803 1.702 1.252 1 1.61 (6) 1.449 (5) 1.53 DMU2 1 1 0.759 1.924 1.341 0.879 0.875 (2) 1.117 (6) 0.996 DMU3 1 4.377 0.699 1.329 1.014 1 4.205 (8) 2.998 (7) 3.602 DMU4 0.593 0 0.425 0.764 0.595 0.595 0 0 0 DMU5 0.822 0 0.615 1.012 0.814 0.814 0 0.0678 0.034 DMU6 0.848 0 0.639 1.038 0.839 0.839 0 0.3006 0.15 DMU7 0.948 0 0.73 1.164 0.947 0.947 0 0.7558 0.378 DMU8 1 2.872 0.78 1.629 1.204 1 3.263 (10) 2.305 (10) 2.784 DMU9 0.843 0 0.727 0.963 0.845 0.845 0 0 0 DMU10 0.915 0 0.779 1.243 1.011 0.889 0 0.7534 0.377 DMU11 0.917 0 0.789 1.026 0.907 0.907 0 0.0958 0.048 DMU12 1 3.074 0.847 1.64 1.243 1 2.603 (6) 2.132 (8) 2.367 DMU13 0.941 0 0.856 1.033 0.944 0.944 0 0.1427 0.071 DMU14 1 1 0.888 1.439 1.163 0.944 0.259 (2) 1.264 (4) 0.761 DMU15 1 1.455 0.922 1.514 1.218 1 2.186 (5) 1.62 (5) 1.903Table 5 DEA and CCP efficiency evaluation for simulation 3: if the data contains small nonsystematic errors, the DEA model outperforms the CCP. CCP works well under conditions where inefficiency has not been partially offset by noise. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1 0.794 1.566 1.18 1 1.136 (4) 1.283 (2) 1.20945 DMU2 1 1.901 0.731 1.603 1.167 1 3.148 (11) 1.305 (4) 2.22655 DMU3 0.845 0 0.659 1.003 0.831 0.831 0 0 0 DMU4 0.898 0 0.67 1.079 0.874 0.874 0 0 0 DMU5 1 2.986 0.728 1.235 0.982 0.982 0 (0) 2.137 (7) 1.0685 DMU6 0.779 0 0.571 0.954 0.762 0.762 0 0 0 DMU7 1 2.704 0.725 1.24 0.982 1 5.681 (10) 2.598 (7) 4.13975 DMU8 0.877 0 0.705 1.028 0.867 0.867 0 0 0 DMU9 1 1 0.791 2.408 1.599 0.896 0 (0) 1.963 (10) 0.98141 DMU10 0.779 0 0.664 0.88 0.772 0.772 0 0 0 DMU11 1 1 0.799 1.298 1.048 0.899 0 0.6928 0.3464 DMU12 0.814 0 0.674 0.926 0.8 0.8 0 0 0 DMU13 1 2.409 0.893 1.161 1.027 0.947 2.634 (8) 0.451 (3) 1.54245 DMU14 1 1 0.936 29.92 15.43 0.968 0.585 (2) 3.528 (6) 2.05655 DMU15 1 1 0.926 2.77 1.848 1 1.816 (2) 1.041 (2) 1.42865Table 6 DEA and CCP efficiency evaluation for simulation 4. 
| DMU | DEA θ | ∑λr | CCP (U) θ_CCP^U | CCP (L) θ_CCP^L | Average θ̄_CCP | CCP θ̂_CCP | ∑λr^U (freq.) | ∑λr^L (freq.) | ∑λ̄r |
|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1.036 | 0.797 | 1.613 | 1.205 | 1 | 1.182 (7) | 1.383 (3) | 1.283 |
| DMU2 | 1 | 1 | 0.726 | 1.294 | 1.01 | 0.863 | 1.954 (6) | 0.911 (3) | 1.432 |
| DMU3 | 1 | 1.255 | 0.773 | 1.207 | 0.99 | 0.99 | 0 (0) | 0.715 (4) | 0.358 |
| DMU4 | 0.899 | 0 | 0.667 | 1.129 | 0.898 | 0.898 | 0 | 0.939 | 0.469 |
| DMU5 | 0.747 | 0 | 0.462 | 0.99 | 0.726 | 0.726 | 0 | 0 | 0 |
| DMU6 | 0.6 | 0 | 0.428 | 0.815 | 0.622 | 0.622 | 0 | 0 | 0 |
| DMU7 | 1 | 5.52 | 0.712 | 1.367 | 1.039 | 1 | 7.079 (13) | 3.819 (10) | 5.449 |
| DMU8 | 0.754 | 0 | 0.57 | 0.981 | 0.775 | 0.775 | 0 | 0 | 0 |
| DMU9 | 0.774 | 0 | 0.601 | 1.013 | 0.807 | 0.807 | 0 | 0.018 | 0.009 |
| DMU10 | 0.818 | 0 | 0.696 | 0.929 | 0.812 | 0.812 | 0 | 0 | 0 |
| DMU11 | 1 | 2.009 | 0.797 | 1.781 | 1.289 | 0.899 | 0 (0) | 2.338 (9) | 1.169 |
| DMU12 | 0.969 | 0 | 0.829 | 1.079 | 0.954 | 0.954 | 0 | 0.518 | 0.259 |
| DMU13 | 1 | 1.87 | 0.935 | 1.098 | 1.017 | 0.968 | 1.455 (3) | 0.506 (3) | 0.981 |
| DMU14 | 1 | 1.31 | 0.912 | 3.899 | 2.406 | 1 | 1.303 (5) | 2.734 (7) | 2.018 |
| DMU15 | 1 | 1 | 0.922 | 2.743 | 1.832 | 1 | 2.028 (3) | 1.119 (2) | 1.573 |

In this simulation, because we expect the collected data to be reasonably reliable, a less conservative model is the better choice. Conservative models tend to produce results with greater standard deviation and therefore yield estimates with less accuracy. The four simulations were designed to test CCP, DEA, and the DEA-Chebyshev model to determine the accuracy of the results obtained in comparison to the EFF. The results for DEA, CCP, and DCF for all four simulations, using the values of α, can be found in Tables 3, 4, 5, 6, 8, 9, 10, and 11. The upper (38a) and lower (38b) bounds for the constraints in the DCF formulation are given as

(38a) E(θ^U x_i0 − ∑_{r=1}^{q} x_ir λ_r) − τ̂_α σ ≥ 0,

(38b) E(θ^L x_i0 − ∑_{r=1}^{q} x_ir λ_r) + τ̂_α σ ≥ 0.

When α increases, τ̂_α σ also increases, and so does the spread between the upper and lower bounds of θ; the multipliers tabulated in Table 7 satisfy τ̂_α = √(α/(1 − α)), the value implied by the one-sided Chebyshev inequality.

When the degree of deviation from observed performance levels is available, the results generated using the DEA-Chebyshev model are generally a more precise approximation of the EFF than those of CCP, which assumes the normal distribution. The simulations show that alpha values based on the deviation from the observed level of performance consistently produce the best approximations. The estimated degree of deviation due to inefficiency from the observed level of performance is formulated as follows:

(39) α ≈ P(D)/P(E) + k = P(D) + k = (1 + P(deviation))/2 + k,

where α denotes management- or expert-defined values of data deviation (if available) and k denotes a constant correction factor. In other words, it is a reflection of the users' confidence in their own expectations, where k is always greater than or equal to 0. P(deviation) is defined to be the perceived excess of inputs relative to observed inputs. The numerical calculations using (39) are shown in Table 7.

Table 7 Qualitative information: determining the value for α.

| Simulation | Remark | α | τ̂_α |
|---|---|---|---|
| 1 | Largest % deviation from the expected level of performance of the 4 simulations | α ≈ (1 + (0.112 + 0.282))/2 + k ≈ 0.75 | 1.732 |
| 2 | | α ≈ (1 + (0.067 + 0.312))/2 + k ≈ 0.74 | 1.687 |
| 3 | Smallest % deviation from the expected performance level of the 4 simulations | α ≈ (1 + (0.118 + 0.132))/2 + k ≈ 0.675 | 1.441 |
| 4 | | α ≈ (1 + (0.092 + 0.23))/2 + k ≈ 0.72 | 1.604 |

Note that in the simulations the correction factor is set to k ≈ 0.05, which implies that the user may have underestimated by 5%; the value of k can also be zero. The percentage values are calculated as the perceived inefficiency divided by the observed values.

Table 8 DEA-Chebyshev model efficiency analysis from simulation 1 at α = 0.75.
θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.786 1.548 1.85 (8) 2.127 (6) 1.988 (0.63) 0.539 1 DMU2 0.751 1.272 1.297 (3) 1.184 (5) 1.24 (0.83) 0.368 1 DMU3 0.683 1.082 0 0.357 0.179 0.282 0.883 DMU4 0.673 1.287 3.491 (7) 1.72 (8) 2.605 (0.02) 0.434 0.98 DMU5 0.47 0.858 0 0 0 0.275 0.664 DMU6 0.696 1.096 0 0.591 0.295 0.283 0.896 DMU7 0.643 1.104 0 0.547 0.274 0.326 0.874 DMU8 0.531 0.892 0 0 0 0.255 0.712 DMU9 0.767 1.249 4.839 (10) 2.363 (9) 3.601 (0.03) 0.341 1 DMU10 0.659 0.898 0 0 0 0.169 0.779 DMU11 0.813 1.628 0.101 (3) 2.328 (9) 1.214 (0.006) 0.577 0.906 DMU12 0.662 0.822 0 0 0 0.113 0.742 DMU13 0.768 0.965 0 0 0 0.14 0.867 DMU14 0.906 2.225 1.53 (4) 2.193 (8) 1.862 (0.55) 0.932 1 DMU15 0.92 3.232 1.892 (2) 1.592 (5) 1.742 (0.75) 1.635 1Table 9 DEA-Chebyshev model efficiency analysis from simulation 2 atα=0.74. θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.793 1.739 1.787 (8) 1.45 (5) 1.619 (0.5) 0.669 1 DMU2 0.748 1.964 0.809 (1) 1.178 (7) 0.994 (0.25) 0.86 0.874 DMU3 0.684 1.342 4.027 (8) 2.832 (7) 3.429 (0.05) 0.465 1 DMU4 0.417 0.771 0 0 0 0.25 0.594 DMU5 0.604 1.02 0 0.115 0.058 0.294 0.812 DMU6 0.628 1.047 0 0.337 0.169 0.296 0.837 DMU7 0.719 1.174 0 0.764 0.382 0.322 0.947 DMU8 0.769 1.657 3.568 (10) 2.269 (10) 2.918 (0.08) 0.627 1 DMU9 0.719 0.967 0 0 0 0.176 0.843 DMU10 0.78 1.264 0 0.794 0.397 0.342 0.89 DMU11 0.782 1.03 0 0.115 0.057 0.175 0.906 DMU12 0.84 1.664 2.26 (5) 2.103 (8) 2.182 (0.83) 0.582 1 DMU13 0.852 1.037 0 0.152 0.076 0.131 0.944 DMU14 0.887 1.46 0.646 (1) 1.273 (4) 0.959 (0.08) 0.405 0.943 DMU15 0.918 1.556 1.904 (5) 1.617 (5) 1.761 (0.29) 0.452 1Table 10 DEA-Chebyshev model efficiency analysis from simulation 3 atα=0.675. θ ^ α = 0.675 U   Upper bounds θ ^ α = 0.675 L   Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.675 DMU1 0.794 1.566 1.073 (3) 1.356 (2) 1.214 (0.48) 0.4796 1 DMU2 0.731 1.603 2.528 (9) 1.47 (7) 1.999 (0.006) 0.5503 1 DMU3 0.659 1.003 0 0 0 0.213 0.833 DMU4 0.67 1.079 0 0.461 0.23 0.255 0.877 DMU5 0.728 1.235 0.377 (1) 2.111 (8) 1.244 (0.06) 0.3195 0.985 DMU6 0.571 0.954 0 0 0 0.24 0.765 DMU7 0.725 1.24 5.206 (10) 2.573 (8) 3.889 (0.008) 0.326 1 DMU8 0.705 1.028 0 0.027 0.014 0.204 0.87 DMU9 0.791 2.408 0.805 (1) 1.061 (8) 0.933 (0.005) 0.9715 0.905 DMU10 0.664 0.88 0 0 0 0.1347 0.774 DMU11 0.799 1.298 0 1.077 0.538 0.2745 0.921 DMU12 0.674 0.926 0 0 0 0.157 0.803 DMU13 0.893 1.161 2.855 (7) 1.113 (5) 1.984 (0.03) 0.1655 1 DMU14 0.936 29.92 1 (1) 2.718 (6) 1.859 (0.04) 17.971 1 DMU15 0.926 2.77 1.156 (2) 1.034 (4) 1.095 (0.37) 1.2342 1Table 11 DEA-Chebyshev model efficiency analysis from simulation 4 atα=0.725. θ ^ α = 0.725 U   Upper bounds θ ^ α = 0.725 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. 
dev(θ^) θ^ α=0.725 DMU1 0.8 1.605 1.207 (7) 1.377 (3) 1.292 (0.68) 0.57 1 DMU2 0.729 1.291 1.951 (6) 0.918 (3) 1.4347 (0.05) 0.398 1 DMU3 0.776 1.204 0 (0) 0.719 (4) 0.359 (0.1) 0.303 0.99 DMU4 0.67 1.126 0 0.92 0.46 0.322 0.898 DMU5 0.464 0.987 0 0 0 0.37 0.726 DMU6 0.43 0.813 0 0 0 0.271 0.622 DMU7 0.716 1.363 6.874 (12) 3.849 (10) 5.361 (0.00) 0.458 1 DMU8 0.572 0.979 0 0 0 0.288 0.775 DMU9 0.603 1.01 0 0.015 0.007 0.288 0.807 DMU10 0.697 0.928 0 0 0 0.163 0.812 DMU11 0.799 1.767 0 (0) 2.327 (9) 1.164 (0.002) 0.685 0.9 DMU12 0.831 1.077 0 0.512 0.256 0.174 0.954 DMU13 0.884 1.097 2.217 (4) 0.514 (3) 1.366 (0.06) 0.15 0.991 DMU14 0.913 3.862 1.316 (5) 2.731 (7) 2.023 (0.03) 2.085 1 DMU15 0.923 2.682 1.435 (3) 1.119 (2) 1.277 (0.29) 1.244 1

Note: In Tables 8–11, the bracketed values in columns 4 and 5 represent the frequency with which a DEA-efficient DMU is used as a reference unit in the DCF; the bracketed values in column 6 represent the P values for the upper and lower limits of the lambdas for the DEA-efficient units. Tables 8–11 show the efficiency scores determined by the DEA-Chebyshev model, based on the α-values shown in Table 7.

## 5.3. Step III: Hypothesis Testing: Frontiers Compared

All the efficiency evaluation tools are measured against the control group to determine which of them provides the best approximation. Both CCP and DEA-Chebyshev model efficiency scores are defined in the same manner: the upper and lower bounds of the frontier determine the region where the EFF is likely to lie, and it is approximated by the DCF efficiency score, θ̂.

Using the results obtained in Step II, the four simulated experimental groups are adjusted using their respective efficiency scores. The virtual DMUs are the DMUs from the four experimental groups whose inputs have been reduced according to their efficiency scores from Step II, that is, according to the contraction factor: θ for DEA, θ̂_CCP for CCP, and θ̂ for DCF.

In this step, in order to test the hypothesis, the 12 data sets of virtual DMUs are each aggregated with the control group, forming a sample size of 30 DMUs per simulation. "DMU#" denotes the control group (or "sample one") and "V.DMU#" denotes the efficient virtual units derived from the experimental group (or "sample two") using the efficiency scores generated by DEA, CCP, and the DEA-Chebyshev model, respectively. There are 12 data sets in total: three for each of the four simulations (three input contraction factors per DMU, from DEA, CCP (normal), and the DEA-Chebyshev model). The inputs for the virtual DMUs calculated from each of these three methodologies for the same experimental group will differ. The sample size of 30 DMUs in each of the 12 sets results from combining the 15 error-free DMUs with the 15 virtual DMUs. These 30 DMUs are then evaluated using the ProDEA software. It is logical to use DEA for this final analysis to scrutinize the different methods, since DEA is a deterministic method that works perfectly in an error-free situation. The DEA results for the 4 simulations are given in Table 12.

Table 12 Deterministic efficiency results for all four simulations with an aggregate of 30 DMUs: 15 from the control group and another 15 virtual units calculated according to DEA, CCP, and DCF, respectively.
| DMU | Sim 1 DEA | Sim 1 CCP | Sim 1 DCF | Sim 2 DEA | Sim 2 CCP | Sim 2 DCF | Sim 3 DEA | Sim 3 CCP | Sim 3 DCF | Sim 4 DEA | Sim 4 CCP | Sim 4 DCF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| DMU2 | 1 | 1 | 1 | 0.986 | 0.946 | 0.942 | 0.962 | 0.962 | 0.962 | 1 | 0.937 | 1 |
| DMU3 | 1 | 1 | 1 | 0.96 | 0.96 | 0.96 | 1 | 1 | 1 | 1 | 0.981 | 1 |
| DMU4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.989 | 0.977 | 0.989 |
| DMU5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.945 | 0.94 | 0.945 |
| DMU6 | 1 | 1 | 1 | 1 | 1 | 1 | 0.991 | 0.987 | 0.988 | 0.888 | 0.888 | 0.888 |
| DMU7 | 1 | 1 | 1 | 1 | 1 | 1 | 0.965 | 0.965 | 0.965 | 0.901 | 0.885 | 0.885 |
| DMU8 | 1 | 0.965 | 0.963 | 1 | 1 | 1 | 0.971 | 0.933 | 0.943 | 0.914 | 0.872 | 0.872 |
| DMU9 | 0.991 | 0.937 | 0.935 | 1 | 1 | 1 | 0.968 | 0.917 | 0.931 | 0.918 | 0.863 | 0.863 |
| DMU10 | 0.978 | 0.906 | 0.903 | 1 | 1 | 1 | 0.962 | 0.901 | 0.919 | 0.926 | 0.87 | 0.871 |
| DMU11 | 0.966 | 0.882 | 0.878 | 1 | 1 | 1 | 0.954 | 0.893 | 0.913 | 0.932 | 0.876 | 0.877 |
| DMU12 | 0.985 | 0.931 | 0.929 | 1 | 1 | 1 | 0.934 | 0.903 | 0.911 | 0.939 | 0.906 | 0.914 |
| DMU13 | 0.996 | 0.966 | 0.965 | 1 | 1 | 1 | 0.914 | 0.909 | 0.912 | 0.949 | 0.932 | 0.939 |
| DMU14 | 1 | 0.991 | 0.991 | 1 | 1 | 1 | 0.973 | 0.957 | 0.973 | 0.967 | 0.967 | 0.967 |
| DMU15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU1 | 0.889 | 0.885 | 0.884 | 0.921 | 0.921 | 0.921 | 0.898 | 0.999 | 0.898 | 0.841 | 0.84 | 0.84 |
| V.DMU2 | 0.86 | 0.86 | 0.86 | 1 | 1 | 1 | 1 | 0.987 | 0.993 | 0.938 | 1 | 0.938 |
| V.DMU3 | 0.864 | 0.872 | 0.873 | 1 | 1 | 1 | 1 | 1 | 1 | 0.931 | 0.92 | 0.941 |
| V.DMU4 | 0.929 | 0.944 | 0.948 | 0.976 | 0.972 | 0.974 | 1 | 0.971 | 0.986 | 0.982 | 0.979 | 0.984 |
| V.DMU5 | 0.926 | 0.943 | 0.946 | 0.934 | 0.943 | 0.945 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU6 | 0.915 | 0.927 | 0.928 | 0.955 | 0.966 | 0.968 | 1 | 0.998 | 1 | 0.999 | 0.963 | 0.964 |
| V.DMU7 | 0.959 | 0.947 | 0.946 | 0.926 | 0.926 | 0.927 | 1 | 1 | 1 | 1 | 1 | 1 |
| V.DMU8 | 0.977 | 0.954 | 0.951 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.929 | 0.93 |
| V.DMU9 | 1 | 0.99 | 0.987 | 0.956 | 0.946 | 0.946 | 0.989 | 0.989 | 0.989 | 1 | 0.898 | 0.899 |
| V.DMU10 | 0.959 | 0.938 | 0.936 | 0.933 | 0.959 | 0.958 | 1 | 1 | 1 | 0.989 | 0.958 | 0.959 |
| V.DMU11 | 1 | 1 | 1 | 0.939 | 0.94 | 0.94 | 0.938 | 0.954 | 0.952 | 1 | 1 | 1 |
| V.DMU12 | 0.977 | 0.953 | 0.952 | 0.933 | 0.932 | 0.932 | 0.972 | 0.989 | 0.988 | 0.996 | 0.976 | 0.987 |
| V.DMU13 | 0.971 | 0.975 | 0.975 | 0.903 | 0.899 | 0.899 | 0.998 | 1 | 1 | 0.992 | 1 | 0.995 |
| V.DMU14 | 0.986 | 0.98 | 0.979 | 0.872 | 0.924 | 0.924 | 0.99 | 1 | 1 | 1 | 1 | 1 |
| V.DMU15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

In order to determine whether the frontiers created by these models are substantially different from that of the control group (the error-free units), the rank-sum test and a statistical hypothesis test for mean differences were used.

The DEA-Chebyshev model is scrutinized using several statistical methods, which show that there is a strong relationship between the DCF and the EFF. All the statistical tools used to test the DCF against the EFF produce consistent conclusions that the corrected frontier is a good approximation of the EFF. The statistical methods used to test the DCF against the EFF are the Wilcoxon-Mann-Whitney test (the rank-sum test) and the t-test for the differences in the mean values of θ, shown in Table 13. The rank-sum test is used to determine whether the virtual DMUs established by the DCF come from the same population as the DMUs in the control group; if they do, then the difference in the efficiency scores of the two groups is not statistically significant. This does not imply that the EFF and the corrected frontier are exactly the same, but rather that the latter is a good approximation of the former. Its results are better than those of the CCP performance evaluation method developed by Land et al. [11] and Forrester and Anderson [18].

Table 13 Hypothesis tests for mean differences of efficiency scores. Sample 1 is denoted the "control group" and sample 2 the "virtual group".
| Statistic | Sim 1 control | Sim 1 virtual | Sim 2 control | Sim 2 virtual | Sim 3 control | Sim 3 virtual | Sim 4 control | Sim 4 virtual |
|---|---|---|---|---|---|---|---|---|
| **DEA** | | | | | | | | |
| Mean | 0.999 | 0.943 | 0.996 | 0.95 | 0.973 | 0.986 | 0.951 | 0.978 |
| Variance | 0.00001 | 0.00187 | 0.00011 | 0.00153 | 0.0007 | 0.0009 | 0.0015 | 0.0019 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.7117 | | 0.1166 | | −0.5253 | | −0.1409 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | | 14 | |
| Rank-sum test | −3.09 | | −3.2146 | | 1.3688 | | 1.7213 | |
| t stat | 5.2614 | | 4.5917 | | −1.0167 | | −1.6501 | |
| P(T ≤ t) two tail | 0.00012 | | 0.00042 | | 0.3266 | | 0.1212 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | | 2.145 | |
| **CCP efficiency evaluation** | | | | | | | | |
| Mean | 0.972 | 0.944 | 0.994 | 0.955 | 0.955 | 0.992 | 0.926 | 0.964 |
| Variance | 0.0016 | 0.0019 | 0.00028 | 0.0011 | 0.00176 | 0.00018 | 0.0025 | 0.00231 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.35661 | | −0.5373 | | −0.14 | | −0.5035 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | | 14 | |
| Rank-sum test | −1.8873 | | −3.0072 | | 2.136 | | 2.0117 | |
| t stat | 2.2373 | | 3.334 | | −3.1453 | | −1.7383 | |
| P(T ≤ t) two tail | 0.042 | | 0.005 | | 0.0072 | | 0.1041 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | | 2.145 | |
| **DCF** | | | | | | | | |
| Mean | 0.971 | 0.944 | 0.993 | 0.956 | 0.961 | 0.987 | 0.934 | 0.962 |
| Variance | 0.00168 | 0.0019 | 0.0003 | 0.0011 | 0.0013 | 0.0008 | 0.00304 | 0.00217 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.3296 | | −0.5296 | | −0.4235 | | −0.0966 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | | 14 | |
| Rank-sum test | −1.8873 | | −2.9657 | | 1.8665 | | 1.2236 | |
| t stat | 2.1038 | | 3.2401 | | −1.8448 | | −1.4533 | |
| P(T ≤ t) two tail | 0.05396 | | 0.0059 | | 0.08633 | | 0.1682 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | | 2.145 | |

The rank-sum test shown previously is used to determine whether the two samples being tested come from the same population. If they do, then we can conclude that the two frontiers are one and the same, or that they consistently overlap one another, and thus can be assumed to be the same surface.

## 5.4. Step IV: Efficiency Scores: DEA versus DEA-Chebyshev Model and Ranking of DEA Efficient Units

There can be more than one way of ranking efficient units. In the simplest (or naïve) case, empirically efficient DMUs can be ranked according to the score θ̄, calculated as the average of the upper and lower limits from the DEA-Chebyshev model.

### 5.4.1. Naïve Ranking

Table 14 illustrates the ranking of all DMUs. The figures in bold denote the DEA-Chebyshev model efficiency scores for the DEA-efficient units. All production units are ranked in descending order of efficiency according to the average of the upper and lower limits, θ̄. An anomaly in DMU14 of simulation 3 is caused by an extremely small value for Input 2. Because the LP formulations for DEA, the DEA-Chebyshev model, and CCP (normal) apply the greatest weight to an input or output in order to make a DMU appear as favourable as possible, Input 2 is in this case weighted heavily. In DEA, the mathematical algorithm does not allow the efficiency score to exceed 1.00; thus, this problem is not detected. In the DEA-Chebyshev model and CCP, because efficiency scores are not restricted to 1.00, the problem shows up, indicating a possible outlier; it would be advisable to remove this DMU from the analysis. In this simulation, because the errors are generated randomly, the error value for this DMU lies in the tail of the distribution, hence creating an outlier.

Table 14 "Naïve" ranking of empirically efficient DMUs in order of declining levels of efficiency. Values in bold correspond to DEA-efficient units with a score of "1".
| Rank | Simulation 1 | θ̄ | Simulation 2 | θ̄ | Simulation 3 | θ̄ | Simulation 4 | θ̄ |
|---|---|---|---|---|---|---|---|---|
| 1 | DMU15 | 2.076 | DMU2 | 1.356 | DMU14 | 13.632 | DMU14 | 2.388 |
| 2 | DMU14 | 1.566 | DMU1 | 1.266 | DMU15 | 1.807 | DMU15 | 1.802 |
| 3 | DMU11 | 1.22 | DMU12 | 1.252 | DMU9 | 1.498 | DMU11 | 1.283 |
| 4 | DMU1 | 1.167 | DMU15 | 1.237 | DMU1 | 1.157 | DMU1 | 1.202 |
| 5 | DMU2 | 1.012 | DMU8 | 1.213 | DMU2 | 1.15 | DMU7 | 1.039 |
| 6 | DMU9 | 1.008 | DMU14 | 1.173 | DMU11 | 1.036 | DMU2 | 1.01 |
| 7 | DMU4 | 0.98 | DMU10 | 1.022 | DMU13 | 1.024 | DMU13 | 0.991 |
| 8 | DMU6 | 0.896 | DMU3 | 1.013 | DMU7 | 0.986 | DMU3 | 0.99 |
| 9 | DMU3 | 0.883 | DMU7 | 0.947 | DMU5 | 0.985 | DMU12 | 0.954 |
| 10 | DMU7 | 0.874 | DMU13 | 0.944 | DMU4 | 0.877 | DMU4 | 0.898 |
| 11 | DMU13 | 0.867 | DMU11 | 0.906 | DMU8 | 0.87 | DMU10 | 0.812 |
| 12 | DMU10 | 0.779 | DMU9 | 0.843 | DMU3 | 0.833 | DMU9 | 0.807 |
| 13 | DMU12 | 0.742 | DMU6 | 0.837 | DMU12 | 0.803 | DMU8 | 0.775 |
| 14 | DMU8 | 0.712 | DMU5 | 0.812 | DMU10 | 0.774 | DMU5 | 0.726 |
| 15 | DMU5 | 0.664 | DMU4 | 0.594 | DMU6 | 0.765 | DMU6 | 0.622 |

This method of ranking is naïve because it ignores the standard deviation, which indicates the robustness of a DMU's efficiency score to possible errors and to unobserved inefficiency. It also does not distinguish between possible outliers and legitimate units.

### 5.4.2. Ranking by Robustness of DEA-Chebyshev Model Efficiency Scores

Ranking in order of robustness begins with the efficiency score defined as θ̂. Those with θ̂ = 1 are ranked from the most robust to the least robust (from the smallest standard deviation to the largest), where the standard deviation is determined from the upper and lower bounds of the efficiency scores. The rest of the empirically efficient units are then ranked by their respective θ̂ (using their standard deviations would produce the same ranking for these units). Once all the empirically efficient units have been ranked, the remainder are ordered by their stochastic efficiency scores, from the most efficient to the least efficient; the ranking of these inefficient units is very similar to that of the empirical frontier.

Ranking from the most efficient down, those DMUs with a DEA-Chebyshev model score of θ̂ = 1 (input-oriented case) can fall into either of two categories, hyper-efficient or efficient/mildly efficient, depending on how robust they are (based on their standard deviation). DMUs not printed in bold are DEA-inefficient (see Table 15) and are therefore ranked below those deemed empirically efficient. DEA-efficient DMUs that fail to satisfy the conditions for θ̂ = 1 are given efficiency scores of at most 1.00.

Table 15 Ranking of efficient DMUs according to robustness based on their standard deviations. The DMUs in bold denote the empirically efficient DMUs.

| Simulation 1 | θ̂ (α=0.75) | Std. dev. | Simulation 2 | θ̂ (α=0.75) | Std. dev. | Simulation 3 | θ̂ (α=0.675) | Std. dev. | Simulation 4 | θ̂ (α=0.725) | Std. dev. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DMU9 | 1 | 0.34111 | DMU15 | 1 | 0.45149 | DMU13 | 1 | 0.16546 | DMU13 | 1 | 0.15033 |
| DMU2 | 1 | 0.36819 | DMU3 | 1 | 0.46499 | DMU7 | 1 | 0.32591 | DMU3 | 1 | 0.30285 |
| DMU1 | 1 | 0.53889 | DMU12 | 1 | 0.5823 | DMU1 | 1 | 0.47956 | DMU2 | 1 | 0.39775 |
| DMU14 | 1 | 0.93225 | DMU8 | 1 | 0.62735 | DMU2 | 1 | 0.55027 | DMU7 | 1 | 0.45771 |
| DMU15 | 1 | 1.63455 | DMU1 | 1 | 0.66871 | DMU15 | 1 | 1.23418 | DMU1 | 1 | 0.56972 |
| DMU4 | 0.98 | 0.43388 | DMU7 | 0.947 | 0.3218 | DMU14 | 1 | 17.971 | DMU11 | 1 | 0.68462 |
| DMU11 | 0.906 | 0.57657 | DMU13 | 0.944 | 0.13124 | DMU5 | 0.985 | 0.31947 | DMU15 | 1 | 1.24437 |
| DMU6 | 0.896 | 0.28298 | DMU14 | 0.943 | 0.4051 | DMU11 | 0.921 | 0.2745 | DMU14 | 1 | 2.0849 |
| DMU3 | 0.883 | 0.28164 | DMU11 | 0.906 | 0.17515 | DMU9 | 0.905 | 0.97149 | DMU12 | 0.969 | 0.17444 |
| DMU7 | 0.874 | 0.32605 | DMU10 | 0.89 | 0.34217 | DMU4 | 0.877 | 0.25512 | DMU4 | 0.899 | 0.32244 |
| DMU13 | 0.867 | 0.13958 | DMU2 | 0.874 | 0.85998 | DMU8 | 0.87 | 0.20386 | DMU10 | 0.818 | 0.16313 |
| DMU10 | 0.779 | 0.16935 | DMU9 | 0.843 | 0.17572 | DMU3 | 0.833 | 0.21305 | DMU9 | 0.774 | 0.28765 |
| DMU12 | 0.742 | 0.11335 | DMU6 | 0.837 | 0.29614 | DMU12 | 0.803 | 0.15726 | DMU8 | 0.754 | 0.28786 |
| DMU8 | 0.712 | 0.25534 | DMU5 | 0.812 | 0.2938 | DMU10 | 0.774 | 0.1347 | DMU5 | 0.747 | 0.3701 |
| DMU5 | 0.664 | 0.2745 | DMU4 | 0.594 | 0.25039 | DMU6 | 0.765 | 0.24013 | DMU6 | 0.6 | 0.27103 |
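The ranking rule just described is mechanical. Below is a minimal Python sketch applying it to a few (DMU, θ̂, std. dev.) triples read off Table 15, simulation 1; the tie-breaking is the one stated above (efficient units by ascending standard deviation, the remainder by descending θ̂):

```python
# (DMU, theta_hat, std_dev) triples from Table 15, simulation 1
scores = [
    ("DMU9", 1.0, 0.34111), ("DMU2", 1.0, 0.36819), ("DMU1", 1.0, 0.53889),
    ("DMU14", 1.0, 0.93225), ("DMU15", 1.0, 1.63455),
    ("DMU4", 0.98, 0.43388), ("DMU11", 0.906, 0.57657),
    ("DMU6", 0.896, 0.28298),
]

def robustness_rank(scores):
    """Section 5.4.2 ranking: units with theta_hat = 1 come first, ordered
    by ascending standard deviation (most robust first); remaining units
    follow in descending order of theta_hat."""
    efficient = sorted((s for s in scores if s[1] >= 1.0), key=lambda s: s[2])
    inefficient = sorted((s for s in scores if s[1] < 1.0),
                         key=lambda s: s[1], reverse=True)
    return efficient + inefficient

for rank, (dmu, theta, std) in enumerate(robustness_rank(scores), start=1):
    print(f"{rank:2d}. {dmu:6s} theta_hat = {theta:5.3f}  std = {std:.5f}")
```

In Tables 8, 9, and 11, the tabulated standard deviation appears to be the sample standard deviation of the two bounds, |θ̂^U − θ̂^L|/√2 (e.g., Table 8, DMU15: (3.232 − 0.92)/√2 ≈ 1.635), although Table 10's entries do not follow this rule row by row.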
## 5.5. Further Analysis

Additional analyses were conducted by taking the observed DMUs in each simulation and evaluating them against the EFF, DEA, CCP, and DEA-Chebyshev model results. If the DCF is a good approximation of the EFF, then the efficiency scores of the observed DMUs should not differ substantially from the efficiency scores generated by the EFF. The same holds for CCP.

### 5.5.1. Observed DMUs Evaluated against the EFF, CCP, and DCF

The efficiency scores of the observed DMUs from the experimental groups determined by the EFF (denoted "exp.grp+EFF") provide a benchmark for evaluating the DEA frontier ("exp.grp+DEA"), the CCP (normal) frontier ("exp.grp+CCP"), and the corrected frontier ("exp.grp+DCF").
A comparison is drawn between the efficiency scores of the experimental groups generated by the four frontiers. The hypothesis is that the means of the efficiency scores for the 15 observed units in the "exp.grp+EFF" group and the "exp.grp+DCF" group should be approximately the same (i.e., the difference should not be statistically significant). In Table 16, the rank-sum test and the t-test at α = 0.05 show that the difference is not statistically significant in simulations 3 and 4; hence, the corrected frontier is a good approximation of the EFF. Although the hypothesis tests for simulations 1 and 2 indicate some level of significance, the results generated by the DCF model are still superior to those of the CCP and the DEA.

Table 16 Statistical analysis for frontier comparisons. Observed DMUs are evaluated against the three different frontiers to determine their efficiency scores, which are calculated using the normal DEA model, and to determine whether the efficiency scores for each group are substantially different when comparing EFF to DEA, EFF to DCF, and EFF to CCP.

| Statistic | Exp.grp+EFF | Exp.grp+DEA | Exp.grp+EFF | Exp.grp+DCF | Exp.grp+EFF | Exp.grp+CCP |
|---|---|---|---|---|---|---|
| **Simulation 1** | | | | | | |
| Mean | 0.852 | 0.899 | 0.852 | 0.885 | 0.852 | 0.887 |
| Variance | 0.01396 | 0.01355 | 0.01396 | 0.01389 | 0.01396 | 0.01376 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.922 | | 0.87986 | | 0.881 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | |
| Rank-sum test | 1.2858 | | 0.9125 | | 0.9125 | |
| t stat | −3.9644 | | −2.2335 | | −2.3537 | |
| P(T ≤ t) two tail | 0.0014 | | 0.04235 | | 0.03372 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 2** | | | | | | |
| Mean | 0.875 | 0.922 | 0.875 | 0.908 | 0.875 | 0.908 |
| Variance | 0.01272 | 0.01242 | 0.0127 | 0.0115 | 0.0127 | 0.0115 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.94071 | | 0.8875 | | 0.8918 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | |
| Rank-sum test | 1.3066 | | 0.9747 | | 1.0162 | |
| t stat | −4.6604 | | −2.3984 | | −2.487 | |
| P(T ≤ t) two tail | 0.00037 | | 0.031 | | 0.02611 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 3** | | | | | | |
| Mean | 0.92 | 0.933 | 0.92 | 0.916 | 0.92 | 0.902 |
| Variance | 0.00879 | 0.00815 | 0.00879 | 0.00806 | 0.00879 | 0.0077 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.95301 | | 0.8804 | | 0.9082 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | |
| Rank-sum test | 0.7259 | | −0.0622 | | −0.6014 | |
| t stat | −1.8125 | | 0.29423 | | 1.68719 | |
| P(T ≤ t) two tail | 0.0914 | | 0.7729 | | 0.1137 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 4** | | | | | | |
| Mean | 0.882 | 0.904 | 0.882 | 0.887 | 0.882 | 0.868 |
| Variance | 0.0153 | 0.0173 | 0.0153 | 0.0184 | 0.0153 | 0.0162 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.9425 | | 0.905 | | 0.8996 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| Df | 14 | | 14 | | 14 | |
| Rank-sum test | 0.8503 | | 0.1452 | | −0.394 | |
| t stat | −1.9248 | | −0.312 | | 1.0043 | |
| P(T ≤ t) two tail | 0.0748 | | 0.7599 | | 0.332 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |

Table 16 shows the statistical tests used to compare the DEA, CCP, and DCF against the EFF. The Pearson correlation, which ranges from −1 to 1 inclusive, reflects the extent of the linear relationship between two sets of data. The P values, the rank-sum tests, and the Pearson correlations observed for all four simulations indicate that, in general, the DCF outperforms DEA and CCP (which assumes the normal distribution).

Outliers have a tendency to exhibit large standard deviations, which translate into wide confidence limits. Consequently, one reason for establishing DCF and CCP scores is to reduce the likelihood of a virtual unit becoming an outlier.
Also, the results generated by stochastic models (as opposed to deterministic ones) such as the DCF and CCP can be greatly affected by outliers, because their efficiency scores are generally not restricted to 1.00. In reality, outliers are not always easily detected, and if the data set contains outliers, the stochastic models may not perform well; DMU14 in simulation 3 is an example of this problem. It can be addressed either by removing the outliers or by imposing weight restrictions. However, weight restrictions are not within the scope of this paper.
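The comparisons reported in Tables 13 and 16 can be reproduced with standard routines. Below is a minimal sketch using SciPy (an assumption; the paper's own calculations were done in MathCad and ProDEA), with hypothetical score vectors standing in for the "exp.grp+EFF" and "exp.grp+DCF" columns:

```python
import numpy as np
from scipy import stats

def compare_frontiers(scores_a, scores_b, alpha=0.05):
    """Paired comparison in the style of Tables 13 and 16: Pearson
    correlation, paired t-test (df = n - 1), and Wilcoxon rank-sum test."""
    r, _ = stats.pearsonr(scores_a, scores_b)
    t_stat, p_two_tail = stats.ttest_rel(scores_a, scores_b)
    z, p_ranksum = stats.ranksums(scores_a, scores_b)
    t_crit = stats.t.ppf(1 - alpha / 2, df=len(scores_a) - 1)  # 2.145 for n=15
    return {"pearson_r": r, "t_stat": t_stat, "p_t_two_tail": p_two_tail,
            "ranksum_z": z, "p_ranksum": p_ranksum, "t_crit": t_crit}

# Hypothetical efficiency scores for 15 observed DMUs under two frontiers
rng = np.random.default_rng(0)
eff = rng.uniform(0.7, 1.0, size=15)                           # "exp.grp+EFF"
dcf = np.clip(eff + rng.normal(0.0, 0.03, size=15), 0.0, 1.1)  # "exp.grp+DCF"
print(compare_frontiers(eff, dcf))
```

The df = 14 and the two-tailed critical value 2.145 reported throughout are consistent with a paired design on 15 units, and the tabulated rank-sum statistics appear to be the test's normal-approximation z-values.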
## 6. Conclusions

Traditional methods of performance analysis are no longer sufficient in a fast-paced, constantly evolving environment, and observing past data alone is not adequate for future projections. The DEA-Chebyshev model is designed to bridge the gap between conventional performance measurement and new techniques that incorporate relevance into such measures. This algorithm not only provides a multidimensional evaluation technique, but also incorporates a new element into an existing deterministic technique (DEA): the k-flexibility function, which is derived from the one-sided Chebyshev inequality, recalled below.
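A compact sketch of that derivation, assuming the one-sided (Cantelli) form of Chebyshev's inequality; the resulting identity reproduces every τ̂_α value in Table 7:

```latex
% One-sided Chebyshev (Cantelli) inequality, mean \mu and variance \sigma^2:
\[
  P\bigl(X - \mu \ge \tau\sigma\bigr) \;\le\; \frac{1}{1+\tau^{2}},
  \qquad \tau > 0 .
\]
% Requiring this tail probability to equal 1 - \alpha, so that the relaxed
% constraint holds with confidence \alpha, gives the multiplier of (38a)-(38b):
\[
  \hat{\tau}_{\alpha} = \sqrt{\frac{\alpha}{1-\alpha}},
  \qquad \hat{\tau}_{0.75} = \sqrt{3} \approx 1.732 .
\]
```

Being distribution free, the bound lets a user-chosen confidence level α enter the model without any normality assumption.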
This in turn allows management to include expert opinion as a single value, such as a 20% net growth by next year-end from the current year. The single value is dichotomized into unmet (or over-target) levels relative to the present level of growth (or decline). Because management expertise is included, the expected growth (or decline) is not unreasonable and inherently includes factors that do not need to be explicitly expressed in the model, such as environmental, economic, and social changes. Since these changes are becoming increasingly rapid, performance measures can no longer ignore qualitative inputs. In a highly competitive environment, future projections and attainable targets are key performance indicators. Intellectual capital and knowledge are today's two most important assets.

The combination of normal DEA with the DCF can successfully provide a good framework for evaluation based on quantitative data and on the qualitative, intellectual knowledge of management. When no errors are expected, standard DEA models suffice: the DCF is designed such that, in the absence of errors, it reverts to a DEA model, which occurs when the k-flexibility function equals zero. DEA provides the deterministic frontier on which the DEA-Chebyshev model builds to define its estimate of the EFF.

The simulated data sets were tested on the DEA-Chebyshev model. It has been statistically shown that the model is an effective tool with good accuracy for detecting or predicting the EFF, making it a new efficiency benchmarking technique. It is an improvement over other methods: easily applied, practical, not computationally intensive, and easy to implement. The results have been promising thus far. Future work includes a real-data application to illustrate the usefulness of the DEA-Chebyshev model.

---
*Source: 102163-2013-06-24.xml*
# A Distribution-Free Approach to Stochastic Efficiency Measurement with Inclusion of Expert Knowledge

**Authors:** Kerry Khoo-Fazari; Zijiang Yang; Joseph C. Paradi

**Journal:** Journal of Applied Mathematics (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2013/102163
---

## Abstract

This paper proposes a new efficiency benchmarking methodology that is capable of incorporating probability while still preserving the advantages of a distribution-free and nonparametric modeling technique. This new technique developed in this paper will be known as the DEA-Chebyshev model. The foundation of the DEA-Chebyshev model is the model pioneered by Charnes, Cooper, and Rhodes in 1978, known as Data Envelopment Analysis (DEA). The combination of normal DEA with the DEA-Chebyshev frontier (DCF) can successfully provide a good framework for evaluation based on quantitative data and qualitative intellectual management knowledge. The simulated dataset was tested on the DEA-Chebyshev model. It has been statistically shown that this model is effective in predicting a new frontier, whereby DEA-efficient units can be further differentiated and ranked. It is an improvement over other methods, as it is easily applied, practical, not computationally intensive, and easy to implement.

---

## Body

## 1. Introduction

There has been a substantial amount of research conducted in the area of stochastic evaluation of efficiency, such as the stochastic frontier approach (SFA) [1, 2], stochastic data envelopment analysis (DEA) [3, 4], chance-constrained programming (CCP) efficiency evaluation [5–8], and statistical inference to deal with variations in data. The problems associated with these methodologies range from the requirement to specify some functional form or parameterization to the requirement of a substantial amount of (time-series) data. Relying on past and present data alone to provide a good estimate of the efficient frontier may not be suitable today due to the rapid evolution of these "nuisance" parameters. Hence, management's expert opinion cannot be excluded from efficiency analyses.

This paper proposes to develop a new efficiency benchmarking methodology that is capable of incorporating probability while still preserving the advantages of a function-free and nonparametric modeling technique. This new technique will be known as the DEA-Chebyshev model. The objectives are, first, to distinguish amongst top performers and, second, to define a probable feasible target for the empirically efficient units (as they are found by the usual DEA models) with respect to the DEA-Chebyshev frontier (DCF). This can be achieved by incorporating management's expertise (the qualitative component) along with the available data (the quantitative component) to infer this new frontier. The foundation of the DEA-Chebyshev model is the model pioneered by Charnes et al. in 1978 [10], known as DEA. It is a deterministic approach, which requires no distributional assumptions or functional forms with predefined parameters. The main drawback of deterministic approaches is that they make no allowance for random variations in the data.
The DEA methodology has been chosen as a foundation for this research because of the following advantages:

(i) it is nonparametric and does not require a priori assumptions regarding the distribution of the data;
(ii) it can simultaneously handle multiple inputs and outputs without making prior judgments of their relative importance (i.e., it is function-free);
(iii) it can provide a single measure of performance based upon multiple inputs and outputs.

DEA ensures that the production units being evaluated will only be compared with others from the same "cultural" environment, provided, of course, that they operate under the same environmental conditions.

The rest of the paper is organized as follows. Section 2 gives a brief literature review. Section 3 describes some possible causes of data discrepancies that may or may not be observable and their effects on the variables. Section 4 discusses the assumptions and mathematical formulation of the DEA-Chebyshev model. Section 5 provides the simulation and comparison with other efficiency evaluation techniques. Finally, our conclusions are presented in Section 6.

## 2. Literature Review

This section reviews past and present research on stochastic models and weight-restricted models designed for performance measurement. It shows the relevance of well-known methodologies used for estimating efficiency scores and constructing the approximated frontier in order to account, as well as possible, for noise, which can have diverse effects on the efficiency evaluation of human performance-dependent entities.

### 2.1. Stochastic Frontier Approach

Aigner et al. [1] and Meussen and Van Den Broeck [2] independently and simultaneously proposed a stochastic frontier model known as the Stochastic Frontier Approach (SFA) for performance evaluation. SFA uses econometric methods for estimating the efficient frontier. The problems associated with SFA are as follows. First, weights (or parameters) have to be predefined to determine its functional form, which requires parameterization. Second, a distributional form must be determined in order to estimate random errors. Third, multiple outputs are not easy to incorporate into the model. Finally, samples have to be large enough to be able to infer the distributional form of the random errors.

### 2.2. Stochastic DEA

Stochastic DEA is a DEA method that attempts to account for and filter out noise by incorporating stochastic variations of inputs and outputs while still maintaining the advantages of DEA [4]. The method relies on the theory that there will always exist an optimal solution for industrial efficiency. The variability in outputs is dealt with using the risk-averse efficiency model of Land et al. [11] with a risk preference function. Kneip and Simar [3] proposed a nonparametric estimation of each decision-making unit (DMU)'s production function using panel data over T time periods. This filters the noise from the outputs. The fitted values of the outputs, along with the inputs, are then evaluated using DEA. In this instance, efficiency is determined by the distance of the estimated frontier to the observed DMUs. The drawback of this method is that a reasonable estimate of efficiency can be obtained only when T and q (the number of DMUs) are sufficiently large.
### 2.3. Chance-Constrained DEA

Chance-constrained programming was first developed by Charnes and Cooper [5] and Kall [7] as an operational research approach for optimizing under uncertainty when some coefficients are random variables distributed according to some law of probability. The CCP DEA models in the past generally assumed that variations observed in the outputs follow a normal distribution. Variations in inputs are assumed to be the cause of inefficiency [12], while random noise occurs in outputs. Since the distribution of inefficiency is uncertain (although theoretically assumed to be half-normal or gamma), the chance-constraint formulation is not applied to the input constraints (inputs are held deterministic, while outputs are stochastic). Olesen and Petersen [9] state that the hypothesis concerning the amount of noise in the data cannot be tested. Using panel data, variations in the data can be dichotomized into noise and inefficiency. Another variation of CCP DEA was introduced by Cooper et al. [6], utilizing "satisficing concepts." The concept is used to interpret managerial policies and rules in order to determine the optimizing and satisficing actions, which are distinguished from inefficiencies. Optimizing and satisficing can be regarded as mutually exclusive events: the former represents physical possibilities or endurance limits, and the latter represents aspiration levels.

All these CCP formulations have considered normal distributions for the probability of staying within the constraints. The method is effective when qualitative data is not available. However, expert opinion from management cannot be discounted with regard to data dispersion from the expected or correct values. Unfortunately, current CCP is strictly a quantitative analysis based on empirical data whose variations are assumed to be of a predefined distributional form.

### 2.4. Assurance Region and Cone-Ratio Models

In an "unrestricted" DEA model, the weights are assigned to each DMU such that it appears as favourable as possible, an inherent characteristic of DEA. Hence, there is a concern that largely different weights may be assigned to the same inputs and outputs in the LP solutions for different DMUs. This motivated the development of weight-restricted models such as the "assurance region" (AR) [13, 14], the "cone-ratio" (CR) [15], and other variations of these models.

The motivation behind weight-restricted models is to redefine the DEA frontier so as to make it as practical as possible; that is, the inherent tendency of DEA to assign very small or very large weights to certain inputs or outputs is not realistic. By contrast, the stochastic frontier models redefine the frontier in the presence of noise or data disparity. Stochastic approaches are designed to evaluate DMUs based on the understanding that constraints may, realistically, not always hold due to noise. Weight restrictions are also applicable in stochastic approaches.

Weight-restriction models deal directly with the model's inconsistencies in a practical sense using qualitative information, whereas stochastic models deal with data discrepancies and inconsistencies using quantitative approaches to infer the degree of data disparity. Although the motivations of these two methods are similar, the underlying objectives of their development are not the same. Both are valid extensions of the normal DEA model in attempting to correct the frontier.

The Assurance Region (AR) model was developed by Thompson et al.
[13] to analyze six sites for the location of a physics lab. This approach imposes additional constraints in the DEA model with respect to the magnitude of the weights. The AR is defined to be the subset of W, the weight space of the multiplier vectors v and u, such that any region outside the AR does not contain reasonable input and output multipliers. An additional constraint for the ratio of input weights [14] can be defined as

(1) l_{1,i} ≤ v_i/v_1 ≤ u_{1,i}, equivalently v_1 l_{1,i} ≤ v_i ≤ v_1 u_{1,i}, for i = 1, …, m,

where m denotes the number of inputs, v_1 and v_i are the weights for input 1 and input i, respectively, and l_{1,i} and u_{1,i} are the lower and upper bounds for the ratio of multipliers.

The cone-ratio (CR) method was developed by Charnes et al. [15], which allows for closed convex cones for the virtual multipliers. It is a more general approach than the AR. In the AR model, there can only be two admissible nonnegative vectors, one for the lower bound and the other for the upper bound of the ratio of virtual weights. In the CR case, however, there can be k admissible nonnegative vectors for the input weights and l admissible nonnegative vectors for the output weights; that is, the feasible region for the weights is a polyhedral convex cone spanned by k and l admissible nonnegative direction vectors for inputs and outputs, respectively:

(2) v = ∑_{h=1}^{k} α_h a⃗_h, u = ∑_{s=1}^{l} β_s b⃗_s,

where the a⃗_h are the direction vectors and α_h ≥ 0 (for all h) are the weights applied to select the best nonnegative vector. Similarly, the AR method is equivalent to selecting only two admissible vectors under the CR method. The lower and upper bounds are denoted as vectors in the two-input case:

(3) a⃗_1 = (1, l_{1,2}, 0, ⋯, 0), a⃗_2 = (1, u_{1,2}, 0, ⋯, 0),

respectively.
[13] to analyze six sites for the location of a physics lab. This approach imposes additional constraints in the DEA model with respect to the magnitude of the weights. The AR is defined to be the subset of W, the weight space of the multiplier vectors v and u, such that any region outside the AR does not contain reasonable input and output multipliers. An additional constraint on the ratio of input weights [14] can be defined as

(1) $l_{1,i} \le \frac{v_i}{v_1} \le u_{1,i} \equiv v_1 l_{1,i} \le v_i \le v_1 u_{1,i}$, for $i = 1, \dots, m$,

where m denotes the number of inputs, $v_i$ and $v_1$ are the weights for input i and input 1, respectively, and $l_{1,i}$ and $u_{1,i}$ are the lower and upper bounds for the ratio of multipliers.

The cone-ratio (CR) method, developed by Charnes et al. [15], allows for closed convex cones for the virtual multipliers. It is a more general approach than the AR. In the AR model, there can only be two admissible nonnegative vectors, one for the lower bound and the other for the upper bound of the ratio of virtual weights. In the CR case, however, there can be k admissible nonnegative vectors for the input weights and l admissible nonnegative vectors for the output weights; that is, the feasible region for the weights is a polyhedral convex cone spanned by k and l admissible nonnegative direction vectors for inputs and outputs, respectively,

(2) $v = \sum_{h=1}^{k} \alpha_h \vec{a}_h, \quad u = \sum_{s=1}^{l} \beta_s \vec{b}_s$,

where the $\vec{a}_h$ are the direction vectors and $\alpha_h \ge 0$ (for all h) are the weights applied to select the best nonnegative vector. The AR method is thus equivalent to selecting only two admissible vectors under the CR method. In the two-input case, the lower and upper bounds are denoted by the vectors

(2.4) $\vec{a}_1 = (1, l_{1,2}, 0, \dots, 0), \quad \vec{a}_2 = (1, u_{1,2}, 0, \dots, 0)$,

respectively.

## 3. Data Variations

### 3.1. Two Error Sources of Data Disparity Affecting Productivity Analysis

Before we begin to make modifications to incorporate probability into the basic DEA model, it is crucial to identify the types of errors that are sources of data disparity. These can be segregated into two categories: systematic and nonsystematic errors. Nonsystematic errors are typically defined to be statistical noise, random normal $N(0, \sigma^2)$ and independent and identically distributed (i.i.d.); they will eventually average to zero. For systematic errors, “the degree to which the measured variable reflects the underlying phenomenon depends on its bias and variance relative to the true or more appropriate measure” [16]. Systematic errors, or measurement errors, are deemed to have the most disparaging effects because they introduce bias into the model. They may be caused by a lack of information.

The design of the new DEA model is intended to take into account the possibility of data disparity that affects productivity analysis, while preserving the advantages that DEA offers, in order to estimate the true level of efficiency. Due to data disparity, normal DEA results may contain two components of the error term. The first is statistical noise, which follows a normal distribution; the second is technical inefficiency, which is said to follow a truncated normal or half-normal distribution. Accounting for them can be achieved by relaxing the LP constraints to allow for these variations, which may provide a better approximation of the level of efficiency.

The following general linear programming model illustrates the mathematical form of systematic and nonsystematic errors as defined previously.
Variation in the variable (X) of the objective function will result in different values for the optimized coefficient (β):

(4) $\min_{\beta} g = X'\beta$, subject to $X'\beta \ge y$, $\beta \ge 0$.

If the variation in X is stochastic, then $X = \bar{x} + \varepsilon$, $\varepsilon \sim N(0, \sigma^2)$, by the Central Limit Theorem; one can characterize how closely the vector X is scattered around its mean $\bar{x}$ by the distance function $D^2 = D^2(X; \bar{x}, V_\varepsilon) = (X - \bar{x})' V_\varepsilon^{-1} (X - \bar{x})$, where $V_\varepsilon$ denotes the variance-covariance matrix of ε [4].

Four scenarios are illustrated below which describe sources of data disparity. The notation is as follows:

- $x_{ir}$: observed input i, $i = 1, \dots, m$, for DMU$_r$;
- $y_{jr}$: observed output j, $j = 1, \dots, n$, for DMU$_r$;
- $\mu_r^x$: expected value of the input for DMU$_r$;
- $\mu_r^y$: expected value of the output for DMU$_r$;
- $b_r^x$: bias of $x_r$; $\hat{b}_r^x$: estimate of $b_r^x$;
- $b_r^y$: bias of $y_r$; $\hat{b}_r^y$: estimate of $b_r^y$.

The following equations define the relationship between the observed and the true (expected) values for both inputs and outputs in a productivity analysis such as SFA, where measurement errors and/or random noise and inefficiencies are a concern in parametric estimation:

(5) $x_{ir} = \mu_{ir}^x + b_{ir}^x$, for some input i for unit r;

(6) $y_{jr} = \mu_{jr}^y + b_{jr}^y$, for some output j for unit r (considered for cases in which there may be some bias in output levels);

(7) $\mu_{ir}^x, \mu_{jr}^y \ge 0$; $b_{ir}^x, b_{jr}^y$ unrestricted in sign.

The following four scenarios illustrate the impact of different errors and were constructed using the notation given previously. These scenarios follow the definition by Tomlinson [16].

Scenario I. Consider the following:

(8) $E(b_{ir}^x) = 0$, $\mathrm{Var}(b_{ir}^x) = 0$, $E(x_{ir}) = \mu_{ir}^x$, $\mathrm{Var}(x_{ir}) = 0$.

With zero bias and variance, the observed input value is the true value: $E(x_{ir}) = \mu_{ir}^x = x_{ir}$. This implies that the data is 100% accurate; the expected value is exactly the same as the observed value. In reality, it is rare to have data with such accuracy.

Scenario II. Consider the following:

(9) $E(b_{ir}^x) = \hat{b}_{ir}^x \ne 0$, $\mathrm{Var}(b_{ir}^x) = 0$, $E(x_{ir}) = \mu_{ir}^x + \hat{b}_{ir}^x$, $\mathrm{Var}(x_{ir}) = 0$.

The bias is nonzero with zero variance; hence, the errors are systematic, and $E(x_{ir})$ is not an unbiased estimator of $x_{ir}$. In this case, systematic errors are a problem where inputs are concerned. When measurement errors exist, the expected value is a biased estimator of the observed value, which in turn biases the DEA results. Empirical methods such as DEA make no allowance for this error and evaluate DMUs based strictly on the observed values. However, expectations of the observed values can be determined qualitatively and incorporated into the LP.

Scenario III. Consider the following:

(10) $E(b_{ir}^x) = 0$, $\mathrm{Var}(b_{ir}^x) = \sigma_{b_{ir}}^2 > 0$, $E(x_{ir}) = \mu_{ir}^x$, $\mathrm{Var}(x_{ir}) = \sigma_{b_{ir}}^2$.

The expected value of a constant is the constant itself, and the variance of a constant is zero; hence $\mathrm{Var}(x_{ir}) = \mathrm{Var}(\mu_{ir}^x + b_{ir}^x) = 0 + \mathrm{Var}(b_{ir}^x) = \sigma_{b_{ir}}^2$. The bias is zero but the variance is nonzero, so the variations are due to statistical noise. A DMU that appears efficient may in fact be utilizing an input-output production mix that is less than optimal; its seeming efficiency is caused by a variation in its favour. Results obtained using empirical models are prone to inaccuracy of this nature. However, the expected value will converge over time to the true value in the absence of bias.

Scenario IV. Consider the following:

(11) $E(b_{ir}^x) = \hat{b}_{ir}^x \ne 0$, $\mathrm{Var}(b_{ir}^x) = \sigma_{b_{ir}}^2 > 0$, $E(x_{ir}) = \mu_{ir}^x + \hat{b}_{ir}^x$, $\mathrm{Var}(x_{ir}) = \sigma_{b_{ir}}^2$.

Bias and variance are both nonzero, which implies that both systematic and nonsystematic errors exist in the data. The variance corresponds to some input i. The variable $x_{ir}$ is affected by some random amount and by some bias $b_{ir}^x$; hence $E(x_{ir})$ is not an unbiased estimator of $x_{ir}$. This scenario corresponds to the drawback of empirical frontiers.
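To make the four scenarios concrete, the following is a minimal numerical sketch; the true input level, bias, and noise scale used here (μ = 10, b = 1.5, σ = 0.5) are hypothetical illustration values, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, bias, sigma, n = 10.0, 1.5, 0.5, 100_000  # hypothetical values

# Scenario I:   zero bias, zero variance -> observed equals true value.
# Scenario II:  fixed bias, zero variance -> systematic (measurement) error.
# Scenario III: zero-mean noise only -> nonsystematic error, averages out.
# Scenario IV:  bias plus noise -> convoluted systematic and random errors.
scenarios = {
    "I": np.full(n, mu),
    "II": np.full(n, mu + bias),
    "III": mu + rng.normal(0.0, sigma, n),
    "IV": mu + bias + rng.normal(0.0, sigma, n),
}
for name, x in scenarios.items():
    print(f"Scenario {name:>3}: E(x) ~ {x.mean():6.3f}, Var(x) ~ {x.var():5.3f}")
```

Only Scenarios I and III leave the sample mean at the true value μ; in Scenarios II and IV a bias remains that no amount of averaging removes, which is exactly the disparity the model developed below is meant to absorb.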
The term “measurement error” does not simply imply that data has been misread or collected erroneously. According to Tomlinson [16], it may also not be constant over time. The inaccuracy of the collected data may be due to a lack of implicit information, which may or may not be quantifiable but is deemed to have the most disparaging effect because it introduces bias into the model.

### 3.2. Chance-Constraint Programming and DEA

Deterministic methods such as DEA are not designed to handle cases in which, due to uncertainty, constraints may be violated, albeit infrequently. Various methods have been employed to transform the basic DEA approach to include stochastic components. Two of the more popular methods are chance-constraint programming (CCP) and stochastic DEA. An extensive literature survey has revealed that CCP DEA has always assumed a normal distribution. The objective of this research is to redefine the probabilities employed in CCP productivity analysis so as to accommodate problems emanating from various scenarios where the errors are independent but convoluted, without assuming any distributional form. The independent and convoluted properties of the error terms make it difficult to distinguish between them; hence, a distribution-free approach will be employed.

The advantage of using CCP is that it maintains the nonparametric form of DEA. It allows modeling of multiple inputs and outputs with ease. There is no ambiguity in defining a distribution or in interpreting the results, as had been demonstrated in the Normal-Gamma parametric SFA model [17]. CCP typically states that constraints need not hold “almost surely” but instead hold with some probability level. Uncertainty is represented in terms of outcomes denoted by ω. The elements ω are used to describe scenarios or outcomes, and all random variables jointly depend on these outcomes. The outcomes may be combined into subsets of Ω called events: A represents an event, and 𝒜 represents the collection of events. Examples of events may include political situations or trade conditions, which would allow us to describe random variables such as costs and interest rates. Each event is associated with a probability P(A). The triplet (Ω, 𝒜, P) is known as a probability space. This situation is often found in strategic models where knowledge of all possible future outcomes is acquired through expert opinion. Hence, in general form, a chance constraint can be written as

(12) $P\{A_i x(\omega) \ge h_i(\omega)\} \ge \alpha_i$,

where $0 < \alpha_i < 1$ and $i = 1, \dots, I$ indexes the constraints that must hold jointly. The probabilistic constraint above can be written in its expectational form (or deterministic equivalent), where $f_i$ is the indicator of $\{\omega \mid A_i x(\omega) \ge h_i(\omega)\}$:

(13) $E_\omega(f_i(\omega, x(\omega))) \ge \alpha_i$.

The focus of this paper is on the further development of DEA coupled with CCP. The benefit of applying CCP to DEA is that the multidimensional and nonparametric form of DEA is maintained. To drop the a priori assumption discussed in [9, 11, 18] regarding the distributional form used to account for possible data disparity, a distribution-free method is introduced.
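As a simple illustration of the expectational form (13), the left-hand side can be estimated by Monte Carlo: sample outcomes ω, evaluate the indicator of $\{\omega \mid A_i x(\omega) \ge h_i(\omega)\}$, and average. The distributions and the candidate decision below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outcomes = 50_000            # sampled outcomes omega
x = np.array([1.0, 2.0])       # a fixed candidate decision (hypothetical)

# Random constraint data A(omega), h(omega); illustrative distributions only.
A = rng.normal(loc=[0.5, 0.8], scale=0.1, size=(n_outcomes, 2))
h = rng.normal(loc=2.0, scale=0.3, size=n_outcomes)

# The mean of the indicator f(omega, x) of {omega | A x >= h} estimates
# E_omega(f(omega, x)), the left-hand side of (13).
prob = (A @ x >= h).mean()
print(f"estimated P(Ax >= h) = {prob:.3f}; holds at alpha = 0.95: {prob >= 0.95}")
```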
In [11, 18], the CCP DEA input-oriented model is formulated on the basis that discrepancies in outputs are due to statistical noise while those in inputs are caused by inefficiency:

(14) $\min z_0 = \theta$, subject to $P(Y\lambda - y_0 \ge 0) \ge 1 - \alpha$, $X\lambda - \theta x_0 \le 0$, $\mathbf{1}^T\lambda = 1$, $\lambda \ge 0$.

The CCP formulation (14) is designed to minimize the radial input contraction factor θ subject to the constraints specified. CCP DEA models in the past have generally assumed that the normal distribution suffices. For example, under the assumption that the variation above is normal, formulation (14) can be written in the following vector deterministic form:

(15) $\min z_0 = \theta$, subject to $E(Y\lambda - y_0) - 1.645\sigma \ge 0$, $X\lambda - \theta x_0 \le 0$, $\mathbf{1}^T\lambda = 1$, $\lambda \ge 0$,

where X and Y denote the vectors of inputs and outputs, respectively. Assuming that each DMU is independent of the others, the covariances equal zero. σ denotes the standard deviation of $Y\lambda - y_0$, obtained from

(16) $\mathrm{Var}(Y\lambda - y_0) = \mathrm{Var}(y_1\lambda_1 + y_2\lambda_2 + \dots + y_q\lambda_q - y_0)$,

where the subscript q denotes the number of DMUs. If the DMU under evaluation is DMU$_1$, then $y_0 \equiv y_1$; hence, (16) yields

(17) $\sigma = \sqrt{(\lambda_1 - 1)^2\mathrm{Var}(y_1) + \lambda_2^2\mathrm{Var}(y_2) + \dots + \lambda_q^2\mathrm{Var}(y_q)}$.

If $\lambda_1 = 1$ and $\lambda_{r \ne 1} = 0$, the standard deviation vanishes and the efficiency score calculated by CCP will be the same as that of DEA. This does not imply that all DEA scores will coincide with the CCP ones (except for DMU$_1$'s score).

The first constraint in (15) states that there is only a slight chance (here α = 0.05) that the outputs of the observed unit exceed those of the best-practice units. $E(Y\lambda - y_0)$ is determined on the assumption that the observed values are representative of their mathematical expectation. The second constraint is strictly deterministic and states that the best performers cannot employ more than $\theta x_0$ of the inputs; if they do, they cannot be efficient and will not be included in the reference set of best performers.

Using the same mathematical formulation shown in (14) and (15), and by incorporating a distribution-free approach, the DCF is established.
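For a single output, the standard deviation in (17) and the first constraint of (15) can be evaluated directly. The sketch below uses hypothetical outputs, variances, and an arbitrary intensity vector λ, purely to show the mechanics.

```python
import numpy as np

# Hypothetical data: q = 4 DMUs, one output; DMU 1 is the evaluated unit.
y = np.array([10.0, 12.0, 9.0, 11.0])     # observed outputs (illustrative)
var_y = np.full(4, 0.4)                   # assumed Var(y_r), equal across DMUs
lam = np.array([0.6, 0.3, 0.1, 0.0])      # a candidate intensity vector

# Equation (17): sigma of Y.lam - y_0 with y_0 = y_1 and zero covariances,
# so the evaluated DMU's coefficient is (lambda_1 - 1).
coef = lam.copy()
coef[0] -= 1.0
sigma = np.sqrt(np.sum(coef**2 * var_y))

# First constraint of (15): E(Y.lam - y_0) - 1.645*sigma >= 0.
gap = y @ lam - y[0]
print(f"sigma = {sigma:.4f}, constraint value = {gap - 1.645 * sigma:.4f}")
```

With λ₁ = 1 and all other λᵣ = 0, `coef` is the zero vector, σ vanishes, and the chance constraint reduces to the deterministic DEA one, as noted above.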
## 4. DEA-Chebyshev Model

The advantages of using the DEA-Chebyshev model as an efficiency evaluation tool are that it provides an approximation of performance given that random errors and inefficiencies do exist and that these deviations are taken into account, either through expert opinion or through data inference. Nevertheless, the results should always be subject to management scrutiny. The method also provides a ranking of efficient DMUs.

### 4.1. Chebyshev’s Theorem

In simplified terms, the Chebyshev theorem states that the fraction of a dataset lying within τ standard deviations of the mean is at least $1 - 1/\tau^2$, where τ > 1. The DEA-Chebyshev model developed in this paper will not be restricted to any one distribution but will instead assume an unknown distribution. A distribution-free approach will be used to represent the stochastic nature of the data; it is applied to the basic DEA model using chance-constraint programming. This distribution-free device is the Chebyshev inequality. It states that

(18a) $P(|\bar{x} - \mu| \ge \tau\sigma) \le \frac{1}{\tau^2}$,

or, equivalently,

(18b) $P(|\bar{x} - \mu| \ge \tau) \le \frac{\sigma^2}{\tau^2}$.

Let a random variable x have some probability distribution of which we know only the variance ($\sigma^2$) and the mean (μ) [19]. The inequality implies that the probability of the sample mean $\bar{x}$ falling outside the interval $[\mu \pm \tau\sigma]$ is at most $1/\tau^2$, where τ refers to the number of standard deviations away from the mean, using the notation in [19].
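The bound in (18a) is easy to check empirically, and because it is distribution-free it holds even for a markedly non-normal distribution. The following sketch uses an exponential distribution purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# A deliberately skewed, non-normal distribution: only its mean and
# standard deviation enter the Chebyshev bound (18a).
x = rng.exponential(scale=2.0, size=1_000_000)
mu, sigma = x.mean(), x.std()

for tau in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= tau * sigma)
    print(f"tau = {tau}: empirical tail {empirical:.4f} <= bound {1 / tau**2:.4f}")
```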
The one-sided Chebyshev inequality can be written as

(19) $P(\bar{x} - \mu \ge \tau) \le \frac{\sigma^2}{\sigma^2 + \tau^2}$,

as shown in [20].

Other methods considered for defining the probabilities in the DEA-Chebyshev model were the distribution-free linear constraint set (linear approximation), the unit sphere method, and the quantile method. These methods were tested to determine which of them would provide the best estimate of the true boundary discussed in [21]. The true boundary (called set S) is defined as a two-dimensional boundary generated by some parametric function, given by the chance-constrained set

(20) $S = \{X = (x_1, \dots, x_m) \mid \Pr[AX - b \le 0] \ge \alpha;\ X \ge 0\}$,

where b and the vector $A = (a_1, a_2, \dots, a_m)$ are random variables. Let the function L(X) be defined as $L(X) = AX - b$, and let $E[L(X)]$ and $\sigma[L(X)]$ denote the expected value and the standard deviation of L(X), respectively. In this example m = 2, and twenty-nine samples were generated.

The distribution-free approaches tested were the Chebyshev extended lemma (24), the quantile method (21), the linear approximation (23), and the unit sphere (22). The deterministic equivalents of these methods can be written in the following mathematical forms, following the notation of [21]:

(21) Quantile method: $S_Q(\alpha) = \{X \mid E[L(X)] + K_\alpha\,\sigma[L(X)] \le 0;\ X \ge 0\}$.

$K_\alpha$ is the quantile of order α of the standardized variate of L(X). If the random variable X belongs to a class of stable distributions, then the quantile method can be applied successfully. All stable distributions share the property of being specified by the parameters U and V of the general functional form $F[(x - U_1)/V_1], \dots, F[(x - U_l)/V_l]$, which when convoluted again gives $F[(x - U)/V]$. Examples of such stable distributions are the Binomial, Poisson, Chi-squared, and Normal [NOLA99].

(22) Unit sphere: $S_S(\alpha) = \{X \mid \|X\|^2 \le \frac{1}{\max_h(a_{1,h})^2 + \max_h(a_{2,h})^2}\}$.

(23) Linear approximation: $S_L(\alpha) = \{X \mid A^*X \le 1\}$,

where $a_{g,h}$ is an element among the 29 simulated samples of $a_g = (a_{g,1}, \dots, a_{g,H})$, $g = 1, \dots, m$ (g = 2 in this example), and H = sample size = 29. The vector $A^*$ is defined as $A^* = (\max_h(a_{1,h}), \max_h(a_{2,h}))$.

(24) Chebyshev: $S_T(\alpha) = \{X \mid E[L(X)] + \sqrt{\frac{\alpha}{1-\alpha}} \cdot \sigma[L(X)] \le 0;\ X > 0\}$.

Allen et al. [21] have shown that the quantile method is the least conservative, while the Chebyshev method is the most conservative. When a method of estimation provides relatively large confidence limits, the method is said to be “conservative.” The advantage of these two methods is that they both tend to follow the shape of the true (real) boundary more closely than the other two methods, the unit sphere and the linear approximation [21]. Given that Chebyshev provides the most conservative point of view and tends to follow the shape of the true boundary without regard to distributional form, this method was chosen as the estimation for CCP DEA. Although the error-free frontier (EFF) is unknown, we can, at best, estimate its location or its shape with respect to the DEA frontier. The EFF represents the frontier on which measurement errors and random errors are not present, but it does not imply absolute efficiency: there can still be room for improvement even for the DMUs on the EFF. The theoretical frontier represents the absolutely attainable production possibility set where there can no longer be any improvement in the absence of statistical noise and measurement errors. It is undefined, because human performance limits are themselves still undefined at the present time.
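The relative conservatism of the sets (21) and (24) comes down to the size of the multiplier applied to σ[L(X)]. A small sketch, assuming for illustration that L(X) is normal so that $K_\alpha$ is the standard normal quantile, makes the contrast explicit.

```python
from math import sqrt
from statistics import NormalDist

# Multipliers applied to sigma[L(X)] in the quantile set (21) and the
# Chebyshev set (24); a larger multiplier shrinks the feasible region,
# i.e., is more conservative.
for alpha in (0.90, 0.95, 0.99):
    k_quantile = NormalDist().inv_cdf(alpha)   # K_alpha under normality
    k_chebyshev = sqrt(alpha / (1.0 - alpha))  # distribution-free
    print(f"alpha = {alpha}: K_alpha = {k_quantile:.3f}, "
          f"Chebyshev = {k_chebyshev:.3f}")
```

At α = 0.95, for instance, the Chebyshev multiplier (≈4.36) dwarfs the normal quantile (≈1.645), which is exactly the “most conservative” behaviour reported by Allen et al. [21].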
Since we do not want to place an a priori assumption on which stable distribution best describes the random variables in DEA, the Chebyshev theorem will be used. The deterministic equivalent of (20) by Chebyshev's extended lemma is shown as (24).

Derivation of $\sqrt{\alpha/(1-\alpha)} \cdot \sigma[L(X)]$ in (24). We use the one-sided Chebyshev inequality and the notation of [21]:

(25) $P(L(X) - E[L(X)] \ge \tau) \le \frac{\sigma^2}{\sigma^2 + \tau^2}$,

which states that the probability that L(X) takes a value more than τ above its mean $E[L(X)]$ is at most $\sigma^2/(\sigma^2 + \tau^2)$. In chance-constrained programming, α can be expressed in the general form $P(L(X) - E[L(X)] \le 0) \ge \alpha$. Hence,

(26) $1 - \alpha = \frac{\sigma^2}{\sigma^2 + \tau^2} \implies \tau = \sigma\sqrt{\frac{\alpha}{1-\alpha}}$.

Note that from here onwards, as we discuss the DCF model, for simplification and clarity we will write $\tau_\alpha = \tau/\sigma$.

The term “k-flexibility function” is coined because α is a value that may be defined by the user (where k denotes the user's certainty of the estimate) or inferred from industry data. The unique property of α is its ability to define $\tau_\alpha$ such that it mimics the normal distribution when random noise is present, or so as to include management concerns and expectations with regard to perceived or expected performance levels. This can overcome the problem of what economists call “nuisance parameters”: difficult-to-observe or unquantifiable factors such as worker effort or worker quality. When firms can identify and exploit opportunities in their environment, organizational constraints may be violated [22]. Because the DCF allows for management input, the flexibility function can approximate these constraint violations. The mathematical formulation, the implications for management, and the practical definition of α are explained later.

### 4.2. Assumptions in DEA-Chebyshev Model

Two general assumptions have been made in constructing the model. First, nuisance parameters (including confounding variables) will cause efficiency scores to differ from the true performance level if they are not accounted for in the productivity analysis. Second, variations in the observed variables can arise from both statistical noise and measurement errors, and the two are convoluted.

In the simulation to follow, as an extension of the general assumptions above, we will assume that variations in outputs are negligible and average out to zero [11, 18]. The variations in inputs are assumed to arise from statistical noise and inefficiency (inefficient use of inputs). Both of these errors contribute to possible technical inefficiency in DEA-efficient units. These possible inefficiencies are not observed in DEA, since it is an empirical extreme-point method. Using the same characteristics defined in SFA, statistical noise and measurement errors are taken to be normally distributed, $v \sim N(\mu, \sigma^2)$, and inefficiency half-normally distributed, $u \sim N^+(\mu, \sigma^2)$. Thus, the relationship between the expected input $\mu_{ir}$ and the observed input $x_{ir}^{obs}$ can be written as

(27) $x_{ir}^{obs} = \mu_{ir} + (v + u)_{ir}$,

where $(v + u)_{ir}$ denotes the convoluted error terms of input i for DMU$_r$.

The assumption regarding the disparity between the observed and expected inputs serves to illustrate the input-oriented DEA-Chebyshev model. In input-oriented models the outputs are not adjusted for efficiency; the inputs are, based on the weights applied to those DMUs that are efficient. This assumption regarding the errors can be reversed between inputs and outputs, depending on expert opinion and the objective of the analysis (i.e., input- versus output-oriented models).
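A minimal sketch of the convoluted error model (27), with hypothetical true input levels and unit-scale noise and inefficiency components:

```python
import numpy as np

rng = np.random.default_rng(3)
q = 15                                     # number of DMUs
mu = rng.uniform(5.0, 20.0, size=q)        # expected (true) inputs, hypothetical

v = rng.normal(0.0, 1.0, size=q)           # statistical noise, N(0, 1)
u = np.abs(rng.normal(0.0, 1.0, size=q))   # inefficiency, half-normal N+(0, 1)

x_obs = mu + v + u                         # equation (27): observed = true + (v + u)
print("mean inflation of observed over true inputs:", round((x_obs - mu).mean(), 3))
```

Because u ≥ 0 while v averages to zero, the observed inputs are inflated on average, which is why a frontier fitted to observed data can sit away from the error-free one.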
As an extension of Land et al. [11] and Forrester and Anderson [18], the DEA-Chebyshev model relaxes the distributional assumption. In doing so, convolution of the errors can be accommodated without having to specify a distributional form for either component. This method of approximating the radial contraction of inputs or expansion of outputs is generally less computationally intensive than the bootstrap method, as CCP can be incorporated directly into the LP and solved in a similar fashion to the standard DEA technique. The bootstrap method introduced by Simar and Wilson [23] is more complex in that it requires certain assumptions regarding the data-generating process (DGP), on which the properties of the frontier and of the estimators depend. That bootstrap is nonetheless nonparametric, since it requires no parametric assumptions except those needed to establish consistency and the rate of convergence of the estimators.

Theoretically, the DEA algorithm allows the evaluation of models containing only outputs with no inputs, and vice versa. In doing so it neglects the fact that inputs are crucial for the production of outputs; the properties of a production process are such that it must consume inputs in order to produce outputs. Let the theoretically attainable production possibility set, which characterizes the absolute efficient frontier and is unknown, be denoted by $\Psi = \{(X, Y) \in \Re^{m+n} \mid X\ \text{can produce}\ Y\}$. Given that the set Ψ is not presently bounded, the inclusion $\Psi_{EFF}, \Psi_{DEA}, \Psi_{DCF} \subset \Psi$ always holds, where $\Psi_{EFF}$, $\Psi_{DEA}$, and $\Psi_{DCF}$ denote the attainable sets of the Error-Free Frontier (EFF), DEA, and the DEA-Chebyshev frontier, respectively. It is certain that a DMU cannot produce outputs without inputs, although the relationship between them may not be clear. The following postulates describe the relationship between the three frontiers.

Postulate 1. The DEA frontier converges to the EFF, $\Psi_{DEA} \to \Psi_{EFF}$ as $q \to \infty$, according to the central limit theorem [24]; Appendices A, B, and C provide the details. However, both DEA and DCF will exhibit a very slow rate of convergence to the theoretical frontier as the number of dimensions increases or when the sample size is small. This is known as the curse of dimensionality [25].

Postulate 2. The production possibility set of DEA is contained in that of the DCF: $\Psi_{DEA} \subset \Psi_{DCF}$. The DEA and the corrected frontier may well overlap the EFF, depending on the degree of data variation observed and estimated.

### 4.3. Mathematical Formulation

An input-oriented BCC model will be used to illustrate this work. Here, θ is defined as the radial input contraction factor, and λ is the column vector corresponding to the “best practice” units, which form the projection onto the frontier for an inefficient unit:

(28) $\theta = \min\{\theta \mid y_{j0} \le \sum_{r=1}^{q} y_{jr}\lambda_r,\ \theta x_{i0} \ge \sum_{r=1}^{q} x_{ir}\lambda_r,\ \sum_{r=1}^{q} \lambda_r = 1,\ \lambda_r \ge 0\}$.

Consider the following chance-constrained set, as defined by Allen et al. [21]:

(29) $S = \{\breve{X} = (x_1, x_2, \dots, x_m) \mid P(\sum_{r=1}^{q} \lambda_r x_{ir} - \theta x_{i0} \le 0) \ge \alpha;\ \theta \ge 0,\ x_{ir} \ge 0\ \forall r = 1, \dots, q\}$,

where α is a real number such that 0 ≤ α ≤ 1, for all $j = 1, \dots, n$ and all $i = 1, \dots, m$. Since it is difficult to establish a specific distributional form from empirical data, owing to the convolution of different types of errors, a distribution-free approach is taken.
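Before the chance constraint is added, a minimal sketch of the deterministic input-oriented BCC model (28), written as an LP over (θ, λ); the four-DMU data set is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_score(X, Y, r0):
    """Input-oriented BCC score (28) for DMU r0; X is (m, q), Y is (n, q)."""
    m, q = X.shape
    n = Y.shape[0]
    c = np.r_[1.0, np.zeros(q)]                   # minimize theta
    A_in = np.c_[-X[:, r0], X]                    # sum_r x_ir lam_r <= theta x_i0
    A_out = np.c_[np.zeros(n), -Y]                # sum_r y_jr lam_r >= y_j0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, r0]]
    A_eq = np.r_[0.0, np.ones(q)].reshape(1, -1)  # sum_r lam_r = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * q)
    return res.x[0]

X = np.array([[2.0, 4.0, 6.0, 3.0],               # two inputs, four DMUs
              [5.0, 2.0, 4.0, 6.0]])
Y = np.array([[10.0, 10.0, 12.0, 8.0]])           # one output
print([round(bcc_input_score(X, Y, r), 3) for r in range(4)])
```

The DEA-Chebyshev formulation derived next modifies only the input constraints of this program, buffering them by the term $\sigma_i\hat{\tau}_\alpha$.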
In this case, the one-sided Chebyshev inequality [21] is applied to convert (29). A deterministic equivalent can be approximated, for the ith input of DMU$_r$, as

(30) $S_C(\alpha) = \{\breve{X} \mid E(\sum_r x_{ir}\lambda_r - \theta x_{i0}) \pm \sigma_i\tau_\alpha \le 0;\ \theta \ge 0,\ x_{ir} \ge 0\ \forall r\}$,

where $\sigma_i = \sqrt{\mathrm{var}(\sum_r \lambda_r x_{ir} - \theta x_{i0})} = \sqrt{\lambda_1^2\mathrm{var}(x_{i1}) + \dots + \lambda_q^2\mathrm{var}(x_{iq}) + \theta^2\mathrm{var}(x_{i0})}$ and 0 < α ≤ 1, with strict inequality on the left-hand side. For example, if r = 1, then $x_{i0} = x_{i1}$; hence $\sigma_i$ is calculated as $\sigma_i = \sqrt{(\lambda_1 - \theta)^2\mathrm{var}(x_{i1}) + \dots + \lambda_q^2\mathrm{var}(x_{iq})}$. On the assumption that the DMUs are independent of each other, $\mathrm{var}(x_{ir}) = c$ for all $r = 1, \dots, q$, where c denotes some constant, and $\mathrm{cov}(x_{ir}, x_{il}) = 0$ for all r, l. The value of $\tau_\alpha$ is defined as

(31) $\tau_\alpha = \sqrt{\frac{\alpha}{1-\alpha}}$,

where α denotes the probability of staying within the tolerance region defined using the one-sided Chebyshev inequality. As α increases, $\tau_\alpha$ and the standard deviation also increase; hence it becomes more likely that the EFF lies within the confidence limits.

The value of α can be chosen so that $\tau_\alpha$ is at most 1.645, so that the DCF provides a less conservative estimate of the upper and lower limits of the frontier than $z_{0.05} = 1.645$; this standard normal value has been used in previous CCP efficiency evaluation methodology [11, 26]. The reasoning behind wanting a less conservative estimate is that the collected data are more likely to be accurate than inaccurate. When α ≥ 0.99, $\tau_\alpha$ grows rapidly toward infinity. For 0.7 < α < 0.75, note that $\tau_{0.7} < z_{0.05} < \tau_{0.75}$; α can therefore be defined such that the DEA-Chebyshev model provides less conservative estimates. A glance at the CCP DEA developed by Land et al. [11] shows that the results obtained under a normality assumption can be drastically different from the expected frontier, depending on the level of data disparity.

The deterministic LP formulation of the DEA-Chebyshev model can be written in the following mathematical form:

(32) $\min_\lambda \hat{\theta}$, subject to $E(\sum_{r=1}^{q} x_{ir}\lambda_r - \hat{\theta}x_{i0}) \pm \sigma_i\hat{\tau}_\alpha \le 0$, $E(\sum_{r=1}^{q} y_{jr}\lambda_r) - y_{j0} \ge 0$, $\sum_{r=1}^{q} \lambda_r = 1$, $\lambda_r \ge 0\ \forall r$, $\theta \ge 0$.

Let $\hat{\tau}_\alpha$ be an estimate of $\tau_\alpha$, defined as

(33) $\hat{\tau}_\alpha = \sqrt{\frac{\alpha}{1-\alpha}}$,

where α is a value based on management's expectations or inferred from a time series of data transformed into a single value. The model in (32) can also be modified so that only discretionary inputs receive the stochastic treatment [27].

The value of α can be restricted to between 0.5 (the point of inflection) and 0.6 when no qualitative information regarding expectations is available but we are almost certain that the data obtained are accurate. The value of $\tau_\alpha$ is then approximately $1 \le \hat{\tau}_\alpha \le 1.2247$. In this case, the results produced will be less conservative than those of the normal distribution at α = 0.05 (i.e., $z_{0.05} = 1.645$). For α < 0.5, a deterministic model suffices, since the DEA-Chebyshev model then provides the same results as DEA.
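Since $\hat{\tau}_\alpha$ in (33) has a simple closed form, the claims above about its size relative to $z_{0.05} = 1.645$ can be verified directly:

```python
from math import sqrt

Z_05 = 1.645  # normal multiplier used in earlier CCP DEA models [11, 26]

# Equation (33): the k-flexibility multiplier as a function of alpha.
for alpha in (0.5, 0.6, 0.7, 0.75, 0.9, 0.99):
    tau = sqrt(alpha / (1.0 - alpha))
    side = "less" if tau < Z_05 else "more"
    print(f"alpha = {alpha:4}: tau_hat = {tau:6.3f} ({side} conservative than z)")
```

This reproduces the statements in the text: $\hat{\tau}_\alpha$ runs from 1 to about 1.2247 on [0.5, 0.6], crosses 1.645 between α = 0.7 and α = 0.75, and blows up as α → 1.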
### 4.4. The “k-Flexibility Function” $\tau_\alpha$: Unique Management Interpretation

It may not be sufficient to develop a model that is technically sound, with appropriate theoretical proofs. We cannot discount the fact that management expertise can play an important role in defining the corrected frontier, nor should we cause the user to reject the model. Hence, the DEA-Chebyshev model is designed to incorporate management input, which can become a crucial factor in the modeling process. One of the major advantages of this model is its flexibility compared to models that require a distributional form. It can turn management's expertise and experience in their own field of specialization into crucial information, thereby redefining the efficient frontier.

In the DEA-Chebyshev model, α has a unique management interpretation and implication. It can be defined as management's opinion of the expected degree of competence with regard to either input or output usage; in other words, it is the estimated degree of deviation from the observed level of performance. The smaller the value of α, the more certain it is that the data are accurate and that little improvement can be made, ceteris paribus, or that expectations have approximately been met. When α = 0, DCF = DEA, implying that management is certain that the data obtained are accurate (there is no need to account for deviations, random effects, or inefficiency) or that present expectations have been met. If α ~ 1, the data obtained are extremely erroneous or expectations are not met.

The value of α is an aggregate of two factors (two events). First, the certainty of inaccuracy is denoted by P(E); second, the translated percentage of inaccuracy is denoted by P(D). Let P(E) denote the true/false existence of errors: P(E) = 1 implies that the data are inaccurate. If P(E) = 1, then 0.5 < P(D) < 1; otherwise, P(D) = 0. In other words, event E implies D; when the data are 100% accurate, there is no deviation. Therefore, α can be defined as

(34a) $\alpha = P(D \mid E) = \frac{P(D \cap E)}{P(E)} = \frac{P(D)}{P(E)}$.

Proof. $P(D) = P(D \cap E) + P(D \cap E')$; since $P(D \cap E') = 0$, we have $P(D) = P(D \cap E)$. Hence, for P(E) = 1, α can be approximated as

(34b) $\alpha \sim \frac{P(D)}{P(E)} + k = P(D) + k$.

The constant k ≥ 0 represents the degree of the expert's uncertainty. When the deviation due to errors is negligible, the percentage deviation from the observed values is ~0; hence α will be at most 0.5. P(error) = 0 implies that the data are error-free, so the percentage deviation from the observed values is 0; in this case, α = 0 and DCF = DEA. Based on (31), the value of $\hat{\tau}_\alpha$ should be restricted to be no less than 1, and therefore α ≥ 0.5; otherwise the confidence limits become too small, which implies DCF ≅ DEA. We do not want this to occur, because the DCF should equal DEA only when there is absolute certainty that the data are error-free. Hence, P(D) must be defined such that 0.5 ≤ α < 1 in (34b) for P(E) = 1, and zero otherwise.

### 4.5. Approximating the Error-Free Frontier: Development of the DCF

Unlike the straightforward way in which DEA scores are calculated, DEA-Chebyshev efficiency scores are slightly more complicated to obtain. There are five stages in the determination of the best efficiency rating for a DMU.

Stage I. Determine the DEA-efficient units.

Stage II. Establish the upper and lower limits for the efficiency scores using the DEA-Chebyshev model, where the value of α is defined to reflect management concerns.

Stage III. Establish the corrected frontier from the upper and lower limits calculated in Stage II for the DEA-efficient units. The upper and lower limits of the efficiency scores established by the DEA-Chebyshev model for each of the DEA-efficient units form the confidence bounds for the error-free efficiency scores. These limits determine the most likely location of the EFF. The following are characteristics of DEA-Chebyshev efficiency scores.
(1) An efficient DMU with a smaller standard deviation implies a smaller confidence region in which the EFF resides; hence, this particular DMU is considered more robustly efficient, since it is closer to the EFF.

(2) It can be conjectured that, for DEA-efficient DMUs, $\theta^U \le 1$ and $\theta^L \ge 1$ will always hold (not so for the inefficient units).

(3) When $\theta^L \ge c$, where c is a very large constant, it may be an indication that the DMU is likely an outlier.

(4) In general, the mean efficiency score in the DEA-Chebyshev model is such that $\bar{\theta} = (\theta^U + \theta^L)/2 \approx \theta_{DEA}$, unless the third characteristic above is observed.
When deviation due to errors is negligible, the % deviation from observed is approximately 0; hence α will be at most 0.5. P(error) = 0 implies that the data are error-free, so the % deviation from observed is 0; in this case, α = 0 and DCF = DEA. Based on (31), the value for τ̂_α should be restricted to be no less than 1, and therefore α ≥ 0.5. Otherwise, the confidence limits become too small, which implies that DCF ≅ DEA. We do not want this to occur, because DCF should only equal DEA when there is absolute certainty that the data are error-free. Hence, P(D) must be defined such that 0.5 ≤ α < 1 in (34b) for P(E) = 1, and zero otherwise.

## 4.5. Approximating the Error-Free Frontier: Development of the DCF

Unlike the straightforward way in which DEA scores are calculated, DEA-Chebyshev model efficiency scores are slightly more complicated to obtain. There are five stages to the determination of the best efficiency rating for a DMU.

Stage I. Determining the DEA efficient units.

Stage II. Establishing the upper and lower limits for efficiency scores using the DEA-Chebyshev model, where the value of α is defined to reflect management concerns.

Stage III. Establishing the corrected frontier from the upper and lower limits calculated in Stage II for DEA efficient units. The upper and lower limits of efficiency scores established by the DEA-Chebyshev model for each of the DEA-efficient units form the confidence bounds for the error-free efficiency scores. These limits determine the most likely location of the EFF.

The following are characteristic of DEA-Chebyshev model efficiency scores.

(1) An efficient DMU with a smaller standard deviation implies a smaller confidence region in which the EFF resides; hence, this particular DMU is considered to be more robustly efficient since it is closer to the EFF.

(2) It can be conjectured that for DEA efficient DMUs, θ^U ≤ 1 and θ^L ≥ 1 will always hold (not so for the inefficient units).

(3) When θ^L ≥ c, where c is a very large constant, it may be an indication that the DMU is likely an outlier.

(4) In general, the mean efficiency score in the DEA-Chebyshev model is θ̄ = (θ^U + θ^L)/2 ≈ θ_DEA, unless the third characteristic above is observed.

## 5. Simulation

Five data sets, each containing 15 DMUs in a two-input one-output scenario, were generated in order to illustrate the approximation of the EFF using the DEA-Chebyshev model. This demonstrates the proximity of the DCF to the EFF. A comparison is drawn between the results provided by the DCF, DEA, and the CCP input-oriented VRS models, each measured against the EFF.

### 5.1. Step I: Simulation: The Data Generating Process

The first data set, shown in Table 1, is known as the control group. It contains two inputs and one output generated using a logarithmic production function of the following form:
(35) y = β_0 + β_1 ln x_1² + β_2 ln x_2²,
where β_0 is some constant and β_1 and β_2 are arbitrary weights or coefficients assigned to inputs. Input 1 (x_1) has been chosen arbitrarily, and input 2 (x_2) is a function of x_1: x_2 = c(1/x_1), where c is some arbitrary constant, in this case c = 24. This ensures that the frontier generated by the control group contains only efficient units and is convex. The linear convex combination in the EFF consists of discrete production possibility sets defined for every individual DMU. A minimal sketch of this data-generating process is given below.
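The sketch below illustrates (35)-(36b) under stated assumptions: the paper fixes β_0, β_1, and β_2 arbitrarily but does not report them, so the coefficients here are placeholders, and the folded-normal draw merely stands in for the half-normal inefficiency N⁺(μ, σ²).

```python
import numpy as np

rng = np.random.default_rng(0)

# Control group (error-free), eq. (35): y = b0 + b1*ln(x1**2) + b2*ln(x2**2),
# with x2 = 24 / x1 so that every unit lies on a convex frontier.
b0, b1, b2 = 5.0, 1.0, 0.5          # placeholder coefficients
x1 = np.array([2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 15, 17, 19, 20], float)
x2 = 24.0 / x1
y = b0 + b1 * np.log(x1**2) + b2 * np.log(x2**2)

# Experimental group, eq. (36b): observed inputs are the error-free inputs
# confounded with inefficiency u and noise v; outputs stay deterministic.
mu, sigma = 1.0, 1.0                # chosen per simulation in the paper
u1 = np.abs(rng.normal(mu, sigma, x1.shape))   # stand-in for u ~ N+(mu, sigma^2)
v1 = rng.normal(0.0, 1.0, x1.shape)            # statistical noise N(0, 1)
x1_hat = x1 + u1 + v1
u2 = np.abs(rng.normal(mu, sigma, x2.shape))
v2 = rng.normal(0.0, 1.0, x2.shape)
x2_hat = x2 + u2 + v2
```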
Output (y) is then calculated using (35) from a discrete set of inputs, where β_0, β_1, and β_2 have been arbitrarily defined and are fixed for all groups (control and experimental). The control group contains no measurement or statistical errors and no inefficient DMUs. It is the construct of the EFF.

Table 1: Control group: the error-free production units.

| DMU | Output | Input 1 | Input 2 |
|---|---|---|---|
| 1 | 12.55 | 2 | 12 |
| 2 | 10.43 | 3 | 8 |
| 3 | 9.68 | 4 | 6 |
| 4 | 9.53 | 5 | 4.8 |
| 5 | 9.68 | 6 | 4 |
| 6 | 10.01 | 7 | 3.43 |
| 7 | 10.43 | 8 | 3 |
| 8 | 11.45 | 10 | 2.4 |
| 9 | 11.99 | 11 | 2.18 |
| 10 | 12.55 | 12 | 2 |
| 11 | 13.12 | 13 | 1.85 |
| 12 | 14.25 | 15 | 1.6 |
| 13 | 15.36 | 17 | 1.41 |
| 14 | 16.46 | 19 | 1.26 |
| 15 | 16.99 | 20 | 1.2 |

The experimental groups are generated from the control group by adding error components. Their outputs are the same as the control group's and are held deterministic, while inputs are stochastic, containing measurement errors confounded with half-normal nonzero inefficiency N⁺(μ, σ²) and statistical noise N(0, 1):
(36a) y ≈ β_0 + β_1 ln x̂_1² + β_2 ln x̂_2².
In (36a), inputs are confounded with random errors and inefficiency:
(36b) x̂_i = x_i + ε_i, where ε = v + u.
Variability in the inputs across simulations is produced by different, arbitrarily chosen μ and σ for the inefficiency component, which is distributed half-normally, u ~ N⁺(μ, σ²), for each simulation. Table 2 shows the details.

Table 2: Four experimental groups with variations and inefficiencies introduced to both inputs while keeping outputs constant.

| DMU | Output | Grp 1 In 1 | Grp 1 In 2 | Grp 2 In 1 | Grp 2 In 2 | Grp 3 In 1 | Grp 3 In 2 | Grp 4 In 1 | Grp 4 In 2 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 12.55 | 3.16 | 12.5 | 2.34 | 12.85 | 2.91 | 12.6 | 2.68 | 13.92 |
| 2 | 10.43 | 3.69 | 9.08 | 1.6 | 10.07 | 2.34 | 8.23 | 3.32 | 8.34 |
| 3 | 9.68 | 4.88 | 8.41 | 3.58 | 5.97 | 6.1 | 6.43 | 4.25 | 6.53 |
| 4 | 9.53 | 5.27 | 5.31 | 7.28 | 9.43 | 7.84 | 3.96 | 6.44 | 4.25 |
| 5 | 9.68 | 8.39 | 7.43 | 6.98 | 5.9 | 7.64 | 2.96 | 9.93 | 3.55 |
| 6 | 10.01 | 9.17 | 3.8 | 7.04 | 5.57 | 9.6 | 4.01 | 10.46 | 4.98 |
| 7 | 10.43 | 10.92 | 3.11 | 9.6 | 3.26 | 7.71 | 2.9 | 6.29 | 2.95 |
| 8 | 11.45 | 13.14 | 3.95 | 11.41 | 1.88 | 10.38 | 3.14 | 11.71 | 3.05 |
| 9 | 11.99 | 9.33 | 2.85 | 11.53 | 4.75 | 13.88 | 0.59 | 13.25 | 2.47 |
| 10 | 12.55 | 10.38 | 7.43 | 13.94 | 2.46 | 12.55 | 4.44 | 12.19 | 3.73 |
| 11 | 13.12 | 12.67 | 1.69 | 12.46 | 4.79 | 13.53 | 1.1 | 13.24 | 1.1 |
| 12 | 14.25 | 17.59 | 4.8 | 15.71 | 2.09 | 16.57 | 2.27 | 14.14 | 2.08 |
| 13 | 15.36 | 17.35 | 4.23 | 17.33 | 4.44 | 15.35 | 1.38 | 15.47 | 2.25 |
| 14 | 16.46 | 19.13 | 1.4 | 20.33 | 3.49 | 19.11 | 0.06 | 18.67 | 0.57 |
| 15 | 16.99 | 19.98 | 2.51 | 19.31 | 4.85 | 20.57 | 1.21 | 19.32 | 2.59 |

### 5.2. Step II: Establishing Efficiency Scores: DEA, DEA-Chebyshev Model, and CCP Efficiency Evaluation

The DEA results were calculated using ProDEA, while the CCP and DEA-Chebyshev model results were calculated using MathCad. The CCP LP formulation follows [11, 18]; the upper and lower bounds for the CCP frontier are
(37a) θ_CCP^U = min{θ^U ∣ y_j0 ≤ ∑_{r=1}^q y_jr λ_r, E(θ^U x_i0 − ∑_{r=1}^q x_ir λ_r) − 1.645σ ≥ 0, ∑_{r=1}^q λ_r = 1, λ_r ≥ 0},
(37b) θ_CCP^L = min{θ^L ∣ y_j0 ≤ ∑_{r=1}^q y_jr λ_r, E(θ^L x_i0 − ∑_{r=1}^q x_ir λ_r) + 1.645σ ≥ 0, ∑_{r=1}^q λ_r = 1, λ_r ≥ 0}.

Table 3 shows the results of the efficiency analysis for the DEA and CCP models. The λ-conditions which CCP must satisfy are the same for the DCF. The value ∑_{R=1}^q λ_{r,R} for CCP is approximately the same as that for the DCF. Although DMU11 is DEA efficient, it is not CCP efficient, given that it has violated one of the two λ-conditions. Note that ∑_{R=1}^q λ̄_{r,R} = (∑_{R=1}^q λ_{r,R}^U + ∑_{R=1}^q λ_{r,R}^L)/2 in Tables 3, 4, 5, and 6.

Table 3: DEA and CCP efficiency evaluation for simulation 1.
DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.674 0.795 1.52 1.158 1 1.834 (8) 2.124 (6) 1.979 DMU2 1 1.58 0.762 1.259 1.011 1 1.31 (3) 1.18 (5) 1.245 DMU3 0.892 0 0.694 1.074 0.884 0.884 0 0.323 0.162 DMU4 1 2.56 0.69 1.277 0.984 0.984 3.458 (7) 1.785 (8) 2.621 DMU5 0.679 0 0.481 0.852 0.666 0.666 0 0 0 DMU6 0.909 0 0.706 1.089 0.898 0.898 0 0.5463 0.273 DMU7 0.882 0 0.653 1.094 0.873 0.873 0 0.5199 0.26 DMU8 0.715 0 0.538 0.885 0.711 0.711 0 0 0 DMU9 1 4.778 0.777 1.238 1.008 1 4.876 (10) 2.415 (9) 3.645 DMU10 0.787 0 0.665 0.894 0.779 0.779 0 0 0 DMU11 1 1.105 0.82 1.593 1.206 0.91 0.0996 (3) 2.37 (9) 1.235 DMU12 0.749 0 0.666 0.819 0.743 0.743 0 0 0 DMU13 0.879 0 0.772 0.962 0.867 0.867 0 0 0 DMU14 1 2.302 0.912 2.154 1.533 1 1.532 (4) 2.134 (6) 1.833 DMU15 1 1 0.924 2.906 1.915 1 1.892 (2) 1.601 (5) 1.747Table 4 DEA and CCP efficiency evaluation for simulation 2. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.222 0.803 1.702 1.252 1 1.61 (6) 1.449 (5) 1.53 DMU2 1 1 0.759 1.924 1.341 0.879 0.875 (2) 1.117 (6) 0.996 DMU3 1 4.377 0.699 1.329 1.014 1 4.205 (8) 2.998 (7) 3.602 DMU4 0.593 0 0.425 0.764 0.595 0.595 0 0 0 DMU5 0.822 0 0.615 1.012 0.814 0.814 0 0.0678 0.034 DMU6 0.848 0 0.639 1.038 0.839 0.839 0 0.3006 0.15 DMU7 0.948 0 0.73 1.164 0.947 0.947 0 0.7558 0.378 DMU8 1 2.872 0.78 1.629 1.204 1 3.263 (10) 2.305 (10) 2.784 DMU9 0.843 0 0.727 0.963 0.845 0.845 0 0 0 DMU10 0.915 0 0.779 1.243 1.011 0.889 0 0.7534 0.377 DMU11 0.917 0 0.789 1.026 0.907 0.907 0 0.0958 0.048 DMU12 1 3.074 0.847 1.64 1.243 1 2.603 (6) 2.132 (8) 2.367 DMU13 0.941 0 0.856 1.033 0.944 0.944 0 0.1427 0.071 DMU14 1 1 0.888 1.439 1.163 0.944 0.259 (2) 1.264 (4) 0.761 DMU15 1 1.455 0.922 1.514 1.218 1 2.186 (5) 1.62 (5) 1.903Table 5 DEA and CCP efficiency evaluation for simulation 3: if the data contains small nonsystematic errors, the DEA model outperforms the CCP. CCP works well under conditions where inefficiency has not been partially offset by noise. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1 0.794 1.566 1.18 1 1.136 (4) 1.283 (2) 1.20945 DMU2 1 1.901 0.731 1.603 1.167 1 3.148 (11) 1.305 (4) 2.22655 DMU3 0.845 0 0.659 1.003 0.831 0.831 0 0 0 DMU4 0.898 0 0.67 1.079 0.874 0.874 0 0 0 DMU5 1 2.986 0.728 1.235 0.982 0.982 0 (0) 2.137 (7) 1.0685 DMU6 0.779 0 0.571 0.954 0.762 0.762 0 0 0 DMU7 1 2.704 0.725 1.24 0.982 1 5.681 (10) 2.598 (7) 4.13975 DMU8 0.877 0 0.705 1.028 0.867 0.867 0 0 0 DMU9 1 1 0.791 2.408 1.599 0.896 0 (0) 1.963 (10) 0.98141 DMU10 0.779 0 0.664 0.88 0.772 0.772 0 0 0 DMU11 1 1 0.799 1.298 1.048 0.899 0 0.6928 0.3464 DMU12 0.814 0 0.674 0.926 0.8 0.8 0 0 0 DMU13 1 2.409 0.893 1.161 1.027 0.947 2.634 (8) 0.451 (3) 1.54245 DMU14 1 1 0.936 29.92 15.43 0.968 0.585 (2) 3.528 (6) 2.05655 DMU15 1 1 0.926 2.77 1.848 1 1.816 (2) 1.041 (2) 1.42865Table 6 DEA and CCP efficiency evaluation for simulation 4. 
DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.036 0.797 1.613 1.205 1 1.182 (7) 1.383 (3) 1.283 DMU2 1 1 0.726 1.294 1.01 0.863 1.954 (6) 0.911 (3) 1.432 DMU3 1 1.255 0.773 1.207 0.99 0.99 0 (0) 0.715 (4) 0.358 DMU4 0.899 0 0.667 1.129 0.898 0.898 0 0.939 0.469 DMU5 0.747 0 0.462 0.99 0.726 0.726 0 0 0 DMU6 0.6 0 0.428 0.815 0.622 0.622 0 0 0 DMU7 1 5.52 0.712 1.367 1.039 1 7.079 (13) 3.819 (10) 5.449 DMU8 0.754 0 0.57 0.981 0.775 0.775 0 0 0 DMU9 0.774 0 0.601 1.013 0.807 0.807 0 0.018 0.009 DMU10 0.818 0 0.696 0.929 0.812 0.812 0 0 0 DMU11 1 2.009 0.797 1.781 1.289 0.899 0 (0) 2.338 (9) 1.169 DMU12 0.969 0 0.829 1.079 0.954 0.954 0 0.518 0.259 DMU13 1 1.87 0.935 1.098 1.017 0.968 1.455 (3) 0.506 (3) 0.981 DMU14 1 1.31 0.912 3.899 2.406 1 1.303 (5) 2.734 (7) 2.018 DMU15 1 1 0.922 2.743 1.832 1 2.028 (3) 1.119 (2) 1.573

In these simulations, because we expect the data collected to be reasonably reliable, a less conservative model is the better choice. Conservative models tend to produce results with greater standard deviation and therefore estimate with less accuracy. The four simulations were designed to test CCP, DEA, and the DEA-Chebyshev model, to determine the accuracy of the results obtained in comparison to the EFF. The results for DEA, CCP, and DCF for all four simulations, using the values of α, can be found in Tables 3, 4, 5, 6, 8, 9, 10, and 11. The upper (38a) and lower (38b) bounds for the constraints in the DCF formulation are given as
(38a) E(θ^U x_i0 − ∑_{r=1}^q x_ir λ_r) − τ̂_α σ ≥ 0,
(38b) E(θ^L x_i0 − ∑_{r=1}^q x_ir λ_r) + τ̂_α σ ≥ 0.
When α increases, τ̂_α σ also increases, and so does the spread between the upper and lower bounds of θ.

When the degree of deviation from observed performance levels is available, the results generated using the DEA-Chebyshev model are generally a more precise approximation of the EFF than CCP, which assumes the normal distribution. In the simulations, the α values based on the deviation from the observed level of performance consistently produce the best approximations. The estimated degree of deviation due to inefficiency from the observed level of performance is formulated as follows:
(39) α ≈ P(D)/P(E) + k = P(D) + k = (1 + P(deviation))/2 + k,
where α denotes management- or expert-defined values of data deviation (if available) and k denotes a constant correction factor. In other words, k is a reflection of the users' confidence in their own expectations, where k is always greater than or equal to 0. P(deviation) is defined as the perceived excess of inputs relative to observed inputs. The numerical calculations using (39) are shown in Table 7.

Table 7: Qualitative information: determining the value for α.

| Simulation | Remark | Calculation |
|---|---|---|
| Simulation 1 | Largest % deviation from the expected level of performance of the 4 simulations | α ≈ (1 + (0.112 + 0.282))/2 + k ≈ 0.75, ∴ τ̂_α = 1.732 |
| Simulation 2 | | α ≈ (1 + (0.067 + 0.312))/2 + k ≈ 0.74, ∴ τ̂_α = 1.687 |
| Simulation 3 | Smallest % deviation from the expected performance level of the 4 simulations | α ≈ (1 + (0.118 + 0.132))/2 + k ≈ 0.675, ∴ τ̂_α = 1.441 |
| Simulation 4 | | α ≈ (1 + (0.092 + 0.23))/2 + k ≈ 0.72, ∴ τ̂_α = 1.604 |

Note that in the simulations the correction factor is set to k ≈ 0.05, which implies that the user may have underestimated by 5%; the value of k can also be zero. The deviation values are calculated as the perceived inefficiency divided by the observed values.

Table 8: DEA-Chebyshev model efficiency analysis from simulation 1 at α = 0.75.
θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.786 1.548 1.85 (8) 2.127 (6) 1.988 (0.63) 0.539 1 DMU2 0.751 1.272 1.297 (3) 1.184 (5) 1.24 (0.83) 0.368 1 DMU3 0.683 1.082 0 0.357 0.179 0.282 0.883 DMU4 0.673 1.287 3.491 (7) 1.72 (8) 2.605 (0.02) 0.434 0.98 DMU5 0.47 0.858 0 0 0 0.275 0.664 DMU6 0.696 1.096 0 0.591 0.295 0.283 0.896 DMU7 0.643 1.104 0 0.547 0.274 0.326 0.874 DMU8 0.531 0.892 0 0 0 0.255 0.712 DMU9 0.767 1.249 4.839 (10) 2.363 (9) 3.601 (0.03) 0.341 1 DMU10 0.659 0.898 0 0 0 0.169 0.779 DMU11 0.813 1.628 0.101 (3) 2.328 (9) 1.214 (0.006) 0.577 0.906 DMU12 0.662 0.822 0 0 0 0.113 0.742 DMU13 0.768 0.965 0 0 0 0.14 0.867 DMU14 0.906 2.225 1.53 (4) 2.193 (8) 1.862 (0.55) 0.932 1 DMU15 0.92 3.232 1.892 (2) 1.592 (5) 1.742 (0.75) 1.635 1Table 9 DEA-Chebyshev model efficiency analysis from simulation 2 atα=0.74. θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.793 1.739 1.787 (8) 1.45 (5) 1.619 (0.5) 0.669 1 DMU2 0.748 1.964 0.809 (1) 1.178 (7) 0.994 (0.25) 0.86 0.874 DMU3 0.684 1.342 4.027 (8) 2.832 (7) 3.429 (0.05) 0.465 1 DMU4 0.417 0.771 0 0 0 0.25 0.594 DMU5 0.604 1.02 0 0.115 0.058 0.294 0.812 DMU6 0.628 1.047 0 0.337 0.169 0.296 0.837 DMU7 0.719 1.174 0 0.764 0.382 0.322 0.947 DMU8 0.769 1.657 3.568 (10) 2.269 (10) 2.918 (0.08) 0.627 1 DMU9 0.719 0.967 0 0 0 0.176 0.843 DMU10 0.78 1.264 0 0.794 0.397 0.342 0.89 DMU11 0.782 1.03 0 0.115 0.057 0.175 0.906 DMU12 0.84 1.664 2.26 (5) 2.103 (8) 2.182 (0.83) 0.582 1 DMU13 0.852 1.037 0 0.152 0.076 0.131 0.944 DMU14 0.887 1.46 0.646 (1) 1.273 (4) 0.959 (0.08) 0.405 0.943 DMU15 0.918 1.556 1.904 (5) 1.617 (5) 1.761 (0.29) 0.452 1Table 10 DEA-Chebyshev model efficiency analysis from simulation 3 atα=0.675. θ ^ α = 0.675 U   Upper bounds θ ^ α = 0.675 L   Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.675 DMU1 0.794 1.566 1.073 (3) 1.356 (2) 1.214 (0.48) 0.4796 1 DMU2 0.731 1.603 2.528 (9) 1.47 (7) 1.999 (0.006) 0.5503 1 DMU3 0.659 1.003 0 0 0 0.213 0.833 DMU4 0.67 1.079 0 0.461 0.23 0.255 0.877 DMU5 0.728 1.235 0.377 (1) 2.111 (8) 1.244 (0.06) 0.3195 0.985 DMU6 0.571 0.954 0 0 0 0.24 0.765 DMU7 0.725 1.24 5.206 (10) 2.573 (8) 3.889 (0.008) 0.326 1 DMU8 0.705 1.028 0 0.027 0.014 0.204 0.87 DMU9 0.791 2.408 0.805 (1) 1.061 (8) 0.933 (0.005) 0.9715 0.905 DMU10 0.664 0.88 0 0 0 0.1347 0.774 DMU11 0.799 1.298 0 1.077 0.538 0.2745 0.921 DMU12 0.674 0.926 0 0 0 0.157 0.803 DMU13 0.893 1.161 2.855 (7) 1.113 (5) 1.984 (0.03) 0.1655 1 DMU14 0.936 29.92 1 (1) 2.718 (6) 1.859 (0.04) 17.971 1 DMU15 0.926 2.77 1.156 (2) 1.034 (4) 1.095 (0.37) 1.2342 1Table 11 DEA-Chebyshev model efficiency analysis from simulation 4 atα=0.725. θ ^ α = 0.725 U   Upper bounds θ ^ α = 0.725 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. 
dev(θ^) θ ^ α = 0.725 DMU1 0.8 1.605 1.207 (7) 1.377 (3) 1.292 (0.68) 0.57 1 DMU2 0.729 1.291 1.951 (6) 0.918 (3) 1.4347 (0.05) 0.398 1 DMU3 0.776 1.204 0 (0) 0.719 (4) 0.359 (0.1) 0.303 0.99 DMU4 0.67 1.126 0 0.92 0.46 0.322 0.898 DMU5 0.464 0.987 0 0 0 0.37 0.726 DMU6 0.43 0.813 0 0 0 0.271 0.622 DMU7 0.716 1.363 6.874 (12) 3.849 (10) 5.361 (0.00) 0.458 1 DMU8 0.572 0.979 0 0 0 0.288 0.775 DMU9 0.603 1.01 0 0.015 0.007 0.288 0.807 DMU10 0.697 0.928 0 0 0 0.163 0.812 DMU11 0.799 1.767 0 (0) 2.327 (9) 1.164 (0.002) 0.685 0.9 DMU12 0.831 1.077 0 0.512 0.256 0.174 0.954 DMU13 0.884 1.097 2.217 (4) 0.514 (3) 1.366 (0.06) 0.15 0.991 DMU14 0.913 3.862 1.316 (5) 2.731 (7) 2.023 (0.03) 2.085 1 DMU15 0.923 2.682 1.435 (3) 1.119 (2) 1.277 (0.29) 1.244 1

Note: in Tables 8-11, the values shown in brackets in columns 4 and 5 represent the frequency with which a DEA-efficient DMU is used as a reference unit in the DCF. Those in column 6 represent the P values for the upper and lower limits of the lambdas for the DEA-efficient units. Tables 8-11 show the efficiency scores determined under the DEA-Chebyshev model, based on the α values shown in Table 7.

### 5.3. Step III: Hypothesis Testing: Frontiers Compared

All the efficiency evaluation tools are measured against the control group to determine which of them provides the best approximation. Both CCP and DEA-Chebyshev model efficiency scores are defined in the same manner. The upper and lower bounds of the frontier determine the region where the EFF is most likely to lie, and the EFF is approximated by the DCF efficiency score θ̂.

Using the results obtained in Step II, the four simulated experimental groups are adjusted using their respective efficiency scores. The virtual DMUs are the DMUs from the four experimental groups whose inputs have been reduced according to their efficiency scores from Step II, that is, according to the contraction factor: θ for DEA, θ̂_CCP for CCP, and θ̂ for DCF.

In this step, in order to test the hypothesis, the 12 data sets of virtual DMUs are each aggregated with the control group, forming a sample size of 30 DMUs per simulation. "DMU#" denotes the control group (or "sample one") and "V.DMU#" denotes the efficient virtual units derived from the experimental group (or "sample two") using the efficiency scores generated by DEA, CCP, and the DEA-Chebyshev model, respectively. There are 12 data sets in total: three for each of the simulations (three input contraction factors per DMU, from DEA, CCP (normal), and the DEA-Chebyshev model). The inputs for the virtual DMUs calculated from each of these three methodologies for the same experimental group will differ. The sample size of 30 DMUs in each of the 12 sets results from combining the 15 error-free DMUs with the 15 virtual DMUs. These 30 DMUs are then evaluated using the ProDEA software. It is logical to use DEA for the final analysis to scrutinize the different methods, since DEA is deterministic and works perfectly in an error-free situation. The DEA results for the 4 simulations are given in Table 12.

Table 12: Deterministic efficiency results for all four simulations with an aggregate of 30 DMUs: 15 from the control group and another 15 virtual units calculated according to DEA, CCP, and DCF, respectively.
Simulation 1 Simulation 2 Simulation 3 Simulation 4 DEA CCP DCF DEA CCP DCF DEA CCP DCF DEA CCP DCF DMU1 1 1 1 1 1 1 1 1 1 1 1 1 DMU2 1 1 1 0.986 0.946 0.942 0.962 0.962 0.962 1 0.937 1 DMU3 1 1 1 0.96 0.96 0.96 1 1 1 1 0.981 1 DMU4 1 1 1 1 1 1 1 1 1 0.989 0.977 0.989 DMU5 1 1 1 1 1 1 1 1 1 0.945 0.94 0.945 DMU6 1 1 1 1 1 1 0.991 0.987 0.988 0.888 0.888 0.888 DMU7 1 1 1 1 1 1 0.965 0.965 0.965 0.901 0.885 0.885 DMU8 1 0.965 0.963 1 1 1 0.971 0.933 0.943 0.914 0.872 0.872 DMU9 0.991 0.937 0.935 1 1 1 0.968 0.917 0.931 0.918 0.863 0.863 DMU10 0.978 0.906 0.903 1 1 1 0.962 0.901 0.919 0.926 0.87 0.871 DMU11 0.966 0.882 0.878 1 1 1 0.954 0.893 0.913 0.932 0.876 0.877 DMU12 0.985 0.931 0.929 1 1 1 0.934 0.903 0.911 0.939 0.906 0.914 DMU13 0.996 0.966 0.965 1 1 1 0.914 0.909 0.912 0.949 0.932 0.939 DMU14 1 0.991 0.991 1 1 1 0.973 0.957 0.973 0.967 0.967 0.967 DMU15 1 1 1 1 1 1 1 1 1 1 1 1 V.DMU1 0.889 0.885 0.884 0.921 0.921 0.921 0.898 0.999 0.898 0.841 0.84 0.84 V.DMU2 0.86 0.86 0.86 1 1 1 1 0.987 0.993 0.938 1 0.938 V.DMU3 0.864 0.872 0.873 1 1 1 1 1 1 0.931 0.92 0.941 V.DMU4 0.929 0.944 0.948 0.976 0.972 0.974 1 0.971 0.986 0.982 0.979 0.984 V.DMU5 0.926 0.943 0.946 0.934 0.943 0.945 1 1 1 1 1 1 V.DMU6 0.915 0.927 0.928 0.955 0.966 0.968 1 0.998 1 0.999 0.963 0.964 V.DMU7 0.959 0.947 0.946 0.926 0.926 0.927 1 1 1 1 1 1 V.DMU8 0.977 0.954 0.951 1 1 1 1 1 1 1 0.929 0.93 V.DMU9 1 0.99 0.987 0.956 0.946 0.946 0.989 0.989 0.989 1 0.898 0.899 V.DMU10 0.959 0.938 0.936 0.933 0.959 0.958 1 1 1 0.989 0.958 0.959 V.DMU11 1 1 1 0.939 0.94 0.94 0.938 0.954 0.952 1 1 1 V.DMU12 0.977 0.953 0.952 0.933 0.932 0.932 0.972 0.989 0.988 0.996 0.976 0.987 V.DMU13 0.971 0.975 0.975 0.903 0.899 0.899 0.998 1 1 0.992 1 0.995 V.DMU14 0.986 0.98 0.979 0.872 0.924 0.924 0.99 1 1 1 1 1 V.DMU15 1 1 1 1 1 1 1 1 1 1 1 1

In order to determine whether the frontiers created by these models are substantially different from that of the control group (the error-free units), the rank-sum test and a statistical hypothesis test for mean differences were used.

The DEA-Chebyshev model is scrutinized using several statistical methods, which show that there is a strong relationship between the DCF and the EFF. All the statistical tools used to test the DCF against the EFF produce consistent conclusions that the corrected frontier is a good approximation of the EFF. The statistical methods used to test the DCF against the EFF are the Wilcoxon-Mann-Whitney test (the rank-sum test) and the t-test for the differences in mean values of θ, shown in Table 13. The rank-sum test is used to determine whether the virtual DMUs established by the DCF are from the same population as the DMUs in the control group; if they are, then the difference in efficiency scores between the two groups is not statistically significant. This does not imply that the EFF and the corrected frontier are exactly the same but rather that the latter is a good approximation of the former. Its results are better than those of the CCP performance evaluation method developed by Land et al. [11] and Forrester and Anderson [18].

Table 13: Hypothesis tests for mean differences of efficiency scores. Sample 1 is denoted the "control group" and sample 2 the "virtual group".
Simulation 1 Simulation 2 Simulation 3 Simulation 4 Control group Virtual group Control group Virtual group Control group Virtual group Control group Virtual group DEA Mean 0.999 0.943 0.996 0.95 0.973 0.986 0.951 0.978 Variance 0.00001 0.00187 0.00011 0.00153 0.0007 0.0009 0.0015 0.0019 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.7117 0.1166 −0.5253 −0.1409 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −3.09 −3.2146 1.3688 1.7213 t stat 5.2614 4.5917 −1.0167 −1.6501 P(T ≤ t) two tail 0.00012 0.00042 0.3266 0.1212 t critical two tail 2.145 2.145 2.145 2.145 CCP efficiency evaluation Mean 0.972 0.944 0.994 0.955 0.955 0.992 0.926 0.964 Variance 0.0016 0.0019 0.00028 0.0011 0.00176 0.00018 0.0025 0.00231 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.35661 −0.5373 −0.14 −0.5035 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −1.8873 −3.0072 2.136 2.0117 t stat 2.2373 3.334 −3.1453 −1.7383 P(T ≤ t) two tail 0.042 0.005 0.0072 0.1041 t critical two tail 2.145 2.145 2.145 2.145 DCF Mean 0.971 0.944 0.993 0.956 0.961 0.987 0.934 0.962 Variance 0.00168 0.0019 0.0003 0.0011 0.0013 0.0008 0.00304 0.00217 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.3296 −0.5296 −0.4235 −0.0966 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −1.8873 −2.9657 1.8665 1.2236 t stat 2.1038 3.2401 −1.8448 −1.4533 P(T ≤ t) two tail 0.05396 0.0059 0.08633 0.1682 t critical two tail 2.145 2.145 2.145 2.145

The rank-sum test above is used to determine whether the two samples being tested come from the same population. If they do, then we can conclude that the two frontiers are one and the same, or that they consistently overlap one another, and thus they can be assumed to be the same surface.

### 5.4. Step IV: Efficiency Scores: DEA versus DEA-Chebyshev Model and Ranking of DEA Efficient Units

There is more than one way of ranking efficient units. In the simplest (or naïve) case, empirically efficient DMUs can be ranked according to the score θ̄, calculated as the average of the upper and lower limits from the DEA-Chebyshev model.

#### 5.4.1. Naïve Ranking

Table 14 illustrates the ranking of all DMUs. The figures in bold denote the DEA-Chebyshev model efficiency scores for the DEA efficient units. All production units are ranked in descending order of efficiency according to the average of the upper and lower limits, θ̄. An anomaly in DMU14 of simulation 3 is caused by an extremely small value for Input 2. Because the LP formulations for DEA, the DEA-Chebyshev model, and CCP (normal) apply the greatest weight to the input or output that makes a DMU appear as favourable as possible, Input 2 in this case is weighted heavily. In DEA, the mathematical algorithm does not allow the efficiency score to exceed 1.00; thus, this problem is not detected. In the DEA-Chebyshev model and CCP, because efficiency scores are not restricted to 1.00, this problem arises, indicating a possible outlier. It would be advisable to remove this DMU from the analysis. In this simulation, because the errors are generated randomly, the error value for this DMU lies in the tail end of the distribution, hence creating an outlier.

Table 14: "Naïve" ranking of empirically efficient DMUs in order of declining levels of efficiency. Values in bold correspond to DEA efficient units with a score of "1".
Rank θ̄ DMU15 2.076 DMU14 1.566 DMU11 1.22 DMU1 1.167 DMU2 1.012 DMU9 1.008 Simulation 1 DMU4 0.98 DMU6 0.896 DMU3 0.883 DMU7 0.874 DMU13 0.867 DMU10 0.779 DMU12 0.742 DMU8 0.712 DMU5 0.664 DMU2 1.356 DMU1 1.266 DMU12 1.252 DMU15 1.237 DMU8 1.213 DMU14 1.173 Simulation 2 DMU10 1.022 DMU3 1.013 DMU7 0.947 DMU13 0.944 DMU11 0.906 DMU9 0.843 DMU6 0.837 DMU5 0.812 DMU4 0.594 DMU14 13.632 DMU15 1.807 DMU9 1.498 DMU1 1.157 DMU2 1.15 DMU11 1.036 Simulation 3 DMU13 1.024 DMU7 0.986 DMU5 0.985 DMU4 0.877 DMU8 0.87 DMU3 0.833 DMU12 0.803 DMU10 0.774 DMU6 0.765 DMU14 2.388 Simulation 4 DMU15 1.802 DMU11 1.283 DMU1 1.202 DMU7 1.039 DMU2 1.01 DMU13 0.991 DMU3 0.99 DMU12 0.954 DMU4 0.898 DMU10 0.812 DMU9 0.807 DMU8 0.775 DMU5 0.726 DMU6 0.622

This method of ranking is naïve because it ignores the standard deviation, which indicates the robustness of a DMU's efficiency score to possible errors and to unobserved inefficiency. It also does not distinguish between possible outliers and legitimate units.

#### 5.4.2. Ranking by Robustness of DEA-Chebyshev Model Efficiency Scores

Ranking in order of robustness begins with the efficiency score defined as θ̂. Those DMUs with θ̂ = 1 are ranked from the most robust to the least robust (from the smallest standard deviation to the largest). The standard deviation is determined using the upper and lower bounds of the efficiency scores. Then the rest of the empirically efficient units are ranked based on their respective θ̂ (using their standard deviations would give the same ranking for these units). Once all the empirically efficient units have been ranked, the remainder are ordered according to their stochastic efficiency scores, from the most efficient to the least efficient. The ranking of these inefficient units is very similar to that under the empirical frontier.

Ranking from the most efficient down, DMUs with a DEA-Chebyshev model score of θ̂ = 1 (input-oriented case) can fall into either of two categories, hyper-efficient or efficient/mildly efficient, depending on how robust they are (based on their standard deviation). DMUs that are not printed in bold are DEA-inefficient (see Table 15) and hence are ranked below those deemed empirically efficient. DEA efficient DMUs that fail to satisfy the conditions for θ̂ = 1 are given efficiency scores of at most 1.00.

Table 15: Ranking of efficient DMUs according to robustness based on their standard deviations. The DMUs in bold denote the empirically efficient DMUs.

Simulation 1 Simulation 2 Simulation 3 Simulation 4 θ ^ α = 0.75 Std. dev. θ ^ α = 0.74 Std. dev. θ ^ α = 0.675 Std. dev. θ ^ α = 0.725 Std. dev.
DMU9 1 0.34111 DMU15 1 0.45149 DMU13 1 0.16546 DMU13 1 0.15033 DMU2 1 0.36819 DMU3 1 0.46499 DMU7 1 0.32591 DMU3 1 0.30285 DMU1 1 0.53889 DMU12 1 0.5823 DMU1 1 0.47956 DMU2 1 0.39775 DMU14 1 0.93225 DMU8 1 0.62735 DMU2 1 0.55027 DMU7 1 0.45771 DMU15 1 1.63455 DMU1 1 0.66871 DMU15 1 1.23418 DMU1 1 0.56972 DMU4 0.98 0.43388 DMU7 0.947 0.3218 DMU14 1 17.971 DMU11 1 0.68462 DMU11 0.906 0.57657 DMU13 0.944 0.13124 DMU5 0.985 0.31947 DMU15 1 1.24437 DMU6 0.896 0.28298 DMU14 0.943 0.4051 DMU11 0.921 0.2745 DMU14 1 2.0849 DMU3 0.883 0.28164 DMU11 0.906 0.17515 DMU9 0.905 0.97149 DMU12 0.969 0.17444 DMU7 0.874 0.32605 DMU10 0.89 0.34217 DMU4 0.877 0.25512 DMU4 0.899 0.32244 DMU13 0.867 0.13958 DMU2 0.874 0.85998 DMU8 0.87 0.20386 DMU10 0.818 0.16313 DMU10 0.779 0.16935 DMU9 0.843 0.17572 DMU3 0.833 0.21305 DMU9 0.774 0.28765 DMU12 0.742 0.11335 DMU6 0.837 0.29614 DMU12 0.803 0.15726 DMU8 0.754 0.28786 DMU8 0.712 0.25534 DMU5 0.812 0.2938 DMU10 0.774 0.1347 DMU5 0.747 0.3701 DMU5 0.664 0.2745 DMU4 0.594 0.25039 DMU6 0.765 0.24013 DMU6 0.6 0.27103

### 5.5. Further Analysis

Additional analyses were conducted by taking the observed DMUs in each simulation and evaluating them against the EFF, DEA, CCP, and DEA-Chebyshev model results. If the DCF is a good approximation of the EFF, then the efficiency scores for the observed DMUs should not be substantially different from the efficiency scores generated by the EFF. The same holds for CCP.

#### 5.5.1. Observed DMUs Evaluated against the EFF, CCP, and DCF

The efficiency scores of the observed DMUs from the experimental groups determined by the EFF (denoted "exp.grp+EFF") provide a benchmark for evaluating the DEA frontier ("exp.grp+DEA"), the CCP (normal) frontier ("exp.grp+CCP"), and the corrected frontier ("exp.grp+DCF"). A comparison is drawn between the efficiency scores of the experimental groups generated by the four frontiers.

The hypothesis is that the means of the efficiency scores for the 15 observed units in the "exp.grp+EFF" group and in the "exp.grp+DCF" group should be approximately the same (i.e., the difference is not statistically significant). From Table 16, the rank-sum test and the t-test at α = 0.05 show that the difference is not statistically significant in simulations 3 and 4; hence, the corrected frontier is a good approximation of the EFF. Although the hypothesis tests for simulations 1 and 2 indicate some level of significance, the results generated by the DCF model are still superior to those of the CCP and the DEA.

Table 16: Statistical analysis for frontier comparisons. Observed DMUs are evaluated against the three different frontiers to determine their efficiency scores, which are calculated using the normal DEA model, and to determine whether the efficiency scores for each group are substantially different when comparing EFF to DEA, EFF to DCF, and EFF to CCP.
Exp.grp+EFF Exp.grp+DEA Exp.grp+EFF Exp.grp+DCF Exp.grp+EFF Exp.grp+CCP Simulation 1 Mean 0.852 0.899 0.852 0.885 0.852 0.887 Variance 0.01396 0.01355 0.01396 0.01389 0.01396 0.01376 Observations 15 15 15 15 15 15 Pearson correlation 0.922 0.87986 0.881 Hypothesized mean difference 0 0 0 Df 14 14 14 Rank-sum test 1.2858 0.9125 0.9125 t stat −3.9644 −2.2335 −2.3537 P(T ≤ t) two tail 0.0014 0.04235 0.03372 t critical two tail 2.145 2.145 2.145 Simulation 2 Mean 0.875 0.922 0.875 0.908 0.875 0.908 Variance 0.01272 0.01242 0.0127 0.0115 0.0127 0.0115 Observations 15 15 15 15 15 15 Pearson correlation 0.94071 0.8875 0.8918 Hypothesized mean difference 0 0 0 Df 14 14 14 Rank-sum test 1.3066 0.9747 1.0162 t stat −4.6604 −2.3984 −2.487 P(T ≤ t) two tail 0.00037 0.031 0.02611 t critical two tail 2.145 2.145 2.145 Simulation 3 Mean 0.92 0.933 0.92 0.916 0.92 0.902 Variance 0.00879 0.00815 0.00879 0.00806 0.00879 0.0077 Observations 15 15 15 15 15 15 Pearson correlation 0.95301 0.8804 0.9082 Hypothesized mean difference 0 0 0 Df 14 14 14 Rank-sum test 0.7259 −0.0622 −0.6014 t stat −1.8125 0.29423 1.68719 P(T ≤ t) two tail 0.0914 0.7729 0.1137 t critical two tail 2.145 2.145 2.145 Simulation 4 Mean 0.882 0.904 0.882 0.887 0.882 0.868 Variance 0.0153 0.0173 0.0153 0.0184 0.0153 0.0162 Observations 15 15 15 15 15 15 Pearson correlation 0.9425 0.905 0.8996 Hypothesized mean difference 0 0 0 Df 14 14 14 Rank-sum test 0.8503 0.1452 −0.394 t stat −1.9248 −0.312 1.0043 P(T ≤ t) two tail 0.0748 0.7599 −0.312 t critical two tail 2.145 2.145 2.145

Table 16 shows the statistical tests used to compare the DEA, CCP, and DCF against the EFF. The Pearson correlation, which ranges from −1 to 1 inclusive, reflects the extent of the linear relationship between two sets of data. The P values, the rank-sum test, and the Pearson correlation observed for all four simulations indicate that, in general, the DCF outperforms DEA and CCP (which assumed the normal distribution).

Outliers have a tendency to exhibit large standard deviations, which translate into large confidence limits. Consequently, the reason for establishing DCF and CCP scores is to reduce the likelihood of a virtual unit becoming an outlier. Also, the results generated by stochastic models (as opposed to deterministic ones) such as the DCF and CCP can be greatly affected, because their efficiency scores are generally not restricted to 1.00. In reality, outliers are not always easily detected. If the data set contains some outliers, the stochastic models may not perform well. DMU14 in Simulation 3 is an example of this problem. It can be solved either by removing the outliers or by imposing weight restrictions. However, weight restrictions are not within the scope of this paper.
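As a sketch of the frontier-comparison step, the two tests reported in Tables 13 and 16 can be run with scipy.stats. The score vectors below are the simulation 1 DEA columns of Table 12 (control group and virtual group); note that scipy reports the Mann-Whitney U statistic rather than the normal-approximation z values tabulated above, so the statistics need not match exactly.

```python
from scipy import stats

# Simulation 1, DEA columns of Table 12: 15 control and 15 virtual scores.
control = [1, 1, 1, 1, 1, 1, 1, 1, 0.991, 0.978, 0.966, 0.985, 0.996, 1, 1]
virtual = [0.889, 0.86, 0.864, 0.929, 0.926, 0.915, 0.959, 0.977,
           1, 0.959, 1, 0.977, 0.971, 0.986, 1]

# Wilcoxon-Mann-Whitney rank-sum test: same population?
u_stat, p_rank = stats.mannwhitneyu(control, virtual, alternative="two-sided")

# Paired t-test for mean differences (df = 14, two-tailed), as in Table 13.
t_stat, p_t = stats.ttest_rel(control, virtual)

print(f"rank-sum p = {p_rank:.4f}, t = {t_stat:.3f}, p = {p_t:.5f}")
```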
Output (y) is then calculated using the equation shown in (35) from a discrete set of inputs where β0, β1, and β2 have been arbitrarily defined and are fixed for the all groups (control and experimental). The control group is one that contains no measurement errors or statistical errors and no inefficient DMUs. It will be the construct of the EFF.Table 1 Control group: the error-free production units. DMU Output Input 1 Input 2 1 12.55 2 12 2 10.43 3 8 3 9.68 4 6 4 9.53 5 4.8 5 9.68 6 4 6 10.01 7 3.43 7 10.43 8 3 8 11.45 10 2.4 9 11.99 11 2.18 10 12.55 12 2 11 13.12 13 1.85 12 14.25 15 1.6 13 15.36 17 1.41 14 16.46 19 1.26 15 16.99 20 1.2The experimental groups are generated from the control group with the error components. Their outputs are the same as the control groups and are held deterministic, while inputs are stochastic containing confounded measurement errors distributed as half-normal nonzero inefficiencyN+(μ,σ2) and statistical noise N(0,1)(36a)y~β0+β1lnx^12+β2lnx^22. In (36a), inputs are confounded with random errors and inefficiency: (36b)x^i=xi+εi,where ε=v+u. Variability in the inputs across simulations is produced by different arbitrarily chosen μ and σ for the inefficiency component which is distributed half normally;   u~N+(μ,σ2) for each simulation. Table 2 shows the details.Table 2 Four experimental groups with variations and inefficiencies introduced to both inputs while keeping outputs constant. DMU Output Experimental Grp 1 Experimental Grp 2 Experimental Grp 3 Experimental Grp 4 Input 1 Input 2 Input 1 Input 2 Input 1 Input 2 Input 1 Input 2 1 12.55 3.16 12.5 2.34 12.85 2.91 12.6 2.68 13.92 2 10.43 3.69 9.08 1.6 10.07 2.34 8.23 3.32 8.34 3 9.68 4.88 8.41 3.58 5.97 6.1 6.43 4.25 6.53 4 9.53 5.27 5.31 7.28 9.43 7.84 3.96 6.44 4.25 5 9.68 8.39 7.43 6.98 5.9 7.64 2.96 9.93 3.55 6 10.01 9.17 3.8 7.04 5.57 9.6 4.01 10.46 4.98 7 10.43 10.92 3.11 9.6 3.26 7.71 2.9 6.29 2.95 8 11.45 13.14 3.95 11.41 1.88 10.38 3.14 11.71 3.05 9 11.99 9.33 2.85 11.53 4.75 13.88 0.59 13.25 2.47 10 12.55 10.38 7.43 13.94 2.46 12.55 4.44 12.19 3.73 11 13.12 12.67 1.69 12.46 4.79 13.53 1.1 13.24 1.1 12 14.25 17.59 4.8 15.71 2.09 16.57 2.27 14.14 2.08 13 15.36 17.35 4.23 17.33 4.44 15.35 1.38 15.47 2.25 14 16.46 19.13 1.4 20.33 3.49 19.11 0.06 18.67 0.57 15 16.99 19.98 2.51 19.31 4.85 20.57 1.21 19.32 2.59 ## 5.2. Step II: Establishing Efficiency Scores: DEA, DEA-Chebyshev Model, and CCP Efficiency Evaluation The DEA results were calculated using ProDEA, while CCP and DEA-Chebyshev model results were calculated using MathCad. The CCP LP formulation follows that from [11, 18], the upper and lower bounds for the CCP frontier are(37a)θCCPU=min{θU∣yj0≤∑r=1qyjrλr,E(θLxi0-∑r=1qxirλr)-1.645σ≥0,∑r=1qλr=1,λr≥0},(37b)θCCPL=min{(θLxi0-∑r=1qxirλr)θL∣yj0≤∑r=1qyjrλr,E(θLxi0-∑r=1qxirλr)+1.645σ≥0,∑r=1qλr=1,λr≥0}.Table3 shows the results of the efficiency analysis for the DEA and CCP models. The λ-conditions which CCP must satisfy will be the same for the DCF. The value, ∑R=1qλr,R, for CCP is approximately the same as that for the DCF. Although DMU11 is DEA efficient, it is not CCP efficient given that is has violated one of the two λ-conditions. Note that ∑R=1qλ-r,R=(∑R=1qλr,RU+∑R=1qλr,RL)/2 shown in Tables 3, 4, 5, and 6.Table 3 DEA and CCP efficiency evaluation for simulation 1. 
DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.674 0.795 1.52 1.158 1 1.834 (8) 2.124 (6) 1.979 DMU2 1 1.58 0.762 1.259 1.011 1 1.31 (3) 1.18 (5) 1.245 DMU3 0.892 0 0.694 1.074 0.884 0.884 0 0.323 0.162 DMU4 1 2.56 0.69 1.277 0.984 0.984 3.458 (7) 1.785 (8) 2.621 DMU5 0.679 0 0.481 0.852 0.666 0.666 0 0 0 DMU6 0.909 0 0.706 1.089 0.898 0.898 0 0.5463 0.273 DMU7 0.882 0 0.653 1.094 0.873 0.873 0 0.5199 0.26 DMU8 0.715 0 0.538 0.885 0.711 0.711 0 0 0 DMU9 1 4.778 0.777 1.238 1.008 1 4.876 (10) 2.415 (9) 3.645 DMU10 0.787 0 0.665 0.894 0.779 0.779 0 0 0 DMU11 1 1.105 0.82 1.593 1.206 0.91 0.0996 (3) 2.37 (9) 1.235 DMU12 0.749 0 0.666 0.819 0.743 0.743 0 0 0 DMU13 0.879 0 0.772 0.962 0.867 0.867 0 0 0 DMU14 1 2.302 0.912 2.154 1.533 1 1.532 (4) 2.134 (6) 1.833 DMU15 1 1 0.924 2.906 1.915 1 1.892 (2) 1.601 (5) 1.747Table 4 DEA and CCP efficiency evaluation for simulation 2. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.222 0.803 1.702 1.252 1 1.61 (6) 1.449 (5) 1.53 DMU2 1 1 0.759 1.924 1.341 0.879 0.875 (2) 1.117 (6) 0.996 DMU3 1 4.377 0.699 1.329 1.014 1 4.205 (8) 2.998 (7) 3.602 DMU4 0.593 0 0.425 0.764 0.595 0.595 0 0 0 DMU5 0.822 0 0.615 1.012 0.814 0.814 0 0.0678 0.034 DMU6 0.848 0 0.639 1.038 0.839 0.839 0 0.3006 0.15 DMU7 0.948 0 0.73 1.164 0.947 0.947 0 0.7558 0.378 DMU8 1 2.872 0.78 1.629 1.204 1 3.263 (10) 2.305 (10) 2.784 DMU9 0.843 0 0.727 0.963 0.845 0.845 0 0 0 DMU10 0.915 0 0.779 1.243 1.011 0.889 0 0.7534 0.377 DMU11 0.917 0 0.789 1.026 0.907 0.907 0 0.0958 0.048 DMU12 1 3.074 0.847 1.64 1.243 1 2.603 (6) 2.132 (8) 2.367 DMU13 0.941 0 0.856 1.033 0.944 0.944 0 0.1427 0.071 DMU14 1 1 0.888 1.439 1.163 0.944 0.259 (2) 1.264 (4) 0.761 DMU15 1 1.455 0.922 1.514 1.218 1 2.186 (5) 1.62 (5) 1.903Table 5 DEA and CCP efficiency evaluation for simulation 3: if the data contains small nonsystematic errors, the DEA model outperforms the CCP. CCP works well under conditions where inefficiency has not been partially offset by noise. DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1 0.794 1.566 1.18 1 1.136 (4) 1.283 (2) 1.20945 DMU2 1 1.901 0.731 1.603 1.167 1 3.148 (11) 1.305 (4) 2.22655 DMU3 0.845 0 0.659 1.003 0.831 0.831 0 0 0 DMU4 0.898 0 0.67 1.079 0.874 0.874 0 0 0 DMU5 1 2.986 0.728 1.235 0.982 0.982 0 (0) 2.137 (7) 1.0685 DMU6 0.779 0 0.571 0.954 0.762 0.762 0 0 0 DMU7 1 2.704 0.725 1.24 0.982 1 5.681 (10) 2.598 (7) 4.13975 DMU8 0.877 0 0.705 1.028 0.867 0.867 0 0 0 DMU9 1 1 0.791 2.408 1.599 0.896 0 (0) 1.963 (10) 0.98141 DMU10 0.779 0 0.664 0.88 0.772 0.772 0 0 0 DMU11 1 1 0.799 1.298 1.048 0.899 0 0.6928 0.3464 DMU12 0.814 0 0.674 0.926 0.8 0.8 0 0 0 DMU13 1 2.409 0.893 1.161 1.027 0.947 2.634 (8) 0.451 (3) 1.54245 DMU14 1 1 0.936 29.92 15.43 0.968 0.585 (2) 3.528 (6) 2.05655 DMU15 1 1 0.926 2.77 1.848 1 1.816 (2) 1.041 (2) 1.42865Table 6 DEA and CCP efficiency evaluation for simulation 4. 
DEAθ ∑ R = 1 q λ r , R CCP (U)θCCPU CCP (L)θCCPL Averageθ-CCP CCPθ^CCP ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R DMU1 1 1.036 0.797 1.613 1.205 1 1.182 (7) 1.383 (3) 1.283 DMU2 1 1 0.726 1.294 1.01 0.863 1.954 (6) 0.911 (3) 1.432 DMU3 1 1.255 0.773 1.207 0.99 0.99 0 (0) 0.715 (4) 0.358 DMU4 0.899 0 0.667 1.129 0.898 0.898 0 0.939 0.469 DMU5 0.747 0 0.462 0.99 0.726 0.726 0 0 0 DMU6 0.6 0 0.428 0.815 0.622 0.622 0 0 0 DMU7 1 5.52 0.712 1.367 1.039 1 7.079 (13) 3.819 (10) 5.449 DMU8 0.754 0 0.57 0.981 0.775 0.775 0 0 0 DMU9 0.774 0 0.601 1.013 0.807 0.807 0 0.018 0.009 DMU10 0.818 0 0.696 0.929 0.812 0.812 0 0 0 DMU11 1 2.009 0.797 1.781 1.289 0.899 0 (0) 2.338 (9) 1.169 DMU12 0.969 0 0.829 1.079 0.954 0.954 0 0.518 0.259 DMU13 1 1.87 0.935 1.098 1.017 0.968 1.455 (3) 0.506 (3) 0.981 DMU14 1 1.31 0.912 3.899 2.406 1 1.303 (5) 2.734 (7) 2.018 DMU15 1 1 0.922 2.743 1.832 1 2.028 (3) 1.119 (2) 1.573In this simulation, because we do expect data collected to be reasonably reliable, a less conservative model would be a better choice. Conservative models tend to provide results with greater standard deviation and therefore produce an estimate with less accuracy. The four simulations were designed to test CCP, DEA, and DEA-Chebyshev model to determine the accuracy of the results obtained in comparison to the EFF. The results for DEA, CCP, and DCF for all four simulations using the values ofα can be found in Tables 3, 4, 5, 6, 8, 9, 10, and 11. The upper (38a) and lower (38b) bounds for the constraints in the DCF formulation are given as(38a)E(θUxi0-∑r=1qxirλr)-τ^ασ≥0,(38b)E(θLxi0-∑r=1qxirλr)+τ^ασ≥0.When α increases, τ^ασ also increases and so will the spread between the upper and lower bounds of θ.When the degree of deviation from observed performance levels is available, the results generated using DEA-Chebyshev model are generally a more precise approximation of the EFF compared to CCP, which assumes the normal distribution. From the simulations, it has been shown that the alpha values based on the deviation from the observed level of performance consistently produce the best approximations. The estimated degree of deviation due to inefficiency from the observed level of performance is formulated as follows:(39)α~P(D)P(E)+k=P(D)+k=1+P(deviation)2+k, where α denotes management or expert defined values of data deviation (if available) and “k” denotes a constant correction factor. In other words, it is a reflection of the users’ confidence of their own expectations where “k” will always be greater than or equal to “0.” P(deviation) is defined to be the perceived excess of inputs to observed inputs. The numerical calculations using (39) are shown in Table 7.Table 7 Qualitative information: determining the value forα. Simulation 1 Largest % deviation from the expected level of performance of the  4 simulations α ~ 1 + ( 0.112 + 0.282 ) 2 + k ~ 0.75 ∴ τ ^ α = 1.732 Simulation 2 α ~ 1 + ( 0.067 + 0.312 ) 2 + k ~ 0.74 ∴ τ ^ α = 1.687 Simulation 3 Smallest % deviation from the expected performance level of the  4 simulations α ~ 1 + ( 0.118 + 0.132 ) 2 + k ~ 0.675 ∴ τ ^ α = 1.441 Simulation 4 α ~ 1 + ( 0.092 + 0.23 ) 2 + k ~ 0.72 ∴ τ ^ α = 1.604 Note that in the simulations, the correction factor is set tok~0.05 which implies that the user may have underestimated by 5%. Note that the value for k can be zero. The values are calculated as the perceived inefficiency divided by the observed values.Table 8 DEA-Chebyshev model efficiency analysis from simulation 1 atα=0.75. 
θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.786 1.548 1.85 (8) 2.127 (6) 1.988 (0.63) 0.539 1 DMU2 0.751 1.272 1.297 (3) 1.184 (5) 1.24 (0.83) 0.368 1 DMU3 0.683 1.082 0 0.357 0.179 0.282 0.883 DMU4 0.673 1.287 3.491 (7) 1.72 (8) 2.605 (0.02) 0.434 0.98 DMU5 0.47 0.858 0 0 0 0.275 0.664 DMU6 0.696 1.096 0 0.591 0.295 0.283 0.896 DMU7 0.643 1.104 0 0.547 0.274 0.326 0.874 DMU8 0.531 0.892 0 0 0 0.255 0.712 DMU9 0.767 1.249 4.839 (10) 2.363 (9) 3.601 (0.03) 0.341 1 DMU10 0.659 0.898 0 0 0 0.169 0.779 DMU11 0.813 1.628 0.101 (3) 2.328 (9) 1.214 (0.006) 0.577 0.906 DMU12 0.662 0.822 0 0 0 0.113 0.742 DMU13 0.768 0.965 0 0 0 0.14 0.867 DMU14 0.906 2.225 1.53 (4) 2.193 (8) 1.862 (0.55) 0.932 1 DMU15 0.92 3.232 1.892 (2) 1.592 (5) 1.742 (0.75) 1.635 1Table 9 DEA-Chebyshev model efficiency analysis from simulation 2 atα=0.74. θ ^ α = 0.75 U  Upper bounds θ ^ α = 0.75 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.75 DMU1 0.793 1.739 1.787 (8) 1.45 (5) 1.619 (0.5) 0.669 1 DMU2 0.748 1.964 0.809 (1) 1.178 (7) 0.994 (0.25) 0.86 0.874 DMU3 0.684 1.342 4.027 (8) 2.832 (7) 3.429 (0.05) 0.465 1 DMU4 0.417 0.771 0 0 0 0.25 0.594 DMU5 0.604 1.02 0 0.115 0.058 0.294 0.812 DMU6 0.628 1.047 0 0.337 0.169 0.296 0.837 DMU7 0.719 1.174 0 0.764 0.382 0.322 0.947 DMU8 0.769 1.657 3.568 (10) 2.269 (10) 2.918 (0.08) 0.627 1 DMU9 0.719 0.967 0 0 0 0.176 0.843 DMU10 0.78 1.264 0 0.794 0.397 0.342 0.89 DMU11 0.782 1.03 0 0.115 0.057 0.175 0.906 DMU12 0.84 1.664 2.26 (5) 2.103 (8) 2.182 (0.83) 0.582 1 DMU13 0.852 1.037 0 0.152 0.076 0.131 0.944 DMU14 0.887 1.46 0.646 (1) 1.273 (4) 0.959 (0.08) 0.405 0.943 DMU15 0.918 1.556 1.904 (5) 1.617 (5) 1.761 (0.29) 0.452 1Table 10 DEA-Chebyshev model efficiency analysis from simulation 3 atα=0.675. θ ^ α = 0.675 U   Upper bounds θ ^ α = 0.675 L   Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. dev(θ^) θ ^ α = 0.675 DMU1 0.794 1.566 1.073 (3) 1.356 (2) 1.214 (0.48) 0.4796 1 DMU2 0.731 1.603 2.528 (9) 1.47 (7) 1.999 (0.006) 0.5503 1 DMU3 0.659 1.003 0 0 0 0.213 0.833 DMU4 0.67 1.079 0 0.461 0.23 0.255 0.877 DMU5 0.728 1.235 0.377 (1) 2.111 (8) 1.244 (0.06) 0.3195 0.985 DMU6 0.571 0.954 0 0 0 0.24 0.765 DMU7 0.725 1.24 5.206 (10) 2.573 (8) 3.889 (0.008) 0.326 1 DMU8 0.705 1.028 0 0.027 0.014 0.204 0.87 DMU9 0.791 2.408 0.805 (1) 1.061 (8) 0.933 (0.005) 0.9715 0.905 DMU10 0.664 0.88 0 0 0 0.1347 0.774 DMU11 0.799 1.298 0 1.077 0.538 0.2745 0.921 DMU12 0.674 0.926 0 0 0 0.157 0.803 DMU13 0.893 1.161 2.855 (7) 1.113 (5) 1.984 (0.03) 0.1655 1 DMU14 0.936 29.92 1 (1) 2.718 (6) 1.859 (0.04) 17.971 1 DMU15 0.926 2.77 1.156 (2) 1.034 (4) 1.095 (0.37) 1.2342 1Table 11 DEA-Chebyshev model efficiency analysis from simulation 4 atα=0.725. θ ^ α = 0.725 U   Upper bounds θ ^ α = 0.725 L  Lower bounds ∑ R = 1 q λ r , R U ∑ R = 1 q λ r , R L ∑ R = 1 q λ - r , R St. 
dev(θ^) θ ^ α = 0.725 DMU1 0.8 1.605 1.207 (7) 1.377 (3) 1.292 (0.68) 0.57 1 DMU2 0.729 1.291 1.951 (6) 0.918 (3) 1.4347 (0.05) 0.398 1 DMU3 0.776 1.204 0 (0) 0.719 (4) 0.359 (0.1) 0.303 0.99 DMU4 0.67 1.126 0 0.92 0.46 0.322 0.898 DMU5 0.464 0.987 0 0 0 0.37 0.726 DMU6 0.43 0.813 0 0 0 0.271 0.622 DMU7 0.716 1.363 6.874 (12) 3.849 (10) 5.361 (0.00) 0.458 1 DMU8 0.572 0.979 0 0 0 0.288 0.775 DMU9 0.603 1.01 0 0.015 0.007 0.288 0.807 DMU10 0.697 0.928 0 0 0 0.163 0.812 DMU11 0.799 1.767 0 (0) 2.327 (9) 1.164 (0.002) 0.685 0.9 DMU12 0.831 1.077 0 0.512 0.256 0.174 0.954 DMU13 0.884 1.097 2.217 (4) 0.514 (3) 1.366 (0.06) 0.15 0.991 DMU14 0.913 3.862 1.316 (5) 2.731 (7) 2.023 (0.03) 2.085 1 DMU15 0.923 2.682 1.435 (3) 1.119 (2) 1.277 (0.29) 1.244 1 Note: In Tables8–11, the values shown in columns 4 and 5 in brackets represent the frequency with which a DEA-efficient DMU is used as a reference unit in DCF. Those in column 6 represent the P values for the upper and lower limits for the lambdas for the DEA-efficient units.The Tables8-11 show efficiency scores determined under DEA-Chebyshev model, based on the α-values shown in Table 7. ## 5.3. Step III: Hypothesis Testing: Frontiers Compared All the efficiency evaluation tools will be measured against the control group to determine which of these would provide the best approximation method. Both CCP and DEA-Chebyshev model efficiency scores are defined in the same manner. The upper and lower bounds of the frontier determine the region where the EFF may likely be and is approximated by the DCF efficiency score,θ^.Using the results obtained in Step II, the four simulated experimental groups are adjusted using their respectively efficiency scores. The virtual DMUs are the DMUs from the four experimental groups in which their inputs have been reduced according to their efficiency scores from Step II, according to the contraction factor,θ for DEA, θ^CCP for CCP, and θ^ for DCF.In this step, in order to test the hypothesis, the 12 data sets of virtual DMUs are each aggregated with the control group, forming a sample size of 30 DMUs per simulation. “DMU#” denotes the control group (or “sample one”) and “V.DMU#” denotes the efficient virtual units derived from the experimental group (or “sample two”) using the efficiency scores generated by DEA, CCP, and DEA-Chebyshev model, respectively. There are 12 data sets in total: three for each of the simulations (three input contraction factors per DMU, from DEA, CCP (normal), and DEA-Chebyshev model). The inputs for the virtual DMUs calculated from each of these three methodologies for the same experimental group will be different. The sample size of 30 DMUs in each of the 12 sets is a result of combining the 15 error-free DMUs with the 15 virtual DMUs. These 30 DMUs are then evaluated using ProDEA (software). It is logical to use DEA for our final analysis to scrutinize the different methods since this is a deterministic method, which would work perfectly in an error-free situation. The DEA results for the 4 simulations are given in Table12.Table 12 Deterministic efficiency results for all four simulations with an aggregate of 30 DMUs; 15 from the control group and another 15 virtual units calculated according to CCP and DEA, respectively. 
Simulation 1 Simulation 2 Simulation 3 Simulation 4 DEA CCP DCF DEA CCP DCF DEA CCP DCF DEA CCP DCF DMU1 1 1 1 1 1 1 1 1 1 1 1 1 DMU2 1 1 1 0.986 0.946 0.942 0.962 0.962 0.962 1 0.937 1 DMU3 1 1 1 0.96 0.96 0.96 1 1 1 1 0.981 1 DMU4 1 1 1 1 1 1 1 1 1 0.989 0.977 0.989 DMU5 1 1 1 1 1 1 1 1 1 0.945 0.94 0.945 DMU6 1 1 1 1 1 1 0.991 0.987 0.988 0.888 0.888 0.888 DMU7 1 1 1 1 1 1 0.965 0.965 0.965 0.901 0.885 0.885 DMU8 1 0.965 0.963 1 1 1 0.971 0.933 0.943 0.914 0.872 0.872 DMU9 0.991 0.937 0.935 1 1 1 0.968 0.917 0.931 0.918 0.863 0.863 DMU10 0.978 0.906 0.903 1 1 1 0.962 0.901 0.919 0.926 0.87 0.871 DMU11 0.966 0.882 0.878 1 1 1 0.954 0.893 0.913 0.932 0.876 0.877 DMU12 0.985 0.931 0.929 1 1 1 0.934 0.903 0.911 0.939 0.906 0.914 DMU13 0.996 0.966 0.965 1 1 1 0.914 0.909 0.912 0.949 0.932 0.939 DMU14 1 0.991 0.991 1 1 1 0.973 0.957 0.973 0.967 0.967 0.967 DMU15 1 1 1 1 1 1 1 1 1 1 1 1 V.DMU1 0.889 0.885 0.884 0.921 0.921 0.921 0.898 0.999 0.898 0.841 0.84 0.84 V.DMU2 0.86 0.86 0.86 1 1 1 1 0.987 0.993 0.938 1 0.938 V.DMU3 0.864 0.872 0.873 1 1 1 1 1 1 0.931 0.92 0.941 V.DMU4 0.929 0.944 0.948 0.976 0.972 0.974 1 0.971 0.986 0.982 0.979 0.984 V.DMU5 0.926 0.943 0.946 0.934 0.943 0.945 1 1 1 1 1 1 V.DMU6 0.915 0.927 0.928 0.955 0.966 0.968 1 0.998 1 0.999 0.963 0.964 V.DMU7 0.959 0.947 0.946 0.926 0.926 0.927 1 1 1 1 1 1 V.DMU8 0.977 0.954 0.951 1 1 1 1 1 1 1 0.929 0.93 V.DMU9 1 0.99 0.987 0.956 0.946 0.946 0.989 0.989 0.989 1 0.898 0.899 V.DMU10 0.959 0.938 0.936 0.933 0.959 0.958 1 1 1 0.989 0.958 0.959 V.DMU11 1 1 1 0.939 0.94 0.94 0.938 0.954 0.952 1 1 1 V.DMU12 0.977 0.953 0.952 0.933 0.932 0.932 0.972 0.989 0.988 0.996 0.976 0.987 V.DMU13 0.971 0.975 0.975 0.903 0.899 0.899 0.998 1 1 0.992 1 0.995 V.DMU14 0.986 0.98 0.979 0.872 0.924 0.924 0.99 1 1 1 1 1 V.DMU15 1 1 1 1 1 1 1 1 1 1 1 1In order to determine if the frontiers created by these models are substantially different from that of the control group (or the error-free units), the rank-sum-test and statistical hypothesis test for mean differences were used.The DEA-Chebyshev model is scrutinized using several statistical methods, which show that there is a strong relationship between the DCF and the EFF. All the statistical tools used to test the DCF against the EFF have produced consistent conclusions that thecorrected frontier is a good approximation of the EFF. The statistical methods used to test the DCF versus the EFF are the Wilcoxon-Mann-Whitney test (or the rank-sum test) and the t-test for the differences in mean values of θ shown in Table 13. The rank-sum test is used to determine if the virtual DMUs established by the DCF are from the same population as that of the DMUs in the control group; if they are, then the difference in efficiency scores of both groups will not be statistically significant. This does not imply that the EFF and the corrected frontier are exactly the same but rather that the latter is a good approximation of the former. Its results are better than that of the CCP performance evaluation method developed by Land et al. [11] and Forrester and Anderson [18].Table 13 Hypothesis tests for mean differences of efficiency scores. Sample 1 is denoted as the “Control group” and sample 2 is denoted as the “Virtual group”. 
Simulation 1 Simulation 2 Simulation 3 Simulation 4 Control group Virtual group Control group Virtual group Control group Virtual group Control group Virtual group DEA Mean 0.999 0.943 0.996 0.95 0.973 0.986 0.951 0.978 Variance 0.00001 0.00187 0.00011 0.00153 0.0007 0.0009 0.0015 0.0019 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.7117 0.1166 - 0.5253 - 0.1409 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −3.09 −3.2146 1.3688 1.7213 t stat 5.2614 4.5917 - 1.0167 - 1.6501 P(T ≤ t)  two tail 0.00012 0.00042 0.3266 0.1212 t critical two tail 2.145 2.145 2.145 2.145 CCP efficiency evaluation Mean 0.972 0.944 0.994 0.955 0.955 0.992 0.926 0.964 Variance 0.0016 0.0019 0.00028 0.0011 0.00176 0.00018 0.0025 0.00231 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.35661 - 0.5373 - 0.14 - 0.5035 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −1.8873 −3.0072 2.136 2.0117 t stat 2.2373 3.334 - 3.1453 - 1.7383 P(T ≤ t) two tail 0.042 0.005 0.0072 0.1041 t critical two tail 2.145 2.145 2.145 2.145 DCF Mean 0.971 0.944 0.993 0.956 0.961 0.987 0.934 0.962 Variance 0.00168 0.0019 0.0003 0.0011 0.0013 0.0008 0.00304 0.00217 Observations 15 15 15 15 15 15 15 15 Pearson correlation 0.3296 - 0.5296 - 0.4235 - 0.0966 Hypothesized mean difference 0 0 0 0 Df 14 14 14 14 Rank-sum test −1.8873 −2.9657 1.8665 1.2236 t stat 2.1038 3.2401 - 1.8448 - 1.4533 P(T ≤ t) two tail 0.05396 0.0059 0.08633 0.1682 t critical two tail 2.145 2.145 2.145 2.145 TheRank-sum test shown previously is used to determine if the two samples being tested are of the same population. If they are of the same population, then we can conclude that the two frontiers for both the samples respectively, are one, and the same or that they consistently overlap one another, thus they can be assumed to be of the same surface. ## 5.4. Step IV: Efficiency Scores: DEA versus DEA-Chebyshev Model and Ranking of DEA Efficient Units There can be more than one way of ranking efficient units. In the simplest (or naïve) case, empirically efficient DMUs can be ranked according to the scoreθ- calculated as an average of the upper and lower limits from the DEA-Chebyshev model. ### 5.4.1. Naïve Ranking Table14 illustrates the ranking of all DMUs. The figures in bold denote the DEA-Chebyshev model efficiency scores for the DEA efficient units. All production units are ranked in descending order of efficiency according to the average of the upper and lower limits, θ-. An anomaly in DMU14 of simulation 3 is caused by an extremely small value for Input 2. Because the LP formulation for DEA, DEA-Chebyshev model, and CCP (normal) applies the greatest weight to the input or output in order to make a DMU appear as favourable as possible, Input 2 in this case is weighted heavily. In DEA, the mathematical algorithm does not allow the efficiency score to exceed 1.00; thus, this problem is not detected. In DEA-Chebyshev model and CCP, because efficiency scores are not restricted to 1.00, this problem arises indicating a possible outlier. It would be advisable to remove this DMU from the analysis. In this simulation, because the errors are generated randomly, the error-value for this DMU lies in the tail end of the distribution, hence, creating an outlier.Table 14 “Naïve” ranking of empirically efficient DMUs in order of declining levels of efficiency. Values in bold correspond to DEA efficient units with a score of “1”. 
Table 14. “Naïve” ranking of empirically efficient DMUs in order of declining levels of efficiency. Values in bold correspond to DEA efficient units with a score of “1”.

| Rank | DMU (Sim. 1) | θ̄ | DMU (Sim. 2) | θ̄ | DMU (Sim. 3) | θ̄ | DMU (Sim. 4) | θ̄ |
|---|---|---|---|---|---|---|---|---|
| 1 | DMU15 | 2.076 | DMU2 | 1.356 | DMU14 | 13.632 | DMU14 | 2.388 |
| 2 | DMU14 | 1.566 | DMU1 | 1.266 | DMU15 | 1.807 | DMU15 | 1.802 |
| 3 | DMU11 | 1.22 | DMU12 | 1.252 | DMU9 | 1.498 | DMU11 | 1.283 |
| 4 | DMU1 | 1.167 | DMU15 | 1.237 | DMU1 | 1.157 | DMU1 | 1.202 |
| 5 | DMU2 | 1.012 | DMU8 | 1.213 | DMU2 | 1.15 | DMU7 | 1.039 |
| 6 | DMU9 | 1.008 | DMU14 | 1.173 | DMU11 | 1.036 | DMU2 | 1.01 |
| 7 | DMU4 | 0.98 | DMU10 | 1.022 | DMU13 | 1.024 | DMU13 | 0.991 |
| 8 | DMU6 | 0.896 | DMU3 | 1.013 | DMU7 | 0.986 | DMU3 | 0.99 |
| 9 | DMU3 | 0.883 | DMU7 | 0.947 | DMU5 | 0.985 | DMU12 | 0.954 |
| 10 | DMU7 | 0.874 | DMU13 | 0.944 | DMU4 | 0.877 | DMU4 | 0.898 |
| 11 | DMU13 | 0.867 | DMU11 | 0.906 | DMU8 | 0.87 | DMU10 | 0.812 |
| 12 | DMU10 | 0.779 | DMU9 | 0.843 | DMU3 | 0.833 | DMU9 | 0.807 |
| 13 | DMU12 | 0.742 | DMU6 | 0.837 | DMU12 | 0.803 | DMU8 | 0.775 |
| 14 | DMU8 | 0.712 | DMU5 | 0.812 | DMU10 | 0.774 | DMU5 | 0.726 |
| 15 | DMU5 | 0.664 | DMU4 | 0.594 | DMU6 | 0.765 | DMU6 | 0.622 |

This method of ranking is naïve because it ignores the standard deviation, which indicates the robustness of a DMU's efficiency score to possible errors and unobserved inefficiency. It also does not distinguish between possible outliers and legitimate units.

### 5.4.2. Ranking by Robustness of DEA-Chebyshev Model Efficiency Scores

The ranking in order of robustness of a DMU begins with the efficiency score defined as θ̂. Those with θ̂ = 1 are ranked from the most robust to the least robust (from the smallest standard deviation to the largest). The standard deviation is determined using the upper and lower bounds of the efficiency scores. Then the rest of the empirically efficient units are ranked by their respective θ̂ (using their standard deviations would yield the same ranking for these units). Once all the empirically efficient units have been ranked, the remaining units are ordered according to their stochastic efficiency scores, from the most efficient to the least efficient. The ranking of these inefficient units is very similar to that of the empirical frontier. Ranking from the most efficient down, DMUs with a DEA-Chebyshev model score of θ̂ = 1 (input-oriented case) can fall into one of two categories, hyper-efficient or efficient/mildly efficient, depending on how robust they are (based on their standard deviation). DMUs not printed in bold are DEA-inefficient (see Table 15) and hence are ranked below those deemed empirically efficient. DEA efficient DMUs that fail to satisfy the conditions for θ̂ = 1 are given efficiency scores of at most 1.00. A sketch of this two-stage ordering follows below.
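The two-stage ordering can be expressed compactly. In the sketch below the inputs are hypothetical; also, the paper does not fully spell out how the standard deviation is derived from the two limits, so here it is taken as the sample standard deviation of the two bounds, which is one plausible reading.

```python
# Robustness ranking (Section 5.4.2): units with theta_hat == 1 come first,
# ordered from smallest to largest standard deviation; the remaining units
# follow in descending theta_hat. All values below are hypothetical.
import statistics

units = {
    # name: (theta_hat, lower limit, upper limit)
    "DMU9":  (1.00, 0.76, 1.24),
    "DMU2":  (1.00, 0.74, 1.26),
    "DMU4":  (0.98, 0.67, 1.29),
    "DMU11": (0.91, 0.50, 1.31),
}

def sort_key(item):
    theta, lo, hi = item[1]
    std = statistics.stdev([lo, hi])  # spread of the two efficiency limits
    # (0, std): efficient units first, most robust first;
    # (1, -theta): then inefficient units in descending theta_hat.
    return (0, std) if theta == 1 else (1, -theta)

for name, (theta, lo, hi) in sorted(units.items(), key=sort_key):
    print(name, theta, round(statistics.stdev([lo, hi]), 3))
```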
Table 15. Ranking of efficient DMUs according to robustness, based on their standard deviations. The DMUs in bold denote the empirically efficient DMUs.

| DMU (Sim. 1) | θ̂ (α = 0.75) | Std. dev. | DMU (Sim. 2) | θ̂ (α = 0.75) | Std. dev. | DMU (Sim. 3) | θ̂ (α = 0.675) | Std. dev. | DMU (Sim. 4) | θ̂ (α = 0.725) | Std. dev. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DMU9 | 1 | 0.34111 | DMU15 | 1 | 0.45149 | DMU13 | 1 | 0.16546 | DMU13 | 1 | 0.15033 |
| DMU2 | 1 | 0.36819 | DMU3 | 1 | 0.46499 | DMU7 | 1 | 0.32591 | DMU3 | 1 | 0.30285 |
| DMU1 | 1 | 0.53889 | DMU12 | 1 | 0.5823 | DMU1 | 1 | 0.47956 | DMU2 | 1 | 0.39775 |
| DMU14 | 1 | 0.93225 | DMU8 | 1 | 0.62735 | DMU2 | 1 | 0.55027 | DMU7 | 1 | 0.45771 |
| DMU15 | 1 | 1.63455 | DMU1 | 1 | 0.66871 | DMU15 | 1 | 1.23418 | DMU1 | 1 | 0.56972 |
| DMU4 | 0.98 | 0.43388 | DMU7 | 0.947 | 0.3218 | DMU14 | 1 | 17.971 | DMU11 | 1 | 0.68462 |
| DMU11 | 0.906 | 0.57657 | DMU13 | 0.944 | 0.13124 | DMU5 | 0.985 | 0.31947 | DMU15 | 1 | 1.24437 |
| DMU6 | 0.896 | 0.28298 | DMU14 | 0.943 | 0.4051 | DMU11 | 0.921 | 0.2745 | DMU14 | 1 | 2.0849 |
| DMU3 | 0.883 | 0.28164 | DMU11 | 0.906 | 0.17515 | DMU9 | 0.905 | 0.97149 | DMU12 | 0.969 | 0.17444 |
| DMU7 | 0.874 | 0.32605 | DMU10 | 0.89 | 0.34217 | DMU4 | 0.877 | 0.25512 | DMU4 | 0.899 | 0.32244 |
| DMU13 | 0.867 | 0.13958 | DMU2 | 0.874 | 0.85998 | DMU8 | 0.87 | 0.20386 | DMU10 | 0.818 | 0.16313 |
| DMU10 | 0.779 | 0.16935 | DMU9 | 0.843 | 0.17572 | DMU3 | 0.833 | 0.21305 | DMU9 | 0.774 | 0.28765 |
| DMU12 | 0.742 | 0.11335 | DMU6 | 0.837 | 0.29614 | DMU12 | 0.803 | 0.15726 | DMU8 | 0.754 | 0.28786 |
| DMU8 | 0.712 | 0.25534 | DMU5 | 0.812 | 0.2938 | DMU10 | 0.774 | 0.1347 | DMU5 | 0.747 | 0.3701 |
| DMU5 | 0.664 | 0.2745 | DMU4 | 0.594 | 0.25039 | DMU6 | 0.765 | 0.24013 | DMU6 | 0.6 | 0.27103 |
## 5.5. Further Analysis

Additional analyses were conducted by taking the observed DMUs in each simulation and evaluating them against the EFF, DEA, CCP, and DEA-Chebyshev model results. If the DCF is a good approximation of the EFF, then the efficiency scores for the observed DMUs should not be substantially different from the efficiency scores generated by the EFF. This also holds true for CCP.

### 5.5.1. Observed DMUs Evaluated against the EFF, CCP, and DCF

The efficiency scores of the observed DMUs from the experimental groups determined by the EFF (denoted “exp.grp+EFF”) provide a benchmark for evaluating the DEA frontier (“exp.grp+DEA”), the CCP (normal) frontier (“exp.grp+CCP”), and the corrected frontier (“exp.grp+DCF”).
A comparison is drawn between the efficiency scores of the experimental groups generated by the four frontiers. The hypothesis is that the mean efficiency scores for the 15 observed units in the “exp.grp+EFF” group and the “exp.grp+DCF” group should be approximately the same (i.e., the difference should not be statistically significant). In Table 16, based on the rank-sum test and the t-test at α = 0.05, the difference is not statistically significant in simulations 3 and 4; hence, the corrected frontier is a good approximation of the EFF. Although the hypothesis tests for simulations 1 and 2 indicate some level of significance, the results generated by the DCF model are still superior to those of the CCP and the DEA.

Table 16. Statistical analysis for frontier comparisons. Observed DMUs are evaluated against the three different frontiers to determine their efficiency scores (calculated using the normal DEA model) and whether the scores for each group differ substantially when comparing EFF to DEA, EFF to DCF, and EFF to CCP. Statistics listed once per pair (Pearson correlation through t critical) apply to that pair of columns.

| Statistic | Exp.grp+EFF | Exp.grp+DEA | Exp.grp+EFF | Exp.grp+DCF | Exp.grp+EFF | Exp.grp+CCP |
|---|---|---|---|---|---|---|
| **Simulation 1** | | | | | | |
| Mean | 0.852 | 0.899 | 0.852 | 0.885 | 0.852 | 0.887 |
| Variance | 0.01396 | 0.01355 | 0.01396 | 0.01389 | 0.01396 | 0.01376 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.922 | | 0.87986 | | 0.881 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| df | 14 | | 14 | | 14 | |
| Rank-sum test | 1.2858 | | 0.9125 | | 0.9125 | |
| t stat | −3.9644 | | −2.2335 | | −2.3537 | |
| P(T ≤ t) two tail | 0.0014 | | 0.04235 | | 0.03372 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 2** | | | | | | |
| Mean | 0.875 | 0.922 | 0.875 | 0.908 | 0.875 | 0.908 |
| Variance | 0.01272 | 0.01242 | 0.0127 | 0.0115 | 0.0127 | 0.0115 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.94071 | | 0.8875 | | 0.8918 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| df | 14 | | 14 | | 14 | |
| Rank-sum test | 1.3066 | | 0.9747 | | 1.0162 | |
| t stat | −4.6604 | | −2.3984 | | −2.487 | |
| P(T ≤ t) two tail | 0.00037 | | 0.031 | | 0.02611 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 3** | | | | | | |
| Mean | 0.92 | 0.933 | 0.92 | 0.916 | 0.92 | 0.902 |
| Variance | 0.00879 | 0.00815 | 0.00879 | 0.00806 | 0.00879 | 0.0077 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.95301 | | 0.8804 | | 0.9082 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| df | 14 | | 14 | | 14 | |
| Rank-sum test | 0.7259 | | −0.0622 | | −0.6014 | |
| t stat | −1.8125 | | 0.29423 | | 1.68719 | |
| P(T ≤ t) two tail | 0.0914 | | 0.7729 | | 0.1137 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |
| **Simulation 4** | | | | | | |
| Mean | 0.882 | 0.904 | 0.882 | 0.887 | 0.882 | 0.868 |
| Variance | 0.0153 | 0.0173 | 0.0153 | 0.0184 | 0.0153 | 0.0162 |
| Observations | 15 | 15 | 15 | 15 | 15 | 15 |
| Pearson correlation | 0.9425 | | 0.905 | | 0.8996 | |
| Hypothesized mean difference | 0 | | 0 | | 0 | |
| df | 14 | | 14 | | 14 | |
| Rank-sum test | 0.8503 | | 0.1452 | | −0.394 | |
| t stat | −1.9248 | | −0.312 | | 1.0043 | |
| P(T ≤ t) two tail | 0.0748 | | 0.7599 | | −0.312 | |
| t critical two tail | 2.145 | | 2.145 | | 2.145 | |

Table 16 shows the statistical tests used to compare the DEA, CCP, and DCF against the EFF. The Pearson correlation, which ranges from −1 to 1 inclusive, reflects the extent of the linear relationship between two sets of data. The P values, the rank-sum test, and the Pearson correlations observed for all four simulations indicate that, in general, the DCF outperforms DEA and CCP (which assumed the normal distribution).
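The per-pair computations behind each block of Table 16 are easy to reproduce with standard tools. Below is a minimal sketch, assuming SciPy; the two score vectors are hypothetical stand-ins for the 15 observed DMUs scored against the EFF and the DCF, not the study's data.

```python
# Per-pair statistics as in Table 16: Pearson correlation between two score
# vectors, plus a paired t-test on their difference (df = 15 - 1 = 14).
from scipy import stats

eff = [0.85, 0.92, 0.78, 0.88, 0.95, 0.81, 0.90, 0.87,
       0.76, 0.93, 0.84, 0.89, 0.97, 0.80, 0.86]
dcf = [0.88, 0.94, 0.80, 0.90, 0.96, 0.85, 0.91, 0.90,
       0.79, 0.95, 0.88, 0.90, 0.98, 0.84, 0.89]

r, _ = stats.pearsonr(eff, dcf)   # extent of the linear relationship
t, p = stats.ttest_rel(eff, dcf)  # paired t-test on the mean difference
print(f"Pearson r = {r:.3f}; t = {t:.3f}, p = {p:.4f}")
```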
Outliers have a tendency to exhibit large standard deviations, which translate into wide confidence limits. Consequently, the reason for establishing DCF and CCP scores is to reduce the likelihood of a virtual unit becoming an outlier. Also, the results generated by stochastic models such as the DCF and CCP (as opposed to deterministic ones) can be greatly affected because the efficiency scores are generally not restricted to 1.00. In reality, outliers are not always easily detected. If the data set contains outliers, the stochastic models may not perform well. DMU14 in Simulation 3 is an example of this problem. It can be addressed either by removing the outliers or by imposing weight restrictions. However, weight restrictions are not within the scope of this paper.
## 6. Conclusions

Traditional methods of performance analysis are no longer sufficient in a fast-paced, constantly evolving environment. Observing past data alone is not adequate for future projections. The DEA-Chebyshev model is designed to bridge the gap between conventional performance measurement and new techniques that incorporate relevance into such measures. This algorithm not only provides a multidimensional evaluation technique, but it has also successfully incorporated a new element into an existing deterministic technique (DEA). This is known as the k-flexibility function, which was originally derived from the one-sided Chebyshev inequality (stated below).
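For reference, the one-sided (Cantelli) form of Chebyshev's inequality is distribution-free:

$$
\Pr\left(X - \mu \ge k\sigma\right) \;\le\; \frac{1}{1+k^{2}}, \qquad k > 0,
$$

where $\mu$ and $\sigma$ are the mean and standard deviation of the random variable $X$. Because the bound holds for any distribution with finite variance, the resulting frontier correction requires no normality assumption, in contrast to the CCP formulation discussed above. (How the paper maps this bound to the k-flexibility function is developed in its earlier sections.)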
The k-flexibility function in turn allows management to include expert opinion as a single value, such as a 20% net growth by next year end from the current year. The single value is dichotomized into unmet (or over-target) levels of growth (or decline). Because management expertise is included, the expected growth (or decline) is not unreasonable and will inherently reflect factors that do not need to be explicitly expressed in the model, such as environmental, economic, and social changes. Since these changes are becoming increasingly rapid, performance measures can no longer ignore qualitative inputs. In a highly competitive environment, future projections and attainable targets are key performance indicators. Intellectual capital and knowledge are today’s two most important assets.

The combination of normal DEA with the DCF can provide a good framework for evaluation based on quantitative data and the qualitative intellectual knowledge of management. When no errors are expected, standard DEA models suffice. The DCF is designed such that, in the absence of errors, the model reverts to a DEA model; this occurs when the k-flexibility function equals zero. DEA provides a deterministic frontier on which the DEA-Chebyshev model builds to define the estimate of the EFF.

The simulated datasets were tested on the DEA-Chebyshev model. The statistical tests indicate that the model is an effective and accurate tool for detecting or predicting the EFF, and a promising new efficiency benchmarking technique. It is an improvement over other methods: easily applied, practical, not computationally intensive, and easy to implement. The results have been promising thus far. Future work includes a real-data application to illustrate the usefulness of the DEA-Chebyshev model.

---

*Source: 102163-2013-06-24.xml*
# Effects of a Pragmatic Lifestyle Intervention for Reducing Body Mass in Obese Adults with Obstructive Sleep Apnoea: A Randomised Controlled Trial

**Authors:** James Moss; Garry Alan Tew; Robert James Copeland; Martin Stout; Catherine Grant Billings; John Michael Saxton; Edward Mitchell Winter; Stephen Mark Bianchi
**Journal:** BioMed Research International (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102164

---

## Abstract

This study investigated the effects of a pragmatic lifestyle intervention in obese adults with continuous positive airway pressure-treated obstructive sleep apnoea hypopnoea syndrome (OSAHS). Sixty patients were randomised 1 : 1 to either a 12-week lifestyle intervention or an advice-only control group. The intervention involved supervised exercise sessions, dietary advice, and the promotion of lifestyle behaviour change using cognitive-behavioural techniques. Outcomes were assessed at baseline (week 0), intervention end-point (week 13), and follow-up (week 26). The primary outcome was 13-week change in body mass. Secondary outcomes included anthropometry, blood-borne biomarkers, exercise capacity, and health-related quality of life. At end-point, the intervention group exhibited small reductions in body mass (−1.8 [−3.0, −0.5] kg; P = 0.007) and body fat percentage (−1 [−2, 0]%; P = 0.044) and moderate improvements in C-reactive protein (−1.3 [−2.4, −0.2] mg·L−1; P = 0.028) and exercise capacity (95 [50, 139] m; P < 0.001). At follow-up, changes in body mass (−2.0 [−3.5, −0.5] kg; P = 0.010), body fat percentage (−1 [−2, 0]%; P = 0.033), and C-reactive protein (−1.3 [−2.5, −0.1] mg·L−1; P = 0.037) were maintained and exercise capacity was further improved (132 [90, 175] m; P < 0.001). This trial is registered with ClinicalTrials.gov NCT01546792.

---

## Body

## 1. Introduction

Obstructive sleep apnoea hypopnoea syndrome (OSAHS) is the most common form of sleep-disordered breathing, characterised by repetitive nocturnal airway obstruction and frequent nocturnal arousal from sleep that leads to excessive daytime sleepiness (EDS). Prevalence surveys suggest that 2% of women and 4% of men at middle age are affected by this syndrome, which is becoming increasingly common with the current obesity epidemic [1]. The clinical consequences of repeated airway closures (hypoxaemia, sympathoexcitation, and oxidative stress) contribute to the premature development of cardiovascular disease, specifically ischaemic heart disease, stroke, and hypertension. Individuals with OSAHS are often obese [2] and physically inactive [3], and epidemiological data indicate an independent relationship between OSAHS and cardiovascular disease [4]. Obesity is a key modifiable risk factor for OSAHS [5], with recent guidelines suggesting that all overweight and obese patients with OSAHS be encouraged to lose weight [6]. As both untreated OSAHS and obesity contribute to increased morbidity and mortality, interventions capable of addressing both should be considered.

Continuous positive airway pressure (CPAP), the primary therapy for moderate-to-severe OSAHS, improves subjective and objective measures of sleepiness [7] but provides only day-to-day management of the condition, not a long-term cure (i.e., withdrawal of CPAP causes symptoms to return). Moreover, CPAP has minimal effects on patients’ weight or physical activity levels [8] despite increasing exercise capacity [9].
In contrast, intensive lifestyle interventions, which typically involve very low energy diets (VLEDs), appear to be effective for initial rapid weight reduction and can sometimes result in complete remission of OSAHS [10, 11], although steady postintervention weight regain is common [12–14]. However, intensive lifestyle interventions are limited in that they might not be acceptable to many patients or deliverable within a tax-funded healthcare system. Further research is needed to explore the impact of lower-intensity (i.e., more practical) lifestyle interventions on body mass (i.e., weight) and other important health outcomes in overweight and obese individuals with OSAHS.

We conducted a randomised controlled trial to determine the impact of a 12-week pragmatic lifestyle intervention on body mass and other indicators of health and fitness in obese adults who were currently being treated with CPAP for moderate-to-severe OSAHS. The pragmatic lifestyle intervention incorporated supervised exercise sessions, dietary education and advice, and the promotion of lifestyle behaviour change using cognitive-behavioural techniques. We hypothesised that the intervention group would show greater improvement in body mass and other cardiometabolic outcomes compared with an advice-only control group.

## 2. Materials and Methods

### 2.1. Participants

Patients with OSAHS were recruited from sleep clinics at Sheffield Teaching Hospitals NHS Foundation Trust. Eligible patients were obese (body mass index (BMI) > 30 kg·m−2) men and women aged 18–85 years with at least moderate OSAHS (apnoea hypopnoea index (AHI) > 15 events·h−1; oxygen desaturation index > 15 events·h−1; Epworth Sleepiness Scale (ESS) > 11) treated with CPAP therapy. Adherence was assessed subjectively using self-report during history taking (>75% nightly use; >4 hours per night) and objectively by the percentage of nights CPAP was used, combined with indices of treatment efficacy (i.e., normalisation of AHI and ESS). Exclusion criteria were any contraindication to exercise testing and training (such as severe hypertension, unstable angina, or uncontrolled cardiac arrhythmias), inability or unwillingness to undertake the commitments of the study, and existing participation in regular purposeful exercise (>30 min, ≥3 times per week; self-reported). The study was approved by the South Yorkshire Research Ethics Committee (09/H1310/74) and all participants provided written informed consent prior to enrolment.

### 2.2. Sample Size

The primary outcome measure was body mass (at intervention end-point) because weight loss is a key focus of management guidelines for overweight and obese individuals with OSAHS [6], and previous studies have shown that reductions in body mass correlate with improvements in severity of OSAHS [10, 15]. A total of 60 participants (30 per group) were required to detect a between-group difference of at least 1.5 kg at intervention end-point, assuming a standard deviation of 12 kg for body mass [16], a pre-post correlation for body mass of 0.991 [17], 20% attrition, 90% power, and a 2-tailed alpha of 0.05.
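As a sanity check, the stated assumptions can be fed through a textbook normal-approximation formula for a two-sample comparison of change scores. The authors' exact calculation is not reported, so treat this only as a sketch; under these assumptions it lands close to the 30 per group that were recruited.

```python
# Back-of-envelope sample-size check under the stated assumptions:
# SD 12 kg, pre-post correlation 0.991, delta 1.5 kg, 90% power,
# two-sided alpha 0.05, 20% attrition.
from math import ceil, sqrt
from scipy.stats import norm

sd_raw, r = 12.0, 0.991
delta = 1.5
alpha, power, attrition = 0.05, 0.90, 0.20

sd_change = sd_raw * sqrt(2 * (1 - r))          # SD of the change score
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # approx. 1.96 + 1.28
n_per_group = 2 * (z * sd_change / delta) ** 2  # evaluable per group
n_recruited = ceil(n_per_group / (1 - attrition))
print(round(sd_change, 2), ceil(n_per_group), n_recruited)  # ~1.61, 25, 31
```

The very high pre-post correlation is what shrinks the change-score SD to about 1.6 kg and makes a 1.5 kg difference detectable with roughly 25 evaluable participants per group.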
### 2.3. Design and Randomisation

This was a nonblinded, parallel-group, randomised controlled trial. Participants were allocated 1 : 1 to either a 12-week pragmatic lifestyle intervention or an advice-only control group using a randomisation sequence created by an independent researcher prior to recruitment (nQuery, Statistical Solutions, USA). The research team were notified of group allocation once each participant completed their baseline assessment. Outcome measures were assessed before randomisation (week 0), at intervention end-point (week 13), and after 13 weeks of independence (week 26).

### 2.4. Pragmatic Lifestyle Intervention

Participants randomised to the pragmatic lifestyle intervention were invited to attend supervised exercise sessions at a university exercise facility within one mile of the treating hospital. The frequency of exercise sessions was initially three per week. This was reduced to two per week during weeks 5 to 8 (with a third exercise session undertaken independently by participants at their convenience) and then to one per week (with two self-directed exercise sessions) during weeks 9 to 12. This pattern was designed to gradually decrease dependence on our facility, supervision, and expertise and to promote participants’ independent exercise participation, which was reported in an exercise diary. Exercise sessions lasted approximately one hour and typically comprised 45 minutes of aerobic interval training (treadmill walking/jogging, cycling, and rowing), 15 minutes of resistance training (major muscle groups), and exercises aimed at improving flexibility and balance. The aerobic interval training involved alternating hard and easy exercise bouts at a ratio of 1 : 2 (e.g., 0.5 min hard, 1 min easy), progressing to 4 : 1 (e.g., 4 min hard, 1 min easy) as tolerated. Heart rate and perceived exertion (RPE, using the 6 to 20 scale; Borg, 1982) were recorded at the end of hard intervals to facilitate prescription and monitor progression (RPE: 14–16 for hard bouts and 9–11 for easy bouts). Exercise sessions were individualised and directed by an exercise physiologist, taking into account participants’ health, mobility, and preferences. The cognitive-behavioural component of the intervention was integrated into the exercise sessions; it involved psychoeducation (based upon attitudes, experiences, emotions, beliefs, etc.) tailored to participants’ stage of change and implemented the cognitive-behavioural processes of change outlined by the transtheoretical model [18]. Such an approach has previously demonstrated efficacy in clinical populations [19]. Concurrent dietary education and advice based on the principles of the eatwell plate model (http://www.eatwell.gov.uk) were also integrated into the sessions. A three-day diet diary was completed and assessed to identify dietary imbalance and used as a tool to set short- and long-term goals. A British Heart Foundation (BHF) weight loss leaflet, “So… you want to lose weight for good?”, was provided and key concepts were extracted from it. Participants allocated to the advice-only control group received a letter explaining their group allocation, basic written lifestyle advice, and the BHF weight loss leaflet.

### 2.5. Study Outcomes

Participants’ body mass was measured in duplicate using a calibrated beam-balance scale (Model 424; Weylux; Hallamshire Scales Ltd., Sheffield, UK). Participants were minimally dressed and the mean of two consecutive concordant measurements was used. Secondary outcomes were BMI; neck, waist, and hip circumferences; body fat percentage (Bodystat Quadscan 4000, Bodystat Ltd., IM99 1DQ); health-related quality of life using the EuroQol EQ5D-3L questionnaire (EuroQol Executive Office, 3068 AV Rotterdam, Netherlands); and exercise capacity using the incremental shuttle walking test (ISWT; [20]).
Fasting venous blood was collected, centrifuged, separated, and frozen for subsequent biochemical analysis. Full lipid profile, glucose, and high-sensitivity C-reactive protein (hs-CRP) were measured on an ADVIA 2400, and insulin concentrations on an ADVIA Centaur XP (Siemens, 511 Benedict Avenue, Tarrytown, NY).

### 2.6. Statistical Analyses

The mean difference in change of body mass between the treatment groups was assessed at intervention end-point (week 13) by analysis of covariance (ANCOVA), using baseline body mass as a covariate and change scores (end-point minus baseline) as the dependent variable. The adjusted mean difference in change between groups at week 13 and the corresponding 95% confidence interval (CI) from the model are presented. All analyses were done on an intention-to-treat basis with previous observations carried forward where necessary. The same procedure was used to assess the treatment difference in body mass at follow-up (week 26). Treatment differences for other outcomes were similarly analysed using separate ANCOVAs for intervention end-point (week 13) and follow-up (week 26). All analyses were carried out in SPSS version 18.0 (SPSS UK Ltd., 2 New Square (B3 Floor 2), Bedfont Lakes, UK). Statistical tests were at a two-sided 0.05 significance level. Analysis of residuals was undertaken for all regression models in order to assess model assumptions.
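The ANCOVA described above is a standard baseline-adjusted change-score model. A minimal sketch in Python, assuming pandas and statsmodels are available (the study itself used SPSS, and the data frame below is hypothetical, not the trial's data):

```python
# Baseline-adjusted ANCOVA: change score regressed on baseline value plus
# a treatment-group indicator. Data below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "baseline": [118.2, 120.5, 115.0, 119.8, 116.3, 121.1],
    "endpoint": [117.9, 118.1, 114.8, 119.9, 113.9, 120.8],
    "group":    ["control", "intervention", "control",
                 "control", "intervention", "intervention"],
})
df["change"] = df["endpoint"] - df["baseline"]

# The group coefficient is the adjusted mean difference in change;
# conf_int() gives its 95% CI, as reported in Table 2.
model = smf.ols("change ~ baseline + C(group, Treatment('control'))", data=df).fit()
print(model.params)
print(model.conf_int())
```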
## 3. Results

### 3.1. Participant Characteristics

Sixty patients with controlled OSAHS (ESS: 5.0 [3.0, 6.8]; AHI: 2.4 [1.9, 3.2] events·h−1) who were long-term CPAP users enrolled in the study. Because some patients had relocated from hospital trusts elsewhere in the UK, CPAP start dates were only available for 75% of participants. For these patients, the median [range] CPAP history was 1.2 [0.5–10.8] years. All sixty patients had a verifiable usage history of at least 6 months. Both groups were classified as obese, normocholesterolaemic, normoglycaemic, and hyperinsulinaemic. Baseline characteristics are summarised in Table 1.

Table 1. Baseline group characteristics.

| Variable | n | Control | n | Intervention |
|---|---|---|---|---|
| **Anthropometry** | | | | |
| Body mass (kg) | 30 | 118.3 ± 21.9 | 30 | 117.4 ± 24.3 |
| Body mass index (kg·m−2) | 30 | 39.8 ± 7.0 | 30 | 38.9 ± 6.9 |
| Body fat (%) | 30 | 40 ± 9 | 30 | 39 ± 8 |
| Neck circumference (cm) | 30 | 45 ± 5 | 30 | 44 ± 4 |
| Waist circumference (cm) | 30 | 128 ± 14 | 30 | 125 ± 16 |
| Hip circumference (cm) | 30 | 129 ± 14 | 30 | 125 ± 15 |
| **Cardiometabolic** | | | | |
| Resting heart rate (bpm) | 30 | 65 ± 14 | 30 | 67 ± 11 |
| Resting systolic BP (mmHg) | 30 | 127 ± 11 | 30 | 132 ± 15 |
| Resting diastolic BP (mmHg) | 30 | 72 ± 8 | 30 | 75 ± 9 |
| Serum cholesterol (mmol·L−1) | 21 | 4.8 [3.4, 6.2] | 22 | 4.7 [3, 6.5] |
| Serum HDL (mmol·L−1) | 21 | 1.2 ± 0.2 | 22 | 1.3 ± 0.3 |
| Cholesterol to HDL ratio | 21 | 4.0 ± 0.7 | 22 | 4.1 ± 0.7 |
| Serum triglycerides (mmol·L−1) | 21 | 1.7 [0.6, 2.8] | 22 | 1.8 [1.2, 2.4] |
| Serum LDL (mmol·L−1) | 21 | 2.7 ± 0.7 | 22 | 2.9 ± 1.1 |
| Serum CRP (mg·L−1) | 21 | 2.9 [0.3, 5.5] | 22 | 2 [−0.1, 4.0] |
| Serum glucose (mmol·L−1) | 21 | 5.7 ± 1.6 | 22 | 5.0 ± 1.1 |
| Serum insulin (mU·L−1) | 20 | 29 [7, 51] | 20 | 27 [4, 50] |
| **ISWT** | | | | |
| ISWD (m) | 27 | 475 ± 240 | 26 | 639 ± 198 |
| Post-ex RPE | 27 | 15 ± 2 | 26 | 15 ± 2 |
| Post-ex heart rate (bpm) | 27 | 131 ± 28 | 26 | 148 ± 21 |
| Post-ex systolic BP (mmHg) | 27 | 169 ± 29 | 25 | 192 ± 31 |
| Post-ex diastolic BP (mmHg) | 27 | 86 ± 15 | 25 | 93 ± 11 |
| **Quality of life** | | | | |
| EuroQol EQ5D-3L VAS | 30 | 58 ± 18 | 30 | 64 ± 17 |

Data are presented as mean ± SD or median [IQR]. BP: blood pressure; HDL: high-density lipoprotein; LDL: low-density lipoprotein; CRP: C-reactive protein; ISWT: incremental shuttle walk test; ISWD: incremental shuttle walk distance; RPE: Borg rating of perceived exertion; VAS: Visual Analogue Scale.
### 3.2. Recruitment, Retention, and Compliance

We invited 481 potentially eligible patients to participate in our study, of whom 123 (26%) responded with interest and underwent further screening (Figure 1). Of these, 60 (49%) enrolled, giving a recruitment rate of 12%. Although six participants (10%) withdrew from the study (unrelated health change: n = 2; change in work commitments: n = 2; no reason: n = 2), 97% of assessments (157 of 162) and 96% of exercise sessions (620 of 648) were attended. No adverse events were recorded in more than 650 hours of exercise training.

Figure 1. CONSORT flow-chart.

### 3.3. Anthropometrics

End-point and follow-up data for anthropometric outcomes are presented in Table 2. The adjusted mean difference in change in body mass at intervention end-point (primary outcome) was −1.8 [−3.0, −0.5] kg (P = 0.007), favouring the intervention group. A similar difference was maintained at follow-up (−2.0 [−3.5, −0.5] kg; P = 0.010). These differences were accompanied by −0.8 [−1.3, −0.3] and −0.9 [−1.5, −0.3] kg·m−2 changes in BMI (P = 0.002 and P = 0.002, resp.) and −1 [−2, 0] and −1 [−2, 0]% changes in body fat percentage (P = 0.044 and P = 0.033, resp.). There were no significant differences in other anthropometric variables.

Table 2. Raw data by group and adjusted mean differences in change at intervention end-point and follow-up.

| Variable | Control (week 13) | Intervention (week 13) | Adjusted mean diff. (95% CI) | P* | Control (week 26) | Intervention (week 26) | Adjusted mean diff. (95% CI) | P* |
|---|---|---|---|---|---|---|---|---|
| **Anthropometry** | | | | | | | | |
| Body mass (kg) | 117.9 ± 21.0 | 115.2 ± 24.3 | −1.8 (−3.0, −0.5) | 0.006 | 118.1 ± 21.0 | 115.1 ± 24.4 | −2.0 (−3.5, −0.5) | 0.010 |
| Body mass index (kg·m−2) | 39.8 ± 6.8 | 38.0 ± 6.9 | −0.8 (−1.3, −0.3) | 0.002 | 39.8 ± 6.7 | 37.9 ± 6.9 | −0.9 (−1.5, −0.3) | 0.002 |
| Body fat (%) | 40 ± 9 | 37 ± 8 | −1 (−2, 0) | 0.044 | 40 ± 9 | 38 ± 8 | −1 (−2, 0) | 0.033 |
| Waist circumference (cm) | 127 ± 15 | 123 ± 16 | −2 (−4, 0) | 0.117 | 127 ± 15 | 123 ± 15 | −2 (−4, 1) | 0.143 |
| Hip circumference (cm) | 128 ± 15 | 123 ± 15 | −1 (−2, 0) | 0.093 | 128 ± 15 | 122 ± 15 | −2 (−3, 0) | 0.020 |
| **Cardiometabolic** | | | | | | | | |
| Resting heart rate (beats·min−1) | 66 ± 12 | 64 ± 12 | −5 (−9, −2) | 0.002 | 62 ± 9 | 63 ± 10 | −2 (−6, 1) | 0.165 |
| Resting systolic BP (mmHg) | 127 ± 13 | 130 ± 13 | 0 (−5, 4) | 0.893 | 131 ± 14 | 133 ± 16 | −2 (−7, 4) | 0.534 |
| Resting diastolic BP (mmHg) | 72 ± 8 | 74 ± 9 | −1 (−4, 2) | 0.590 | 75 ± 8 | 76 ± 10 | −1 (−4, 2) | 0.539 |
| Serum cholesterol (mmol·L−1) | 4.8 [3.2, 6.4] | 4.8 [3.5, 6.1] | 0.1 (−0.2, 0.3) | 0.575 | 4.4 [2.5, 6.3] | 4.6 [3.6, 5.5] | −0.1 (−0.5, 0.3) | 0.645 |
| Serum HDL (mmol·L−1) | 1.2 ± 0.2 | 1.2 ± 0.3 | 0.0 (−0.1, 0.9) | 0.569 | 1.2 ± 0.3 | 1.2 ± 0.2 | −0.1 (−0.2, 0) | 0.241 |
| Serum LDL (mmol·L−1) | 2.7 ± 0.8 | 2.9 ± 1 | 0 (−0.2, 0.3) | 0.731 | 2.6 ± 0.9 | 2.8 ± 0.9 | 0 (−0.3, 0.3) | 0.814 |
| Serum CRP (mg·L−1) | 3.3 [0.1, 6.5] | 1.4 [−0.7, 3.5] | −1.3 (−2.4, −0.2) | 0.028 | 3.2 [−2.1, 8.5] | 1.8 [0.6, 3] | −1.3 (−2.5, −0.1) | 0.037 |
| Serum glucose (mmol·L−1) | 5.7 ± 1.9 | 4.8 ± 0.5 | −0.3 (−0.9, 0.2) | 0.224 | 5.7 ± 1.9 | 4.8 ± 0.8 | −0.3 (−0.9, 0.3) | 0.267 |
| **ISWT** | | | | | | | | |
| ISWD (m) | 475 ± 250 | 724 ± 193 | 95 (50, 139) | <0.001 | 471 ± 240 | 737 ± 179 | 132 (90, 175) | <0.001 |
| **Quality of life** | | | | | | | | |
| EQVAS | 63 ± 19 | 60 ± 20 | 3 (−4, 10) | 0.385 | 69 ± 18 | 72 ± 16 | 9 (2, 16) | 0.017 |

Data are presented as mean ± SD or median [IQR]. BP: blood pressure; HDL: high-density lipoprotein; LDL: low-density lipoprotein; CRP: C-reactive protein; ISWT: incremental shuttle walk test; ISWD: incremental shuttle walk distance; RPE: Borg rating of perceived exertion; EQVAS: EuroQol Visual Analogue Scale; P*: P value adjusted for baseline score.
### 3.4. Exercise Capacity

Although data were collected for all participants, seven ISWT datasets (including ISWD and postexercise heart rate (HR), systolic blood pressure (SBP), and diastolic blood pressure (DBP)) were excluded because those participants completed the 1020-metre test at one or more time-points (four at baseline and three at end-point). This ceiling effect was unexpected and dilutes the true treatment effect; consequently, these patients were excluded from the analysis.

Despite secure randomisation, there was a large chance imbalance between groups at baseline for distance walked in the ISWT (475 ± 240 versus 639 ± 198 m). Nevertheless, the adjusted difference in change favoured the intervention group at end-point (95 [50, 139] m; P < 0.001) and at follow-up (132 [90, 175] m; P < 0.001). This occurred in the absence of any significant changes in postexercise physiological (SBP, DBP, and HR) or psychophysiological (RPE) measures (all P > 0.05).

### 3.5. Biomarkers

Nine patients were unable to provide a venous blood sample at the baseline assessment. To prevent potential confounding from acute inflammation, eight datasets were excluded because the CRP measurement exceeded 10 mg·L−1 [21]. Forty-three complete datasets remained (control: n = 21; intervention: n = 22). There was no change in serum cholesterol, triglycerides, HDL, LDL, or glucose at end-point or follow-up. Change in hs-CRP at end-point favoured the intervention group (−1.3 [−2.4, −0.2] mg·L−1, P = 0.028), and this improvement was maintained at follow-up (−1.3 [−2.5, −0.1] mg·L−1, P = 0.037). There was no evidence of any significant change in serum insulin over any of the time periods.

### 3.6. Quality of Life

There was a reduced proportion of participants in the intervention group reporting problems in performing usual activities (e.g., vacuuming, cleaning, and shopping) at end-point (P = 0.044) but not at follow-up (P = 0.375). There were no changes in the other four EQ5D domains. Although there was no significant change in self-perceived health score (i.e., EQ Visual Analogue Scale) at end-point, there was a significant improvement at follow-up of 9 [2, 16] points (P = 0.017).
## 4. Discussion

This study demonstrates that a pragmatic lifestyle intervention focused on introducing structured exercise, providing dietary advice, and combining both with behaviour change counselling improves body mass, exercise capacity, and a marker of systemic inflammation in obese adults being treated with CPAP for OSAHS. The programme was well tolerated by patients, exemplified by excellent attendance rates and no reported adverse events (in more than 650 hours of exercise testing and training).

Although the intervention favoured reductions in body mass, in line with our hypothesis, the magnitude of these changes (approximately 2 kg loss at both time-points) was small and possibly inadequate to provide any clinical benefit. Body mass reductions in interventions incorporating hypoenergetic diets are typically greater [22, 23] than in those without [24, 25]. VLEDs are often supplemented with psychological and dietary support, which rarely alleviates the steady postintervention weight regain often observed with these interventions [12, 14]. It is likely that the severe nature of such interventions produces a rapid weight loss that is unsustainable in the longer term. Our intervention was designed and used on the premise that a pragmatic approach, based on a slower rate of weight loss induced by making smaller changes to behaviour and practising a “healthy lifestyle”, could offer more sustainable long-term benefit.

Recent meta-analyses have collated evidence on dietary interventions [26] and exercise-based interventions [27] in OSAHS. The former reported a weighted mean reduction in BMI of 4.8 [3.8, 5.9] kg·m−2 and in AHI of 23.1 [8.9, 37.3] events·h−1. The results of exercise-based interventions appeared less effective, with mean reductions of 1.4 [−2.8, −0.1] kg·m−2 and 6.3 [−8.5, −4.0] events·h−1 for BMI and AHI, respectively [27]. However, these latter changes in BMI and AHI included a study [13] that incorporated a proprietary VLED alongside the exercise component; this study alone had a BMI reduction almost double that of the five other included studies combined (−6.0 versus −3.3 (n = 5) kg·m−2), which distorted the average effect. Despite the greater weight loss, the improvement in AHI was smaller than in 4 of the 5 other exercise trials. Furthermore, the Barnes et al. study was a noncontrolled, nonrandomised design. The changes in BMI reported in the current study better match those in exercise-only trials. The 15 [8, 22]% and 21 [14, 27]% improvements in ISWD at end-point and follow-up are consistent with improvements reported by others [13, 28] and, in our study, come on top of any improvement observed through CPAP therapy alone [9].
Blair [29] suggested that low cardiorespiratory fitness accounted for a greater proportion of all-cause mortality (4000 deaths) than obesity, smoking, and hypercholesterolaemia combined. This evidence underscores the importance of improving fitness, even in the absence of changes in “fatness,” in clinical populations.

Evidence suggests that addressing energy intake is more efficacious for weight loss than increasing energy expenditure [30] and that exercise trials providing dietary advice only are associated with smaller weight losses than those providing a VLED [26, 27]. Although we acknowledged this when designing the study, we feel that the severe rate of weight loss, and the typical weight regain seen over time, make such approaches insufficiently pragmatic for introduction into current healthcare delivery. The premise of making smaller, more sustainable changes in dietary behaviours still merits investigation and is now endorsed by the National Institute for Health and Care Excellence [31] in the UK as clinically beneficial and cost-effective. All components of the current intervention were delivered by the same investigator (JM), who used psychological techniques (such as motivational interviewing) to elicit core behaviours and previous barriers to behaviour change, in order to customise the intervention and promote longer-term adherence.

Hs-CRP is a systemic marker of inflammation strongly associated with atherosclerotic plaque development, cardiovascular disease risk, and death. Our intervention produced a significant improvement in hs-CRP at end-point (−1.3 mg·L−1) that was maintained at follow-up. It has been shown previously that circulating CRP is elevated in OSAHS and that effective CPAP therapy can normalise this augmentation [32]. Furthermore, there is evidence to suggest that exercise can have a beneficial effect on hs-CRP concentration in humans [33]. As our patients were already compliant with CPAP therapy, any further reduction in CRP could reflect a further reduction in cardiovascular disease risk and progression.

Improvements in exercise capacity were not surprising given the design of our study. The key finding was that, after 13 weeks of independence, improvements in exercise capacity were not only maintained but built upon. Although we included no measure of exercise behaviour during the independence period, the further improvements probably reflect continued regular exercise training, which participants mentioned anecdotally during assessments.

Limitations. A metric of OSAHS severity was not included in the current study for financial and logistical reasons; instead, changes in body mass were taken as a surrogate marker for changes in OSAHS severity. Improvements in OSAHS in the absence of weight loss have been demonstrated by other groups; this, coupled with evidence that sedentary behaviour and low cardiorespiratory fitness could pose a greater public health concern than obesity, suggests that the improvements in exercise capacity demonstrated in the current study should not be underestimated. Although the current study was powered for changes in body mass, we do not know whether it was adequately powered for the secondary outcome measures, so those findings must be interpreted with caution. Other limitations include the single-centre design and nonblinded assessments, which were unavoidable because of resource limitations.
However, we have demonstrated an acceptable and deliverable programme with positive outcomes that should, in combination with an enhanced dietary intervention, be investigated further in a larger OSAHS cohort.

## 5. Conclusion

A pragmatic lifestyle intervention improved cardiometabolic outcomes in obese adults treated with CPAP for OSAHS. This approach could be deliverable within the UK (and other) healthcare systems. Further research is required to establish the clinical efficacy and cost-effectiveness of pragmatic lifestyle interventions in OSAHS.

---
*Source: 102164-2014-07-21.xml*
# NMR Studies into the Potential Interactions of Fullerene C60 with Tetraphenylporphyrin and Some of Its Derivatives

**Authors:** C. Obondi; A. A. Rodriguez

**Journal:** Advances in Physical Chemistry (2010)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2010/102167

---

## Abstract

1H NMR relaxation studies were employed to investigate potential interactions between C60 and tetraphenylporphyrin, H2[TPP], and para-substituted tetraphenylporphyrins, H2[(p-X)4TPP], where X = CN or OCH3, in solution. The substituted porphyrins provided a means to investigate the role that electronic effects play in the interaction process. A comparison of the relaxation rates, R1, and correlation times, τC, of the pyrrole and phenyl hydrogens in these complexes, without and with the presence of C60, revealed that the introduction of C60 into solution did not have a noticeable effect on R1 and τC of these protons in H2[TPP], indicating the absence of long-lived intermolecular interactions at either of these two sites. A similar analysis of the two protons in the other two substituted tetraphenylporphyrin analogs revealed slower molecular dynamics, indicating the presence of intermolecular interactions. Stronger interactions were observed in H2[(p-OCH3)4TPP], indicating that the electron-donating abilities of the -OCH3 group promote the interaction process. Our results indicate that it is very likely that enhanced selectivity in the chemical purification of fullerenes and metallofullerenes can be achieved by employing tetraphenylporphyrin-silica stationary phases which have been modified with electron-donating groups.

---

## Body

## 1. Introduction

Because of their distinctive properties, fullerenes have attracted a tremendous amount of research interest. Pharmaceutically, fullerenes’ unique properties have been exploited in a variety of applications, including use as conduits for drug delivery, functionalization as antibacterial agents, and testing as HIV inhibitors [1–3]. Porphyrins and fullerenes have been found to be spontaneously attracted to each other [4]. This newly recognized supramolecular recognition element, the attraction of the curved π-surface of a fullerene to the center of the flat π-surface of a porphyrin, is possibly due to π-π and n-π electron interactions. This phenomenon is in contrast to the traditional paradigm, which requires the matching of a concave host with a convex guest [5].

During the past few years, the intermolecular interaction of porphyrins and fullerenes has been studied extensively. Due to their potential applications in processes of molecular recognition [6–11], photosynthesis [12–16], photovoltaics [17–23], energy transfer [24–32], and electron transfer [33–45], porphyrin-fullerene complexes have attracted a great deal of attention. Of particular interest is the development of tetraphenylporphyrin-appended silica stationary phases for the chromatographic separation of fullerenes. Meyerhoff and coworkers have developed columns with selectivity superior to the commercially available “Buckyclutcher” and “Buckyprep” columns [46, 47]. Their work showed that columns packed with (p-carboxyl)triphenylporphyrin-silica gel generated the best fullerene separation. The underlying rationale for the enhanced selectivity is believed to be π-π interactions between tetraphenylporphyrin and the fullerene.
The close association of a fullerene and a porphyrin was first recognized in the molecular packing of a crystal structure of a porphyrin-fullerene assembly containing a covalent fullerene-porphyrin conjugate [48]. In the crystal structure of this species, C60 was found to be centered over the porphyrin with its electron-rich 6 : 6 ring-juncture in close proximity to the plane of the porphyrin. The phenyl carbon atoms of the porphyrin were all at distances greater than 4.0 Å from fullerene carbon atoms, indicating that the ortho C-H bonds did not contribute significantly to the association.

In this study, 1H NMR relaxation measurements have been performed to investigate the nature and precise interaction site of fullerene C60 with tetraphenylporphyrin, H2[TPP], and para-substituted tetraphenylporphyrins, H2[(p-X)4TPP], where X = CN or OCH3, in deuterated chlorobenzene-d5 (CBZ). The porphyrin derivatives were selected to investigate possible changes in the interaction due to the electron-donating or electron-withdrawing capabilities of the substituent group. The pyrrole hydrogen, on the porphyrin ring, and the ortho hydrogen, on the phenyl group, were selected since these hydrogens are strategically located at sites which allow the determination of the molecular dynamics at the porphyrin and phenyl sites. Relaxation rates of the pyrrole and phenyl hydrogens in the porphyrins, as shown in Figure 1, were determined at several temperatures in the presence and absence of fullerene. In addition, correlation times, τC, for these hydrogens were calculated. Possible interaction sites were identified by examining the differences in the relaxation rate, R1, and correlation time, τC, with and without the C60 molecule present.

Figure 1. Display of a para-substituted tetraphenylporphyrin showing the pyrrole and phenyl hydrogens studied.

## 2. Theoretical Background

1H spin-lattice relaxation in the tetraphenylporphyrin H2[TPP] system is due primarily to magnetic dipole-dipole interactions. The relaxation rate, R1, can be expressed as the sum of intra- and intermolecular contributions [49–52]:

$$\frac{1}{T_1} = R_1 = R_1^{DD}(\text{intra}) + R_1^{DD}(\text{inter}). \tag{1}$$

T1 is the measured relaxation time, and R1 is simply the inverse of T1. The intermolecular contribution, R1DD(inter), can be eliminated experimentally by working with very low mole fractions of solute and by working in deuterated solvents. These two conditions were met in our measurements: mole fractions of 1.299 × 10−5 and deuterated chlorobenzene-d5. In the absence of the intermolecular interaction, (1) reduces to the following [49–52]:

$$R_1 = R_1^{DD}(\text{intra}) = \left[\frac{2}{3}\,\frac{\gamma_H^4\hbar^2}{r_{AB}^6}\right] n_s\,\tau_C, \tag{2}$$

where γH is the hydrogen gyromagnetic ratio, ℏ is h/2π, rAB is the proton-to-proton distance of the interacting nuclei, ns is the number of interacting nuclei, and τC is the rotational correlation time, which can be equated to the period of time necessary for a specific nuclear site to reorient to a new position differing by about 54°. Once relaxation rates, R1, have been acquired experimentally, the rotational dynamics at a specific molecular site can be determined by solving (2) for τC.
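To make the inversion of (2) concrete, here is a minimal numerical sketch (not from the paper). It evaluates (2) in SI units, which requires the (μ0/4π)² factor implicit in the Gaussian-units form above, and it assumes an illustrative interproton distance rAB ≈ 2.4 Å with ns = 1, since the values actually used are not stated in this excerpt; with these assumptions the computed τC happens to land within a few percent of the Table 1 entries.

```python
GAMMA_H = 2.6752e8   # 1H gyromagnetic ratio, rad s^-1 T^-1
HBAR = 1.0546e-34    # reduced Planck constant, J s
MU0_4PI = 1e-7       # mu0/(4*pi); eq. (2) is written in Gaussian units, so this
                     # factor enters squared when the formula is evaluated in SI

def tau_c_from_r1(r1, r_ab=2.4e-10, n_s=1):
    """Solve eq. (2) for tau_C; r_ab (m) and n_s are illustrative assumptions."""
    prefactor = (2.0 / 3.0) * MU0_4PI**2 * GAMMA_H**4 * HBAR**2 * n_s / r_ab**6
    return r1 / prefactor

# Pyrrole-hydrogen rates of H2[TPP] in chlorobenzene-d5 (Table 1, without C60)
for temp, r1 in [(268, 0.651), (283, 0.581), (298, 0.496), (313, 0.434), (328, 0.357)]:
    print(f"T = {temp} K: tau_C ~ {tau_c_from_r1(r1) * 1e12:.0f} ps")
```

Because (2) is linear in τC, any uncertainty in rAB or ns rescales all correlation times by a constant factor, so trends with temperature and comparisons with and without C60 are unaffected.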
## 3. Experimental Methods

The solvent chlorobenzene-d5 was selected for these measurements since it provided the best solubility parameters for all tetraphenylporphyrin derivatives and C60. Fullerene C60 and chlorobenzene-d5 (99.5+ at. %D) were purchased from Acros Organics [53]. Tetraphenylporphyrin H2[TPP] was purchased from the Aldrich Chemical Company [54]. Para-substituted tetraphenylporphyrin derivatives of H2[TPP], H2[(p-X)4TPP] (X = CN and OCH3), were synthesized by adapting and slightly modifying the method reported by Adler et al. [55]. The H2[TPP] derivatives were purified by several recrystallizations, and their purity was established by HPLC and NMR methods. Several tetraphenylporphyrin solutions with decreasing mole fractions were initially prepared, and relaxation measurements were taken, to establish the solute mole fraction at which porphyrin-porphyrin interactions were absent. Solutions with a solute mole fraction of 1.299 × 10−5 were found to eliminate possible porphyrin-porphyrin interactions, as evidenced by constant relaxation rates. This information was used to ensure that subsequent samples were prepared consistently at this solute mole fraction. In solutions containing both host and guest, a 1 : 1 porphyrin : C60 mole ratio was maintained. Approximately 0.5 mL of the sample was transferred into a 5-mm NMR tube, connected to a vacuum line, and thoroughly degassed by several freeze-pump-thaw cycles to eliminate molecular oxygen. The tubes were then sealed under vacuum.

1H spin-lattice relaxation times were measured on a Varian 300 MHz instrument using the standard inversion-recovery pulse sequence (i.e., D1-π-τ-π/2), where π is a 180° pulse, π/2 is a 90° pulse, and τ is a delay between pulses. Nine delay times were used in the pulse sequence with values, depending on the estimated T1, ranging from 0.0625 s to 16 s. A delay time (D1) of approximately 14 s was used between transients. Each experiment used a minimum of 76 transients, resulting in an acquisition time of approximately 4 hours. Experiments were conducted at five different temperatures. Typical 1H NMR spectra of H2[TPP] and some of the derivatives are shown in Figures 2, 3, and 4. The disappearance (i.e., broadening) of the N-H proton resonance into the baseline noise, due to the very rapid exchange of these porphyrin protons at the higher temperatures, prevented its use in this study. Therefore, the pyrrole and ortho hydrogens of the porphyrin and phenyl rings were used to monitor the molecular dynamics of the two molecular sites in these tetraphenylporphyrin complexes.

Figure 2. 1H NMR of H2[TPP] in chlorobenzene-d5. Peak (1), at about 8.90 ppm, corresponds to the pyrrole hydrogen, while the multiplet (2), at about 8.25 ppm, corresponds to the ortho hydrogen of the phenyl group.

Figure 3. 1H NMR of H2[(p-CN)4TPP] in chlorobenzene-d5. Peak (1), at about 8.90 ppm, corresponds to the pyrrole hydrogen, while the multiplet (2), at about 8.28 ppm, corresponds to the ortho hydrogen of the phenyl group.

Figure 4. 1H NMR of H2[(p-OCH3)4TPP] in chlorobenzene-d5. Peak (1), at about 8.90 ppm, corresponds to the pyrrole hydrogen, while the multiplet (2), at about 8.28 ppm, corresponds to the ortho hydrogen of the phenyl group.
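For context, T1 values from a D1-π-τ-π/2 experiment are conventionally extracted by fitting the peak-intensity recovery to M(τ) = M0(1 − 2e^(−τ/T1)). The sketch below is not the authors' processing code; it fits synthetic intensities (perfect inversion assumed) over the nine delay times quoted above using SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(tau, m0, t1):
    """Peak intensity after an ideal inversion-recovery (180-tau-90) sequence."""
    return m0 * (1.0 - 2.0 * np.exp(-tau / t1))

# Nine delay times spanning 0.0625 s to 16 s, as described above
taus = np.array([0.0625, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])

# Synthetic intensities for a hypothetical T1 of 1.7 s, with noise standing in
# for a real spectrum; real data would be measured peak heights at each tau
rng = np.random.default_rng(1)
signal = recovery(taus, m0=100.0, t1=1.7) + rng.normal(0.0, 1.0, taus.size)

popt, _ = curve_fit(recovery, taus, signal, p0=(90.0, 1.0))
m0_fit, t1_fit = popt
print(f"T1 = {t1_fit:.2f} s  ->  R1 = {1.0 / t1_fit:.3f} s^-1")
```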
## 4. Results and Discussion

### 4.1. 1H Relaxation Rates and Correlation Times of the Pyrrole and Phenyl Hydrogens in H2[TPP] with and without C60 in Chlorobenzene-d5

Shown in columns 2 and 4 of Table 1 are the variable-temperature relaxation rates, R1, of the pyrrole hydrogen in the absence and presence of C60, respectively. As expected, these rates decrease with rising temperature, indicating a decrease in the effectiveness of the dipolar mechanism. Columns 3 and 5 of the same table contain the correlation times obtained via the respective relaxation rates and (2). The correlation times, τC, also decrease with rising temperature, indicating faster molecular dynamics at higher temperature [52].

Table 1. 1H relaxation rates of the pyrrole hydrogen in H2[TPP] with and without C60 in chlorobenzene-d5. Values in parentheses represent error limits at the 90% confidence level.

| T (K) | R1 (1/s), H2[TPP] in CBZ | τC (ps), H2[TPP] in CBZ | R1 (1/s), with C60 | τC (ps), with C60 |
| --- | --- | --- | --- | --- |
| 268 | 0.651 (0.005) | 322 | 0.645 (0.012) | 319 |
| 283 | 0.581 (0.006) | 288 | 0.571 (0.013) | 283 |
| 298 | 0.496 (0.016) | 246 | 0.515 (0.010) | 255 |
| 313 | 0.434 (0.002) | 215 | 0.432 (0.011) | 214 |
| 328 | 0.357 (0.002) | 177 | 0.367 (0.012) | 182 |

A comparison of the relaxation rates without and with C60 reveals that the introduction of C60 to the solution does not affect the relaxation rate of this proton appreciably. The change in relaxation rate is within experimental error, indicating that the presence of C60 does not lead to noticeable intermolecular interaction at the pyrrole site of H2[TPP]. This observation is further illustrated graphically in Figure 5. An Arrhenius fit of τC versus T−1 yielded activation energies of 7.23 and 6.80 kJ/mol for H2[TPP] without and with C60, respectively, suggesting that the energies of activation for molecular motion in the presence and absence of C60 are very similar and within the range of experimental error for these types of measurements.

Figure 5. Relaxation rates of the pyrrole hydrogen of H2[TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.

Shown in columns 2 and 4 of Table 2 are the relaxation rates of the phenyl hydrogen in the absence and presence of C60; the corresponding correlation times are given in columns 3 and 5. Both R1 and τC for this hydrogen follow the same temperature dependence as observed for the pyrrole hydrogen, the dipolar pathway becoming less efficient with faster molecular motion. As was the case for the pyrrole hydrogen, a comparison of the relaxation rates of the phenyl hydrogen with and without C60 in solution shows that the changes are within experimental error, indicating that the presence of C60 does not lead to noticeable intermolecular interaction at the phenyl site of H2[TPP]. The relaxation rates are also illustrated graphically in Figure 6. The Arrhenius fit of τC versus T−1 yielded activation energies of 6.76 and 6.69 kJ/mol for H2[TPP] without and with C60, respectively. An interesting side note is a comparison of the reorientational dynamics at the pyrrole (i.e., overall molecular motion) and phenyl sites: the reorientational motion of the phenyl ring is on average 36% faster than that at the pyrrole site, indicating significant internal motion in this complex.

Table 2. 1H relaxation rates of the phenyl hydrogen in H2[TPP] with and without C60 in chlorobenzene-d5. Values in parentheses represent error limits at the 90% confidence level.

| T (K) | R1 (1/s), H2[TPP] in CBZ | τC (ps), H2[TPP] in CBZ | R1 (1/s), with C60 | τC (ps), with C60 |
| --- | --- | --- | --- | --- |
| 268 | 0.821 (0.004) | 232 | 0.821 (0.008) | 232 |
| 283 | 0.752 (0.006) | 213 | 0.729 (0.009) | 206 |
| 298 | 0.606 (0.012) | 171 | 0.660 (0.019) | 187 |
| 313 | 0.576 (0.008) | 163 | 0.557 (0.010) | 158 |
| 328 | 0.469 (0.009) | 132 | 0.470 (0.011) | 132 |

Figure 6. Relaxation rates of the phenyl hydrogen of H2[TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.
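The Arrhenius analysis can be reproduced directly from Table 1. The sketch below (my own check, not the authors' code) fits ln τC against 1/T for the C60-free column and recovers an activation energy of about 7.2 kJ/mol, consistent with the reported 7.23 kJ/mol.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Table 1: pyrrole-hydrogen correlation times of H2[TPP] (no C60) in CBZ
T = np.array([268.0, 283.0, 298.0, 313.0, 328.0])               # K
tau_c = np.array([322.0, 288.0, 246.0, 215.0, 177.0]) * 1e-12   # s

# Arrhenius form tau_C = tau_0 * exp(Ea / (R * T)): ln(tau_C) is linear in 1/T
slope, _ = np.polyfit(1.0 / T, np.log(tau_c), 1)
print(f"Ea = {slope * R / 1000.0:.2f} kJ/mol")   # ~7.2, vs. the reported 7.23
```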
### 4.2. 1H Relaxation Rates and Correlation Times of the Pyrrole and Phenyl Hydrogens in H2[(p-CN)4TPP] with and without C60 in Chlorobenzene-d5

The temperature behaviors of the relaxation rates and correlation times of the pyrrole and phenyl hydrogens in H2[(p-CN)4TPP], with and without C60, are shown in Tables 3 and 4, respectively. The relaxation rates and correlation times of both hydrogens show the expected temperature behavior, decreasing in magnitude with rising temperature and indicating that the molecule undergoes faster dynamics with an increase in temperature. The temperature behaviors of the relaxation rates for these hydrogens are also illustrated in Figures 7 and 8.

Table 3. 1H relaxation rates of the pyrrole hydrogen in H2[(p-CN)4TPP] with and without C60 in chlorobenzene-d5.

| T (K) | R1 (1/s), without C60 | τC (ps), without C60 | R1 (1/s), with C60 | τC (ps), with C60 |
|---|---|---|---|---|
| 268 | 0.825 (0.014) | 408 | 0.921 (0.016) | 456 |
| 283 | 0.820 (0.016) | 406 | 0.878 (0.018) | 435 |
| 298 | 0.813 (0.024) | 402 | 0.781 (0.009) | 397 |
| 313 | 0.809 (0.013) | 400 | 0.765 (0.020) | 379 |
| 328 | 0.803 (0.014) | 397 | 0.695 (0.020) | 344 |

Values in parentheses represent error limits at the 90% confidence level.

Table 4. 1H relaxation rates of the phenyl hydrogen in H2[(p-CN)4TPP] with and without C60 in chlorobenzene-d5.

| T (K) | R1 (1/s), without C60 | τC (ps), without C60 | R1 (1/s), with C60 | τC (ps), with C60 |
|---|---|---|---|---|
| 268 | 1.01 (0.012) | 286 | 1.04 (0.015) | 294 |
| 283 | 0.968 (0.015) | 274 | 0.986 (0.020) | 279 |
| 298 | 0.913 (0.015) | 258 | 0.838 (0.020) | 237 |
| 313 | 0.879 (0.061) | 249 | 0.832 (0.021) | 235 |
| 328 | 0.831 (0.017) | 235 | 0.731 (0.023) | 207 |

Values in parentheses represent error limits at the 90% confidence level.

Figure 7. Relaxation rates of the pyrrole hydrogen of H2[(p-CN)4TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.

Figure 8. Relaxation rates of the phenyl hydrogen of H2[(p-CN)4TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.

In the absence of C60, the relaxation rate of the pyrrole hydrogen decreases only gradually with increasing temperature, indicating less sensitivity to temperature variations. It is interesting to note that, at both the pyrrole and phenyl hydrogen sites, the relaxation rate is enhanced at the two lower temperatures of 268 and 283 K when C60 is introduced into the solution. This enhanced relaxation corresponds to longer correlation times at these two temperatures, indicating that the slower rotational motion is enhancing the relaxation mechanism (dipole-dipole) at these temperatures. This observation suggests the presence of noticeable intermolecular interactions of C60 at the porphyrin and phenyl sites. While interactions are seen at both sites, a comparison of the data, also illustrated in Figures 7 and 8, shows a slight interaction preference for the pyrrole site. This observation indicates that electron-withdrawing groups on the phenyl group may enhance intermolecular interactions, at least at reduced temperatures. However, as the temperature rises, the correlation times in the H2[(p-CN)4TPP]/C60 solution decrease, which has the effect of reducing the efficiency of the relaxation mechanism. This observation indicates that a rise in thermal energy prevents long-lasting H2[(p-CN)4TPP]-C60 interactions from occurring at both molecular sites. As was the case in the H2[TPP] sample, the reorientational motion of the phenyl ring is much faster than that at the pyrrole site, ranging from 43% faster at the lowest temperature to nearly 66% faster at the highest temperature, indicating the presence of a significant amount of internal motion.
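One way to make the distinction between "within experimental error" and "enhanced" explicit is to compare each C60-induced change in R1 with the combined 90% error limits of Table 3. The paper does not specify a significance test, so the quadrature combination below is only one reasonable convention; it shows the pyrrole-site change as positive at 268 and 283 K and turning negative as the temperature rises, in line with the discussion above.

```python
import numpy as np

# Table 3: pyrrole hydrogen of H2[(p-CN)4TPP]; parenthesized 90% error limits
T       = np.array([268, 283, 298, 313, 328])            # K
r1_free = np.array([0.825, 0.820, 0.813, 0.809, 0.803])  # 1/s, without C60
e_free  = np.array([0.014, 0.016, 0.024, 0.013, 0.014])
r1_c60  = np.array([0.921, 0.878, 0.781, 0.765, 0.695])  # 1/s, with C60
e_c60   = np.array([0.016, 0.018, 0.009, 0.020, 0.020])

delta = r1_c60 - r1_free
limit = np.sqrt(e_free**2 + e_c60**2)  # one convention for combining the two limits
for t, d, l in zip(T, delta, limit):
    flag = "exceeds limits" if abs(d) > l else "within error"
    print(f"{t} K: dR1 = {d:+.3f} 1/s (limit {l:.3f}) -> {flag}")
```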
### 4.3. 1H Relaxation Rates and Correlation Times of the Pyrrole and Phenyl Hydrogens in H2[(p-OCH3)4TPP] with and without C60 in Chlorobenzene-d5

The data for the pyrrole and phenyl hydrogens of the electron-donating, para-OCH3-substituted porphyrin are given in Tables 5 and 6. The temperature behaviors of the relaxation rates are also illustrated in Figures 9 and 10. As was observed previously for the H2[TPP] and H2[(p-CN)4TPP] complexes, both R1 and τC decrease in value with rising temperature, indicating the reduced effectiveness of the dipole-dipole mechanism with increased reorientational motion.

Table 5. 1H relaxation rates of the pyrrole hydrogen in H2[(p-OCH3)4TPP] with and without C60 in chlorobenzene-d5.

| T (K) | R1 (1/s), without C60 | τC (ps), without C60 | R1 (1/s), with C60 | τC (ps), with C60 |
|---|---|---|---|---|
| 268 | 0.699 (0.012) | 346 | 0.725 (0.009) | 359 |
| 283 | 0.660 (0.014) | 327 | 0.688 (0.015) | 341 |
| 298 | 0.624 (0.016) | 308 | 0.659 (0.012) | 326 |
| 313 | 0.584 (0.015) | 289 | 0.618 (0.018) | 306 |
| 328 | 0.547 (0.019) | 270 | 0.585 (0.019) | 290 |

Values in parentheses represent error limits at the 90% confidence level.

Table 6. 1H relaxation rates of the phenyl hydrogen in H2[(p-OCH3)4TPP] with and without C60 in chlorobenzene-d5.

| T (K) | R1 (1/s), without C60 | τC (ps), without C60 | R1 (1/s), with C60 | τC (ps), with C60 |
|---|---|---|---|---|
| 268 | 0.857 (0.011) | 242 | 1.00 (0.01) | 283 |
| 283 | 0.834 (0.012) | 236 | 0.956 (0.013) | 270 |
| 298 | 0.790 (0.016) | 223 | 0.875 (0.015) | 247 |
| 313 | 0.778 (0.014) | 220 | 0.847 (0.017) | 240 |
| 328 | 0.745 (0.015) | 211 | 0.784 (0.019) | 222 |

Values in parentheses represent error limits at the 90% confidence level.

Figure 9. Relaxation rates of the pyrrole hydrogen of H2[(p-OCH3)4TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.

Figure 10. Relaxation rates of the phenyl hydrogen of H2[(p-OCH3)4TPP] with C60 (dashed line) and without C60 (solid line) in chlorobenzene-d5.

A comparison of the pyrrole relaxation rate, as well as τC, with and without C60 shows that the introduction of C60 into solution enhances the relaxation rate of this pyrrole hydrogen by slowing the molecular dynamics (i.e., longer τC) at this site. This indicates that H2[(p-OCH3)4TPP] experiences intermolecular interaction with C60 at the pyrrole hydrogen site. This enhanced relaxation is also illustrated in Figure 9. While the error bars in the measurements are somewhat large, the effects of these interactions on the molecular dynamics at this hydrogen site are experimentally observable.

A review of the relaxation rates and correlation times of the phenyl hydrogen, with and without the presence of C60, reveals higher relaxation rates and longer correlation times when C60 is present. This observation indicates the presence of phenyl/C60 interactions at this site. A comparison of the pyrrole and phenyl data indicates that, while C60 interacts with H2[(p-OCH3)4TPP] at both the pyrrole and phenyl sites, there is a noticeable preference for the phenyl site. An illustration of the two types of interactions is given in Figure 11. We attribute this preference to the electron-donating abilities of the -OCH3 group, which allows two potential points of interaction with C60. Gung and Amicangelo analyzed the effects of substituents on π-π interactions and found that electron-donating groups increase interaction energies by promoting two types of electronic interactions: the normal π-π parallel stacking and the offset stacking of the substituent group over the guest molecule, which can give rise to electrostatic interactions [56].
In some cases, this second type of interaction can lead to charge transfer with significant interaction energies.

Figure 11. Illustration of possible interaction sites of tetraphenylporphyrin with C60. (a) (b)
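The site preference described above can be restated numerically as the fractional lengthening of τC caused by C60 at each site, using the values in Tables 5 and 6. This is only a rearrangement of the tabulated data, not an analysis performed in the paper; it shows the larger slowdown at the phenyl site at all but the highest temperature.

```python
import numpy as np

T = np.array([268, 283, 298, 313, 328])  # K
# Correlation times (ps) for H2[(p-OCH3)4TPP], without / with C60 (Tables 5 and 6)
tau_pyr, tau_pyr_c60 = np.array([346, 327, 308, 289, 270]), np.array([359, 341, 326, 306, 290])
tau_phe, tau_phe_c60 = np.array([242, 236, 223, 220, 211]), np.array([283, 270, 247, 240, 222])

# Percent lengthening of tau_C on adding C60; larger = stronger slowdown at that site
slow_pyr = 100.0 * (tau_pyr_c60 - tau_pyr) / tau_pyr
slow_phe = 100.0 * (tau_phe_c60 - tau_phe) / tau_phe
for t, p, f in zip(T, slow_pyr, slow_phe):
    print(f"{t} K: pyrrole {p:4.1f}%  phenyl {f:4.1f}%")
```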
## 5. Conclusion

1H NMR relaxation studies were employed to investigate potential interactions between C60 and tetraphenylporphyrin, H2[TPP], and para-substituted tetraphenylporphyrins, H2[(p-X)4TPP], where X = CN and OCH3, in solution. The substituted porphyrins provided a means by which to investigate the role that electronic effects play in the interaction process. A comparison of the relaxation rates, R1, and correlation times, τC, of the pyrrole and phenyl hydrogens in H2[TPP], without and with the presence of C60, revealed that the introduction of C60 into solution did not have a noticeable effect on R1 and τC of these protons, indicating the absence of long-term intermolecular interaction at either of these two sites. A similar analysis of the two protons in the electron-withdrawing substituted tetraphenylporphyrin analog, H2[(p-CN)4TPP], revealed slower molecular dynamics at the two lowest temperatures, indicating the presence of intermolecular interactions. However, these interactions were absent at higher temperatures, suggesting that the rise in thermal energy prevents long-lasting interactions from occurring at both molecular sites. A comparison of the pyrrole and phenyl hydrogen data in H2[(p-OCH3)4TPP] indicated that, while C60 interacts at both the pyrrole and phenyl sites, there is a clear preference for the phenyl site. A graphical illustration of these interactions is given in Figure 11. We believe that this preference is due to the electron-donating abilities of the -OCH3 group, which allows two potential points of interaction of C60 on the phenyl group. This observation is consistent with the results observed by Gung and Amicangelo [56].

Our results indicate that it is very likely that enhanced selectivity in the chemical purification of fullerenes and metallofullerenes can be achieved by employing tetraphenylporphyrin-silica stationary phases which have been modified with electron-donating groups. --- *Source: 102167-2010-09-20.xml*
# Renal Artery Thrombectomy Causing Functional and Symptomatic Recovery after 50-Hour Delay in Reperfusion of Acute Main Renal Artery Thrombosis

**Authors:** Kevin Singh Kang; John Steven Wilson
**Journal:** Case Reports in Vascular Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1021683

---

## Abstract

Acute renal artery thrombosis is rare, and it is even rarer for the thrombus to occlude the main renal artery and compromise the entire kidney. We report on a 46-year-old female smoker with no past medical history and no hypercoagulability who developed sudden severe left flank pain, hematuria, acute renal failure, and severe hypertension. A CT angiogram showed a renal artery totally occluded at the ostium by a thrombus and a severely hypoperfused left kidney with multiple infarcts. The initial course of treatment was intravenous heparin, but with no improvement by 50 hours after symptom onset, angiography was performed. This revealed a totally occluded renal artery at the ostium with no distal vessels or kidney blush seen. After aspiration thrombectomy, blush was seen in the kidney parenchyma along with flow in the arcuate renal arteries, although with some distal embolic events. The ostial lesion was treated with a drug-eluting stent with an excellent result angiographically. However, 8 months later, severe restenosis occurred. This time, the patient did not have flank pain or renal failure but had progressive hypertension. The patient was treated this time with rheolytic thrombectomy followed by intravascular ultrasound-guided drug-eluting stenting. The patient has been followed for a year and a half since, and a recent CT scan revealed widely patent renal arteries bilaterally with normal kidney function and BP and good perfusion to the left kidney with only tiny areas of infarct. Ultrasound of the kidneys also showed that the left kidney is now within the normal size range, and she has good distal flow velocities in the branch renal arteries. Our case report shows that even delayed reperfusion of complete renal artery occlusion, with jeopardized arterial flow to the entire kidney, can result in restoration of function to most of the kidney.

---

## Body

## 1. Introduction

Acute renal artery occlusion is rare when it involves the entire kidney due to total occlusion of the main renal artery, and rarer still when it is due to in situ thrombosis rather than an embolic event [1]. However, this was noted in our patient as reported here. As expected, acute renal artery occlusion results in diffuse ischemia and risk of total kidney loss [1, 2]. As noted, embolic events from atrial fibrillation, cardiomyopathy, or valvular heart disease are far more common than in situ thrombosis [1, 3]. In our patient, there were no cardiovascular risk factors for embolic events, as confirmed over a two-year follow-up, and clinically and angiographically, in situ thrombus was the most likely scenario. Additionally, the angiographic criteria based on the coronary literature were suggestive of plaque rupture and thrombosis [4]. The irregular and ulcerated angiographic appearance of the ostial and proximal renal artery suggested in situ thrombosis related to underlying atherosclerotic plaque [4].
The smoking history was a likely contributor to the plaque rupture appearance and the in situ renal artery occlusion. The diagnosis of acute main renal artery occlusion is clinically difficult and must be considered in patients presenting with acute renal failure, flank pain, hematuria, high LDH, and new or worsening hypertension; fever and nausea may also be seen [5, 6]. Often, the complication arises after renal or aortic endovascular interventions due to dissection [1, 2, 6]. It is clear that, with acute occlusion of the main renal artery, early reperfusion preserves kidney function, but the benefits of late revascularization are unclear [1, 7]. There is also limited information on long-term outcomes for BP and renal function recovery after reperfusion of acute main renal artery thrombosis [1, 7]. Treatment of renal artery thrombosis is initially anticoagulation, although its benefits are questionable [7]. The more invasive treatments, used after anticoagulation fails to improve the patient's signs, symptoms, and renal function, are supported only anecdotally by small retrospective series or case reports [7]. Catheter-directed thrombolysis is often endorsed for the treatment of thrombotic occlusion of the main renal artery, but it has been tested mostly in embolic forms of occlusion and has not been associated with complete restoration of renal function [1]. Since angiography would be mandated for this treatment anyway, and since our patient's angiographic as well as clinical characteristics suggested in situ thrombosis, we elected aspiration and rheolytic thrombectomy techniques. There are no data to suggest superiority of catheter-directed thrombolysis over thrombectomy techniques [7, 8].
## 2. Case Report We report on a 46-year-old female smoker with no past medical history and no hypercoagulability who developed sudden severe left flank pain, hematuria, acute renal failure, and severe hypertension. The symptoms had been going on for about a day prior to presentation. The patient was hypertensive, and the laboratory values showed hematuria and reduced renal function. A CT angiogram showed a totally occluded renal artery at the ostium with a thrombus and a severely hypoperfused left kidney suggestive of large infarcts (Figure 1). The initial course of treatment was anticoagulation with intravenous heparin, but with no improvement over the next 24 hours, totaling about 48 hours from symptom onset without kidney reperfusion. At that point, she was transferred to our hospital. Angiography was done with a 6-French sheath in the right common femoral artery and a 6-French pigtail catheter in the abdominal aorta at the level of the renal arteries; this revealed a totally occluded renal artery at the ostium with no visualization of the distal renal arterial tree (Figure 2). There was also no renal blush suggestive of renal perfusion on renal artery contrast injections (Figure 3). Selective engagement of the left renal artery was done with a 6-French internal mammary guiding catheter, and the main renal artery total occlusion at the ostium was crossed with two 0.014-inch nonhydrophilic tipped wires, which were left in two different lobular branches to allow aspiration thrombectomy of multiple branches (Figure 4). The renal artery thrombus was then treated by aspiration thrombectomy with a 6-French Medtronic Xport aspiration catheter, with removal of visible clots outside the body (Figure 4).
This led to restoration of flow to the kidney, and the ostial lesion looked irregular, with an angiographic appearance of plaque rupture rather than an embolic site, which would have had a smooth angiographic appearance (Figure 5). Finally, blush was seen in the kidney parenchyma along with flow in the arcuate renal arteries, despite a distal embolic event to a lobular artery noted by a distal vessel cutoff (Figure 6). The ostial lesion was treated with a drug-eluting stent with an excellent angiographic result (Figure 7). The symptoms of severe left flank pain and nausea that were present up to the initiation of the procedure resolved completely by the next morning. Blood pressure (BP) was better and within normal range the next day, and the patient's renal function was noted to improve. The glomerular filtration rate, which was 59 ml/min/1.73 m2 on the morning of the procedure, improved to 93 by the next morning. A month later, CT angiogram showed much better left kidney perfusion, and the kidney was normal in size (Figure 8).
Figure 1 CT angiogram showing normal findings on the right side, but the left kidney is severely hypoperfused with probably multiple renal infarcts.
Figure 2 Abdominal aortic angiography revealing normal right-sided findings. In contrast, the left renal artery is totally occluded at the ostium (blue arrow) with the angiographic appearance of a thrombus at the ostium with contrast staining.
Figure 3 0.014-inch wire across the occlusion, still showing no renal blush and no distal arterial vasculature. A second wire was placed in the renal artery due to difficulty in advancing the thrombectomy catheter into multiple distal renal artery branches to restore perfusion to multiple renal lobes.
Figure 4 Aspiration thrombectomy catheter taken down multiple interlobar arteries.
Figure 5 After aspiration thrombectomy, the underlying irregular angiographic appearance looks like a plaque rupture at the ostial left renal artery, suggesting in situ thrombosis rather than embolism.
Figure 6 After stenting of the ostium, renal blush is noted.
Figure 7 The poststent angiography reveals much better renal perfusion with patent renal lobular and arcuate arteries, but the distal vessels still look in spasm and with some distal embolization.
Figure 8 CT angiogram a month later now shows much better renal perfusion with minimal infarcts on the left side, compared to Figure 1.
However, 8 months later, severe restenosis occurred. This time, the patient did not have flank pain or renal failure but had progressive hypertension. Ultrasound of the kidney suggested severe left renal in-stent restenosis, and angiography was done again with the same technique, showing severe in-stent restenosis with mild contrast staining and possible thrombus in the stent (Figure 9). The patient was treated this time with rheolytic thrombectomy using a 6-French AngioJet device inside the stent (Figure 10). Intravascular ultrasound (IVUS) showed severe in-stent intimal proliferation (Figure 11). A repeat drug-eluting coronary stent was placed, with a widely patent left renal arterial system angiographically (Figure 12).
This was followed by repeat intravascular ultrasound showing a widely patent stent lumen (Figure 13).
Figure 9 Eight months after thrombectomy and stenting, angiography shows severe in-stent restenosis with potential thrombus formation as well.
Figure 10 The AngioJet thrombectomy was done inside the stent.
Figure 11 IVUS view of severe in-stent restenosis after AngioJet thrombectomy.
Figure 12 Post-restenting of the renal artery now shows excellent left kidney blush and patent lobular, arcuate, and distal kidney vessels with less spasm and no embolic cutoffs, compared to Figure 7.
Figure 13 IVUS shows a poststent lumen diameter of over 4 mm with excellent stent expansion.
A follow-up CT angiogram revealed much better perfusion to the left kidney with only a tiny area of infarct. Incidentally, a nuclear cardiac stress test done with technetium pyrophosphate showed normal perfusion to both kidneys as well. An ultrasound of the kidneys 1 year after the initial presentation also showed the left kidney within normal size range and good distal flow velocities in the branch renal arteries (Figure 14).
Figure 14 US color Doppler shows patent color flow in distal vessels.
The patient has been followed for more than 2 years since her initial event, and a recent CT scan revealed widely patent renal arteries bilaterally. On a recent outpatient visit, 49 months after the initial presentation, she was asymptomatic, with normal creatinine and glomerular filtration rate and a normal BP on the same antihypertensives that had failed to control her BP in the past.
## 3. Discussion Acute renal artery thrombosis causing acute renal infarction is a rare clinical syndrome and is more often related to an embolic than an in situ thrombus [1, 2]. Patients often have risk factors such as diabetes, hypertension, spontaneous dissection, fibromuscular dysplasia, or hypercoagulability, or may have undergone a recent endovascular procedure. Atrial fibrillation is a frequent association causing embolic renal arterial occlusion. Our patient had no prior medical history except for nicotine abuse, and the clinical and angiographic findings favored plaque rupture and in situ thrombosis; she was therefore treated with thrombectomy and drug-eluting stenting [3, 4]. The diagnosis of acute renal artery thrombosis or acute renal infarct is often missed due to the rarity of the syndrome [3]. Patients often present like an acute abdomen with severe abdominal pain and may be misdiagnosed with a renal stone; our patient was initially suspected to have nephrolithiasis [5, 6]. There may be acute renal failure and hypertension, as in our patient, especially if the entire kidney is at risk of infarct and is hypoperfused [1, 7]. There is hematuria, and LDH is high due to renal infarct [1, 5, 6]. Our patient incidentally had all these findings. The urgency of intervention is not clear at this stage, but previous case reports have suggested that up to 30 hours of delay in reperfusion of an occluded renal artery may still result in salvage of renal function [6, 7]. Diagnosis is often delayed due to the rarity of the presentation, but it can be confirmed by CT angiography (CTA), gadolinium-enhanced MRA, or renal angiography [7]. Our patient had a CTA that showed thrombosis of the main left renal artery at its ostium, severe hypoperfusion of the entire left kidney, and multiple infarcts in the left kidney. Goals of therapy are symptom relief, improvement in renal function, and BP control.
Most of the available treatments are based on small series or case reports [1, 7]. The initial treatment of acute renal artery thrombosis is anticoagulation, but in our patient intravenous heparin did not lead to any clinical benefit, and symptoms continued [7]. Systemic thrombolytic therapy has low success and a high complication risk in renal arterial thrombosis, so surgical and endovascular options are preferred [7–9]. Surgical treatment includes thrombectomy and aortorenal bypass, but given the high morbidity and mortality of open surgical options, endovascular therapy is preferred [7]. Percutaneous options for in situ or embolic renal artery thromboembolism include intra-arterial thrombolysis and various thrombectomy techniques, along with angioplasty and stenting [1, 2, 7]. Thrombectomy techniques, when available, may be preferred given the higher bleeding complications with thrombolytic therapy [7]. Renal artery angioplasty and stenting for thrombosis have been described in case reports [7]. In conclusion, our case report is unusual for using both rheolytic and aspiration thrombectomy as well as stenting and then repeat stenting. It is also unusual in that very late reperfusion led to improvement in symptoms and renal function and, subsequently, to improved kidney perfusion on CTA. --- *Source: 1021683-2022-02-08.xml*
# Continuity of the Restriction Maps on Smirnov Classes **Authors:** Yüksel Soykan **Journal:** Abstract and Applied Analysis (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102169 --- ## Abstract We prove that the restriction maps define continuous linear operators on Smirnov classes for certain domains with analytic boundary. --- ## Body ## 1. Introduction As usual, we define the Hardy space H^2 = H^2(Δ) as the space of all functions f : z ↦ Σ_{n=0}^∞ a_n z^n for which the norm ‖f‖ = (Σ_{n=0}^∞ |a_n|^2)^{1/2} is finite. Here, Δ is the open unit disc. For a more general simply connected domain D in the sphere or extended plane C̄ = C ∪ {∞} with at least two boundary points, and a conformal mapping φ from D onto Δ (i.e., a Riemann mapping function, abbreviated RMF), a function g analytic in D is said to belong to the Smirnov class E^2(D) if and only if g = (f∘φ)φ′^{1/2} for some f ∈ H^2(Δ), where φ′^{1/2} is an analytic branch of the square root of φ′. The reader is referred to [1–7] and the references therein for the basic properties of these spaces. Let C = (C_1, C_2, C_3, …, C_N) be an N-tuple of distinct closed curves on the sphere C̄ and suppose that, for each i, 1 ≤ i ≤ N, C_i is a circle, a line ∪ {∞}, an ellipse, a parabola ∪ {∞}, or a branch of a hyperbola ∪ {∞}. Let D_i be the complementary domain of C_i. Recall that a complementary domain of a closed set F ⊆ C̄ is a maximal connected subset of C̄ - F, which must be a domain. For 1 ≤ i ≤ N, suppose that φ_i : D_i → Δ is a conformal equivalence (i.e., an RMF) and let ψ_i : Δ → D_i be its inverse. We keep the notation C_i, D_i, φ_i, ψ_i fixed until the end of the paper. In this paper we prove the following.
Theorem 1. Let 1 ≤ i, j ≤ N. Suppose that Γ is an open subarc of C_j, and suppose also that Γ ⊆ D_i if i ≠ j. Then the restriction f ↦ f|_Γ defines a continuous linear operator mapping E^2(D_i) into L^2(Γ).
For similar work regarding restriction maps, see [8, 9]. Our conjecture is that Theorem 1 remains valid if, for each j, 1 ≤ j ≤ N, C_j is a σ-rectifiable analytic Jordan curve. There are some similar results for rectifiable curves in Havin's paper [10]. Also, the Cauchy projection operator from L^p to E^p is bounded on all Carleson-regular curves; compare the papers of David, starting with [11]. We need the following theorem to simplify the proof of Theorem 1.
Theorem 2 (Theorem 1 in [12]). Let D be a complementary domain of ∪_{i=1}^N C_i and suppose that D is simply connected, so that D_i is the complementary domain of C_i which contains D. Then (i) ∂D is a σ-rectifiable closed curve and every f ∈ E^2(D) has a nontangential limit function f~ ∈ L^2(∂D); (ii) (Parseval's identity) the map f ↦ f~ (E^2(D) → L^2(∂D)) is an isometric isomorphism onto a closed subspace E^2(∂D) of L^2(∂D), so (1) ‖f‖_{E^2(D)}^2 = ‖f~‖_{L^2(∂D)}^2 = (1/2π) ∫_{∂D} |f~(z)|^2 |dz| (f ∈ E^2(D)).
If Γ ⊆ C_i is an open subarc, then (2) ‖f~|_Γ‖_{L^2(Γ)}^2 ≤ ‖f~|_{C_i}‖_{L^2(C_i)}^2 = ‖f‖_{E^2(D_i)}^2, because Parseval's identity is true for the trivial chain (C_i) of curves. Hence Theorem 1 will be proved if the following theorem can be proved.
Theorem 3. Let 1 ≤ i ≠ j ≤ N. Suppose that Γ is an open subarc of C_j and that Γ ⊆ D_i. Then the restriction f ↦ f|_Γ defines a continuous linear operator mapping E^2(D_i) into L^2(Γ).
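Before developing the proof machinery, the following minimal Python sketch (our illustration, not part of the paper; the coefficients are hypothetical) numerically checks the two equivalent expressions for the H^2 norm: for a polynomial f(z) = Σ a_n z^n, the coefficient sum Σ |a_n|^2 matches the boundary integral (1/2π)∫|f(e^{iθ})|^2 dθ appearing in Parseval's identity (1).

```python
import numpy as np

# Minimal sketch: compare the coefficient form of the H^2 norm with the
# boundary-integral form on the unit circle (Parseval). Hypothetical coefficients.
a = np.array([1.0, -0.5, 0.25, 0.1j, 0.05])           # a_0, ..., a_4

coeff_norm_sq = np.sum(np.abs(a) ** 2)                # sum |a_n|^2

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)                                # boundary points e^{i theta}
f_boundary = np.polyval(a[::-1], z)                   # f(z) = sum a_n z^n on |z| = 1
integral_norm_sq = np.mean(np.abs(f_boundary) ** 2)   # (1/2pi) * integral over [0, 2pi]

print(coeff_norm_sq, integral_norm_sq)                # agree up to quadrature error
```

Any square-summable coefficient sequence would do here; the grid size only controls the quadrature error.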
## 2. Preliminaries for the Proof of Theorem 3 Let us keep the notation of Theorem 3 fixed for the rest of the paper, and let us also agree to use l for arc-length measure. An arc or closed curve γ is called σ-rectifiable if and only if it is a countable union of rectifiable arcs in C, together with {∞} in the case when ∞ ∈ γ. For instance, a parabola without ∞ is a σ-rectifiable arc, and a parabola with ∞ is a σ-rectifiable Jordan curve. The following definition will simplify the language.
Definition 4. Let γ ⊆ C be a simple σ-rectifiable arc contained in a simply connected domain G ⊆ C̄. We say that γ has the restriction property in G if and only if the map g ↦ g|_γ defines a continuous linear operator mapping E^2(G) into L^2(γ). Thus, the last sentence of Theorem 3 reads "Γ has the restriction property in D_i."
Lemma 5 (Invariance Lemma (Lemma 4 in [9])). Let G_1, G_2 ⊆ C̄ be simply connected domains and suppose that γ_1 ⊆ G_1 ∩ C and γ_2 ⊆ G_2 ∩ C are simple σ-rectifiable arcs. If χ : G_1 → G_2 is a conformal equivalence onto G_2 and χ(γ_1) = γ_2, then γ_1 has the restriction property in G_1 if and only if γ_2 has the restriction property in G_2.
Corollary 6. Theorem 3 is true, that is, Γ has the restriction property in D_i, if and only if φ_i(Γ) has the restriction property in Δ, for some RMF φ_i : D_i → Δ. A subarc γ of Γ has the restriction property in D_i if and only if φ_i(γ) has the restriction property in Δ.
Corollary 6 will be used in the following way: Γ will be written as the union of finitely many subarcs, and we will show that each of these subarcs has the restriction property in D_i; it will then follow that Γ itself has the required restriction property. Three different kinds of subarc will be considered.
Definition 7. A subarc γ ⊆ Γ is said to be of type I if and only if γ̄ ⊆ D_i (i.e., both of its end-points a, b belong to D_i).
Lemma 8 (Lemma 6 in [9]). Let γ be a subarc of Γ and suppose that φ_i, θ_i are Riemann mapping functions for D_i. (i) φ_i(γ) has the restriction property in Δ if and only if θ_i(γ) has the restriction property in Δ; (ii) φ_i(γ) is rectifiable if and only if θ_i(γ) is rectifiable; (iii) if γ is of type I, then the closure of φ_i(γ) is contained in Δ and φ_i(γ) is rectifiable; (iv) if γ is of type I, it has the restriction property in D_i.
We can now "ignore" subarcs of Γ whose closure (in C̄) is contained in D_i. We will now restrict our attention to subarcs of Γ with a single end-point a ∈ ∂D_i, the other being in D_i. There are two types, depending on whether a ∈ C or a = ∞.
Definition 9. (i) An open subarc γ of Γ is of type II if and only if it has an end-point a ∈ ∂D_i ∩ C and γ̄ - {a} ⊆ D_i ∩ C. (ii) In the case where C_i is unbounded (so that ∞ ∈ ∂D_i), an open subarc γ ⊆ Γ is of type III if and only if ∞ is an end-point of γ and γ̄ - {∞} ⊆ D_i.
Modulo a finite subset of D_i, Γ is the union of at most three open subarcs, each of which is of type I, II, or III; see Figure 1.
Figure 1 Type I, II, and III arcs.
If γ is a type II or type III subarc of Γ, then φ_i(γ) is a simple open analytic arc in Δ with one end-point on the circle T and the other in Δ. We will show that φ_i(γ) has the restriction property in Δ using the powerful Carleson theorem (Theorem 11 below).
Definition 10 (see [1, p. 157]). For 0 < h < 1 and 0 ≤ θ < 2π, let C_θ^h = {z ∈ C : 1 - h ≤ |z| ≤ 1, θ ≤ arg z ≤ θ + h}.
A positive regular Borel measure μ on Δ is called a Carleson measure if there exists a positive constant M such that μ(C_θ^h) ≤ Mh, for every h and every θ.
Theorem 11 (see [1, p. 157, Theorem 9.3] or [13, p. 37]). Let μ be a finite positive regular Borel measure on Δ. In order that there exist a constant C > 0 such that (3) ∫_Δ |f(z)|^2 dμ(z) ≤ C‖f‖^2 for all f ∈ H^2(Δ), it is necessary and sufficient that μ be a Carleson measure.
To complete the proof of Theorem 3 it is sufficient to show that arc-length measure on φ_i(γ) is a Carleson measure whenever γ is of type II or III. It will be useful to use arc-length to parametrize γ and φ_i(γ). Recall that a compact arc σ is called smooth if there exists some parametrization g : [a, b] → σ such that g ∈ C^1[a, b] and g′(t) ≠ 0 for all t ∈ [a, b]. Note that if σ is smooth, then it is rectifiable; that is, (4) l(σ) = ∫_a^b |g′(t)| dt < ∞. To define the arc-length parametrization of σ, put s = s(t) = ∫_a^t |g′(u)| du for a ≤ t ≤ b, so that 0 ≤ s ≤ l(σ). Then s′(t) = |g′(t)|, and t ↦ s(t) ([a, b] → [0, l]) is C^1 with strictly positive derivative. Hence also its inverse s ↦ t(s) ([0, l] → [a, b]) is C^1 with strictly positive derivative. Recall that the arc-length parametrization of the smooth arc σ is the map h : [0, l] → σ satisfying h(s) = {the point on σ at arc-length s from the initial point g(a)}; that is, h(s) = g(t(s)), 0 ≤ s ≤ l. Since h′(s) = g′(t(s))t′(s), h ∈ C^1[0, l] with nonzero derivative, and necessarily |h′(s)| = 1, since (5) h′(s(t)) = g′(t)t′(s) = g′(t)/s′(t) = g′(t)/|g′(t)|. We need the following lemma.
Lemma 12 (Theorem 1 in [14]). Let σ ⊆ Δ̄ be a smooth simple arc with arc-length parametrization g ∈ C^1[0, l]. Suppose that |g(0)| = 1 and |g(s)| < 1 for 0 < s ≤ l. Then arc-length measure on σ ∩ Δ is a Carleson measure; hence σ ∩ Δ has the restriction property in Δ.
## 3. Type II Subarcs The following lemma gives the continuity of the restriction map for finite end-points.
Lemma 13. A type II arc γ ⊆ Γ ⊆ D_i has the restriction property in D_i.
Proof. By Lemmas 12 and 5 it is sufficient to show that the closure of φ_i(γ) is a smooth arc in Δ̄. Suppose that γ has end-points a ∈ ∂D_i ∩ C and b ∈ D_i ∩ C, so that γ̄ = γ ∪ {a} ∪ {b}. Clearly γ̄ is a smooth arc. Because C_i is an open analytic arc, φ_i can be continued analytically into a neighbourhood U of a so as to be conformal in D_i ∪ U. This means that φ_i is conformal in a neighbourhood of γ̄, and so the closure of φ_i(γ) equals φ_i(γ̄) and is a smooth arc in Δ̄ with |φ_i(a)| = 1 and φ_i(γ̄ - {a}) ⊆ Δ. The result now follows from Lemmas 12 and 5.
We have now made a good deal of progress because of the following.
Lemma 14. Theorem 3 is true if C_i is a circle or an ellipse.
Proof. In this case Γ is a finite union of type I and type II arcs only, so the result follows by Lemma 8(iv) and Lemma 13.
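Before moving on to type III arcs, here is a small numerical experiment (our sketch, with hypothetical random test polynomials) illustrating the Carleson embedding of Theorem 11 on the simplest smooth arc meeting the circle as in Lemma 12, the radial segment [1/2, 1):

```python
import numpy as np

# Arc-length measure on the radial segment sigma = [1/2, 1) should satisfy the
# embedding int_sigma |f|^2 dl <= C * ||f||_{H^2}^2 with a uniform constant C.
# We estimate the worst ratio over random polynomials of moderate degree.
rng = np.random.default_rng(0)
s = np.linspace(0.5, 1.0, 2000, endpoint=False)   # points of sigma inside the disc
dl = 0.5 / 2000                                   # arc-length element on the segment

worst = 0.0
for _ in range(200):
    a = rng.normal(size=21) + 1j * rng.normal(size=21)  # random coefficients a_0..a_20
    h2_norm_sq = np.sum(np.abs(a) ** 2)                 # ||f||^2 = sum |a_n|^2
    f_on_sigma = np.polyval(a[::-1], s)                 # f evaluated along sigma
    embed = np.sum(np.abs(f_on_sigma) ** 2) * dl        # int_sigma |f|^2 dl
    worst = max(worst, embed / h2_norm_sq)

print(worst)  # stays bounded, as Theorem 11 predicts for a Carleson measure
```

The observed ratio stays below a fixed constant (in fact below π, by Hilbert's inequality), consistent with the segment carrying a Carleson measure.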
## 4. Type III Subarcs The proof of Theorem 3 will be completed by showing that every type III arc in D_i has the restriction property in D_i. We have an open subarc γ of an open subarc Γ of C_j, with Γ ⊆ D_i. In this case ∞ is an end-point of γ and ∞ ∈ ∂D_i, so both C_i and C_j are unbounded. We will use the same strategy we used for type II arcs in Lemma 13: we show that σ, the closure of φ_i(γ), is a smooth arc in Δ̄ as in Lemma 12, so that φ_i(γ) has the restriction property in Δ and hence γ has the restriction property in D_i. The proof is more complicated because conformality of φ_i at ∞ cannot necessarily be used. Instead we make use of the fact that, as z → ∞ along γ, the unit tangent vector of γ at z tends to a limit. The following two lemmas help us exploit this fact.
Lemma 15. Let g ∈ C^1[0, ∞) with g′(t) ≠ 0 (t ≥ 0). Suppose that c ∈ C and the limits (6) lim_{t→∞} g(t) = c and lim_{t→∞} g′(t)/|g′(t)| = ω (|ω| = 1) exist. Define σ = g([0, ∞)) ∪ {c}. Then (i) σ is a compact arc, (ii) σ is rectifiable, (iii) σ is smooth.
Proof. (i) Define f on [0, 1] by (7) f(t) = g(tanh^{-1} t) for 0 ≤ t < 1 and f(1) = c. Then f ∈ C[0, 1] is a continuous parametrization of σ. (ii) To prove that σ is rectifiable, it suffices to show that, for some T > 0, ∫_T^∞ |g′(u)| du < ∞. Let ε(t) = ω - g′(t)/|g′(t)|, so ε(t) → 0 as t → ∞. Choose T ≥ 0 such that |ε(t)| ≤ 1/2 for t ≥ T. Then, for t ≥ T, (8) |g′(t)|(1 - ω̄ε(t)) = ω̄g′(t). Hence (9) ∫_T^t |g′(u)|(1 - ω̄ε(u)) du = ω̄(g(t) - g(T)) (t > T), and |ε| ≤ 1/2 implies Re(1 - ω̄ε) ≥ 1/2, so 2Re(1 - ω̄ε) ≥ 1. So (10) ∫_T^t |g′(u)| du ≤ 2∫_T^t |g′(u)| Re(1 - ω̄ε(u)) du = 2Re(ω̄(g(t) - g(T))) → 2Re(ω̄(c - g(T))) as t → ∞, and hence (11) ∫_T^∞ |g′(u)| du < ∞, which establishes the rectifiability of σ. (iii) Let h : [0, l] → σ be the arc-length parametrization of σ. Then h ∈ C[0, l], h(s) = g(t) where ∫_0^t |g′(u)| du = s and s′(t) = |g′(t)|. Therefore the map t ↦ s ([0, ∞) → [0, l)) is C^1 with strictly positive derivative. So the inverse map s ↦ t ([0, l) → [0, ∞)) is C^1. Since t(s(t)) ≡ t and t′(s) = 1/s′(t), where 0 ≤ t < ∞ and 0 ≤ s < l, it follows that (12) lim_{s→l} h′(s) = lim_{t→∞} g′(t)t′(s) = lim_{t→∞} g′(t)/s′(t) = lim_{t→∞} g′(t)/|g′(t)| = ω. Hence h′ is continuous and so h ∈ C^1[0, l].
Lemma 16. Let k ∈ C^1[0, ∞) with k′(t) ≠ 0 (t ≥ 0) and suppose that k(t) → ∞ as t → +∞. Then, if |ω| = 1, (13) k′(t)/|k′(t)| → ω implies k(t)/|k(t)| → ω.
Proof. Write ω = e^{iα}. Choose T′ such that t ≥ T′ implies Re(e^{-iα}k′(t)/|k′(t)|) > 0. Then, using arg^ to denote the principal value of arg, we see that (14) θ(t) = α + arg^(e^{-iα}k′(t)/|k′(t)|) is a branch of arg(k′/|k′|), and hence also of arg k′, on [T′, ∞), which tends to α as t → ∞. We will find a branch ϑ of arg k which also tends to α as t → ∞. Let ε > 0. Choose T such that t ≥ T ≥ T′ implies α - ε/2 ≤ θ(t) ≤ α + ε/2. Now k(t) - k(T) = ∫_T^t k′(u) du is a limit of Riemann sums Σ (t_{i+1} - t_i)k′(ξ_i). The sector S (see Figure 2) is closed under addition and multiplication by positive scalars; therefore (15) k(t) - k(T) ∈ S for t ≥ T. So there is an argument μ(t) of k(t) - k(T) satisfying (16) α - ε/2 ≤ μ(t) ≤ α + ε/2 (t ≥ T). Now k(t)/(k(t) - k(T)) → 1 as t → ∞. So (17) there exists T_1 ≥ T such that t ≥ T_1 implies -ε/2 < arg^(k(t)/(k(t) - k(T))) < ε/2. If we define (18) ϑ(t) = μ(t) + arg^(k(t)/(k(t) - k(T))) (t ≥ T_1), then ϑ(t) is an argument of k(t) and (19) t ≥ T_1 implies |ϑ(t) - α| < ε/2 + ε/2 = ε. Hence also (20) |k(t)/|k(t)| - ω| = |e^{iϑ(t)} - e^{iα}| < ε. Consequently, (21) k(t)/|k(t)| → ω = e^{iα}, and our lemma is proved.
Figure 2 The sector S.
There are now four cases to prove, depending on the geometry of C_i and D_i.
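Lemma 16 is easy to probe numerically. The following sketch (our illustration, with a hypothetical curve) takes a parabola-like arc going to infinity and checks that the unit position vector follows the unit tangent vector:

```python
import numpy as np

# Hypothetical curve k(t) = t + 2i*sqrt(t+1): its unit tangent k'/|k'| tends to 1,
# so Lemma 16 predicts that k/|k| tends to 1 as well.
t = np.array([1e2, 1e4, 1e6, 1e8])
k = t + 2j * np.sqrt(t + 1.0)          # the curve, escaping to infinity
kp = 1.0 + 1j / np.sqrt(t + 1.0)       # its derivative k'(t)

print(kp / np.abs(kp))                 # unit tangent: approaches 1
print(k / np.abs(k))                   # unit position vector: also approaches 1
```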
Consequently, (21) k ( t ) | k ( t ) | ⟶ ω = e i α , and our Lemma is proved.Figure 2 The sectorS.There are now four cases to prove depending on the geometry ofC i and D i. ### 4.1. Case  1:D i Is a Half-Plane The following lemma will be needed here and in Case2.Lemma 17. LetG be the open right half-plane Re z > 0 and let θ ( z ) = ( z - 1 ) / ( z + 1 ) so that θ is a Riemann mapping function for G. Let k : [ 0 , ∞ ) → G be an injective C 1 function such that k ′ ( t ) ≠ 0, for all t ≥ 0, and lim ⁡ t → ∞ k ( t ) = ∞. Let ρ be the (simple) arc parametrized by k. If  lim ⁡ t → ∞ ( k ′ ( t ) / | k ′ ( t ) | ) = ω  (with | ω | = 1), then σ = θ ( ρ ) ¯ satisfies the hypothesis of Lemma 12 and, hence, ρ has the restriction property in G.Proof. Putg = θ ∘ k, so that g ∈ C 1 [ 0 , ∞ ) parametrizes θ ( ρ ). Clearly g ( t ) → 1 as t → ∞. Now g satisfies the hypothesis of Lemma 15, for we can show that g ′ ( t ) / | g ′ ( t ) | → ω - 1 as t → ∞. Since θ ′ ( z ) = 2 / ( z + 1 ) 2 it follows that (22) g ′ ( t ) | g ′ ( t ) | = | 1 + k ( t ) | 2 ( 1 + k ( t ) ) 2 k ′ ( t ) | k ′ ( t ) | = k ′ ( t ) | k ′ ( t ) | | k ( t ) | 2 ( k ( t ) ) 2 | 1 + 1 / k ( t ) | 2 ( 1 + 1 / k ( t ) ) 2 ⟶ ω - 1 , using Lemma 16. Soσ = g [ 0 , ∞ ) ∪ ( ω - 1 ) satisfies Lemma 12; hence g [ 0 , ∞ ) has the restriction property in Δ. But g [ 0 , ∞ ) = θ ( ρ ) and, therefore, by Lemma 5, ρ has the restriction property in G.Now suppose thatC i is a line and D i is a half-plane. By Invariance Lemma 5 with a linear equivalence χ ( z ) = α z + β  ( α ≠ 0 ) we can assume that C i is the imaginary axis and that D i = G, the open right half-plane, as above. If γ ⊆ D i is a type III arc, it is a subarc of a line, parabola, or hyperbola component. Obviously γ has a parametrization k as in Lemma 17. Hence γ has the restriction property in D i. ### 4.2. Case  2:D i Is the Concave Complementary Domain of a Parabola Any two parabolas are conformally equivalent via a linear equivalence:μ ( z ) = a z + b  ( a , b ∈ C , a ≠ 0 ). So assume that C i is the parabola (23) y 2 = 4 ( 1 - x ) and that D i is the complementary domain to the “right” of C i.The function(24) w ⟶ ( 1 + w ) 2 maps the open right half-plane G conformally onto D i and the imaginary axis onto C i. Its inverse is the function (25) ϑ ( z ) = z 1 / 2 - 1 , ( z ∈ D i ) , where z 1 / 2 is the principal square-root of z (here and throughout all standard multivalued functions will take their principal values).Now letγ ⊆ D i be a type III arc. Because G is conformally equivalent to D i via ϑ it will be sufficient to show that the arc ϑ ( γ ) ⊆ G has a parametric function k as in Lemma 17. Letting h be the arc-length parametrization of γ, then h ∈ C 1 [ 0 , ∞ ),  | h ′ ( t ) | ≡ 1 and h ( t ) → ∞ as t → ∞, and h is injective.Nowγ is a subarc of a line, parabola, or hyperbola component. Hence as z → ∞ along γ the unit tangent vector at z tends to a limit ω (| ω | = 1). Thus (26) lim ⁡ t → ∞ h ′ ( t ) | h ′ ( t ) | = lim ⁡ t → ∞ h ′ ( t ) = ω , and therefore (27) lim ⁡ t → ∞ h ( t ) | h ( t ) | = ω , by Lemma 16.Putk = ϑ ∘ h. Then k is an injective parametric function for ϑ ( γ ). Clearly k ∈ C 1 [ 0 , ∞ ),  k ( t ) → ∞ as t → ∞, and (28) k ′ ( t ) = ϑ ′ ( h ( t ) ) h ′ ( t ) ≠ 0 , ∀ t ≥ 0 .Moreover,(29) k ′ ( t ) | k ′ ( t ) | = | h ( t ) | 1 / 2 h ( t ) 1 / 2 h ′ ( t ) | h ′ ( t ) | ⟶ ω 1 / 2 .Sok is as in Lemma 17, which shows that γ has the restriction property in D i.Remark 18. The notationω 1 / 2 is ambiguous when ω = - 1 (γ could be part of another parabola). 
### 4.3. Case 3: D_i Is the Convex Complementary Domain of a Parabola In this case the parabola (30) y^2 = 4(π/4)^2((π/4)^2 - x) will be chosen for C_i, and D_i will be the complementary domain to the "left" of C_i. This choice is made because we then have the relatively simple Riemann mapping function (31) φ_i(z) = tan^2(z^{1/2}) (z ∈ D_i). This function maps the real interval (-∞, (π/4)^2) in an increasing fashion onto (-1, 1), and so it maps the upper/lower half of D_i onto the upper/lower half of Δ. The formula for φ_i is indeterminate on (-∞, 0], but these singularities are removable, and the formula (32) φ_i(x) = -tanh^2((-x)^{1/2}) can be used to define φ_i(x) for negative x. This mapping will be examined in detail in a moment, but first we dispose of a trivial case and make some simple observations. Let γ ⊆ D_i be a type III arc. If γ is a real interval (-∞, a), with a < (π/4)^2, then φ_i(γ) is a subinterval of (-1, 1), which obviously has the restriction property in Δ. So this case is trivial and needs no more attention. The following observations are elementary. (i) If γ is part of another line, then it must be parallel to R and certainly disjoint from (-∞, 0]. (ii) If γ is part of another parabola C_j, then C_j must be symmetric about R and have an equation of the form (33) y^2 = 4a(b - x), where 0 < a ≤ (π/4)^2 and b ≤ (π/4)^2. (iii) If γ is part of a hyperbola, then its asymptote must be parallel to R. (iv) In all (nontrivial) cases γ intersects (-∞, 0] in at most two points. So, because type I arcs can be ignored, there is no loss of generality in assuming that Im z has constant sign on γ and that Re z < 0 on γ. (v) Hence, for definiteness, we can assume that γ is contained in the open second quadrant. (vi) In all cases y^2/x tends to a limit as z → ∞ along γ. If γ is part of a line or hyperbola, the limit is 0, and if γ is part of the parabola in (ii) above, the limit is -4a. For future reference let us note that (34) 0 ≤ lim y^2/(4|x|) ≤ (π/4)^2. (vii) Because the limit in (34) exists and because type I arcs can be ignored, we can assume that (35) y^2/x^2 < 1 on γ. Now let γ be a type III arc in D_i as in (v) and (vi). We will show that φ_i(γ) has the restriction property in Δ. To elucidate φ_i(γ) it is convenient to work backwards, examining the mapping properties of the square map (z ↦ z^2), then tan, and then the principal square root.
Lemma 19. Let Δ^+ be the open semidisc (36) Δ^+ = {z ∈ C : |z| < 1, x > 0}. If σ′ is a smooth simple arc in the closure of Δ^+, if i is an end-point of σ′, and if σ′ - {i} ⊆ Δ^+, then the arc (37) σ = {z^2 : z ∈ σ′} is a smooth simple arc in Δ̄ satisfying the hypothesis of Lemma 12, so that σ - {-1} has the restriction property in Δ.
Proof. This is clear: the square map z ↦ z^2 is conformal in a neighbourhood of σ′.
Now let S be the open strip (38) S = {z ∈ C : 0 < x < π/4}. It is well known that tan maps S conformally onto Δ^+. The imaginary axis is mapped to the vertical part of ∂Δ^+, and the line π/4 + iR is mapped to the semicircular part of ∂Δ^+. Moreover, if z tends to infinity in S in such a way that y → +∞, then tan z → i.
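These mapping properties of tan are easy to confirm numerically; a brief Python sketch (our illustration, with arbitrarily chosen sample points in S):

```python
import cmath

# tan should send points of S = {0 < Re z < pi/4} into the right half Delta^+ of
# the unit disc, and tan(x + iy) -> i as y -> +infinity.
for y in (1.0, 5.0, 20.0):
    w = cmath.tan(0.2 + 1j * y)          # 0 < 0.2 < pi/4, so 0.2 + iy lies in S
    print(w, abs(w) < 1 and w.real > 0)  # inside the unit disc, right half

print(cmath.tan(0.2 + 20j))              # already indistinguishable from i
```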
Lemma 20. Let k ∈ C^1[0, ∞) be injective and satisfy k′(t) ≠ 0 for t ≥ 0. Suppose also that (i) k(t) ∈ S for all t ≥ 0, (ii) Im k(t) → +∞ as t → +∞, (iii) lim_{t→∞} Re k(t) = x_0 exists (0 ≤ x_0 ≤ π/4), (iv) lim_{t→∞} k′(t)/|k′(t)| = i. If γ′ is the arc parametrized by k, then σ′ = (tan γ′) ∪ {i} satisfies the hypothesis of Lemma 19, so that tan^2 γ′ has the restriction property in Δ.
Proof. Let g = tan∘k, so that g parametrizes tan γ′ and tan γ′ = g([0, ∞)). Now g ∈ C^1[0, ∞), g′(t) ≠ 0 for all t ≥ 0, and g(t) → i as t → +∞. Lemma 15 will be used to show that σ′ = g([0, ∞)) ∪ {i} satisfies the hypothesis of Lemma 19. For all t ≥ 0, (39) g′(t)/|g′(t)| = (|cos k(t)|^2/(cos k(t))^2)(k′(t)/|k′(t)|). Let k(t) = x(t) + iy(t). Since x(t) → x_0 and y(t) → +∞ as t → +∞, and because cos x, cosh y > 0 on γ, (40) |cos k(t)|^2/cos^2 k(t) = (|cos x(t) cosh y(t) - i sin x(t) sinh y(t)|/(cos x(t) cosh y(t) - i sin x(t) sinh y(t)))^2 = |1 - i tan x(t) tanh y(t)|^2/(1 - i tan x(t) tanh y(t))^2 → |1 - i tan x_0|^2/(1 - i tan x_0)^2. So lim_{t→∞} g′(t)/|g′(t)| exists.
The function (41) ϑ(z) = z^{1/2} maps D_i - (-∞, 0] conformally onto the vertical strip S as above. The limiting values of ϑ from above and below a point x on (-∞, 0] are ±i(-x)^{1/2}, respectively. Now tan maps S conformally onto Δ^+, and tan(±i(-x)^{1/2}) = ±i tanh((-x)^{1/2}). Finally, the square function maps Δ^+ conformally onto Δ - (-1, 0], and it maps both of ±i tanh((-x)^{1/2}) to -tanh^2((-x)^{1/2}). Thus the cut made by ϑ is repaired by the square function (by Schwarz's reflection principle): φ_i is continuous at all points of (-∞, 0] and therefore analytic on D_i. Because φ_i(z) ∈ (-1, 0] if and only if z ∈ (-∞, 0], the injectivity of φ_i on D_i is clear.
Let γ ⊆ D_i be a type III arc. Assume that y > 0 and x < 0 when z = x + iy ∈ γ. Let γ′ = ϑ(γ), so that γ′ ⊆ S. We show that γ′ is as in Lemma 20, so that tan^2 γ′ has the restriction property in Δ and, hence, γ has the restriction property in D_i. Let z = x + iy be an arbitrary point of γ and write (42) z^{1/2} = u + iv for the corresponding point ϑ(z) ∈ γ′; then (43) x + iy = u^2 - v^2 + 2iuv. Eliminating v, and remembering that x < 0, we see that (44) u^2 = (1/2)(x + (x^2 + y^2)^{1/2}) = (|x|/2)((1 + y^2/x^2)^{1/2} - 1). Since y^2/x^2 < 1 (observation (vii)), the binomial series implies that (45) u^2 = y^2/(4|x|) - (1/16)(y^4/|x|^3) + ⋯ ~ y^2/(4|x|), as z tends to ∞ along γ. It follows from (34) that (46) lim_{t→∞} u^2 = a exists, with 0 ≤ a ≤ (π/4)^2. Now let h be the arc-length parametrization of γ and write h(t) = x(t) + iy(t). Let k = ϑ∘h = h^{1/2}, so that k parametrizes γ′. Write k(t) = u(t) + iv(t). Conditions (i), (ii), (iii), and (iv) of Lemma 20 can now be verified. Obviously k(t) ∈ S for all t ≥ 0, so (i) is true. As t → ∞, |k(t)| = |h(t)|^{1/2} → ∞, but since 0 ≤ u(t) ≤ π/4 we must have v(t) → +∞, so (ii) is true. Item (iii) follows from (46). Now h(t) → ∞ as t → ∞, |h′(t)| ≡ 1, and h′(t) → -1 as t → ∞. So, by Lemma 16, (47) lim_{t→∞} k′(t)/|k′(t)| = lim_{t→∞} (|h(t)|^{1/2}/h(t)^{1/2})(h′(t)/|h′(t)|) = (-i)(-1) = i. So (iv) is true, and we have now completed the proof.
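The asymptotics (45)-(46) can also be confirmed numerically. In the sketch below (our illustration) we take a hypothetical parabola from observation (ii) with a = 0.3 and b = 0 and watch u^2 approach a:

```python
import numpy as np

# Along the upper branch of y^2 = 4a(b - x) with a = 0.3, b = 0, the real part u
# of the principal square root z^{1/2} = u + iv should satisfy u^2 -> a.
a, b = 0.3, 0.0
x = -np.array([1e2, 1e4, 1e6, 1e8])
y = np.sqrt(4.0 * a * (b - x))         # upper branch of the parabola
w = np.sqrt(x + 1j * y)                # principal square root, w = u + iv

print(w.real ** 2)                     # -> 0.3 = a, while w.imag grows without bound
```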
### 4.4. Case 4: C_i Is a Hyperbola Component We can deal simultaneously with the convex and concave complementary domains of a hyperbola component as follows. Let -π/2 < α < π/2 and let C_i = sin(α + iR). If α < 0, C_i is the arc (48) C_i = {z = x + iy ∈ C : x < 0, x^2/sin^2 α - y^2/cos^2 α = 1}, and if α > 0, C_i is the arc (49) C_i = {z = x + iy ∈ C : x > 0, x^2/sin^2 α - y^2/cos^2 α = 1}. Let D_i be the complementary domain to the "left" of C_i; then D_i is convex when α < 0 and concave when α > 0. A linear equivalence will be used as before to reduce the general case to this one. The function sin^{-1} maps the double cut plane C - {(-∞, -1] ∪ [1, ∞)} conformally onto the vertical strip |x| < π/2, mapping the upper/lower parts of the first domain onto the upper/lower parts of the second. The upper and lower limits of sin^{-1} at a point -x ∈ (-∞, -1] are -π/2 ± i cosh^{-1} x. The arc C_i = sin(α + iR) is mapped to the line Re z = α. Therefore sin^{-1} maps D_i - (-∞, -1] conformally onto the strip (50) D_α = {z = x + iy ∈ C : -π/2 < x < α}. If (51) λ(z) = (π/4)(z + π/2)/(α + π/2), then λ maps D_α conformally onto the strip (52) S = {z = x + iy ∈ C : 0 < x < π/4}. Therefore (53) φ_i(z) = tan^2(λ(sin^{-1} z)) is a Riemann mapping function for D_i. Now let γ be a type III arc in D_i. As in Case 3 the case γ ⊆ R is trivial, so we can assume that γ lies entirely in the upper half-plane. It will be sufficient for us to show that λ(sin^{-1} γ) has a parametric function k as in Lemma 20. Let z = x + iy be an arbitrary point of γ and write sin^{-1} z = u + iv for the corresponding point of sin^{-1} γ. Clearly, by (50), (54) u + iv ∈ D_α. Now (55) z = x + iy = sin(u + iv) = sin u cosh v + i cos u sinh v, so that (56) |z|^2 = sin^2 u cosh^2 v + cos^2 u sinh^2 v = sin^2 u + sinh^2 v. As z → ∞ along γ, |z|^2 → +∞ and sin^2 u remains bounded; therefore (57) v → +∞ as z → ∞ along γ. It now follows from (56) and (57) that (58) sin u = (x/|z|)(tanh^2 v + sin^2 u/cosh^2 v)^{1/2} ~ x/|z| as z → ∞. Let h be the arc-length parametrization of γ. As z → ∞ along γ, its unit tangent vector has a limit e^{iθ}, say. The asymptotes of C_i are the rays arg z = ±(π/2 - α). Therefore (59) lim_{t→∞} h′(t)/|h′(t)| = lim_{t→∞} h′(t) = e^{iθ}, where π/2 - α ≤ θ ≤ π. So, by (59) and Lemma 16, (60) lim_{t→∞} h(t)/|h(t)| = e^{iθ}. Now g = sin^{-1}∘h is a parametric function for sin^{-1} γ. By (54) it follows that (i) g(t) ∈ D_α (t ≥ 0), and (57) shows that (ii) Im g(t) → +∞ as t → ∞. Equation (60) shows that (iii) lim_{t→∞} Re g(t) = sin^{-1}(cos θ) = π/2 - θ, and we notice that -π/2 ≤ π/2 - θ ≤ α, by (59). Finally, observe that (61) g′(t)/|g′(t)| = (|1 - h(t)^2|^{1/2}/(1 - h(t)^2)^{1/2})(h′(t)/|h′(t)|). Now, in the upper half-plane, (1 - w^2)^{1/2} ~ -iw as w → ∞. So, as t → ∞, (62) g′(t)/|g′(t)| ~ (|h(t)|/(-ih(t)))(h′(t)/|h′(t)|), and therefore (iv) lim_{t→∞} g′(t)/|g′(t)| = i. It follows easily that k = λ∘g satisfies the hypothesis of Lemma 20, and therefore φ_i(γ) has the restriction property in Δ.
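As a closing sanity check for Case 4 (our numerical sketch, with a hypothetical arc): take γ to be the curve sin(α′ + it), t > 0, with α′ = 0.2, which lies in D_α (and hence in D_i) for C_i = sin(α + iR) with α = 0.4. The principal arcsin then recovers g with Re g tending to a limit and unit tangent tending to i, which is the substance of conditions (ii)-(iv) above (before the affine rescaling by λ):

```python
import numpy as np

# gamma: points of the hyperbola component sin(0.2 + it) in the upper half-plane.
alpha_prime = 0.2
t = np.array([2.0, 5.0, 10.0, 20.0])
z = np.sin(alpha_prime + 1j * t)        # points of gamma, escaping to infinity
dz = 1j * np.cos(alpha_prime + 1j * t)  # dz/dt along gamma

g = np.arcsin(z)                        # principal branch recovers alpha' + i t
dg = dz / np.sqrt(1.0 - z ** 2)         # chain rule for the principal arcsin

print(g.real)                           # -> 0.2, a finite limit (condition (iii))
print(dg / np.abs(dg))                  # -> i (condition (iv)); g.imag -> +infinity
```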
---

*Source: 102169-2014-09-11.xml*
# Sustainable Algae Biodiesel Production in Cold Climates

**Authors:** Rudras Baliga; Susan E. Powers
**Journal:** International Journal of Chemical Engineering (2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/102179

---

## Abstract

This life cycle assessment aims to determine the most suitable operating conditions for algae biodiesel production in cold climates, so as to minimize energy consumption and environmental impacts. Two hypothetical photobioreactor algae production and biodiesel plants located in Upstate New York (USA) are modeled. The photobioreactor is assumed to be housed within a greenhouse located adjacent to a fossil fuel or biomass power plant that can supply waste heat and flue gas containing CO2 as a primary source of carbon. Model results show that the biodiesel areal productivity is high (19 to 25 L of BD/m²/yr). The total life cycle energy consumption was between 15 and 23 MJ/L for algae BD, compared with 20 MJ/L for soy BD. Energy consumption and air emissions for algae biodiesel are substantially lower than for soy biodiesel when waste heat is utilized. Algae's most substantial contribution is a significant decrease in the petroleum consumed to make the fuel.

---

## Body

## 1. Introduction

In 1998, an amendment to the U.S. Energy Policy Act (EP Act) of 1992 triggered the rapid expansion of the US biodiesel industry. This act required that a fraction of new vehicles purchased by federal and state governments be alternative fuel vehicles. The U.S. Energy Independence and Security Act (EISA) of 2007 further mandated an increase in the production of renewable fuels, including biodiesel, to 36 billion gallons (136 billion liters) per year by 2022. Crops such as soybeans and canola account for more than three quarters of all biodiesel feedstocks in the U.S. [1].

About 14% of U.S. soybean production and 4% of global soybean production were used by the U.S. biodiesel industry to produce fuel in 2007 [1]. The use of oil crops for fuel has been criticized because the expansion of biodiesel production in the United States and Europe has coincided with a sharp increase in prices for food grains and vegetable oils [2]. Producing biodiesel from feedstocks that do not use arable land can be accomplished either by using biomass that is currently treated as waste or by introducing a new technology that allows new biodiesel feedstocks to be grown on land unsuitable for food production.

Microalgae have the potential to displace other feedstocks for biodiesel owing to their high vegetable oil content and biomass production rates [3]. The vegetable oil content of algae varies with growing conditions and species but has been known to exceed 70% of the dry weight of algae biomass [4]. Microalgae could have significant social and environmental benefits because they do not compete for arable land with food crops, and microalgae cultivation consumes less water than other crops [5]. Algae also grow in saline waters that are unsuitable for agricultural practices or consumption. This makes algae well suited for areas where cultivation of other crops is difficult [6, 7]. High biomass productivities may be achieved with indoor or outdoor photobioreactors (PBRs) [8]. In cold climates, PBRs have been used successfully when housed within greenhouses and provided with artificial lighting. Microalgae biodiesel has also received much attention in the news media.
Considerable progress has been made in the field of algae biomorphology [9–11]. In recent decades, however, little quantitative research has been done on the energy and environmental impacts of microalgae biodiesel production on a life cycle basis. The life cycle concept is a cradle-to-grave systems approach for the study of feedstocks, production, and use. The objective of this work was to assess the feasibility of algae biodiesel production in New York State (USA) based on life cycle energy and environmental impact parameters. Upstate NY was chosen as a challenging case for algae biodiesel production due to its shorter days and cold temperatures during winter months. The productivity, energy consumption, and environmental emissions associated with the algae/BD production life cycle were quantified in order to identify the best growing conditions and to assess its impacts relative to soybean biodiesel.

## 2. Methodology

### 2.1. System Boundary and Scope

As noted above, the life cycle concept is a cradle-to-grave systems approach; it revolves around the recognition of the different stages of production, starting from the upstream use of energy, through cultivation of the feedstock, and on to the different processing stages. A life cycle inventory assessment allows for the quantification of mass and energy streams such as energy consumption, material usage, waste production, and generation of coproducts. A summary of the sustainability assessment metrics used for this life cycle inventory of microalgae feedstock for biodiesel production is presented in Table 1.

Table 1: Life cycle sustainability metrics for biodiesel.

| Environmental Impact | Sustainability Metrics |
| --- | --- |
| Energy and resource consumption | Total energy consumed (MJ/L BD); fossil fuel energy consumed (MJ/L BD); petroleum consumed (MJ/L BD); land required (m²/L BD); water required (L water/L BD) |
| Climate change | Net greenhouse gas emissions (g CO2 equivalents/L BD) |
| Acidification | Acidification potential (g SO2 eq./L BD) |
| Toxic emissions | Particulate matter emissions (PM10, PM2.5); carbon monoxide emissions; volatile organic carbon emissions |

Figure 1 provides an overview of the system boundary used in this analysis, which includes the production of algae and of biodiesel via a transesterification reaction. The boundary includes all upstream mass and energy flows that are required to make the chemical and energy resources required for the processing. The production of biodiesel from algae and its direct energy consumption are characterized by four distinct stages: cultivation, dewatering/drying, oil extraction, and transesterification (Figure 1); a schematic tally over these stages is sketched below. The energy consumed and the subsequent emissions for fuel production, electricity generation, and chemical production comprise the upstream energy consumption and emissions. Biodiesel and algae meal are the products leaving the system boundary. The use of these products is not directly included within the analysis.

*Figure 1: Flowchart depicting the system boundary for the life cycle inventory of biodiesel from microalgae.*

The hypothetical algae and biodiesel production facilities considered are located in upstate New York. The facilities are assumed to be adjacent to a biomass or fossil fuel electricity generation plant for access to the carbon dioxide in their flue gas and to waste heat, in order to maximize the utilization of waste resources within this system.
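To make the stage-wise bookkeeping behind Figure 1 concrete, the sketch below tallies direct and upstream energy over the four stages named above and reports a total per liter of biodiesel. It is illustrative only: the stage names follow the text, but every numeric value is a placeholder invented for the example, not a result of this study.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One life cycle stage: direct plus upstream energy, in MJ/L BD."""
    name: str
    direct_energy: float    # energy consumed by the process itself
    upstream_energy: float  # energy embodied in its fuels, electricity, chemicals

    @property
    def total(self) -> float:
        return self.direct_energy + self.upstream_energy

# Placeholder numbers for illustration only; they are not values from the paper.
stages = [
    Stage("cultivation", 4.0, 1.0),
    Stage("dewatering/drying", 5.0, 1.5),
    Stage("oil extraction", 2.0, 0.5),
    Stage("transesterification", 1.5, 0.5),
]

for s in stages:
    print(f"{s.name:<22} {s.total:4.1f} MJ/L BD")
print(f"{'life cycle total':<22} {sum(s.total for s in stages):4.1f} MJ/L BD")
```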
Waste heat is considered to have no value as an energy product, so it is not counted as part of the total energy resources consumed by the facility.

Two different locations were considered for the microalgae biodiesel facility: Syracuse, NY (43°2′ N, 76°8′ W) and Albany, NY (42°7′ N, 73°8′ W). Although these locations are at approximately the same latitude and have very similar hours of daylight, the Syracuse area is colder and cloudier throughout the year due to its proximity to the Great Lakes. Albany offers more intense natural lighting and less severe winter temperatures (Figure 2). Three specific cases were considered for each of these locations:

(i) a greenhouse structure to maximize natural lighting, with natural gas used to maintain the system temperature;
(ii) a greenhouse structure to maximize natural lighting, with waste heat used to maintain the system temperature;
(iii) a well-insulated facility that allows for no natural lighting but requires substantially less heat.

Figure 2 Monthly average temperature (a) and total monthly solar irradiance (b) for Syracuse, NY, and Albany, NY.

The PBRs are assumed to operate continuously, using artificial lighting when natural lighting is not sufficient. In all cases, it was assumed that Phaeodactylum tricornutum algae would be grown for biodiesel production. This algae species has a relatively high oil content (about 30% by dry weight), is resistant to contamination, and has previously been utilized to produce biodiesel [12, 13].

Estimating the environmental and energy lifecycle impacts requires quantification of the mass and energy flows through this system. A mathematical model for the algae production process was developed in the work presented here. As shown in Figure 3, the mass and energy flows estimated with the algae production model were used in conjunction with the Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model 1.8a developed at Argonne National Laboratory [14]. GREET provided the general framework and structure for the lifecycle inventory, especially aspects of the transesterification process and the energy and emissions related to the upstream production of chemicals and energy resources. BD production from soybeans, which is used here as a benchmark for comparison, was taken directly from the GREET model. GREET is a widely accepted model, and many studies and analyses have been based upon it because of its vast data on energy sources and the associated emissions (e.g., [15–17]). The values for soybean production, oil extraction, and transesterification were taken as GREET defaults [14], which are representative of the Midwestern region of the United States where most soybeans are grown. These were based initially on an LCA completed at the National Renewable Energy Lab [18] and updated to keep the GREET model as current as possible (e.g., [17]). There is only a small production of soybeans in NY State, with yields well below the average yield in the Midwest. Thus, no attempt was made to match the geographic system boundary for biodiesel from algae to that of soybeans.

Figure 3 Overview of the microalgae biomass LCA model. The yellow boxes represent the contributions of the work presented here. The soybean LCA results were taken almost entirely from GREET.

Uncertainty in the data was addressed by utilizing Monte Carlo simulations to input a range of values for the parameters. For a given assumption or variable with a distribution as input, the commercially available software Crystal Ball was utilized to determine a forecast range of possible outputs. Standard error bars were created using the mean value of the forecast and 95% certainty.
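To make this uncertainty-propagation step concrete, the sketch below mimics it with NumPy in place of the commercial Crystal Ball package. The distributions, the heating-demand forecast function, and all numerical values are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_trials = 10_000  # Monte Carlo sample size (assumed)

# Illustrative input distributions (placeholders, not the paper's data):
# outside temperature as a normal distribution, boiler efficiency as uniform.
t_out = rng.normal(loc=-4.0, scale=3.0, size=n_trials)   # degC
eff_boiler = rng.uniform(0.90, 0.96, size=n_trials)      # fraction

def heating_forecast(t_out, eff, t_req=25.0, ua=1850.0):
    """Toy forecast: heating demand (W) grows with the indoor-outdoor
    temperature gap and shrinks with boiler efficiency. ua (W/degC) is
    a hypothetical overall heat-loss coefficient."""
    return ua * (t_req - t_out) / eff

forecast = heating_forecast(t_out, eff_boiler)
lo, hi = np.percentile(forecast, [2.5, 97.5])  # 95% certainty band
print(f"mean {forecast.mean():.0f} W, 95% interval [{lo:.0f}, {hi:.0f}] W")
```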
### 2.2. Algae Production Models

The biomass production model utilizes solar data and a biological growth rate to estimate actual yields of algae biomass for a photobioreactor system [13, 19].

#### 2.2.1. Microalgae Plant Setup

Hypothetical tubular closed photobioreactors (PBRs) were modeled to predict algae production and to account for energy consumption and emissions in Syracuse and Albany, NY. The PBR plant setup is illustrated in Figure 4. It was assumed that processes such as dewatering and transesterification could be carried out on site, thus eliminating the need for transportation.

Figure 4 Photobioreactor system layout (not to scale).

The dimensions and parameters for the PBR were taken from the recommendations of previous studies in order to depict a realistic setup [12, 13]. The PBR was designed for a maximum detention time of 30 hours. The maximum effluent concentration (Ce) was fixed at 3.4 kg/m³ with a recycle ratio of 0.35 [13, 20] and an allowable superficial fluid velocity of 0.3 m/s [21]. Since a long tube length is required to meet these constraints (32,400 m), the PBR is split into 6 units, each of which is 61 m³ (5,400 m long, 0.12 m diameter). Stacking of the tubes reduces the total footprint area of the greenhouse. All tubes are connected, and the algae broth passes through all six units.

The floor (footprint) area of the greenhouse was determined from the volume of the reactor, the type of cultivation (annual/seasonal operation), and the specific processes. The diameter of the tubes was set at 0.12 m for all cases since it is a widely reported size for PBRs [3, 13, 22, 23]. The spacing of the tubes was set at 0.3 m. This is an important factor since it defines the total floor size, which in turn influences the heating and lighting requirements. The parameters related to the plant setup are summarized in Table 2.

Table 2 Summary of photobioreactor and greenhouse parameters.

| Parameter | Value | Depends upon | Reference |
|---|---|---|---|
| Diameter of tubes | 0.12 m | Larger diameter pipes can cause cell shading | [3] |
| Spacing of tubes | 0.3 m | Greater spacing is desirable to avoid shading | [13] |
| Flow rate (Q) | 3.4×10⁻³ m³/s | Algae species and tube diameter | [21] |
| Recycle ratio (r) | 0.35 | | [20] |
| Total volume of reactor (V) | 366 m³ | Maximum residence time (30 hrs) | [12] |
| Influent concentration (Ci) | 1.2 kg/m³ | | |
| Max. effluent concentration (Ce) | 3.4 kg/m³ | Growth rate, PBR setup | [13] |

#### 2.2.2. Estimating Biomass Output

Microalgae productivity is estimated from the location, reactor specifications, and microalgae data. It is assumed that CO2 and nutrients are provided in excess to the microalgae culture through the media, thereby making light the only limiting factor for cell growth and decay [12]. If adequate lighting is available, the specific growth rate $\mu$ is determined from the average available irradiance $I_{avg}$ (μE/m²-s) [19]:

$$\mu = \frac{\mu_{\max}\, I_{avg}^{n}}{K_I^{n} + I_{avg}^{n}}, \tag{1}$$

where $K_I$ is the half-saturation constant (i.e., the $I_{avg}$ for which half of $\mu_{\max}$ is attained) and the exponent $n$ is a unitless empirical constant. Both $K_I$ and $n$ are constant for a given species of algae. Note that the decay of algae cells during the hours with light is incorporated into the maximum specific growth rate $\mu_{\max}$ (h⁻¹), since the values provided by Molina Grima et al. [22] and Fernandez et al.
[23] were determined from the net growth rate. $I_{avg}$ is determined from the Beer-Lambert equation:

$$I_{avg} = \frac{I}{\varphi_{eq} K_a C_i}\left[1 - \exp\left(-\varphi_{eq} K_a C_i\right)\right], \tag{2}$$

where $C_i$ (kg/m³) is the influent biomass concentration. The path length of light within the reactor is given by $\varphi_{eq}$, the ratio of the tube diameter to the cosine of the solar zenith angle. The photosynthetically active irradiance $I$ (μE m⁻² s⁻¹) is a function of various solar angles and the total solar irradiance. Hourly solar data were available from NREL's solar database [24], and thus algae cell growth was determined at one-hour intervals. The analysis of the solar data to estimate $I$ and $\varphi_{eq}$ is included in the appendix.

The PBR is modeled as a series of plug flow reactors where the effluent concentration of each reactor is the influent concentration for the next. It is assumed that steady-state conditions prevail for each hour, since the irradiance is taken as constant over that time period. Utilizing a Monod reaction rate for substrate utilization [22], the resulting steady-state plug flow reactor equation for each segment can be written as

$$u\frac{dC}{dz} = \frac{\mu_{\max}\, I_{avg}^{n}}{K_I^{n} + I_{avg}^{n}}\, C, \tag{3}$$

where $C$ is the biomass concentration and $u$ is the fluid velocity. Integrating this expression provides the effluent concentration for each reactor segment $j$ that represents one hour of residence time at the average irradiation rate for that hour:

$$\ln\left(\frac{C_{j+1}^{k}}{C_{j}^{k}}\right) = \frac{\mu_{\max}\left(I_{avg,\,j+1}^{k}\right)^{n}}{K_I^{n} + \left(I_{avg,\,j+1}^{k}\right)^{n}}\left(\frac{L}{u}\right), \tag{4}$$

where $j$ and $j+1$ indicate the start and end locations along the reactor length of the one-hour segment. $C_{j+1}$ is calculated in series to determine the reactor effluent concentration, $C_e$, for each hour $k$. The growth rate during that hour is defined by the average irradiance for that hour of the day. The total biomass produced per day, $M_{BM}$ (kg d⁻¹), is estimated from the flow rate $Q$ (m³ hr⁻¹), the recycle ratio $r$, and the effluent concentration $C_e$ (kg m⁻³):

$$M_{BM} = \sum_{k=1}^{24} Q\, C_e^{k}\,(1-r). \tag{5}$$

The total microalgae biomass produced can be determined from (2), (4), and (5) along with the algae growth parameters and solar irradiation data.

The temperature requirements for algae differ by species. In general, faster-growing algae species favor higher media temperatures of about 20–30°C [10]. The algae-related constants used for P. tricornutum in the model are included in Table 3. This species was selected because it has been used in the past to produce microalgae biodiesel and all relevant data were available [22].

Table 3 P. tricornutum growth parameters.

| Parameter | Value | Reference |
|---|---|---|
| μmax | 0.063 h⁻¹ | [22] |
| B | 0.0018 h⁻¹ | [25] |
| KI | 3426 μE m⁻² s⁻¹ | [22] |
| n | 1–1.34 | [25] |
| Ef | 1.74 ± 0.09 μE/J | [22] |
| Xp | 2–4% | [25] |
| Oil content (% dry weight) | 30% | [13] |
| Water content of algae | 40% | [13] |
| Cultivation temperature | 25°C | [10] |
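The growth model of (1)–(5) condenses into a short routine. The sketch below marches hourly plug-flow segments through the reactor and sums the daily harvest; the growth constants come from Tables 2 and 3, while the absorption coefficient $K_a$, the fixed zenith angle, and the irradiance series are placeholder assumptions.

```python
import numpy as np

# Constants from Tables 2 and 3; K_A and the zenith angle are assumed.
MU_MAX = 0.063            # h^-1, max specific growth rate (Table 3)
K_I = 3426.0              # uE m^-2 s^-1, half-saturation irradiance (Table 3)
N_EXP = 1.2               # unitless exponent, within the reported 1-1.34 range
K_A = 0.05                # m^2/kg, absorption coefficient (placeholder)
PHI_EQ = 0.12 / np.cos(np.radians(45.0))  # m, path length at an assumed zenith
C_IN = 1.2                # kg/m^3, influent concentration (Table 2)
Q = 3.4e-3 * 3600.0       # m^3/h, media flow rate (Table 2)
RECYCLE = 0.35            # recycle ratio r (Table 2)
RESIDENCE_H = 30          # h, maximum residence time (Table 2)

def mu(i_avg):
    """Eq. (1): Monod-type specific growth rate (h^-1)."""
    return MU_MAX * i_avg**N_EXP / (K_I**N_EXP + i_avg**N_EXP)

def i_avg(i_surface, c):
    """Eq. (2): Beer-Lambert average irradiance inside the tube."""
    x = PHI_EQ * K_A * c
    return i_surface / x * (1.0 - np.exp(-x))

def daily_biomass(i_hourly):
    """Eqs. (4)-(5): daily biomass output (kg/d). i_hourly holds one
    surface-irradiance value per hour, long enough to cover the 24
    harvest hours plus the 30-hour residence time."""
    total = 0.0
    for k in range(24):                     # one effluent parcel per hour
        c = C_IN
        for j in range(RESIDENCE_H):        # 1-hour plug-flow segments
            c *= np.exp(mu(i_avg(i_hourly[k + j], c)))  # eq. (4), L/u = 1 h
        total += Q * c * (1.0 - RECYCLE)    # eq. (5)
    return total

# Example: a crude constant-irradiance day (placeholder data).
print(f"{daily_biomass(np.full(54, 800.0)):.0f} kg/d")
```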
### 2.3. Energy Consumed during Microalgae Cultivation

Cultivating microalgae in closed systems is an energy-intensive process, especially in regions with low temperatures and limited natural lighting [13]. The algae growth and harvesting stages involve a large number of intermediate processes for which estimates of the energy consumption were developed here. Energy consumption requirements for extraction and transesterification are already provided in the GREET model [14]. It was assumed that the processes for the transesterification of algae oil are identical to those for soy oil. Thus, the chemical (methanol and sodium hydroxide) and energy consumption, and the energy and emissions associated with their production, were taken directly as the default parameters in GREET.

**Water Heating.** Energy requirements for a natural gas water heating system were determined using the specific heat of water and the efficiency rating of the heater as provided by the manufacturer (EF = 0.82 [26]). It was assumed that groundwater, at an initial temperature (Tinlet) of 12°C [27], would be heated to a thermostat set point (Tt) of 25°C.

**Media Circulation.** Electric pumps are used to circulate the media through the entire length of the reactor. The input electrical power required to operate the pumps, $P_p$ (W), is given by [28]

$$P_p = \frac{3.91\times 10^{-6}\,\mu_1^{3}\,Re^{2.75}}{\eta_p\, d^{3}}\, A_a, \qquad Re = \frac{\rho u d}{\mu_1}, \tag{6}$$

where $\mu_1$ (kg m⁻¹ s⁻¹) is the dynamic viscosity of water, $Re$ is the Reynolds number, $d$ (m) is the diameter of the pipes, $u$ (m/s) is the superficial velocity of the flow, $\eta_p$ is the pump efficiency ($\eta_p$ = 0.7), and $A_a$ (m²) is the tube aperture area. The pumps operate continuously.

**Artificial Lighting.** Natural algae cultivation inherently revolves around the diurnal and seasonal cycles. To compensate for these cycles and to maximize the production of biomass, artificial lighting is used to allow 24-hour cultivation. Lights are turned on from dusk to dawn. Monthly averages of daylight hours are used to define the time the lighting system is in operation each day. The power $P_a$ (W) consumed for artificially lighting the greenhouse area is calculated as [29]

$$P_a = A'\left(\frac{I_{avg}}{L_w C_f}\right). \tag{7}$$

The intensity of the artificial lighting provided was set equal to the naturally available lighting in the month of July ($I_{avg}$ = 1.7 μE/m²-s) over the entire greenhouse region ($A'$ = 3345 m²). Specifications for high-efficiency fluorescent GRO lights [29] were used to estimate the power required for artificial lighting. The light intensity of the bulbs is $L_w$ = 220 lux/W, and the conversion factor $C_f$ between micromoles of photons (μE) and lux is 0.29.
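As a quick check of these direct electricity loads, the sketch below evaluates (6) and (7) with the parameter values quoted above. The grouping of terms follows the reconstructions of the equations given here and should be verified against references [28, 29]; the viscosity and density of water are standard values assumed for the sketch, and taking $A_a = A'$ is likewise an assumption.

```python
# Direct electricity loads from eqs. (6) and (7); term grouping follows
# the reconstructed equations and should be checked against [28, 29].
MU_1 = 1.0e-3    # kg m^-1 s^-1, dynamic viscosity of water (assumed, ~20 degC)
RHO = 1000.0     # kg/m^3, density of water (assumed)
D = 0.12         # m, tube diameter (Table 2)
U = 0.3          # m/s, superficial velocity
ETA_P = 0.7      # pump efficiency
A_A = 3345.0     # m^2, tube aperture area (assumed equal to A')

def pump_power():
    """Eq. (6): electrical power (W) to circulate the media."""
    re = RHO * U * D / MU_1                     # Reynolds number
    return 3.91e-6 * MU_1**3 * re**2.75 / (ETA_P * D**3) * A_A

def lighting_power(i_avg=1.7, l_w=220.0, c_f=0.29, area=3345.0):
    """Eq. (7): power (W) for artificially lighting the greenhouse."""
    return area * i_avg / (l_w * c_f)

print(f"pump: {pump_power():.0f} W, lighting: {lighting_power():.0f} W")
```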
**CO2 Purification.** Carbon dioxide acts as the only source of carbon for the biomass. Flue gases from power plants provide an inexhaustible source of CO2. However, flue gases also contain varying levels of other gases, such as SOx and NOx, which are detrimental to microalgae culture beyond certain concentrations [30]. The monoethanolamine (MEA) absorption process can be used to separate pure CO2 from flue gas for microalgae production. Kadam [31] determined that if about 18% of the total carbon dioxide consumed is taken directly from flue gases and the rest is purified through the MEA process, then the toxic flue gases will be at sufficiently low concentrations for algae growth. Molina Grima [19] determined that in order to make light the only limiting factor, CO2 must be provided in excess, with a ratio of the aqueous CO2 concentration (kg/m³) to the influent biomass concentration Ci (kg/m³) of 0.63. Since the growth rates for this system are lower than those in Molina Grima's study [19] due to reduced sunlight, this CO2 ratio represents a conservative estimate. The mass of carbon dioxide required was estimated based on this ratio, the media flow rate, and the influent biomass concentration. Although carbon dioxide has a high solubility in water, and it is likely that all CO2 in the gas bubbled through the reactor would dissolve over the length of the reactor, a factor of safety of 2 was applied as an overestimate of the mass of CO2 that would be required.

The MEA CO2 extraction process has been modeled and studied previously in the context of algae production. Kadam [31, 32] reports that the process to extract CO2 from flue gas and recover the MEA for reuse consumes 32.65 kWh per ton of CO2 for algae cultivation. These references do not provide enough detail to quantify which steps in the MEA process consume the most electrical energy.

**Greenhouse Heating.** Temperature control within the greenhouse is essential for algae cultivation in cold weather conditions. The energy consumed for greenhouse heating depends upon the total surface area exposed, the insulation material, and the temperatures inside and outside the greenhouse. For a given greenhouse with surface area $A_g$ (m²), the heat loss per second, $Q_L$ (J/s), is given by [33]

$$Q_L = 1.05\left(\frac{1}{R}\right)\left(T_{req} - T_{out}\right) A_g, \tag{8}$$

where $R$ (1.9 m²·°C·s/J) is the R-value of the greenhouse insulating material, and $T_{req}$ (25°C) and $T_{out}$ (°C) are the temperatures required within the greenhouse and outside the greenhouse, respectively. The greenhouse was assumed to be insulated with 10 mm twinwall polycarbonate with an R-value of 1.9. The R-value for the insulated, windowless cultivation scenario was set at 30. The outside temperature $T_{out}$ is taken from the monthly averages for Syracuse and Albany and is input as a normal distribution for each month [34].

**Steam Drying and Dewatering.** Algae leave the photobioreactors suspended in a dilute broth [13]. Dewatering and drying of the algae are necessary to reduce the water content to 5% [35] before the hexane oil extraction process. For algae with high vegetable oil content, it is suggested that continuous nozzle discharge centrifuges provide the best reliability and consume the least energy. Centrifugation consumes 3.24 MJ/m³ of effluent media [36]. After centrifugation, the algae water content is 70% (by weight). Steam is utilized to further dry the microalgae before the oil extraction process. The natural gas consumed to provide the required steam energy was calculated based on the heat of vaporization of water, the mass of water that must be vaporized to reduce the water content from 0.70 to 0.05, and the efficiencies of the boiler (0.93 [37]) and dryer (0.8 [26]). In the scenarios that utilize waste heat, it is assumed that, because of the colocation of the algae production facility near a power plant, there is sufficient heat to dry the algae.

### 2.4. Water Consumption

The consumption of water for the production of biofuels has recently been identified as a significant limitation to the development of an expanded biofuel economy. Water consumption occurs almost entirely in the feedstock production step for most biofuels. The average U.S. production of biodiesel from soybeans requires 6,500 liters of water for evapotranspiration per liter of biodiesel produced [38]. Water consumption for algae biodiesel was calculated by a mass balance. The total water flow rate through the bioreactor is the sum of the freshwater, the water included in the algae recycle stream (35% recycle), and the water recovered through the centrifuge dewatering process that increases the algae concentration from 0.34% to 30%. With this mass balance, 848 m³ of make-up water are required annually, or approximately 4 L of water per L of biodiesel for the feedstock production stage. This represents approximately 99% water recovery and reuse. In the transesterification and biodiesel cleaning processes, 1–3 L of water are required per L of biodiesel produced [39].
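The roughly 99% recovery figure can be reproduced from the two solids concentrations alone. A minimal sketch, assuming the harvest and cake concentrations quoted above are mass fractions:

```python
def water_recovery(c_harvest=0.0034, c_cake=0.30):
    """Fraction of broth water returned to the reactor when the solids
    content rises from c_harvest (~0.34%) to c_cake (~30%) in the
    centrifuge; both are mass fractions."""
    water_in = (1.0 - c_harvest) / c_harvest   # kg water per kg algae in broth
    water_out = (1.0 - c_cake) / c_cake        # kg water per kg algae in cake
    return (water_in - water_out) / water_in

print(f"water recovered and reused: {water_recovery():.1%}")  # ~99%
```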
### 2.5. Fertilizer Consumption

The microalgae culture media act as the primary source of nutrients and carbon dioxide and as a means of expelling excess oxygen. The minimum amount of nutrients consumed was defined based on the molecular formula of algae, $\mathrm{CO_{0.48}H_{1.83}N_{0.11}P_{0.01}}$ [40]. N and P account for 6.5% and 1.3% of the algae mass, respectively. Assuming that the maximum possible biomass concentration of algae cells in a tubular PBR is 4 kg/m³ [13, 22], the N and P consumed from the algae media would be 0.26 kg N/m³ and 0.052 kg P/m³. Excess fertilizer that passes through the bioreactor as part of the broth is assumed to be recovered in the centrifuge dewatering step for reuse. Since nearly all of the water is recycled, it is assumed that nearly all of the nutrients that are not consumed are also recycled.
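The N and P demands follow directly from the elemental formula. A short sketch using standard atomic masses (the 4 kg/m³ maximum concentration is from the text) reproduces the quoted fractions:

```python
# Nutrient demand from the algal elemental formula CO0.48 H1.83 N0.11 P0.01.
ATOMIC = {"C": 12.011, "O": 15.999, "H": 1.008, "N": 14.007, "P": 30.974}
FORMULA = {"C": 1.0, "O": 0.48, "H": 1.83, "N": 0.11, "P": 0.01}

molar_mass = sum(ATOMIC[el] * n for el, n in FORMULA.items())

def mass_fraction(element):
    """Mass fraction of one element in the algal biomass."""
    return ATOMIC[element] * FORMULA[element] / molar_mass

c_max = 4.0  # kg/m^3, maximum biomass concentration in a tubular PBR
print(f"N: {mass_fraction('N'):.1%} -> {mass_fraction('N') * c_max:.2f} kg N/m^3")
print(f"P: {mass_fraction('P'):.1%} -> {mass_fraction('P') * c_max:.3f} kg P/m^3")
```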
### 2.6. Utilizing GREET for Life Cycle Analysis

The GREET model was modified and used to calculate the energy use and emissions generated by the algae production, oil extraction, and transesterification stages of biodiesel production, as well as by the upstream chemical and energy production processes. For a given fuel system, GREET evaluates natural gas, coal, and petroleum use as well as the emissions of carbon dioxide equivalent greenhouse gases, volatile organic compounds, carbon monoxide, nitrogen oxides, particulates, and sulfur oxides from all lifecycle stages [14]. The GREET results are presented as primary energy consumed and emissions per million BTU of fuel produced. The lower heating value of the BD was used to convert to the functional unit used here (liters of BD produced).

The GREET model is written in an MS Excel workbook and includes soy biodiesel production energy consumption and emissions pathways. A new spreadsheet page based on the soy biodiesel calculations was added to the GREET workbook and adapted for algae BD production. Default parameters for transesterification were used directly, but other input parameters, including the energy consumption for the various processes, biomass yield, nutrient requirements, and carbon dioxide consumed, were modified for algae biodiesel production based on the mass and energy flows presented above. The mix of electricity generation within New York State was used to define the primary energy consumed to generate electricity [41].

The extraction of oil from algae was assumed to be carried out by hexane oil extraction. The procedure is similar to soybean oil extraction, although significantly less hexane is required to recover oil from algae (0.030 kg of hexane/kg of dry algae) [11] than from soybeans (1.2 kg of hexane/kg of dried and flaked soybeans) [18]. During this process, algae meal is produced as a coproduct that can be used as an animal feed in the same manner that soy meal is used as a coproduct of soy biodiesel. GREET uses the displacement method to determine how much of the biomass production and extraction steps can be defined as a credit for the biodiesel due to the production of a coproduct. The protein content of soy meal is 48% [42], as compared to 28% in algae meal [13] and 40% in soybeans [42]. Thus, 1 kg of algae meal displaces about 0.7 kg of soybeans, whereas 1 kg of soy meal displaces about 1.2 kg of soybeans as animal feed. The credits for not having to produce 0.7 kg of soybeans for every kg of algae meal produced are subtracted from the total energy use and emissions associated with the algae production, the oil extraction, and their associated upstream processes.

An additional credit was attributed to the algae to represent the carbon dioxide sequestered from the power plant flue gas. The algae cell elemental composition was used to estimate the mass of carbon dioxide consumed by the algae growth within the PBR (0.51 kg of CO2 consumed/kg of algae grown).
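A minimal sketch of this displacement bookkeeping, using the protein contents quoted above; the per-kilogram energy figure for soybean production is a hypothetical placeholder, since the actual value lives inside GREET.

```python
# Displacement-method coproduct credit (Section 2.6). Protein contents
# are from the text; the soybean production energy is a placeholder.
PROTEIN = {"soy_meal": 0.48, "algae_meal": 0.28, "soy_bean": 0.40}
CO2_PER_KG_ALGAE = 0.51  # kg CO2 consumed per kg algae grown (from the text)

def displaced_soybeans(kg_meal, meal="algae_meal"):
    """kg of soybeans displaced as animal feed per kg of meal coproduct,
    scaling by relative protein content."""
    return kg_meal * PROTEIN[meal] / PROTEIN["soy_bean"]

def credits(kg_algae_meal, kg_algae, energy_per_kg_soybean=8.0):
    """Energy credit (MJ) and CO2 credit (kg) subtracted from the algae
    pathway; 8 MJ/kg soybean is illustrative, not from the paper."""
    energy_credit = displaced_soybeans(kg_algae_meal) * energy_per_kg_soybean
    co2_credit = kg_algae * CO2_PER_KG_ALGAE
    return energy_credit, co2_credit

print(displaced_soybeans(1.0))              # ~0.7 kg soybeans per kg algae meal
print(displaced_soybeans(1.0, "soy_meal"))  # ~1.2 kg soybeans per kg soy meal
```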
## 3. Results

### 3.1. Biomass Production

Biomass output is an important factor in the life cycle energy analysis of microalgae biodiesel production. When natural lighting is used to minimize the electricity consumed for artificial lighting, algae production rises steadily between the months of February and April (Figure 5). Biomass production is highest from May to July and is followed by a gradual decline from August to October. Production is lowest in the winter months due to low natural irradiance. The uncertainty bars included represent 95% confidence intervals from the Monte Carlo simulation outputs.

Figure 5 Algae biomass production for Syracuse and Albany, NY, with natural lighting supplemented by artificial lighting for continuous algae production.

The annual biomass productivity in Albany is about 12% greater than that in Syracuse (Table 4). These cities are at very similar latitudes, but the actual irradiance in Albany is higher due to less cloud cover. Biomass and subsequent biodiesel production in the windowless (artificial lighting only) scenario is much higher than in the greenhouse cases because illumination is maintained throughout the year at the highest level achieved naturally (noon in the month of July).

Table 4 Comparison of different locations and scenarios by biodiesel production.

| Location | Scenario | Biomass produced (tonnes/year) | Biodiesel produced (L m⁻² y⁻¹) |
|---|---|---|---|
| Syracuse, NY | Greenhouse base case | 202 | 19 |
| Syracuse, NY | Greenhouse w/waste heat | 202 | 19 |
| Albany, NY | Windowless cultivation | 263 | 25 |
| Albany, NY | Greenhouse base case | 225 | 21 |
| Albany, NY | Greenhouse w/waste heat | 225 | 21 |

### 3.2. Energy Consumption for Microalgae Biodiesel Production
The energy consumed for biodiesel production was estimated by modeling the individual processes in the algae cultivation stage. The energy required for the transesterification process is accounted for directly by the GREET 1.8a model. The energy required for feedstock production through the drying process is illustrated in Figure 6; this does not include the oil extraction and transesterification processes. Three variables can be assessed with this graph: location (Syracuse versus Albany), use of natural lighting versus solely artificial lighting, and algae versus soybean production.

Figure 6 Energy consumption for microalgae and soybean feedstock production. The error bars represent 95% confidence intervals on the total energy consumption for feedstock production.

Heating needs consume well over half of the total energy required for algae growth, dewatering, and drying. When no waste heat is available, dewatering and steam drying account for the greatest fraction, about 28–32% of the energy required for feedstock production. With the availability of waste heat, this component is reduced to about 13% of the total, which represents the electricity required for centrifugation. Greenhouse heating consumes a similar proportion of the total energy for algae production, about 25–30%. Water heating for cultivation consumes about 7–12% of the energy for feedstock production. Both locations have similar water heating requirements because the groundwater temperature is assumed to be equal in both cases.

When natural lighting is utilized to the extent possible, artificial lighting consumes about a quarter of the total energy required for algae cultivation. In the windowless cultivation case, where no natural light is available, the artificial lighting demand is almost doubled. However, the total energy requirement in this scenario is still lower (by about 35%) than in the scenarios requiring natural gas to heat a greenhouse.

Among the design choices and trade-offs considered here, the growth and drying of algae with the utilization of waste heat is the only scenario that is substantially better than growing soybeans from the perspective of process energy consumed. These results clearly show the value of colocating an algae facility near a source of waste heat. Overall, microalgae cultivation in Albany, NY, consumes about 18–21% less energy than in Syracuse, NY, because the greenhouse heating energy requirements are lower and the higher natural lighting intensity yields about 12% more biomass.

Figure 7 illustrates the total lifecycle energy, which now also includes biodiesel production and the credits for CO2 consumption and for the algae/soy meal produced during the oil extraction phase. For most cases, the energy required for feedstock production is similar to the energy required for oil extraction and transesterification. Thus, the savings associated with the utilization of waste heat in the greenhouse also represent significant savings when the entire lifecycle energy consumption is considered. Greenhouse algae cultivation with waste heat in Albany consumes the least energy on a life cycle basis; however, its total energy consumption is very similar to that of the corresponding Syracuse case.

Figure 7 Total life cycle energy consumption by life cycle stage. The error bars represent 95% confidence intervals on the total lifecycle energy consumed.

The importance of the coproduct and carbon dioxide consumption credits is apparent from the data presented in Figure 7.
Soy meal credits are higher than algae meal credits because of the higher protein content and the higher mass of soy meal produced per liter of biodiesel (1 kg of algae meal displaces about 0.7 kg of soybeans, whereas 1 kg of soy meal displaces about 1.2 kg of soybeans as animal feed). Adding the higher credits for the soybean BD case to the energy required for production reduces the net energy for this case to a level below that of the well-insulated, windowless algae production scenario. The greenhouse scenarios utilizing waste heat are still the best option for minimizing the consumption of energy that has value for other uses.

Natural gas accounts for 65–80% of the total energy consumed on a life cycle basis for algae biodiesel production when waste heat is not available (data not shown). The high consumption of natural gas can be attributed to the heating processes, the high fraction of natural gas in the NY electricity mix (about 22%), and upstream consumption for process fuel and fertilizer production. In contrast, soy biodiesel requires substantially more petroleum (about 5x) than microalgae biodiesel, due to the extensive use of tractors and feedstock transportation when BD is made from soybeans. Thus, algae as a BD feedstock has a significant benefit over soybeans in terms of reducing dependence on imported oil. Algae biodiesel production requires a significant amount of electricity, and thus coal accounts for about 6–19% of the total life cycle energy consumption. Insulated cultivation has the highest coal consumption, about 19% of the total life cycle energy consumption, because of the increased artificial lighting and electricity consumption. In comparison, for the greenhouse with waste heat case, only 7% of the total lifecycle energy is derived from coal.

The processing of soybeans to prepare for oil extraction also requires some heating to dry the beans. Arguably, waste heat could be considered to reduce the fossil fuel consumption for soybean biodiesel too. However, whereas the algae feedstock can be grown at the same location where waste heat is available, soybeans require a much more dispersed geographic region. Soybeans are typically transported 75 miles or less to a soybean crushing facility. Thus, the probability that soybean production and crushing facilities can be colocated with a waste heat source is significantly lower than for algae. If this could be achieved, the lifecycle energy consumption for feedstock production (green bar for soybean BD, Figure 7) would be lower.

### 3.3. Global Warming Potential

Global warming potential describes the impact of adding units of greenhouse gases to the atmosphere. The global warming potential for the different scenarios and gases is estimated in terms of carbon dioxide equivalents (Figure 8). All algae scenarios are allocated the same CO2 credits because the carbon dioxide consumed per unit of algae produced is constant.

Figure 8 Global warming potential of microalgae biodiesel; mass emissions normalized by dividing by the corresponding emissions for soy biodiesel for comparison.

Most CO2 emissions for algae biodiesel originate from the upstream use of energy for heating, transportation fuel use, and coal combustion for electricity. The extraction and utilization of natural gas for heating, electricity generation, and fertilizer production are accompanied by high methane emissions; natural gas extraction has a very high methane emission factor.
Overall, carbon dioxide emissions are relatively low and methane emissions relatively high because natural gas use dominates over petroleum and coal, and natural gas utilization has a much lower carbon dioxide emission factor than coal. In cold climates, producing algae biodiesel with waste heat rather than natural gas is the only approach considered here that reduces greenhouse gas emissions relative to soy biodiesel.

### 3.4. Other Air Emissions

The exposure of humans to air pollutants is associated with increased mortality and reduced life expectancy [43]. Figure 9 presents the lifecycle air emissions for algae biodiesel production normalized to the corresponding air emissions estimated by GREET for soybean biodiesel. The microalgae biodiesel air emissions follow a trend similar to the total life cycle energy consumption. The high NOx emissions can be traced to the high emission factors of equipment used to produce natural gas and to the flaring of natural gas in refineries. The increased use of artificial lighting for the cultivation of algae in a windowless, well-insulated facility results in high particulate emissions, particularly in comparison to cases where natural lighting is used. These PM emissions originate mainly from coal and residual oil combustion for electricity production.

Figure 9 Toxic air emissions from microalgae biodiesel production: mass emissions normalized by dividing by the corresponding emissions for soy biodiesel for comparison.

VOC emissions from microalgae biodiesel production are much lower than for soy biodiesel because of the low utilization of petroleum and hexane. The VOC emission factors for transportation fuels such as gasoline are far greater than for any other source. Since the algae are produced locally for biodiesel, transportation is minimal, and only a minimal amount of hexane is required for extraction compared with soybeans, so the VOC emissions from algae biodiesel are much lower than those from soy biodiesel.

Overall, the most important source of air emissions for microalgae is the upstream emissions associated with fuel and electricity generation. Yet these emissions are still relatively low compared to soy biodiesel. The primary factor contributing to this apparent anomaly is the comparison of algae biodiesel produced in New York State to soy biodiesel produced nationally. NY State has a high percentage of hydroelectric (17%) and nuclear (29%) power production and relatively little electricity generated from coal (15%) [41]. This difference in upstream electricity generation has significant repercussions throughout the lifecycle emission estimates for any electricity-intensive manufacturing system: manufacturing in New York State benefits from relatively clean energy resources.

The acidification of soils and water bodies occurs mainly through the transformation of gaseous pollutants (SOx, NOx) into acids. The acidification potential of the different cases is estimated in SO2 equivalents. All microalgae biodiesel cases are better than soy biodiesel in terms of acidification emissions, and the total SO2 equivalents follow a trend that resembles the total energy usage.

### 3.5. Summary of Results

A summary of the lifecycle sustainability assessment metrics for the various algae biodiesel production scenarios and for soy biodiesel production is presented in Table 5.
The most sustainable biodiesel production in all cases requires colocating the algae and BD production facility in the vicinity of a source of waste heat. "Free" heat greatly reduces fossil fuel consumption and the associated greenhouse gas and other air pollutant emissions. At a similar latitude, choosing a location that maximizes sunlight helps somewhat to increase the algae production rate and, therefore, to reduce the impacts when the results are compared on a per-liter-of-BD basis. These effects are small, however, compared to the benefits of utilizing waste heat. Similarly, a well-insulated facility can reduce heating needs, but the increased electricity use for artificial lighting offsets much of the benefit of the reduced heating fuel. In most regions of the U.S., where a higher fraction of the electricity mix is generated from fossil fuels, the well-insulated, windowless scenario would be worse in terms of most sustainability metrics because of its increased dependence on fossil fuels.

Table 5 Summary of average sustainability metrics to compare algae and soy BD production.

| Environmental impact | Greenhouse nat. gas, Syracuse | Greenhouse nat. gas, Albany | Greenhouse w/waste heat, Syracuse | Greenhouse w/waste heat, Albany | Insulated, no nat. light | Soy biodiesel production |
| --- | --- | --- | --- | --- | --- | --- |
| Total life cycle energy consumption* (MJ/L of BD) | 23 | 21 | 16 | 15 | 22 | 21 |
| Land utilization (m2/L of BD/yr) | 0.053 | 0.048 | 0.053 | 0.048 | 0.040 | 22.2 |
| Water consumption (L water/L BD) | 5–7 | 5–7 | 5–7 | 5–7 | 4–6 | 6,500 |
| Greenhouse gas emissions (g CO2 equiv/L of BD) | 1350 | 1150 | 740 | 630 | 910 | 925 |
| Acidification potential (g SO2 eq./L of BD) | 4.9 | 4.6 | 2.8 | 2.5 | 3.4 | 4.0 |
| Toxic emissions (g/L of BD): PM10 | 5.1 | 4.6 | 2.6 | 2.3 | 5.7 | 5.3 |
| PM2.5 | 1.8 | 1.6 | 0.7 | 0.6 | 1.8 | 2.7 |
| VOC | 0.22 | 0.20 | 0.06 | 0.05 | 0.09 | 3.4 |
| CO | 2.4 | 2.1 | 0.6 | 0.5 | 1.0 | 2.8 |

*Does not include credits.
## 4. Conclusions

Cultivation of microalgae in NY State is an energy-intensive process owing to the temperature control and steam drying processes, so colocating microalgae cultivation with a power plant is highly desirable. Year-round production of microalgae requires the utilization of waste heat for steam drying, water heating, and greenhouse heating in order to be substantially better than soy biodiesel in terms of energy consumption and emissions. When waste heat is utilized, microalgae biodiesel production consumes less energy than soy biodiesel.

Microalgae biodiesel consumes less than one-third of the petroleum required for soy biodiesel and only a small fraction of the water. The feasibility of microalgae biodiesel production at a given location depends greatly on the availability of waste heat and on natural lighting conditions. The availability of either one or both makes the algae biodiesel production process cleaner in terms of air emissions and much less energy intensive than soy biodiesel. If both natural lighting and waste heat are absent, however, algae biodiesel production consumes more energy than soy biodiesel production and emits equal or greater amounts of toxic air pollutants.

The coproducts of algae biodiesel production have lower protein content than soy meal and are, therefore, less valuable. The higher-value coproduct gives soy biodiesel a larger displacement credit, which brings the net energy consumption and emissions of the two feedstocks close together.

Most microalgae biodiesel production scenarios have lower or very similar emissions compared to soy biodiesel. Greenhouse gas emissions for algae biodiesel are generally higher than for soy biodiesel except when waste heat is utilized, in which case they are comparable to or lower than those of soy biodiesel. The emission of volatile organic compounds is much higher for soy biodiesel than for algae biodiesel.
Emissions from microalgae production originate mainly from upstream fossil fuel energy consumption. Reducing the energy demands of unit processes such as greenhouse heating, lighting, and other systems will therefore have significant benefits.

---
# Sustainable Algae Biodiesel Production in Cold Climates

**Authors:** Rudras Baliga; Susan E. Powers
**Journal:** International Journal of Chemical Engineering (2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/102179
---

## Abstract

This life cycle assessment aims to determine the most suitable operating conditions for algae biodiesel production in cold climates to minimize energy consumption and environmental impacts. Two hypothetical photobioreactor algae production and biodiesel plants located in Upstate New York (USA) are modeled. The photobioreactor is assumed to be housed within a greenhouse that is located adjacent to a fossil fuel or biomass power plant that can supply waste heat and flue gas containing CO2 as a primary source of carbon. Model results show that the biodiesel areal productivity is high (19 to 25 L of BD/m2/yr). The total life cycle energy consumption was between 15 and 23 MJ/L of algae BD and 20 MJ/L of soy BD. Energy consumption and air emissions for algae biodiesel are substantially lower than for soy biodiesel when waste heat is utilized. Algae's most substantial contribution is a significant decrease in the petroleum consumed to make the fuel.

---

## Body

## 1. Introduction

In 1998, an amendment to the U.S. Energy Policy Act (EP Act) of 1992 triggered the rapid expansion of the US biodiesel industry. This act required that a fraction of new vehicles purchased by federal and state governments be alternative fuel vehicles. The U.S. Energy Independence and Security Act (EISA) of 2007 further mandated that the production of renewable fuels, including biodiesel, reach 36 billion gallons (136 billion liters) per year by 2022. Crops such as soybeans and canola account for more than three quarters of all biodiesel feedstocks in the U.S. [1].

About 14% of U.S. soybean production and 4% of global soybean production were used by the U.S. biodiesel industry to produce fuel in 2007 [1]. The use of oil crops for fuel has been criticized because the expansion of biodiesel production in the United States and Europe has coincided with a sharp increase in prices for food grains and vegetable oils [2]. The production of biodiesel from feedstocks that do not use arable land can be accomplished either by using biomass that is currently treated as waste or by introducing a new technology that allows for the development of new biodiesel feedstocks that utilize land unsuitable for food production.

Microalgae have the potential to displace other feedstocks for biodiesel owing to their high vegetable oil content and biomass production rates [3]. The vegetable oil content of algae varies with growing conditions and species but has been known to exceed 70% of the dry weight of algae biomass [4]. Microalgae could have significant social and environmental benefits because they do not compete for arable land with food crops, and microalgae cultivation consumes less water than other crops [5]. Algae also grow in saline waters that are unsuitable for agricultural practices or consumption. This makes algae well suited for areas where cultivation of other crops is difficult [6, 7]. High biomass productivities may be achieved with indoor or outdoor photobioreactors (PBRs) [8]. In cold climates, PBRs have been used successfully when housed within greenhouses and provided with artificial lighting.

Microalgae biodiesel has received much attention in the news media, and considerable progress has been made in the field of algae biomorphology [9–11]. In recent decades, however, little quantitative research has been done on the energy and environmental impacts of microalgae biodiesel production on a life cycle basis. The life cycle concept is a cradle-to-grave systems approach for the study of feedstocks, production, and use.
The objective of this work was to assess the feasibility of algae biodiesel production in New York State (USA) based on life cycle energy and environmental impact parameters. Upstate NY was chosen as a challenging case for algae biodiesel production due to its shorter days and cold temperatures during winter months. The productivity, energy consumption, and environmental emissions associated with the algae/BD production lifecycle were quantified in order to identify the best growing conditions and to assess the impacts relative to soybean biodiesel.

## 2. Methodology

### 2.1. System Boundary and Scope

The life cycle concept is a cradle-to-grave systems approach for the study of feedstocks, production, and use. The concept revolves around the recognition of the different stages of production, starting from upstream use of energy to cultivation of the feedstock, followed by the different processing stages. A life cycle inventory assessment allows for the quantification of mass and energy streams such as energy consumption, material usage, waste production, and generation of coproducts. A summary of the sustainability assessment metrics used for this life cycle inventory of microalgae feedstock for biodiesel production is presented in Table 1.

Table 1 Life cycle sustainability metrics for biodiesel.

| Environmental impact | Sustainability metrics |
| --- | --- |
| Energy and resource consumption | Total energy consumed (MJ/L BD); fossil fuel energy consumed (MJ/L BD); petroleum consumed (MJ/L BD); land required (m2/L of BD); water required (L water/L BD) |
| Climate change | Net greenhouse gas emissions (g CO2 equivalents/L of BD) |
| Acidification | Acidification potential (g SO2 eq./L BD) |
| Toxic emissions | Particulate matter (PM10, PM2.5), carbon monoxide, and volatile organic carbon emissions |

Figure 1 provides an overview of the system boundary used in this analysis, which includes the production of algae and biodiesel via a transesterification reaction. The boundary includes all upstream mass and energy flows that are required to make the chemical and energy resources required for the processing. The production of biodiesel from algae and the direct energy consumption are characterized by four distinct stages: cultivation, dewatering/drying, oil extraction, and transesterification (Figure 1). The energy consumed and subsequent emissions for fuel production, electricity generation, and chemical production comprise the upstream energy consumption and emissions. Biodiesel and algae meal are the products leaving the system boundary. The use of these products is not directly included within the analysis.

Figure 1 Flowchart depicting the system boundary for the life cycle inventory of biodiesel from microalgae.

The hypothetical algae and biodiesel production facilities considered are located in upstate New York. The facilities are assumed to be adjacent to a biomass or fossil fuel electricity generation plant for access to the carbon dioxide in its flue gas and to waste heat, in order to maximize the utilization of waste resources within this system. Waste heat is considered to have no value as an energy product, so it is not counted as part of the total energy resources consumed by the facility.

Two different locations were considered for the microalgae biodiesel facility: Syracuse, NY (43°2′ N, 76°8′ W) and Albany, NY (42°7′ N, 73°8′ W).
Although these locations are at approximately the same latitude and have very similar hours of daylight, the Syracuse area is colder and cloudier throughout the year due to its proximity to the Great Lakes. Albany offers more intense natural lighting and less severe winter temperatures (Figure 2). Three specific cases were considered for each of these locations:

(i) a greenhouse structure to maximize natural lighting, with natural gas used to maintain the system temperature;
(ii) a greenhouse structure to maximize natural lighting, with waste heat used to maintain the system temperature;
(iii) a well-insulated facility that allows for no natural lighting but requires substantially less heat.

Figure 2 Monthly average temperature (a) and total monthly solar irradiance (b) for Syracuse, NY, and Albany, NY.

The PBRs are assumed to operate continuously, using artificial lighting when natural lighting is not sufficient. In all cases, it was assumed that Phaeodactylum tricornutum algae would be grown for biodiesel production. This algae species has a relatively high oil content (about 30% by dry weight), is resistant to contamination, and has previously been utilized to produce biodiesel [12, 13].

Estimating the environmental and energy lifecycle impacts requires quantification of the mass and energy flows through this system. A mathematical model for the algae production process was developed in the work presented here. As shown in Figure 3, the mass and energy flows estimated with the algae production model were used in conjunction with the Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model 1.8a developed at Argonne National Laboratory [14]. GREET provided the general framework and structure for the lifecycle inventory, especially aspects of the transesterification process and the energy and emissions related to the upstream production of chemicals and energy resources. BD production from soybeans, which is used here as a benchmark for comparison, was taken directly from the GREET model. GREET is a widely accepted model, and many studies and analyses have been based upon it because of its vast data on energy sources and the associated emissions (e.g., [15–17]). The default values for soybean production, oil extraction, and transesterification were taken as GREET default values [14], which are representative of the Midwestern region of the United States where most soybeans are grown. These were based initially on an LCA completed at the National Renewable Energy Laboratory [18] and updated to keep the GREET model as current as possible (e.g., [17]). There is only a small production of soybeans in NY State, with yields well below the average yield in the Midwest. Thus, no attempt was made to match the geographic system boundaries for biodiesel from algae to those of soybeans.

Figure 3 Overview of the microalgae biomass LCA model. The yellow boxes represent the contributions of the work presented here. The soybean LCA results were taken almost entirely from GREET.

Uncertainty in the data was addressed by utilizing Monte Carlo simulations with ranges of values for the input parameters. For a given assumption or variable with a distribution as input, the commercially available software Crystal Ball was utilized to determine a forecast, or range of possible outputs. Error bars were created utilizing the mean value of the forecast and 95% certainty intervals.
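This kind of uncertainty propagation can be reproduced with open tools. The sketch below is a stand-in for the Crystal Ball workflow under assumed inputs: the distributions and the toy energy response are illustrative placeholders, not the paper's calibrated model.

```python
# Minimal Monte Carlo sketch, using NumPy in place of the commercial
# Crystal Ball add-in. All distributions below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)
N = 10_000  # number of Monte Carlo trials

# Hypothetical normally distributed inputs (the paper uses, e.g., normal
# distributions for monthly outdoor temperatures)
oil_frac = rng.normal(0.30, 0.02, N)   # algae oil content, dry-weight basis
heat_mj = rng.normal(12.0, 1.5, N)     # heating energy per L BD (assumed)
elec_mj = rng.normal(6.0, 1.0, N)      # electricity per L BD (assumed)

# Toy response: total feedstock energy per liter of biodiesel, scaled so
# that lower oil content means more algae must be grown per liter of BD
total = (heat_mj + elec_mj) * (0.30 / oil_frac)

lo, hi = np.percentile(total, [2.5, 97.5])  # 95% certainty interval
print(f"forecast mean = {total.mean():.1f} MJ/L, 95% interval = [{lo:.1f}, {hi:.1f}]")
```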
### 2.2. Algae Production Models

The biomass production model utilizes solar data and a biological growth rate to estimate actual yields of algae biomass for a photobioreactor system [13, 19].

#### 2.2.1. Microalgae Plant Setup

Hypothetical tubular closed photobioreactors (PBRs) were modeled to predict algae production and to account for energy consumption and emissions in Syracuse and Albany, NY. The PBR plant setup is illustrated in Figure 4. It was assumed that processes such as dewatering and transesterification could be carried out on site, thus eliminating the need for transportation.

Figure 4 Photobioreactor system layout (not to scale).

The various dimensions and parameters for the PBR were taken from the recommendations of previous studies in order to depict a realistic setup [12, 13]. The PBR setup was designed for a maximum detention time of 30 hours. The maximum effluent concentration (Ce) was fixed at 3.4 kg/m3 with a recycle ratio of 0.35 [13, 20] and an allowable superficial fluid velocity of 0.3 m/s [21]. Since a long tube is required to meet these constraints (32,400 m), the PBR is split into six units, each of which is 61 m3 (5,400 m long, 0.12 m diameter). Stacking of tubes reduces the total footprint area of the greenhouse. All tubes are connected, and the algae broth passes through all six units.

The floor area, or footprint, of the greenhouse was determined from the volume of the reactor, the type of cultivation (annual/seasonal operation), and the specific processes. The diameter of the tubes was set at 0.12 m for all cases since it is a widely reported size for PBRs [3, 13, 22, 23]. The spacing of the tubes was set at 0.3 m. This is an important factor since it defines the total floor size, which in turn influences heating and lighting requirements. The various parameters related to the plant setup are summarized in Table 2.

Table 2 Summary of photobioreactor and greenhouse parameters.

| Parameter | Value | Depends upon | Reference |
| --- | --- | --- | --- |
| Diameter of tubes | 0.12 m | Larger diameter pipes can cause cell shading | [3] |
| Spacing of tubes | 0.3 m | Greater spacing is desirable to avoid shading | [13] |
| Flow rate (Q) | 3.4×10-3 m3/s | Algae species and tube diameter | [21] |
| Recycle ratio (r) | 0.35 | | [20] |
| Total volume of reactor (V) | 366 m3 | Maximum residence time (30 hrs) | [12] |
| Influent concentration (Ci) | 1.2 kg/m3 | | |
| Max. effluent concentration (Ce) | 3.4 kg/m3 | Growth rate, PBR setup | [13] |

#### 2.2.2. Estimating Biomass Output

Microalgae productivity is estimated from the location, the reactor specifications, and microalgae data. It is assumed that CO2 and nutrients are provided in excess to the microalgae culture through the media, thereby making light the only limiting factor for cell growth and decay [12]. If adequate lighting is available, the specific growth rate μ is determined from the average available irradiance Iavg (μE/m2-s) [19]:

$$\mu = \mu_{\max}\,\frac{I_{avg}^{\,n}}{K_I^{\,n} + I_{avg}^{\,n}}, \tag{1}$$

where K_I is the half-saturation constant (i.e., the Iavg for which half of μmax is attained) and the exponent n is a unitless empirical constant. Both K_I and n are constant for a given species of algae. Note that the decay of algae cells during the hours with light is incorporated into the maximum specific growth rate μmax (h-1), since the values provided by Molina Grima et al. [22] and Fernandez et al. [23] were determined from the net growth rate. Iavg is determined from the Beer-Lambert equation:

$$I_{avg} = \frac{I}{\varphi_{eq} K_a C_i}\left[1 - \exp\left(-\varphi_{eq} K_a C_i\right)\right], \tag{2}$$

where Ci (kg/m3) is the influent biomass concentration.
The path length of light within the reactor is given by φeq, the ratio of the tube diameter to the cosine of the solar zenith angle. The photosynthetically active irradiance I (μE m-2 s-1) is a function of various solar angles and the total solar irradiance. Hourly solar data were available from NREL's solar database [24], and thus algae cell growth was determined at one-hour intervals. The analysis of the solar data to estimate I and φeq is included in the appendix.

The PBR is modeled as a series of plug flow reactors in which the effluent concentration of each reactor is the influent concentration of the next. It is assumed that steady-state conditions prevail for each hour, since the irradiance is taken as constant over that time period. Utilizing a Monod reaction rate for substrate utilization [22], the resulting steady-state plug flow reactor equation for each segment can be written as

$$u\,\frac{dC}{dz} = \mu_{\max}\,\frac{I_{avg}^{\,n}}{K_I^{\,n} + I_{avg}^{\,n}}\,C, \tag{3}$$

where C is the biomass concentration and u is the fluid velocity. Integrating this expression provides the effluent concentration for each reactor segment j that represents one hour of residence time at the average irradiation rate for that hour:

$$\ln\!\left(\frac{C_{j+1}^{k}}{C_{j}^{k}}\right) = \mu_{\max}\,\frac{\bigl(I_{avg(j+1)}^{k}\bigr)^{n}}{K_I^{\,n} + \bigl(I_{avg(j+1)}^{k}\bigr)^{n}}\left(\frac{L}{u}\right), \tag{4}$$

where j and j+1 indicate the start and end locations along the reactor length of the one-hour segment. C_{j+1} is calculated in series to determine the reactor effluent concentration, Ce, for each hour k. The growth rate during that hour is defined by the average irradiance for that hour of the day. The total biomass produced per day, MBM (kg d-1), is estimated from the flow rate Q (m3 hr-1), the recycle ratio r, and the effluent concentration Ce (kg m-3):

$$M_{BM} = \sum_{k=1}^{24} Q\,C_e^{k}\,(1-r). \tag{5}$$

The total microalgae biomass produced can be determined from (2), (4), and (5) along with the algae growth parameters and solar irradiation data.

The temperature requirements of algae differ by species. In general, faster growing algae species favor higher media temperatures of about 20–30°C [10]. The algae-related constants used for P. tricornutum in the model are listed in Table 3. This species was selected because it has been used in the past to produce microalgae biodiesel and all relevant data were available [22].

Table 3 P. tricornutum growth parameters.

| Parameter | Value | Reference |
| --- | --- | --- |
| μmax | 0.063 (h-1) | [22] |
| B | 0.0018 (h-1) | [25] |
| KI | 3426 (μE m-2 s-1) | [22] |
| n | 1–1.34 | [25] |
| Ef | 1.74 ± 0.09 μE J-1 | [22] |
| Xp | 2–4% | [25] |
| Oil content (% dry weight) | 30% | [13] |
| Water content of algae | 40% | [13] |
| Cultivation temperature | 25°C | [10] |
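To make the hour-by-hour recursion in (1)–(5) concrete, the following sketch implements it under stated assumptions: the extinction coefficient Ka and the clear-day irradiance profile are illustrative placeholders (Ka is not tabulated above), and each hourly parcel is marched with that hour's irradiance rather than its full hour-by-hour light history.

```python
# Minimal sketch of the hourly biomass model, eqs. (1)-(5), using the
# P. tricornutum parameters from Table 3. KA and I_day are assumed values.
import numpy as np

MU_MAX = 0.063        # h^-1, maximum net specific growth rate [22]
K_I = 3426.0          # uE m^-2 s^-1, half-saturation irradiance [22]
N_EXP = 1.2           # Monod exponent, within the 1-1.34 range of [25]
KA = 30.0             # m^2/kg, extinction coefficient (assumed placeholder)
D = 0.12              # m, tube diameter (Table 2)
Q = 3.4e-3 * 3600.0   # m^3/h, media flow rate (Table 2)
R_RECYCLE = 0.35      # recycle ratio (Table 2)
CE_MAX = 3.4          # kg/m^3, maximum effluent concentration (Table 2)

def i_avg(I, C, cos_zenith=1.0):
    """Average internal irradiance, Beer-Lambert form of eq. (2)."""
    if I <= 0.0:
        return 0.0
    tau = (D / cos_zenith) * KA * C   # phi_eq * Ka * C
    return I / tau * (1.0 - np.exp(-tau))

def mu(Iav):
    """Monod-type specific growth rate, eq. (1)."""
    return MU_MAX * Iav**N_EXP / (K_I**N_EXP + Iav**N_EXP)

def daily_biomass(hourly_I, C_in=1.2, residence_h=30):
    """March eq. (4) through one-hour plug-flow segments, then sum eq. (5)."""
    produced = 0.0
    for I in hourly_I:                 # one effluent parcel per hour k
        C = C_in
        for _ in range(residence_h):   # one-hour segments: L/u = 1 h
            C = min(C * np.exp(mu(i_avg(I, C))), CE_MAX)
        produced += Q * C * (1.0 - R_RECYCLE)   # eq. (5) contribution, kg
    return produced

# Crude clear-day irradiance profile (uE m^-2 s^-1) peaking at 1700 at noon
hours = np.arange(24)
I_day = 1700.0 * np.clip(np.sin((hours - 6) * np.pi / 12), 0.0, None)
print(f"biomass produced: {daily_biomass(I_day):.0f} kg/day")
```

Because Iavg falls as C rises, the recursion captures self-shading: growth slows as the broth thickens along the tube, which is why the effluent concentration is capped near Ce.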
### 2.3. Energy Consumed during Microalgae Cultivation

Cultivating microalgae in closed systems is an energy-intensive process, especially in regions with low temperatures and limited natural lighting [13]. The algae growth and harvesting stages involve a large number of intermediate processes, for which estimates of the energy consumption were developed here. Energy consumption requirements for extraction and transesterification are already provided in the GREET model [14]. It was assumed that the processes for transesterification of algae oil are identical to those for soy oil. Thus, the chemical (methanol and sodium hydroxide) and energy consumption, and the energy and emissions associated with their production, were taken directly as the default parameters in GREET.

**Water Heating.** Energy requirements for a natural gas water heating system were determined using the specific heat of water and the efficiency rating of the heater as provided by the manufacturer (EF = 0.82 [26]). It was assumed that groundwater, at an initial temperature (Tinlet) of 12°C [27], would be heated to a thermostat set point (Tt) of 25°C.

**Media Circulation.** Electric pumps are used to circulate the media through the entire length of the reactor. The input electrical power required to operate the pumps, Pp (W), is given by [28]

$$P_p = \frac{3.91\times 10^{-6}\,\mu_1^{3}\,Re^{2.75}}{\eta_p\, d^{3}}\,A_a, \qquad Re = \frac{\rho u d}{\mu_1}, \tag{6}$$

where μ1 (kg m-1 s-1) is the dynamic viscosity of water, Re is the Reynolds number, d (m) is the diameter of the pipes, u (m/s) is the superficial flow velocity, ηp is the pump efficiency (ηp = 0.7), and Aa (m2) is the tube aperture area. The pumps operate continuously.

**Artificial Lighting.** Natural algae cultivation inherently revolves around the diurnal and seasonal cycles. To compensate for these cycles and to maximize the production of biomass, artificial lighting is used to allow 24-hour cultivation. Lights are turned on from dusk to dawn, with monthly averages of daylight hours used to define the time the lighting system operates each day. The power Pa (W) consumed for artificially lighting the greenhouse area is calculated as [29]

$$P_a = A'\left(\frac{I_{avg}}{L_w C_f}\right). \tag{7}$$

The intensity of the artificial lighting was set equal to the naturally available lighting in the month of July (Iavg = 1.7 μE/m2-s) over the entire greenhouse region (A′ = 3345 m2). Specifications for high-efficiency fluorescent GRO lights [29] were used to estimate the power required for artificial lighting. The light intensity of the bulbs is expressed as Lw = 220 Lu/W, and the conversion factor Cf between micromoles of photons (mE) and lux is 0.29.

**CO2 Purification.** Carbon dioxide acts as the only source of carbon for the biomass. Flue gases from power plants provide an inexhaustible source of CO2. However, flue gases also contain varying levels of other gases, such as SOx and NOx, which are detrimental to a microalgae culture beyond certain concentrations [30]. The monoethanolamine (MEA) absorption process can be used to separate pure CO2 from flue gas for microalgae production. Kadam [31] determined that if about 18% of the total carbon dioxide consumed is taken directly from flue gases and the rest is purified through the MEA process, the toxic flue gas constituents will be at sufficiently low concentrations for algae growth. Molina Grima [19] determined that, in order to make light the only limiting factor, CO2 must be provided in excess, with a ratio of aqueous CO2 concentration (kg/m3) to influent biomass concentration Ci (kg/m3) of 0.63. Since growth rates for this system are lower than those in Molina Grima's study [19] due to reduced sunlight, this CO2 ratio represents a conservative estimate. The mass of carbon dioxide required was estimated from this ratio, the media flow rate, and the influent biomass concentration. Although carbon dioxide has a high solubility in water, and it is likely that all CO2 in the gas bubbled through the reactor would dissolve over the length of the reactor, a factor of safety of 2 was applied as an overestimate of the mass of CO2 that would be required. The MEA CO2 extraction process has been modeled and studied previously in the context of algae production. Kadam [31, 32] reports that the process to extract CO2 from flue gas and recover the MEA for reuse consumes 32.65 kWh per ton of CO2 for algae cultivation. Details are not provided in these references to quantify which steps in the MEA process consume the most electrical energy.
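A compact numerical reading of these direct loads may help. The sketch below evaluates the water heating calculation, eq. (6), eq. (7), and the MEA electricity factor; the water properties at 25°C are standard assumptions not stated in the text.

```python
# Minimal sketch of the direct utility loads in Section 2.3. Water density
# and viscosity are assumed textbook values at ~25 C.
RHO = 997.0       # kg/m^3, density of water (assumed)
MU1 = 0.89e-3     # kg/(m s), dynamic viscosity of water (assumed)
CP_W = 4186.0     # J/(kg K), specific heat of water
D, U = 0.12, 0.3  # m, tube diameter; m/s, superficial velocity (Table 2)
ETA_P = 0.7       # pump efficiency

def water_heating_gas(vol_m3, T_in=12.0, T_set=25.0, ef=0.82):
    """Natural gas energy (J) to heat groundwater to the 25 C set point."""
    return vol_m3 * RHO * CP_W * (T_set - T_in) / ef

def pump_power(A_a):
    """Electrical pump power (W), eq. (6); A_a is tube aperture area (m^2)."""
    Re = RHO * U * D / MU1
    return 3.91e-6 * MU1**3 * Re**2.75 / (ETA_P * D**3) * A_a

def lighting_power(area=3345.0, I_avg=1.7, Lw=220.0, Cf=0.29):
    """Artificial lighting power (W), eq. (7), with the GRO-light specs."""
    return area * I_avg / (Lw * Cf)

def mea_electricity_kwh(tonnes_co2):
    """MEA CO2-recovery electricity: 32.65 kWh per ton of CO2 [31, 32]."""
    return 32.65 * tonnes_co2

print(f"heating 10 m^3 of water: {water_heating_gas(10.0)/1e6:.0f} MJ of gas")
print(f"pump power per 100 m^2 of aperture: {pump_power(100.0):.0f} W")
print(f"artificial lighting: {lighting_power():.0f} W")
print(f"MEA electricity for 1 t CO2: {mea_electricity_kwh(1.0):.1f} kWh")
```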
**Greenhouse Heating.** Temperature control within the greenhouse is essential for algae cultivation in cold weather. The energy consumed for greenhouse heating depends upon the total exposed surface area, the insulation material, and the temperatures inside and outside the greenhouse. For a given greenhouse with surface area Ag (m2), the heat loss per second QL (J/s) is given by [33]

$$Q_L = 1.05\left(\frac{1}{R}\right)\left(T_{req} - T_{out}\right)A_g, \tag{8}$$

where R (1.9 m2 °C s J-1) is the R-value of the greenhouse insulating material, and Treq (25°C) and Tout (°C) are the temperatures required within the greenhouse and prevailing outside it, respectively. The greenhouse was assumed to be insulated with 10 mm twinwall polycarbonate with an R-value of 1.9; the R-value for the insulated, windowless cultivation scenario was set at 30. The outside temperature Tout is taken from monthly averages for Syracuse and Albany and is input as a normal distribution for each month [34].

**Steam Drying and Dewatering.** Algae leave the photobioreactors suspended in a dilute broth [13]. Dewatering and drying of the algae are necessary to reduce the water content to 5% [35] before the hexane oil extraction process. For algae with high vegetable oil content, continuous nozzle discharge centrifuges are suggested to provide the best reliability and consume the least energy; centrifugation consumes 3.24 MJ/m3 of effluent media [36]. After centrifugation, the algae water content is 70% (by weight). Steam is utilized to further dry the microalgae before the oil extraction process. The natural gas consumed to provide the required steam energy was calculated from the heat of vaporization of water, the mass of water that must be vaporized to reduce the water content from 0.70 to 0.05, and the efficiencies of the boiler (0.93 [37]) and dryer (0.8 [26]). In the scenarios that utilize waste heat, it is assumed that, because of the colocation of the algae production facility near a power plant, there is sufficient heat to dry the algae.
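The heating and drying arithmetic is summarized in the sketch below; the exposed surface area and the January outside temperature are placeholders, since the paper samples monthly temperatures from normal distributions [34].

```python
# Minimal sketch of the greenhouse heat-loss (eq. (8)) and steam-drying
# energy calculations. A_G and the -4 C temperature are assumed examples.
H_VAP = 2.26e6  # J/kg, heat of vaporization of water

def greenhouse_heat_loss(A_g, T_out, R=1.9, T_req=25.0):
    """Heat loss rate (J/s), eq. (8); R = 1.9 for 10 mm twinwall
    polycarbonate, R = 30 for the insulated, windowless scenario."""
    return 1.05 * (1.0 / R) * (T_req - T_out) * A_g

def drying_gas(m_dry, w0=0.70, w1=0.05, eta_boiler=0.93, eta_dryer=0.80):
    """Natural gas (J) to steam-dry algae from 70% to 5% water (wet basis)."""
    water_before = m_dry * w0 / (1.0 - w0)   # kg water at 70% water content
    water_after = m_dry * w1 / (1.0 - w1)    # kg water remaining at 5%
    return (water_before - water_after) * H_VAP / (eta_boiler * eta_dryer)

A_G = 4000.0  # m^2, assumed exposed greenhouse surface area
q_jan = greenhouse_heat_loss(A_G, T_out=-4.0)  # -4 C: placeholder January mean
print(f"January heat loss: {q_jan/1e3:.0f} kW "
      f"(insulated case: {greenhouse_heat_loss(A_G, -4.0, R=30.0)/1e3:.1f} kW)")
print(f"gas to dry 1 t of algae (dry basis): {drying_gas(1000.0)/1e9:.1f} GJ")
```

The comparison between the two R-values makes the central design trade-off visible: insulation cuts the heating load by more than an order of magnitude, at the cost of the extra lighting electricity discussed above.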
### 2.4. Water Consumption

The consumption of water for the production of biofuels has recently been identified as a significant limitation to the development of an expanded biofuel economy. Water consumption occurs almost entirely in the feedstock production step for most biofuels. The average U.S. production of biodiesel from soybeans requires 6,500 liters of water for evapotranspiration per liter of biodiesel produced [38]. Water consumption for algae biodiesel was calculated by a mass balance. The total water flow rate through the bioreactor is the sum of the freshwater, the water included in the algae recycle stream (35% recycle), and the water recovered through the centrifuge dewatering process used to increase the algae concentration from 0.34% to 30%. With this mass balance, 848 m3 of make-up water is required annually, or approximately 4 L of water per L of biodiesel for the feedstock production stage. This represents approximately 99% water recovery and reuse. In the transesterification and biodiesel cleaning processes, 1–3 L of water are required per L of biodiesel produced [39].

### 2.5. Fertilizer Consumption

The microalgae culture media acts as the primary source of nutrients and carbon dioxide and as a means of expelling excess oxygen. The minimum amount of nutrients consumed was defined from the molecular formula of algae, $CO_{0.48}H_{1.83}N_{0.11}P_{0.01}$ [40]; N and P account for 6.5% and 1.3% of the algae mass, respectively. Assuming that the maximum possible biomass concentration of algae cells in a tubular PBR is 4 kg/m3 [13, 22], the N and P consumed from the algae media would be 0.26 kg N/m3 and 0.052 kg P/m3. Excess fertilizer that passes through the bioreactor as part of the broth is assumed to be recovered in the centrifuge dewatering step for reuse. Since nearly all of the water is recycled, it is assumed that nearly all of the nutrients that are not consumed are also recycled.

### 2.6. Utilizing GREET for Life Cycle Analysis

The GREET model was modified and used to calculate the energy use and emissions generated from the algae production, oil extraction, and transesterification stages of biodiesel production, as well as the upstream chemical and energy production processes. For a given fuel system, GREET evaluates natural gas, coal, and petroleum use as well as the emissions of carbon dioxide equivalent greenhouse gases, volatile organic compounds, carbon monoxide, nitrogen oxides, particulates, and sulfur oxides from all lifecycle stages [14]. The GREET results are presented as primary energy consumed and emissions per million BTU of fuel produced; the low heating value of the BD was used to convert to the functional unit used here, liters of BD produced.

The GREET model is written in an MS Excel workbook and includes soy biodiesel production energy consumption and emissions pathways. A new spreadsheet page based on the soy biodiesel calculations was added to the GREET workbook and adapted for algae BD production. Default parameters for transesterification were used directly, but other input parameters, including the energy consumption for the various processes, biomass yield, nutrient requirements, and carbon dioxide consumption, were modified for algae biodiesel production based on the mass and energy flows presented above. The mix of electricity generation within New York State was used to define the primary energy consumed to generate electricity [41].

The extraction of oil from algae was assumed to be carried out by hexane extraction. The procedure is similar to soybean oil extraction, although significantly less hexane is required to recover oil from algae (0.030 kg of hexane/kg of dry algae) [11] than from soybeans (1.2 kg of hexane/kg of dried and flaked soybeans) [18]. During this process, algae meal is produced as a coproduct that can be used as an animal feed in the same manner that soy meal is used as a coproduct of soy biodiesel. GREET uses the displacement method to determine how much of the biomass production and extraction steps can be defined as a credit for the biodiesel due to the production of a coproduct. The protein content of soy meal is 48% [42], as compared to 28% in algae meal [13] and 40% in soybeans [42]. Thus, 1 kg of algae meal displaces about 0.7 kg of soybeans, whereas 1 kg of soy meal displaces about 1.2 kg of soybeans for animal feed. The credits for not having to produce 0.7 kg of soybeans for every kg of algae meal produced are subtracted from the total energy use and emissions associated with algae production, oil extraction, and their associated upstream processes.

An additional credit was attributed to the algae to represent the carbon dioxide sequestered from the power plant flue gas. The algae cell elemental composition was used to estimate the mass of carbon consumed by algae growth within the PBR (0.51 kg of CO2 consumed/kg of algae grown).
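As a worked check on the nutrient stoichiometry and the displacement credits, the sketch below reproduces the 0.26 kg N/m3 and 0.052 kg P/m3 demands and the 0.7 and 1.2 kg displacement ratios; the per-kilogram soybean energy burden is an assumed placeholder rather than a GREET value.

```python
# Minimal sketch of the nutrient demand (Section 2.5) and the
# displacement-method credits (Section 2.6). Protein contents, displacement
# ratios, and the CO2 uptake factor come from the text; soy_burden_mj_per_kg
# is an assumed placeholder, not a GREET value.
PROTEIN = {"soy_meal": 0.48, "algae_meal": 0.28, "soybeans": 0.40}
CO2_UPTAKE = 0.51              # kg CO2 consumed per kg of algae grown
N_FRAC, P_FRAC = 0.065, 0.013  # N and P fractions of algae biomass [40]
C_MAX = 4.0                    # kg/m^3, max biomass concentration [13, 22]

def displacement_ratio(meal):
    """kg of soybeans displaced per kg of meal, scaled by protein content."""
    return PROTEIN[meal] / PROTEIN["soybeans"]

def algae_credits(kg_meal, kg_algae, soy_burden_mj_per_kg=2.0):
    """Meal-displacement energy credit (MJ) and sequestered CO2 (kg)."""
    meal_credit = kg_meal * displacement_ratio("algae_meal") * soy_burden_mj_per_kg
    return meal_credit, kg_algae * CO2_UPTAKE

print(f"N demand: {N_FRAC * C_MAX:.2f} kg/m^3, P demand: {P_FRAC * C_MAX:.3f} kg/m^3")
print(f"algae meal displaces {displacement_ratio('algae_meal'):.1f} kg soybeans/kg")
print(f"soy meal displaces {displacement_ratio('soy_meal'):.1f} kg soybeans/kg")
print(f"credits per 1000 kg algae (700 kg meal): {algae_credits(700.0, 1000.0)}")
```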
System Boundary and Scope The life cycle concept is a cradle to grave systems approach for the study of feedstocks, production, and use. The concept revolves around the recognition of different stages of production starting from upstream use of energy to cultivation of the feedstock, followed by the different processing stages. A life cycle inventory assessment allows for the quantification of mass and energy streams such as energy consumption, material usage, waste production, and generation of coproducts. A summary of the sustainability assessment metrics used for this life cycle inventory of microalgae feedstock for biodiesel production is presented in Table1.Table 1 Life cycle sustainability metrics for biodiesel. Environmental ImpactSustainability MetricsEnergy and Resource Consumption– Total energy consumed (MJ/L BD)– Fossil fuel energy consumed (MJ/L BD)– Petroleum consumed (MJ/L BD)– Land required (m2/L of BD)– Water required (L water/L BD)Climate Change– Net green house gas emissions (g CO2 equivalents/L of BD)Acidification– Acidification potential (g SO2 eq./L BD)Toxic Emissions– Particulate matter emissions (PM10, PM2.5)– Carbon monoxide emissions– Volatile organic carbon emissionsFigure1 provides an overview of the system boundary used in this analysis, which includes the production of algae and biodiesel via a transesterification reaction. The boundary includes all upstream mass and energy flows that are required to make the chemical and energy resources required for the processing. The production of biodiesel from algae and direct energy consumption is characterized by four distinct stages: cultivation, dewatering/drying, oil extraction, and transesterification (Figure 1). The energy consumed and subsequent emissions for fuel production, electricity generation, and chemical production comprise the upstream energy consumption and emissions. Biodiesel and algae meal are the products leaving the system boundary. The use of these products is not directly included within the analysis.Figure 1 Flowchart depicting system boundary for life cycle inventory of biodiesel from microalgae.The hypothetical algae and biodiesel production processing facilities considered are located in upstate placeStateNew York. The facilities are assumed to be adjacent to a biomass or fossil fuel electricity generation plant for access to the carbon dioxide in their flue gas and waste heat in order to maximize the utilization of waste resources within this system. Waste heat is considered to have no value as an energy product; so it is not counted as part of the total energy resources consumed by the facility.Two different locations were considered for the microalgae biodiesel facility: Syracuse, NY (43°2′ N, 76°8′ W) and Albany, NY (42°7′ N, 73°8′ W). Although these locations are at approximately the same latitude and have very similar hours of daylight, the Syracuse area is colder and cloudier throughout the year due to its proximity to the Great Lakes. Albany offers more intense natural lighting and less severe winter temperatures (Figure 2). 
Three specific cases were considered for each of these locations:(i) greenhouse structure to maximize natural lighting; natural gas used to maintain the system temperature;(ii) greenhouse structure to maximize natural lighting; waste heat used to maintain the system temperature;(iii) a well-insulated facility that allows for no natural lighting but requires substantially less heat.Monthly average temperature (a) and total monthly solar irradiance (b) for Syracuse, NY, and Albany, NY. (a)(b)The PBRs are assumed to operate continuously, using artificial lighting when natural lighting is not sufficient. In all cases, it was assumed thatPhaeodactylum tricornutum algae would be grown for biodiesel production. This algae species has a relatively high oil content (about 30% by dry weight), is resistant to contamination, and has been previously utilized to produce biodiesel [12, 13].Estimating the environmental and energy lifecycle impacts requires quantification of the mass and energy flows through this system. A mathematical model for the algae production process was developed in the work presented here. As shown in Figure3, the mass and energy flows estimated with the algae production model were used in conjunction with the Greenhouse gases, Related Emissions, and Energy use in Transportation (GREET) model 1.8a developed at the Argonne National Laboratories [14]. GREET provided the general framework and structure for the lifecycle inventory, especially aspects of the transesterification process and energy and emissions related to the upstream production of chemicals and energy resources. BD production from soybeans, which is used here as a benchmark for comparison, was taken directly from the GREET model. GREET is a widely accepted model and many studies and analyses have been based upon it because of its vast data on energy sources and the associated emissions (e.g., [15–17]). The default values for soybean production, oil extraction, and transesterification were taken as GREET default values [14], which are representative of the Midwestern region of the United States where most soybeans are grown. These were based initially on an LCA completed at the National Renewable Energy Lab [18] and updated to keep the GREET model as current as possible (e.g., [17]). There is only a small production of soybeans in NY State, with yields well below the average yield in the Midwest. Thus, no attempt was made to match the geographic system boundaries for biodiesel from algae to that of soybeans.Figure 3 Overview of microalgae biomass LCA model. The yellow boxes represent the contributions of the work presented here. The soybean LCA results were taken almost entirely from GREET.Uncertainty in the data was addressed by utilizing Monte Carlo simulations to input a range of values for parameters. For a given assumption or variable with a distribution as input, the commercially available software, Crystal Ball was utilized to determine a forecast or range of possible outputs. Standard error bars were created utilizing the mean value of the forecast and 95% certainty. ## 2.2. Algae Production Models The biomass production model utilizes solar data and a biological growth rate to estimate actual yields for algae biomass for a photobioreactor system [13, 19]. ### 2.2.1. Microalgae Plant Setup Hypothetical tubular closed photobioreactors (PBRs) were modeled in this case to predict algae production and account for energy consumption and emissions in Syracuse and Albany NY. 
The PBR plant setup is illustrated in Figure4. It was assumed that processes such as dewatering and transesterification could be carried out on site, thus eliminating the need for transportation.Figure 4 Photobioreactor system layout (not to scale).The various dimensions and parameters for the PBR were taken from recommendations of previous studies in order to depict a realistic setup [12, 13]. The PBR setup was designed for a maximum of 30 hours detention time. The maximum effluent concentration (Ce) was fixed at 3.4 kg/m3 with a recycle ratio of 0.35 [13, 20] and an allowable superficial fluid velocity of 0.3 m/s [21]. Since a long tubular length is required to meet these constraints (32,400 m), the PBR is split up into 6 units each of which is 61 m3 (5,400 m long, 0.12 m diameter). Stacking of tubes reduces the total foot print area of the greenhouse. All tubes are connected and algae broth passes through all six units.The floor area or foot print area of the greenhouse house was determined from the volume of the reactor and type of cultivation (annual/seasonal operation) and the specific processes. The diameter of tubes was set at 0.12 m for all cases since it is a widely reported size for PBRs [3, 13, 22, 23]. The spacing of tubes was set at 0.3 m. This is an important factor since it defines the total floor size, which in turn influences heating and lighting requirements. The various parameters related to plant setup are summarized in Table 2.Table 2 Summary of photobioreactor and greenhouse parameters. ParameterValueDepends uponReferenceDiameter of tubes0.12 mLarger diameter pipes can cause cell shading[3]Spacing of tubes0.3Greater spacing is desirable to avoid shading[13]Flow rate (Q)3.4×10-3 m3/sAlgae species and tube diameter[21]Recycle Ratio (r)0.35[20]Total volume of reactor (V)366m3Maximum residence time (30 hrs)[12]Influent concentration (Ci)1.2 kg/m3Max. Effluent concentration (Ce)3.4 kg/m3Growth rate, PBR set up[13] ### 2.2.2. Estimating Biomass Output Microalgae productivity is estimated from the location, reactor specifications, and microalgae data. It is assumed thatCO2 and nutrients are provided in excess to the microalgae culture through the media, thereby making light the only limiting factor for cell growth and decay [12]. If adequate lighting is available, the specific growth rate μ is determined from the average irradiance available Iavg (μE/m2-s) [19]: (1)μ=μmaxIavgnKIn+Iavgn, where KI is the half saturation constant (i.e., Iavg for which half of μmax is attained), and the exponent n is a unitless empirical constant. Both KI and n are constant for a given species of algae. Note that decay of algae cells during the hours with light is incorporated into the maximum specific growth rate (μmax) (h-1) since values provided by Molina Grima et al. [22] and Fernandez et al. [23] were determined from the net growth rate. Iavg is determined from the Beer-Lambert equation:(2)Iavg=IφeqKaCi[1-exp(-φeqKaCi)], where Ci (kg/m3) is the influent biomass concentration. The path length of light within the reactor is given by φeq, which is the ratio of the tube diameter to the cosine of the solar zenith angle. The photosynthetically active irradiance (I) (μEm-2s-1) is a function of various solar angles and the total solar irradiance. Hourly solar data were available from NREL’s solar database [24], and thus, algae cell growth was determined at an interval of an hour. 
The analysis of the solar data to estimate $I$ and $\varphi_{eq}$ is included in the appendix.

The PBR is modeled as a series of plug flow reactors where the effluent concentration of each reactor is the influent concentration for the next. It is assumed that steady-state conditions prevail for each hour, since the irradiance is taken as constant over that time period. Utilizing a Monod reaction rate for substrate utilization [22], the resulting steady-state plug flow reactor equation for each segment can be written as

$$u\frac{dC}{dz} = \frac{\mu_{\max} I_{avg}^{\,n}}{K_I^{\,n} + I_{avg}^{\,n}}\,C, \tag{3}$$

where $C$ is the biomass concentration and $u$ is the fluid velocity. Integrating this expression provides the effluent concentration for each reactor segment $j$ that represents one hour of residence time at the average irradiation rate for that hour:

$$\ln\!\left(\frac{C_{j+1}^{k}}{C_{j}^{k}}\right) = \frac{\mu_{\max}\left(I_{avg(j+1)}^{k}\right)^{n}}{K_I^{\,n} + \left(I_{avg(j+1)}^{k}\right)^{n}}\left(\frac{L}{u}\right), \tag{4}$$

where $j$ and $j+1$ indicate the start and end locations along the reactor length of the one-hour segment. $C_{j+1}$ is calculated in series to determine the reactor effluent concentration $C_e$ for each hour $k$. The growth rate during that hour is defined by the average irradiance for that hour of the day. The total biomass produced per day, $M_{BM}$ (kg d⁻¹), is estimated from the flow rate $Q$ (m³ hr⁻¹), the recycle ratio $r$, and the effluent concentration $C_e$ (kg m⁻³):

$$M_{BM} = \sum_{k=1}^{24} Q\,C_e^{k}\,(1-r). \tag{5}$$

The total microalgae biomass produced can be determined from (2), (4), and (5) along with the algae growth parameters and solar irradiation data.

The temperature requirements for algae differ by species. In general, faster growing algae species favor higher media temperatures of about 20–30°C [10]. The algae-related constants used for P. tricornutum in the model are listed in Table 3. This species was selected because it has been used in the past to produce microalgae biodiesel and all relevant data were available [22].

Table 3: P. tricornutum growth parameters.

| Parameter | Value | Reference |
| --- | --- | --- |
| μmax | 0.063 h⁻¹ | [22] |
| B | 0.0018 h⁻¹ | [25] |
| KI | 3426 μE m⁻² s⁻¹ | [22] |
| n | 1–1.34 | [25] |
| Ef | 1.74 ± 0.09 μE J⁻¹ | [22] |
| Xp | 2–4% | [25] |
| Oil content (% dry weight) | 30% | [13] |
| Water content of algae | 40% | [13] |
| Cultivation temperature | 25°C | [10] |
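Equations (3)-(5) chain one-hour plug-flow segments and then sum the hourly effluent streams. A compact, self-contained sketch of that bookkeeping follows; the hourly irradiance profile is a made-up placeholder, and the same profile is reused for every hour's residence window purely for illustration.

```python
import math

MU_MAX, K_I, N_EXP = 0.063, 3426.0, 1.2   # Table 3 values (n taken mid-range)
Q_M3_H = 3.4e-3 * 3600                     # flow rate (m3/h), Table 2
R_RECYCLE, C_IN = 0.35, 1.2                # recycle ratio and Ci (kg/m3), Table 2

def mu(iavg):
    """Specific growth rate, eq. (1), in 1/h."""
    return MU_MAX * iavg**N_EXP / (K_I**N_EXP + iavg**N_EXP)

def effluent(c_in, iavg_series):
    """March eq. (4) over the segments: each segment holds one hour of
    residence time, so ln(C_{j+1}/C_j) = mu(I_avg) * (L/u) with L/u = 1 h."""
    c = c_in
    for iavg in iavg_series:
        c *= math.exp(mu(iavg))
    return c

# Placeholder hourly I_avg profile for one synthetic day (uE m-2 s-1).
day = [0.0] * 6 + [800.0] * 12 + [0.0] * 6

# Eq. (5): sum the hourly effluent contributions over 24 hours.
m_bm = sum(Q_M3_H * effluent(C_IN, day) * (1.0 - R_RECYCLE) for _ in range(24))
print(f"M_BM ~ {m_bm:.0f} kg/day (placeholder irradiance, illustrative only)")
```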
## 2.3. Energy Consumed during Microalgae Cultivation

Cultivating microalgae in closed systems is an energy-intensive process, especially in regions with low temperatures and limited natural lighting [13]. The algae growth and harvesting stages involve a large number of intermediate processes, for which estimates of the energy consumption were developed here. Energy consumption requirements for extraction and transesterification are already provided in the GREET model [14]. It was assumed that the processes for transesterification of algae oil are identical to those for soy oil. Thus, the chemical (methanol and sodium hydroxide) and energy consumption, and the energy and emissions associated with their production, were taken directly as the default parameters in GREET.

*Water Heating.* Energy requirements for a natural gas water heating system were determined by using the specific heat of water and the efficiency rating of the heater as provided by the manufacturer (EF = 0.82 [26]). It was assumed that groundwater, at an initial temperature ($T_{inlet}$) of 12°C [27], would be heated to a thermostat set point ($T_t$) of 25°C.

*Media Circulation.* Electric pumps are used to circulate the media through the entire length of the reactor. The input electrical power required to operate the pumps, $P_p$ (W), can be given by [28]

$$P_p = \frac{3.91\times10^{-6}\,\mu_1^{3}\,Re^{2.75}}{\eta_p\,\rho^{2} d^{4}}\,A_a, \qquad Re = \frac{\rho u d}{\mu_1}, \tag{6}$$

where $\mu_1$ (kg m⁻¹ s⁻¹) is the dynamic viscosity of water, $Re$ is the Reynolds number, $\rho$ (kg/m³) is the density of the media, $d$ (m) is the diameter of the pipes, $u$ (m/s) is the superficial velocity of flow, $\eta_p$ is the pump efficiency ($\eta_p$ = 0.7), and $A_a$ (m²) is the tube aperture area. The pumps operate continuously.

*Artificial Lighting.* Natural algae cultivation inherently revolves around the diurnal and seasonal cycles. To compensate for these cycles and to maximize the production of biomass, artificial lighting is used to allow 24-hour cultivation. Lights are turned on from dusk to dawn, and monthly averages of daylight hours are used to define the time the lighting system is in operation each day. The power $P_a$ (W) consumed for artificially lighting the greenhouse area is calculated as [29]

$$P_a = \frac{A'\,I_{avg}}{L_w C_f}. \tag{7}$$

The intensity of the artificial lighting provided was set equal to the naturally available lighting in the month of July ($I_{avg}$ = 1.7 μE/m²-s) over the entire greenhouse region ($A'$ = 3345 m²). Specifications for high-efficiency fluorescent GRO lights [29] were used to estimate the power required for artificial lighting. The light intensity of the bulbs is $L_w$ = 220 lumens/W, and the conversion factor $C_f$ between micromoles of photons (μE) and lux is 0.29.
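The water-heating and lighting loads reduce to one-line formulas; a sketch under the stated parameter values follows. Equation (7) is applied in the reconstructed form $P_a = A' I_{avg}/(L_w C_f)$, which is an interpretation of the garbled original, so the lighting figure should be read as illustrative.

```python
CP = 4186.0  # specific heat of water (J/kg-K)

def water_heating_j_per_kg(t_in=12.0, t_set=25.0, ef=0.82):
    """Natural-gas input needed to heat 1 kg of groundwater to the
    thermostat set point, divided by the heater's energy factor EF."""
    return CP * (t_set - t_in) / ef

def lighting_power_w(iavg=1.7, area=3345.0, lw=220.0, cf=0.29):
    """Eq. (7) as reconstructed: P_a = A' * I_avg / (Lw * Cf)."""
    return area * iavg / (lw * cf)

print(f"water heating: {water_heating_j_per_kg()/1e3:.1f} kJ per kg of water")
print(f"artificial lighting: {lighting_power_w():.0f} W over the greenhouse")
```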
*CO₂ Purification.* Carbon dioxide acts as the only source of carbon for the biomass. Flue gases from power plants provide an inexhaustible source of CO₂. However, flue gases also contain varying levels of other gases, such as SOx and NOx, which are detrimental to a microalgae culture beyond certain concentrations [30]. The monoethanolamine (MEA) absorption process can be used to separate pure CO₂ from flue gas for microalgae production. Kadam [31] determined that if about 18% of the total carbon dioxide consumed is taken directly from flue gases and the rest is purified through the MEA process, then the toxic flue gases will be at sufficiently low concentrations for algae growth. Molina Grima [19] determined that, in order to make light the only limiting factor, CO₂ must be provided in excess, with a ratio of the aqueous CO₂ concentration (kg/m³) to the influent biomass concentration Ci (kg/m³) of 0.63. Since the growth rates for this system are lower than those in Molina Grima's study [19] due to reduced sunlight, this CO₂ ratio represents a conservative estimate. The mass of carbon dioxide required was estimated based on this ratio, the media flow rate, and the influent biomass concentration. Although carbon dioxide has a high solubility in water, and it is likely that all CO₂ in the gas bubbled through the reactor would dissolve over the length of the reactor, a factor of safety of 2 was applied to the estimated mass of CO₂ required. The MEA CO₂ extraction process has been modeled and studied previously in the context of algae production. Kadam [31, 32] reports that the process to extract CO₂ from flue gas and recover the MEA for reuse consumes 32.65 kWh per ton of CO₂ for algae cultivation. Details are not provided in these references to specifically quantify which steps in the MEA process consume the most electrical energy.

*Greenhouse Heating.* Temperature control within the greenhouse is essential for algae cultivation in cold weather conditions. The energy consumed for greenhouse heating depends upon the total surface area exposed, the insulation material, and the temperatures inside and outside the greenhouse. For a given greenhouse with surface area $A_g$ (m²), the heat loss per second $Q_L$ (J/s) is given by [33]

$$Q_L = 1.05\,\frac{T_{req} - T_{out}}{R}\,A_g, \tag{8}$$

where $R$ (m² °C s J⁻¹) is the R-value of the greenhouse insulating material, and $T_{req}$ (25°C) and $T_{out}$ (°C) are the temperatures required within the greenhouse and outside the greenhouse, respectively. The greenhouse was assumed to be insulated with 10 mm twinwall polycarbonate with an R-value of 1.9. The R-value for the insulated, windowless cultivation scenario was set at 30. The outside temperature $T_{out}$ is taken from monthly averages for Syracuse and Albany and is input as a normal distribution for each month [34].

*Steam Drying and Dewatering.* Algae leave the photobioreactors suspended in a dilute broth [13]. Dewatering and drying of the algae are necessary to reduce the water content to 5% [35] before the hexane oil extraction process. For algae with high vegetable oil content, it is suggested that continuous nozzle discharge centrifuges provide the best reliability and consume the least amount of energy. Centrifugation consumes 3.24 MJ/m³ of effluent media [36]. After centrifugation, the algae water content is 70% (by weight). Steam is utilized to further dry the microalgae before the oil extraction process. The natural gas consumed to provide the required steam energy was calculated from the heat of vaporization of water, the mass of water that needed to be vaporized to reduce the water content from 0.70 to 0.05, and the efficiencies of the boiler (0.93 [37]) and dryer (0.8 [26]) utilized. In the scenarios that utilize waste heat, it is assumed that, because of the colocation of the algae production facility near a power plant, there is sufficient heat to dry the algae.
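Equation (8) and the drying mass balance can be combined into a short sketch. The latent heat value and the simple wet-basis mass-balance form below are standard assumptions, not numbers taken from the paper.

```python
H_VAP = 2.26e6  # latent heat of vaporization of water (J/kg), standard value

def greenhouse_heat_loss_w(t_out, t_req=25.0, r_value=1.9, area=1.0):
    """Eq. (8): Q_L = 1.05 * (T_req - T_out) / R * A_g, in J/s."""
    return 1.05 * (t_req - t_out) / r_value * area

def drying_gas_j_per_kg_dry(w_from=0.70, w_to=0.05,
                            eta_boiler=0.93, eta_dryer=0.8):
    """Natural gas needed to evaporate the water removed per kg of dry algae,
    going from 70% to 5% moisture (wet basis)."""
    water_removed = w_from / (1 - w_from) - w_to / (1 - w_to)  # kg/kg dry
    return water_removed * H_VAP / (eta_boiler * eta_dryer)

print(f"heat loss at -5 C outside: {greenhouse_heat_loss_w(-5.0):.1f} W/m2")
print(f"steam drying: {drying_gas_j_per_kg_dry()/1e6:.1f} MJ per kg dry algae")
```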
## 2.4. Water Consumption

The consumption of water for the production of biofuels has recently been identified as a significant limitation to the development of an expanded biofuel economy. Water consumption occurs almost entirely in the feedstock production step for most biofuels. Average U.S. production of biodiesel from soybeans requires 6,500 liters of water for evapotranspiration per liter of biodiesel produced [38]. Water consumption for algae biodiesel was calculated by a mass balance. The total water flow rate through the bioreactor is the sum of the freshwater, the water included in the algae recycle stream (35% recycle), and the water recovered through the centrifuge dewatering process used to increase the algae concentration from 0.34% to 30%. With this mass balance, 848 m³ of make-up water are required annually, or approximately 4 L of water per L of biodiesel for the feedstock production stage. This represents approximately 99% water recovery and reuse. In the transesterification and biodiesel cleaning processes, 1–3 L of water are required per L of biodiesel produced [39].

## 2.5. Fertilizer Consumption

The microalgae culture media acts as the primary source of nutrients and carbon dioxide and as a means of expelling excess oxygen. The minimum amount of nutrients consumed was defined based on the molecular formula of algae, CO₀.₄₈H₁.₈₃N₀.₁₁P₀.₀₁ [40]. N and P account for 6.5% and 1.3% of the algae mass, respectively. Assuming that the maximum possible biomass concentration of algae cells in a tubular PBR is 4 kg/m³ [13, 22], the N and P consumed from the algae media would be 0.26 kg N/m³ and 0.052 kg P/m³. Excess fertilizer that passes through the bioreactor as part of the broth is assumed to be recovered in the centrifuge dewatering step for reuse. Since nearly all of the water is recycled, it is assumed that nearly all of the nutrients that are not consumed are also recycled.

## 2.6. Utilizing GREET for Life Cycle Analysis

The GREET model was modified and used to calculate the energy use and emissions generated from the algae production, oil extraction, and transesterification stages of biodiesel production, as well as the upstream chemical and energy production processes. For a given fuel system, GREET evaluates natural gas, coal, and petroleum use, as well as the emissions of carbon dioxide equivalent greenhouse gases, volatile organic compounds, carbon monoxide, nitrogen oxides, particulates, and sulfur oxides from all lifecycle stages [14]. The GREET results are presented as primary energy consumed and emissions per million BTU of fuel produced. The lower heating value of the BD was used to convert to the functional unit used here, liters of BD produced.

The GREET model is written in an MS Excel workbook and includes soy biodiesel production energy consumption and emissions pathways. A new spreadsheet page based on the soy biodiesel calculations was added to the GREET workbook and adapted for algae BD production. Default parameters for transesterification were used directly, but other input parameters, including energy consumption for the various processes, biomass yield, nutrient requirements, and carbon dioxide consumed, were modified for algae biodiesel production based on the mass and energy flows presented above. The mix of electricity generation within New York State was used to define the primary energy consumed to generate electricity [41].
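Two of the bookkeeping steps above are easy to reproduce: the nutrient fractions implied by the algae molecular formula (Section 2.5), and the conversion of GREET's per-million-BTU results to the per-liter functional unit (Section 2.6). The biodiesel lower heating value below is a typical literature value assumed here, not a number from the paper.

```python
MW = {"C": 12.011, "O": 15.999, "H": 1.008, "N": 14.007, "P": 30.974}
FORMULA = {"C": 1.0, "O": 0.48, "H": 1.83, "N": 0.11, "P": 0.01}  # from [40]

cell_mw = sum(MW[e] * n for e, n in FORMULA.items())
frac_n = MW["N"] * FORMULA["N"] / cell_mw  # -> ~6.5% N by mass
frac_p = MW["P"] * FORMULA["P"] / cell_mw  # -> ~1.3% P by mass
density = 4.0  # kg/m3, maximum culture density [13, 22]
print(f"N: {frac_n:.1%} -> {frac_n*density:.2f} kg N/m3; "
      f"P: {frac_p:.1%} -> {frac_p*density:.3f} kg P/m3")

# Functional-unit conversion: GREET reports results per million BTU of fuel.
MJ_PER_MMBTU = 1055.06       # 1 million BTU expressed in MJ
LHV_BD_MJ_PER_L = 32.6       # assumed typical lower heating value of BD (MJ/L)

def per_mmbtu_to_per_liter(x):
    """Convert a GREET result (per mmBTU of BD) to a per-liter-of-BD basis."""
    return x * LHV_BD_MJ_PER_L / MJ_PER_MMBTU

print(f"1000 g/mmBTU -> {per_mmbtu_to_per_liter(1000.0):.1f} g per L of BD")
```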
The extraction of oil from the algae was assumed to be carried out by hexane extraction. The procedure is similar to soybean oil extraction, although significantly less hexane is required to recover oil from algae (0.030 kg of hexane/kg of dry algae [11]) than from soybeans (1.2 kg of hexane/kg of dried and flaked soybeans [18]). During this process, algae meal is produced as a coproduct that can be used as an animal feed in the same manner that soy meal is used as a coproduct of soy biodiesel. GREET uses the displacement method to determine how much of the biomass production and extraction steps can be defined as a credit for the biodiesel due to the production of a coproduct. The protein content of soy meal is 48% [42], as compared to 28% in algae meal [13] and 40% in soybeans [42]. Thus, 1 kg of algae meal displaces about 0.7 kg of soybeans, whereas 1 kg of soy meal displaces about 1.2 kg of soybeans for animal feed. The credits for not having to produce 0.7 kg of soybeans for every kg of algae meal produced are subtracted from the total energy use and emissions associated with the algae production, oil extraction, and their associated upstream processes.

An additional credit was also attributed to the algae to represent the carbon dioxide sequestered from the power plant flue gas. The algae cell elemental composition was used to estimate the mass of carbon consumed by algae growth within the PBR (0.51 kg of CO₂ consumed/kg of algae grown).
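The displacement and CO₂ credits are pure ratios; a sketch of this bookkeeping follows, using only the protein contents and the per-kg CO₂ figure quoted above.

```python
PROTEIN = {"soy_meal": 0.48, "algae_meal": 0.28, "soy_bean": 0.40}  # [13, 42]
CO2_PER_KG_ALGAE = 0.51  # kg CO2 consumed per kg algae grown (paper's figure)

def displacement_ratio(meal):
    """kg of soybeans displaced per kg of meal, scaled by protein content."""
    return PROTEIN[meal] / PROTEIN["soy_bean"]

print(f"algae meal: {displacement_ratio('algae_meal'):.1f} kg soybean/kg meal")
print(f"soy meal:   {displacement_ratio('soy_meal'):.1f} kg soybean/kg meal")

algae_kg = 1000.0  # arbitrary example batch
print(f"CO2 credit for {algae_kg:.0f} kg algae: "
      f"{CO2_PER_KG_ALGAE * algae_kg:.0f} kg CO2")
```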
## 3. Results

### 3.1. Biomass Production

Biomass output is an important factor in the life cycle energy analysis of microalgae biodiesel production. When natural lighting is used to minimize electricity consumption for artificial lighting, algae production rises steadily between the months of February and April (Figure 5). Biomass production is highest between May and July and is followed by a gradual decline from August to October. Production is lowest in the winter months due to low natural irradiance. The uncertainty bars represent 95% confidence intervals from the Monte Carlo simulation outputs.

Figure 5: Algae biomass production for Syracuse and Albany, NY, with natural lighting supplemented by artificial lighting for continuous algae production.

The annual biomass productivity in Albany is about 12% greater than that in Syracuse (Table 4). These cities are at very similar latitudes, but the actual irradiance in Albany is higher due to less cloud cover. Biomass and subsequent biodiesel production in the windowless (artificial lighting only) scenario is much higher than in the greenhouse cases because illumination is maintained throughout the year at the highest level achieved naturally (noon in the month of July).

Table 4: Comparison of different locations and scenarios by biodiesel production.

| Location | Scenario | Biomass produced (tonnes/year) | Biodiesel produced (L m⁻² y⁻¹) |
| --- | --- | --- | --- |
| Syracuse, NY | Greenhouse base case | 202 | 19 |
| Syracuse, NY | Greenhouse w/waste heat | 202 | 19 |
| Albany, NY | Windowless cultivation | 263 | 25 |
| Albany, NY | Greenhouse base case | 225 | 21 |
| Albany, NY | Greenhouse w/waste heat | 225 | 21 |

### 3.2. Energy Consumption for Microalgae Biodiesel Production

The energy consumed for biodiesel production was estimated by modeling the individual processes in the algae cultivation stage. Energy required for the transesterification process is accounted for directly by the GREET 1.8a model. The energy required for feedstock production through the drying process is illustrated in Figure 6; this does not include the oil extraction and transesterification processes. Three variables can be assessed with this graph: location (Syracuse versus Albany), use of natural lighting versus solely artificial lighting, and algae versus soybean production.

Figure 6: Energy consumption for microalgae and soybean feedstock production. The error bars represent 95% confidence intervals on the total energy consumption for feedstock production.

Heating needs consume well over half of the total energy required for algae growth, dewatering, and drying. When no waste heat is available, dewatering and steam drying account for the greatest fraction, about 28–32% of the energy required for feedstock production. With the availability of waste heat, this component is reduced to about 13% of the total, which represents the electricity required for centrifugation. Greenhouse heating consumes a similar proportion of the total energy for algae production, about 25–30%. Water heating for cultivation consumes about 7–12% of the energy for feedstock production. Both locations have similar water heating requirements because the groundwater temperature is assumed to be equal for both cases.

When natural lighting is utilized to the extent possible, artificial lighting consumes about a quarter of the total energy required for algae cultivation. In the windowless cultivation case, where no natural light is available, the artificial lighting energy is almost doubled; however, the total energy requirements in this scenario are still about 35% less than in the scenarios requiring natural gas to heat a greenhouse.

Among the design choices and trade-offs considered here, the growth and drying of algae with the utilization of waste heat is the only scenario that is substantially better than growing soybeans from the perspective of process energy consumed. These results clearly show the value of colocating an algae facility near a source of waste heat. Overall, microalgae cultivation in Albany, NY, consumes about 18–21% less energy than in Syracuse, NY, because greenhouse heating energy requirements are lower and the higher natural lighting intensity yields about 12% higher biomass output.

Figure 7 illustrates the total lifecycle energy, which now also includes biodiesel production and the credits for CO₂ consumption and for the algae/soy meal produced during the oil extraction phase. For most cases, the energy required for feedstock production is similar to the energy required for oil extraction and transesterification. Thus, the savings associated with the utilization of waste heat in the greenhouse also represent significant savings when the entire lifecycle energy consumption is considered. Greenhouse algae cultivation with waste heat in Albany consumes the least energy on a life cycle basis; however, the total energy consumption is very similar to that of the corresponding Syracuse case.

Figure 7: Total life cycle energy consumption by life cycle stage. The error bars represent 95% confidence intervals on the total lifecycle energy consumed.

The importance of the coproduct and carbon dioxide consumption credits is apparent from the data presented in Figure 7.
Soy meal credits are higher than algae meal credits because of the higher protein content and the higher fraction of soy meal produced per liter of biodiesel (1 kg of algae meal displaces about 0.7 kg of soybeans, whereas 1 kg of soy meal displaces about 1.2 kg of soybeans for animal feed). Adding the higher credits for the soybean BD case to the energy required for production reduces the net energy for this case to a level below that of the well-insulated, windowless algae production scenario. The greenhouse scenarios utilizing waste heat are still the best option for minimizing the consumption of energy that has value for other uses.

Natural gas accounts for 65–80% of the total energy consumed on a life cycle basis for algae biodiesel production when waste heat is not available (data not shown). The high consumption of natural gas can be attributed to the heating processes, the high fraction of natural gas in the NY electricity mix (about 22%), and upstream consumption for process fuel and fertilizer production. In contrast, soy biodiesel requires substantially more petroleum (~5x) than microalgae due to the extensive use of tractors and feedstock transportation when BD is made from soybeans. Thus, algae as a BD feedstock has a significant benefit over soybeans in terms of reducing dependence on imported oil. Algae biodiesel production requires a significant amount of electricity, and thus coal accounts for about 6–19% of the total life cycle energy consumption. Insulated cultivation has the highest coal consumption, about 19% of the total life cycle energy consumption, because of increased artificial lighting and electricity consumption. In comparison, for the greenhouse with waste heat case, only 7% of the total lifecycle energy is derived from coal.

The processing of soybeans to prepare for oil extraction also requires some heating to dry the beans. Arguably, waste heat could be considered to reduce the fossil fuel consumption for soybean biodiesel too. However, whereas the algae feedstock can be grown at the same location where waste heat is available, soybeans require a much more geographically dispersed region; soybeans are typically transported 75 miles or less to a soybean crushing facility. Thus, the probability that soybean production and crushing facilities can be colocated with a waste heat source is significantly lower than for algae. If this could be achieved, the lifecycle energy consumption for feedstock production (green bar for soybean BD, Figure 7) would be lower.

### 3.3. Global Warming Potential

Global warming potential can be described as the impact of additional units of greenhouse gases added to the atmosphere. The global warming potential for the different scenarios and gases is estimated in terms of carbon dioxide equivalents (Figure 8). All algae scenarios are allocated the same CO₂ credits because the carbon dioxide consumed per unit of algae produced is constant.

Figure 8: Global warming potential of microalgae biodiesel; mass emissions normalized by dividing by the corresponding emissions for soy biodiesel for comparison.

Most CO₂ emissions for algae biodiesel originate from the upstream use of energy for heating, transportation fuel use, and coal combustion for electricity. The extraction and utilization of natural gas for heating, electricity generation, and fertilizer production is accompanied by high methane emissions; natural gas extraction has a very high methane emission factor.
Overall, the emission of carbon dioxide is relatively low compared with methane because of the high natural gas use relative to petroleum or coal; natural gas utilization has a much lower carbon dioxide emission factor than coal. In cold climates, producing algae biodiesel with waste heat rather than natural gas is the only approach that reduces greenhouse gas emissions relative to soy biodiesel.

### 3.4. Other Air Emissions

The exposure of humans to air pollutants is increasingly associated with increased mortality and reduced life expectancy [43]. Figure 9 presents the lifecycle air emissions for algae biodiesel production normalized to the corresponding air emissions estimated by GREET for soybean biodiesel. The microalgae biodiesel air emissions follow a trend similar to the total life cycle energy consumption. The high NOx emissions can be traced to the high emission factors of the equipment used to produce natural gas and to the flaring of natural gas in refineries. The increased use of artificial lighting for the cultivation of algae in a windowless, well-insulated facility results in high particulate emissions, particularly in comparison to the cases where natural lighting is used. These PM emissions originate mainly from coal and residual oil combustion for electricity production.

Figure 9: Toxic air emissions from microalgae biodiesel production; mass emissions normalized by dividing by the corresponding emissions for soy biodiesel for comparison.

VOC emissions from microalgae biodiesel production are much lower than for soy biodiesel because of the low utilization of petroleum and hexane. The VOC emission factors for transportation fuels like gasoline are far greater than for any other source. Thus, since the algae are produced locally for biodiesel and transportation is minimal, the VOC emissions from algae biodiesel are much lower than those from soy biodiesel, primarily because only a minimal amount of hexane is required for extraction compared with soybeans.

Overall, the most important source of air emissions for microalgae is the upstream emissions associated with fuel and electricity generation. Yet these emissions are still relatively low compared to soy biodiesel. The primary factor contributing to this apparent anomaly is the comparison of algae biodiesel produced in New York State to soy biodiesel produced nationally. NY State has a high percentage of hydroelectric (17%) and nuclear (29%) power production and relatively small amounts of electricity generated from coal (15%) [41]. This difference in upstream electricity generation has significant repercussions throughout the lifecycle emission estimates for any electricity-intensive manufacturing system: manufacturing in New York State benefits from relatively clean energy resources.

The acidification of soils and water bodies occurs mainly through the transformation of gaseous pollutants (SOx, NOx) into acids. The acidification potential of the different cases is estimated in SO₂ equivalents. All cases of microalgae biodiesel are better than soy biodiesel in terms of acidification emissions. The total SO₂ equivalents follow a trend that resembles the total energy usage.

### 3.5. Summary of Results

A summary of the lifecycle sustainability assessment metrics for the various algae biodiesel production scenarios and for soy biodiesel production is presented in Table 5.
The most sustainable biodiesel production in all cases requires the colocation of the algae and BD production facility in the vicinity of a source of waste heat. "Free" heat greatly reduces the fossil fuel consumption and all related greenhouse gas and other air pollutant emissions. At a similar latitude, choosing a location that maximizes sunlight helps somewhat to increase the algae production rate and, therefore, to reduce the impacts when the results are compared on a per-BD-produced basis. These effects are small, however, compared to the benefits of utilizing waste heat. Similarly, a well-insulated facility can help reduce heating needs, but the consequences of increased electricity use for artificial lighting offset much of the benefit of the reduced heating fuel. In most regions of the U.S., where a higher fraction of the electricity mix is generated from fossil fuels, the well-insulated windowless scenario would be worse in terms of most sustainability metrics due to the increased dependence on fossil fuels.

Table 5: Summary of average sustainability metrics to compare algae and soy BD production.

| Environmental impact | Greenhouse nat. gas, Syracuse | Greenhouse nat. gas, Albany | Greenhouse w/waste heat, Syracuse | Greenhouse w/waste heat, Albany | Insulated, no nat. light | Soy biodiesel |
| --- | --- | --- | --- | --- | --- | --- |
| Total life cycle energy consumption* (MJ/L of BD) | 23 | 21 | 16 | 15 | 22 | 21 |
| Land utilization (m²/L of BD/yr) | 0.053 | 0.048 | 0.053 | 0.048 | 0.040 | 22.2 |
| Water consumption (L water/L BD) | 5–7 | 5–7 | 5–7 | 5–7 | 4–6 | 6,500 |
| Greenhouse gas emissions (g CO₂ equiv/L of BD) | 1350 | 1150 | 740 | 630 | 910 | 925 |
| Acidification potential (g SO₂ eq./L of BD) | 4.9 | 4.6 | 2.8 | 2.5 | 3.4 | 4.0 |
| PM 10 (g/L of BD) | 5.1 | 4.6 | 2.6 | 2.3 | 5.7 | 5.3 |
| PM 2.5 (g/L of BD) | 1.8 | 1.6 | 0.7 | 0.6 | 1.8 | 2.7 |
| VOC (g/L of BD) | 0.22 | 0.20 | 0.06 | 0.05 | 0.09 | 3.4 |
| CO (g/L of BD) | 2.4 | 2.1 | 0.6 | 0.5 | 1.0 | 2.8 |

*Does not include credits.
## 4. Conclusions

Cultivation of microalgae in NY State is an energy-intensive process owing to the temperature control and steam drying requirements. Colocating microalgae cultivation with a power plant is highly desirable. Annual production of microalgae requires the utilization of waste heat for steam drying, water heating, and greenhouse heating in order to be substantially better than soy biodiesel in terms of energy consumption and emissions. When waste heat is utilized, microalgae biodiesel production consumes less energy than soy biodiesel.

Microalgae biodiesel consumes less than one-third of the petroleum required for soy biodiesel and only a small fraction of the water. The feasibility of microalgae biodiesel production at a given location depends greatly on the availability of waste heat and on the natural lighting conditions. The availability of either one or both makes the algae biodiesel production process cleaner in terms of air emissions and much less energy intensive than soy biodiesel. However, if both natural lighting and waste heat are absent, algae biodiesel production consumes more energy than soy biodiesel production and emits equal or greater amounts of toxic air pollutants.

The coproducts of the algae biodiesel production process have a lower protein content than soy meal and are thus less valuable. The production of high-value coproducts allows a larger energy credit for soy biodiesel, so the emissions and energy consumption of the two feedstocks become close and comparable.

Most microalgae biodiesel production scenarios have emissions that are lower than or very similar to those of soy biodiesel. Greenhouse gas emissions for algae biodiesel are generally higher than for soy biodiesel except when waste heat is utilized, in which case they are comparable or lower. The emission of volatile organic compounds for soy biodiesel is much higher than that for algae biodiesel.
Emissions from microalgae production originate mainly from upstream fossil fuel energy consumption. Reducing the demands of unit processes such as greenhouse heating, lighting, and other support systems will therefore have significant benefits.

---

*Source: 102179-2010-06-09.xml*
2010
# Existence of Positive Solutions of Lotka-Volterra Competition Model with Nonlinear Boundary Conditions

**Authors:** Yang Zhang; Mingxin Wang; Yuwen Wang
**Journal:** Abstract and Applied Analysis (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102180

---

## Abstract

A Lotka-Volterra competition model with nonlinear boundary conditions is considered. First, by using the upper and lower solutions method for nonlinear boundary problems, we investigate the existence of positive solutions in the weak competition case. Next, we prove that the problem $-d_1\Delta u = u(a_1-b_1u-c_1v)$, $x\in\Omega$; $-d_2\Delta v = v(a_2-b_2u-c_2v)$, $x\in\Omega$; $\partial u/\partial\nu+f(u)=0$, $x\in\partial\Omega$; $\partial v/\partial\nu+g(v)=0$, $x\in\partial\Omega$, has no positive solution when one of the diffusion coefficients is sufficiently large.

---

## Body

## 1. Introduction

In this paper, we study the existence of positive solutions to the following problem with nonlinear boundary conditions:

$$-d_1\Delta u = u(a_1-b_1u-c_1v),\quad -d_2\Delta v = v(a_2-b_2u-c_2v),\quad x\in\Omega, \tag{1}$$
$$\frac{\partial u}{\partial\nu}+f(u)=0,\qquad \frac{\partial v}{\partial\nu}+g(v)=0,\quad x\in\partial\Omega.$$

Here:

(H1) $\Omega\subset\mathbb{R}^N$ ($N\ge 1$) is an open bounded domain, and $\nu$ is the outward unit normal vector of the boundary $\partial\Omega$;
(H2) for $i=1,2$, $a_i$, $b_i$, $c_i$, and $d_i$ are positive constants;
(H3) $f$ and $g$ are strictly increasing $C^2$ functions on $\mathbb{R}$ with $f(0)=g(0)=f'(0)=g'(0)=0$;
(H4) $b_1/b_2 > a_1/a_2 > c_1/c_2$.

In the usual interpretation of the competition model, $u(x)$ and $v(x)$ are population variables, so it is natural to consider only nonnegative solutions of (1). There is clearly a trivial solution $u=v=0$ for all values of the parameters. In addition, for some values of the parameters, there exist two semitrivial solutions $(u,v)=(u,0)$ and $(0,v)$. More interesting are the so-called positive solutions, or coexistence solutions, where both $u(x)$ and $v(x)$ are positive for all $x\in\Omega$.

By using the positive operator theory, Ahn and Li [1] proved the existence of positive solutions to the following elliptic system:

$$-d_1\Delta u = u f(u,v),\quad -d_2\Delta v = v g(u,v),\quad x\in\Omega, \tag{2}$$
$$\frac{\partial u}{\partial\nu}+\alpha(u)=0,\qquad \frac{\partial v}{\partial\nu}+\beta(v)=0,\quad x\in\partial\Omega,$$

where $f(u,v)$, $g(u,v)$ are $C^1$ functions in $u$ and $v$ with $f_u\le 0$, $f_v\ge 0$, $g_u\le 0$, and $g_v\ge 0$ for $(u,v)\in\mathbb{R}_+\times\mathbb{R}_+$; $\alpha$ and $\beta$ are increasing functions on $\mathbb{R}$ with $\alpha(0)=\beta(0)=0$.

The main purpose of this paper is to investigate the existence of positive solutions to problem (1). In Section 2 we state some known results which are used throughout the paper. Section 3 is devoted to proving the existence of positive solutions by using the upper and lower solutions method; when the diffusion coefficient $d_1$ or $d_2$ is sufficiently large, we prove that problem (1) has no positive solution.

For the homogeneous Neumann boundary conditions, that is, $f(u)\equiv 0$ and $g(v)\equiv 0$, problem (1) has been studied intensively by many authors; for related results, please refer to, for instance, [2–8], [9, Section 4.3], and the references cited therein. An elementary numerical illustration of hypothesis (H4) is given below.
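As a purely illustrative aside, the weak-competition hypothesis (H4) can be checked numerically: for constant coefficients, the kinetic system $a_1-b_1u-c_1v = a_2-b_2u-c_2v = 0$ has the coexistence root computed below, which (H4) makes componentwise positive. The sample coefficients are arbitrary.

```python
def coexistence(a1, b1, c1, a2, b2, c2):
    """Solve a1 - b1*u - c1*v = 0 and a2 - b2*u - c2*v = 0 by Cramer's rule."""
    det = b1 * c2 - b2 * c1   # positive under (H4), since b1/b2 > c1/c2
    u = (a1 * c2 - c1 * a2) / det   # u > 0 iff a1/a2 > c1/c2
    v = (b1 * a2 - b2 * a1) / det   # v > 0 iff b1/b2 > a1/a2
    return u, v

# Arbitrary coefficients satisfying (H4): b1/b2 = 3 > a1/a2 = 2 > c1/c2 = 1.
a1, b1, c1, a2, b2, c2 = 2.0, 3.0, 1.0, 1.0, 1.0, 1.0
assert b1 / b2 > a1 / a2 > c1 / c2
print(coexistence(a1, b1, c1, a2, b2, c2))  # both components are positive
```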
## 2. Preliminaries

In this section, we introduce some notations and lemmas which serve as the basic tools for the arguments used to prove our results.

Throughout this paper, we consider solutions $u,v\in C(\bar\Omega)$. For a given continuous function $q:\bar\Omega\to\mathbb{R}$, let $\sigma_1(d,q)$ be the principal eigenvalue of the eigenvalue problem

$$-d\Delta u + q(x)u = \lambda u,\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}=0,\quad x\in\partial\Omega. \tag{3}$$

When the diffusion coefficient $d=1$, we denote the first eigenvalue of (3) by $\sigma_1(q)$.

The variational characterization of $\sigma_1(d,q)$ is

$$\sigma_1(d,q)=\inf\left\{\int_\Omega \left(d|\nabla\psi|^2+q(x)\psi^2\right)dx \;:\; \int_\Omega \psi^2\,dx=1,\ \psi\in H^1(\Omega)\right\}. \tag{4}$$

We are concerned with the relation between the sign of $\sigma_1(d,q)$ and the function $q$.

Lemma 1 (see [9]). The first eigenvalue $\sigma_1(d,q)$ of (3) has the following property:

$$\int_\Omega q(x)\,dx \le 0 \;\Longrightarrow\; \sigma_1(d,q)<0,\quad \forall d>0. \tag{5}$$

Lemma 2 (see [10]). Suppose that $q(x)\in C(\bar\Omega)$ and $M$ is a positive constant satisfying $M-q(x)>0$. Then the following hold:

(i) $\sigma_1(q)<0 \Rightarrow r[(M-\Delta)^{-1}(M-q(x))]>1$;
(ii) $\sigma_1(q)>0 \Rightarrow r[(M-\Delta)^{-1}(M-q(x))]<1$;
(iii) $\sigma_1(q)=0 \Rightarrow r[(M-\Delta)^{-1}(M-q(x))]=1$.

Next, we consider the nonlinear elliptic problem

$$-\Delta u = E(x,u),\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}+f(u)=0,\quad x\in\partial\Omega, \tag{6}$$

where $f\in C^2(\mathbb{R})$ is strictly increasing with $f(0)=0$.

Definition 3. Let $E(x,\xi)\in C(\bar\Omega\times\mathbb{R})$ be globally Lipschitz continuous in $\xi$ for all $x\in\bar\Omega$. The functions $\bar u,\underline u\in C^2(\Omega)\cap C^1(\bar\Omega)$ are called upper and lower solutions of (6) if they satisfy

$$-\Delta\bar u \ge E(x,\bar u),\quad x\in\Omega,\qquad \frac{\partial\bar u}{\partial\nu}+f(\bar u)\ge 0,\quad x\in\partial\Omega, \tag{7}$$
$$-\Delta\underline u \le E(x,\underline u),\quad x\in\Omega,\qquad \frac{\partial\underline u}{\partial\nu}+f(\underline u)\le 0,\quad x\in\partial\Omega.$$

By using the upper and lower solutions method, the following result was obtained by Ahn and Li [1].

Lemma 4. Suppose that $\bar u \ge \underline u \ge 0$ are upper and lower solutions of (6); then there exists a maximal solution $\tilde u$ of (6) such that $\bar u \ge \tilde u \ge \underline u$.

Lemma 5 (see [1]). Let $P,d$ be positive constants and $h\in C(\bar\Omega)$. Consider

$$-d\Delta u + Pu = h,\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}+f(u)=0,\quad x\in\partial\Omega, \tag{8}$$

where $f\in C^2(\mathbb{R})$, $f(0)=0$, and $f$ is strictly increasing. Then the following hold:

(i) problem (8) has a solution $u\in W^{2,m}(\Omega)\cap C^{1,\alpha}(\bar\Omega)$ for some $\alpha\in(0,1)$, and

$$\|u\|_{W^{2,m}} \le C_0\|h\|_\infty, \tag{9}$$

where $C_0$ depends on $P$;
(ii) if $0\not\equiv h\ge 0$, then (8) has a unique positive solution.

Now we consider the following nonlinear boundary value problem:

$$-\Delta u = uF(x,u),\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}+f(u)=0,\quad x\in\partial\Omega. \tag{10}$$

Lemma 6. Let $f\in C^2(\mathbb{R})$ be an increasing, convex function on $\mathbb{R}_+$ with $f(0)=f'(0)=0$. Assume that the function $F$ satisfies the following:

(i) $F\in C(\bar\Omega\times\mathbb{R}_+)$, $F(x,u)$ is Lipschitz continuous in $u$, and the Lipschitz constant $C$ is independent of $(x,u)\in\Omega\times\mathbb{R}_+$;
(ii) $F(x,u)$ is decreasing in $u$;
(iii) $F(x,0)>0$ for $x\in\Omega$, and $F(x,u)<0$ in $\Omega\times(c_0,\infty)$ for some constant $c_0>0$.

If $\sigma_1(-F(x,0))<0$, then (10) has a unique positive solution. If $\sigma_1(-F(x,0))\ge 0$, then $u=0$ is the only nonnegative solution of (10).

Proof. Let $\psi>0$ be the eigenfunction corresponding to the eigenvalue $\sigma_1(0)=0$ of problem (3); by hypothesis (iii), we obtain $F(x,\xi)<\sigma_1$ when $\xi>c_0$. With the assumptions on the function $f$ in mind, for a large $M>0$ we have

$$-\Delta(M\psi) - M\psi F(x,M\psi)\ge 0,\quad x\in\Omega,\qquad \frac{\partial(M\psi)}{\partial\nu}+f(M\psi)\ge 0,\quad x\in\partial\Omega. \tag{11}$$

Therefore $T(M\psi)\le M\psi$, where the operator $Tu:=(-\Delta+P)^{-1}\left(F(x,u)u+Pu\right)$ is defined through problem (8). $T$ is compact in the positive cone $K\subset C(\bar\Omega)$ by Lemma 5.
The function $F(x,\cdot)\cdot{}+P\,\cdot$ is monotone increasing on $[0,\|M\psi\|_\infty]$ for sufficiently large $P>0$. Therefore $T$ is increasing on the order interval $(0,M\psi)$ with $T(0)=0$. Taking advantage of [1, Lemma 2.13], we know that $T'(0)=(-\Delta+P)^{-1}[F(x,0)+P]$, and the spectral radius satisfies $r(T'(0))>1$ by Lemma 2. Then the result of [11, Theorem 7.6] ensures our conclusion.

Next we show uniqueness. Suppose that $u_1$ is a positive solution of (10), and let $\tilde u$ be a maximal solution of (10). We claim that $\tilde u = u_1$. Suppose $\tilde u \ge u_1 \not\equiv \tilde u$. Then

$$0 < \int_{\partial\Omega}\left[u_1 f(\tilde u)-\tilde u f(u_1)\right]dS = \int_\Omega\left(-u_1\Delta\tilde u + \tilde u\Delta u_1\right)dx = \int_\Omega \tilde u\, u_1\left[F(x,\tilde u)-F(x,u_1)\right]dx \le 0. \tag{12}$$

The first integral is positive, as $f$ is convex and $f(0)=0$. The last integral is nonpositive, since $F(x,u)$ is decreasing in $u$. This contradiction shows $\tilde u\equiv u_1$. If $\sigma_1(-F(x,0))\ge 0$, the proof is similar to [1, Lemma 2.17], and we omit it. This completes the proof.

By Lemma 5, we are able to conclude the following result.

Proposition 7. Suppose that $f,g$ are convex functions on $\mathbb{R}_+$. Then (1) has two semitrivial solutions $(\theta_1,0)$ and $(0,\theta_2)$ for some $0<\alpha<1$, where $\theta_1\in C^{1,\alpha}(\bar\Omega)$ is the unique positive solution of

$$-d_1\Delta u = u(a_1-b_1u),\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}+f(u)=0,\quad x\in\partial\Omega, \tag{13}$$

and $\theta_2\in C^{1,\alpha}(\bar\Omega)$ is the unique positive solution of

$$-d_2\Delta v = v(a_2-c_2v),\quad x\in\Omega,\qquad \frac{\partial v}{\partial\nu}+g(v)=0,\quad x\in\partial\Omega. \tag{14}$$

Finally, we cite a strong maximum principle (see [5, Proposition 2.2]).

Lemma 8. Suppose that $\Omega$ is smooth and $g\in C(\bar\Omega\times\mathbb{R})$. Assume that $z\in C^2(\Omega)\cap C^1(\bar\Omega)$ satisfies

$$\Delta z(x)+g(x,z(x))\ge 0,\quad x\in\Omega,\qquad \frac{\partial z}{\partial\nu}\le 0,\quad x\in\partial\Omega. \tag{15}$$

If $z(x_0)=\max_{\bar\Omega} z(x)$, then $g(x_0,z(x_0))\ge 0$.

## 3. Existence and Nonexistence of Positive Solution

By using the upper-lower solutions argument for nonlinear boundary problems, we first study the existence of positive solutions to (1). Our method is technically and conceptually simple for proving existence results involving upper-lower solution hypotheses and a Leray-Schauder continuation argument. Next, we prove that problem (1) has no positive solution if one of the diffusion coefficients is sufficiently large. Finally, we discuss the stability of the semitrivial solutions.

We first show that the positive solutions of system (1) have an a priori bound.

Lemma 9. Suppose that $(u,v)$ is a positive solution of (1). Then $u\le a_1/b_1$ and $v\le a_2/c_2$.

Proof. In view of the first equation of (1), $u$ satisfies

$$-d_1\Delta u - u(a_1-b_1u-c_1v)=0,\quad x\in\Omega,\qquad \frac{\partial u}{\partial\nu}+f(u)=0,\quad x\in\partial\Omega. \tag{16}$$

As $u$ is continuous on the compact set $\bar\Omega$ and $f(u)\ge 0$, it follows easily from Lemma 8 that $u\le a_1/b_1$. In a similar manner, we obtain $v\le a_2/c_2$.

Next, we give the definitions of upper and lower solutions to (1).

Definition 10. Assume that $(\bar u,\bar v),(\underline u,\underline v)\in C^1(\bar\Omega)\cap C^2(\Omega)$. We call $(\bar u,\bar v)$ and $(\underline u,\underline v)$ coupled upper and lower solutions of (1) if they satisfy

$$-d_1\Delta\bar u - \bar u(a_1-b_1\bar u-c_1\underline v)\ge 0,\quad -d_2\Delta\bar v - \bar v(a_2-b_2\underline u-c_2\bar v)\ge 0,\quad x\in\Omega,$$
$$\frac{\partial\bar u}{\partial\nu}+f(\bar u)\ge 0,\quad \frac{\partial\bar v}{\partial\nu}+g(\bar v)\ge 0,\quad x\in\partial\Omega, \tag{17}$$
$$-d_1\Delta\underline u - \underline u(a_1-b_1\underline u-c_1\bar v)\le 0,\quad -d_2\Delta\underline v - \underline v(a_2-b_2\bar u-c_2\underline v)\le 0,\quad x\in\Omega,$$
$$\frac{\partial\underline u}{\partial\nu}+f(\underline u)\le 0,\quad \frac{\partial\underline v}{\partial\nu}+g(\underline v)\le 0,\quad x\in\partial\Omega.$$

Theorem 11.
Theorem 11. Suppose that $(\bar u,\bar v)$ and $(\underline u,\underline v)$ are coupled upper and lower solutions of (1) with $(\bar u,\bar v)\ge(\underline u,\underline v)$. Then (1) has at least one solution $(u,v)$ with $(\bar u,\bar v)\ge(u,v)\ge(\underline u,\underline v)$.

Proof. For any given $W:=(w_1,w_2)\in[C(\bar\Omega)]^2$ and a sufficiently large positive constant $M$, let
$$F_1(x) = w_1(a_1-b_1w_1-c_1w_2)+Mw_1, \qquad F_2(x) = w_2(a_2-b_2w_1-c_2w_2)+Mw_2. \tag{18}$$
Consider the problem
$$-d_1\Delta u + Mu = w_1(a_1-b_1w_1-c_1w_2)+Mw_1,\ x\in\Omega, \qquad \frac{\partial u}{\partial\nu}+f(u)=0,\ x\in\partial\Omega. \tag{19}$$
Since $F_1(x)\in C(\bar\Omega)$, problem (19) admits a unique solution $u\in C^{1+\alpha}(\bar\Omega)$ by Lemma 5. Similarly, the problem
$$-d_2\Delta v + Mv = w_2(a_2-b_2w_1-c_2w_2)+Mw_2,\ x\in\Omega, \qquad \frac{\partial v}{\partial\nu}+g(v)=0,\ x\in\partial\Omega \tag{20}$$
has a unique solution $v\in C^{1+\alpha}(\bar\Omega)$. Denote $U=(u,v)$, $\bar U=(\bar u,\bar v)$, and $\underline U=(\underline u,\underline v)$. We define the ordered interval
$$A = \{U\in[C(\bar\Omega)]^2 : \underline U \le U \le \bar U\} \tag{21}$$
and an operator $T: A\to[C(\bar\Omega)]^2$ by
$$U = TW. \tag{22}$$
Thanks to Lemma 5, we have
$$\|u\|_{C^{1,\alpha}(\bar\Omega)} \le C\bigl(\|w_1(a_1-b_1w_1-c_1w_2)\|_\infty + \|w_1\|_\infty\bigr), \qquad \|v\|_{C^{1,\alpha}(\bar\Omega)} \le C\bigl(\|w_2(a_2-b_2w_1-c_2w_2)\|_\infty + \|w_2\|_\infty\bigr). \tag{23}$$
Hence $T(A)$ is bounded in $[C(\bar\Omega)]^2$.

We claim that $T:A\to[C(\bar\Omega)]^2$ is a compact operator. Given the bounds above, it suffices to prove that $T$ is continuous. Suppose that $W_n=(w_{1n},w_{2n})\to W=(w_1,w_2)$ in $[C(\bar\Omega)]^2$. Denote
$$F_{n1}(x) = w_{1n}(a_1-b_1w_{1n}-c_1w_{2n})+Mw_{1n}, \qquad F_{n2}(x) = w_{2n}(a_2-b_2w_{1n}-c_2w_{2n})+Mw_{2n}. \tag{24}$$
Then $F_n=(F_{n1},F_{n2})\to F=(F_1,F_2)$ in $[C(\bar\Omega)]^2$. Let $U_n=TW_n$. By Lemma 5, we obtain
$$\|u_n\|_{W^{2,p}(\Omega)} \le C\bigl(\|F_{n1}-F_1\|_\infty + \|F_1\|_\infty\bigr), \qquad \|v_n\|_{W^{2,p}(\Omega)} \le C\bigl(\|F_{n2}-F_2\|_\infty + \|F_2\|_\infty\bigr). \tag{25}$$
Therefore $U_n\rightharpoonup U$ in $[W^{2,p}(\Omega)]^2$. Since the embedding $W^{2,p}(\Omega)\hookrightarrow C^{1+\alpha}(\bar\Omega)$ is compact, it follows that $U_n\to U$ in $[C^{1+\alpha}(\bar\Omega)]^2$, and $U=(u,v)$ is the solution of
$$-d_1\Delta u+Mu = w_1(a_1-b_1w_1-c_1w_2)+Mw_1, \quad -d_2\Delta v+Mv = w_2(a_2-b_2w_1-c_2w_2)+Mw_2,\ x\in\Omega,$$
$$\frac{\partial u}{\partial\nu}+f(u)=0, \quad \frac{\partial v}{\partial\nu}+g(v)=0,\ x\in\partial\Omega. \tag{26}$$
This shows that $T:A\to[C(\bar\Omega)]^2$ is continuous.

Now we prove $T(A)\subset A$. Suppose that $W\in A$ and $U=TW$, where $U=(u,v)$ and $W=(w_1,w_2)$. We first prove $\bar u\ge u$. In virtue of $(w_1,w_2)\in A$, we have
$$-d_1\Delta\bar u + M\bar u \ge \bar u(a_1-b_1\bar u-c_1w_2)+M\bar u,\ x\in\Omega, \qquad \frac{\partial\bar u}{\partial\nu}+f(\bar u)\ge 0,\ x\in\partial\Omega. \tag{27}$$
Let $z=\bar u-u$. Noting that $\bar u\ge w_1$, that $u$ satisfies (19), and that $M$ is large enough for $s\mapsto s(a_1-b_1s-c_1w_2)+Ms$ to be increasing on the relevant range (see the remark after this proof), we obtain
$$-d_1\Delta z + Mz \ge \bar u(a_1-b_1\bar u-c_1w_2) - w_1(a_1-b_1w_1-c_1w_2) + M(\bar u-w_1) \ge 0,\ x\in\Omega,$$
$$\frac{\partial z}{\partial\nu} + f(\bar u)-f(u) \ge 0,\ x\in\partial\Omega. \tag{28}$$
Suppose, on the contrary, that $\bar u\ge u$ fails. By the strong maximum principle (see [12]), there exists $x_0\in\partial\Omega$ such that $\min_{x\in\bar\Omega} z(x) = z(x_0) = \bar u(x_0)-u(x_0) < 0$. By the Hopf boundary lemma, we know
$$\left.\frac{\partial z}{\partial\nu}\right|_{x=x_0} < 0. \tag{29}$$
In view of (28), and since $f$ is increasing with $\bar u(x_0)<u(x_0)$, we get
$$\left.\frac{\partial z}{\partial\nu}\right|_{x=x_0} \ge -\bigl[f(\bar u(x_0))-f(u(x_0))\bigr] > 0, \tag{30}$$
contradicting (29). Thus $z\ge 0$; that is, $\bar u\ge u$. Similarly, $\underline u\le u$, $\bar v\ge v$, and $\underline v\le v$. By the Schauder fixed point theorem, $T$ has a fixed point $U$ in $A$. The proof is complete.
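A remark on the size of $M$ (ours; the proof only says "sufficiently large"): for $0\le s\le\|\bar u\|_\infty$ and $0\le w_2\le\|\bar v\|_\infty$,
$$\frac{\partial}{\partial s}\bigl[s\,(a_1-b_1s-c_1w_2)+Ms\bigr]=a_1-2b_1s-c_1w_2+M\ \ge\ 0\quad\text{whenever}\quad M\ \ge\ 2b_1\|\bar u\|_\infty+c_1\|\bar v\|_\infty,$$
so $s\mapsto s(a_1-b_1s-c_1w_2)+Ms$ is nondecreasing on the relevant range and the middle difference in (28) is indeed nonnegative; the symmetric choice $M\ge 2c_2\|\bar v\|_\infty+b_2\|\bar u\|_\infty$ handles the $v$-equation.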
Theorem 12. Suppose that $b_1/b_2 > a_1/a_2 > c_1/c_2$ and $f, g$ are convex functions in $\mathbb{R}^+$. Then problem (1) has at least one positive solution.

Proof. Since $f,g\in C^2$ with $f(0)=f'(0)=g(0)=g'(0)=0$, Taylor's theorem gives $f(s)=O(s^2)$ and $g(s)=O(s^2)$ as $s\to 0^+$, so there exists a constant $\beta>0$ (any $\beta\in(0,1)$ works) such that
$$\lim_{s\to 0}\frac{f(s)}{s^{1+\beta}} = 0, \qquad \lim_{s\to 0}\frac{g(s)}{s^{1+\beta}} = 0. \tag{31}$$
Let $\theta_1,\theta_2\in C^{1,\alpha}(\bar\Omega)$ be the unique positive solutions of (13) and (14), respectively. Set $\bar U=(\bar u,\bar v)=(\theta_1,\theta_2)$ and $\underline U=(\underline u,\underline v)=(\varepsilon^{1+\beta}\theta_1+\varepsilon,\ \varepsilon^{1+\beta}\theta_2+\varepsilon)$, where $\varepsilon>0$ is sufficiently small. Then $\underline U\le\bar U$, as $\theta_1$ and $\theta_2$ are bounded below by positive constants on $\bar\Omega$. To prove that $\bar U$ and $\underline U$ are coupled upper and lower solutions of (1), it suffices to verify the inequalities (17) of Definition 10. Consider the following.

(i) Since $\theta_1,\theta_2>0$ on $\bar\Omega$, the following are obvious provided that $\varepsilon>0$ is sufficiently small:
$$-d_1\Delta\bar u = \bar u(a_1-b_1\bar u) \ge \bar u(a_1-b_1\bar u-c_1\underline v),\ x\in\Omega,$$
$$-d_2\Delta\bar v = \bar v(a_2-c_2\bar v) \ge \bar v(a_2-c_2\bar v-b_2\underline u),\ x\in\Omega,$$
$$\frac{\partial\bar u}{\partial\nu}+f(\bar u) = 0, \quad \frac{\partial\bar v}{\partial\nu}+g(\bar v) = 0,\ x\in\partial\Omega. \tag{32}$$

(ii) By Lemma 8, we have $\theta_1(x)\le a_1/b_1$ and $\theta_2(x)\le a_2/c_2$ on $\bar\Omega$. In virtue of $a_1,a_2>0$ and $b_1/b_2>a_1/a_2>c_1/c_2$, a direct computation gives
$$-d_1\Delta\underline u - \underline u(a_1-b_1\underline u-c_1\bar v) = -d_1\Delta(\varepsilon^{1+\beta}\theta_1+\varepsilon) - (\varepsilon^{1+\beta}\theta_1+\varepsilon)\bigl(a_1-b_1(\varepsilon^{1+\beta}\theta_1+\varepsilon)-c_1\theta_2\bigr)$$
$$= \varepsilon^{1+\beta}\theta_1(a_1-b_1\theta_1) - (\varepsilon^{1+\beta}\theta_1+\varepsilon)\bigl(a_1-b_1(\varepsilon^{1+\beta}\theta_1+\varepsilon)-c_1\theta_2\bigr)$$
$$= -\varepsilon(a_1-c_1\theta_2)+o(\varepsilon) \le -\varepsilon\Bigl(a_1-c_1\frac{a_2}{c_2}\Bigr)+o(\varepsilon) < 0,\ x\in\Omega, \tag{33}$$
provided $0<\varepsilon\ll 1$. Since the function $f$ is convex in $\mathbb{R}^+$, we know that, when $0<\varepsilon\ll 1$,
$$\frac{\partial\underline u}{\partial\nu}+f(\underline u) = \varepsilon^{1+\beta}\frac{\partial\theta_1}{\partial\nu}+f(\varepsilon^{1+\beta}\theta_1+\varepsilon) = \varepsilon^{1+\beta}\left[\frac{f(\varepsilon^{1+\beta}\theta_1+\varepsilon)}{\varepsilon^{1+\beta}} - f(\theta_1)\right] \le 0,\ x\in\partial\Omega, \tag{34}$$
because, by (31), $f(\varepsilon^{1+\beta}\theta_1+\varepsilon)/\varepsilon^{1+\beta}\to 0$ as $\varepsilon\to 0$, while $f(\theta_1)>0$ on $\partial\Omega$. Similarly,
$$-d_2\Delta\underline v - \underline v(a_2-b_2\bar u-c_2\underline v) < 0,\ x\in\Omega, \qquad \frac{\partial\underline v}{\partial\nu}+g(\underline v) \le 0,\ x\in\partial\Omega. \tag{35}$$
We have proved that $\bar U, \underline U$ are coupled upper and lower solutions of (1). Taking advantage of Theorem 11, (1) has at least one solution $(u,v)$ with $\underline U\le(u,v)\le\bar U$; since $\underline u,\underline v>0$, this solution is positive. The proof is complete.
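Theorem 12 is easy to visualize numerically. The sketch below (ours, not from the paper) time-steps the parabolic counterpart of (1) on $\Omega=(0,1)$ toward a steady state, with parameters chosen to satisfy (H4) ($b_1/b_2>a_1/a_2>c_1/c_2$) and boundary nonlinearities $f(s)=g(s)=s^2$, which are increasing, convex, and satisfy $f(0)=f'(0)=0$ on $\mathbb{R}^+$, where the dynamics live here; the grid size, time step, horizon, and initial data are illustrative choices.

```python
import numpy as np

# Parameters chosen to satisfy (H4): b1/b2 = 2 > a1/a2 = 1 > c1/c2 = 0.5.
a1, b1, c1 = 1.0, 2.0, 0.5
a2, b2, c2 = 1.0, 1.0, 1.0
d1, d2 = 0.1, 0.1
f = lambda s: s ** 2   # boundary nonlinearity, f(0) = f'(0) = 0, convex on R+
g = lambda s: s ** 2

n = 41
h = 1.0 / (n - 1)
dt = 0.2 * h ** 2 / max(d1, d2)   # explicit-Euler stability margin

def lap(w, d, bc):
    """Diffusion term d*w'' with nonlinear flux BC dw/dnu + bc(w) = 0 (ghost points)."""
    out = np.empty_like(w)
    out[1:-1] = (w[:-2] - 2.0 * w[1:-1] + w[2:]) / h ** 2
    wl = w[1] - 2.0 * h * bc(w[0])     # ghost value at x = 0 (outward normal -e_x)
    wr = w[-2] - 2.0 * h * bc(w[-1])   # ghost value at x = 1 (outward normal +e_x)
    out[0] = (wl - 2.0 * w[0] + w[1]) / h ** 2
    out[-1] = (w[-2] - 2.0 * w[-1] + wr) / h ** 2
    return d * out

u = 0.5 * np.ones(n)
v = 0.5 * np.ones(n)
for _ in range(int(200.0 / dt)):   # march the parabolic system to (near) steady state
    u, v = (u + dt * (lap(u, d1, f) + u * (a1 - b1 * u - c1 * v)),
            v + dt * (lap(v, d2, g) + v * (a2 - b2 * u - c2 * v)))

print(f"u in [{u.min():.3f}, {u.max():.3f}],  v in [{v.min():.3f}, {v.max():.3f}]")
```

The printed ranges should settle at strictly positive values, illustrating the coexistence state guaranteed by Theorem 12; raising $d_1$ by a few orders of magnitude in the same script flattens $u$ and drives it toward zero, consistent with the nonexistence result proved next.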
Next, we show that (1) has no positive solution when the diffusion coefficient $d_1$ or $d_2$ is sufficiently large.

Theorem 13. There exists a positive constant $M = M(a_i,b_i,c_i)$ such that when $\max\{d_1,d_2\}\ge M$, problem (1) has no positive solution.

Proof. There exists a positive constant $C$, independent of $u$ and $v$, such that
$$\|u-u^*\|_\infty \le \frac{C}{d_1}, \qquad \|v-v^*\|_\infty \le \frac{C}{d_2}, \tag{36}$$
where
$$u^* = \frac{\int_{\partial\Omega} f(u)\,u\,dS}{\int_{\partial\Omega} f(u)\,dS}, \qquad v^* = \frac{\int_{\partial\Omega} g(v)\,v\,dS}{\int_{\partial\Omega} g(v)\,dS}. \tag{37}$$
By Lemma 9, we have
$$\max\{\|u\|_\infty, \|v\|_\infty\} \le C \equiv \max\left\{\frac{a_1}{b_1}, \frac{a_2}{c_2}\right\}. \tag{38}$$
Rewrite (1) as
$$-d_1\Delta(u-u^*) = u(a_1-b_1u-c_1v), \quad -d_2\Delta(v-v^*) = v(a_2-b_2u-c_2v),\ x\in\Omega,$$
$$\frac{\partial(u-u^*)}{\partial\nu}+f(u) = 0, \quad \frac{\partial(v-v^*)}{\partial\nu}+g(v) = 0,\ x\in\partial\Omega. \tag{39}$$
Note that $\|u(a_1-b_1u-c_1v)\|_\infty \le C := \max_{0\le u,v\le C} |u(a_1-b_1u-c_1v)|$. Multiplying the first equation of (39) by $u-u^*$ and integrating over $\Omega$, we derive, by Green's identity, Hölder's inequality, and Poincaré's inequality,
$$\int_\Omega |\nabla u|^2 \le \frac{\|u(a_1-b_1u-c_1v)\|_\infty}{d_1}\int_\Omega |u-u^*| \le \frac{C}{d_1}\|u-u^*\|_2 \le \frac{C}{d_1}\|\nabla u\|_2, \tag{40}$$
which implies
$$\|u-u^*\|_2 \le \frac{C}{d_1}. \tag{41}$$
By Lemma 5, (36), and (41), we obtain
$$\|u-u^*\|_{W^{2,2}(\Omega)} \le C\left(\|u-u^*\|_2 + \frac{\|u(a_1-b_1u-c_1v)\|_\infty}{d_1}\right) \le \frac{C}{d_1}. \tag{42}$$
Thanks to the Sobolev embedding theorem (see [12]), for $0<\beta<1$,
$$\|u-u^*\|_{C^{1,\beta}(\bar\Omega)} \le \frac{C}{d_1}. \tag{43}$$
Returning to the third equation of (1), we obtain
$$|Du|\,|\nu| \ge |Du\cdot\nu| = f(u) \quad\text{on } \partial\Omega. \tag{44}$$
In view of (43) and (44), it is easy to see that
$$\frac{C}{d_1} \ge f(u) > f\!\left(u^*-\frac{C}{d_1}\right) \quad\text{on } \partial\Omega. \tag{45}$$
Since $f$ is an increasing function with $f(0)=0$ and $u^*>0$, letting $d_1\to\infty$ yields
$$0 \ge f(u^*) > 0, \tag{46}$$
a contradiction. The same argument works equally well when $d_2$ is large. This completes the proof.

Finally, we discuss the stability of the semitrivial solutions.

Theorem 14. Suppose that $f, g$ are convex functions in $\mathbb{R}^+$. Then the following hold:

(a) the semitrivial solution $(\theta_1,0)$ is unstable;
(b) the semitrivial solution $(0,\theta_2)$ is unstable.

Proof. For part (a), to study the stability of $(\theta_1,0)$, we consider the corresponding linearized eigenvalue problem
$$-d_1\Delta u - (a_1-2b_1\theta_1)u + c_1\theta_1 v = \lambda u, \quad -d_2\Delta v - (a_2-b_2\theta_1)v = \lambda v,\ x\in\Omega,$$
$$\frac{\partial u}{\partial\nu}+f'(\theta_1)u = 0, \quad \frac{\partial v}{\partial\nu} = 0,\ x\in\partial\Omega. \tag{47}$$
First, observe that the relevant eigenvalues of (47) are real, since they are also eigenvalues of the decoupled second equation
$$-d_2\Delta v - (a_2-b_2\theta_1)v = \lambda v,\ x\in\Omega, \qquad \frac{\partial v}{\partial\nu} = 0,\ x\in\partial\Omega. \tag{48}$$
Next, note that (48) is problem (3) with $q = -(a_2-b_2\theta_1)$; since $\theta_1\le a_1/b_1$ and $b_1/b_2 > a_1/a_2$, we have $q(x) \le b_2a_1/b_1 - a_2 < 0$, so $\int_\Omega q\,dx < 0$ and Lemma 1 gives $\sigma_1(d_2,q) < 0$. Thus (48) has a negative eigenvalue, and $(\theta_1,0)$ is unstable. In a similar way, we are able to conclude (b).

--- *Source: 102180-2014-05-26.xml*
# Ethynilestradiol 20 mcg plus Levonorgestrel 100 mcg: Clinical Pharmacology

**Authors:** Stefano Lello; Andrea Cavani
**Journal:** International Journal of Endocrinology (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102184

---

## Abstract

Estroprogestins (EPs) are combinations of estrogen and progestin with several actions on women's health. The different pharmacological compositions of EPs are responsible for different clinical effects. One of the most used low-dose EP associations is ethinylestradiol 20 mcg plus levonorgestrel 100 mcg in a monophasic regimen (EE20/LNG100). This review summarizes the clinical pharmacology, cycle control, and effects on lipid and glucose metabolism, coagulation, body weight/body composition, acne, and sexuality of EE20/LNG100. Overall, the EE20/LNG100 combination is safe and well tolerated, and in several studies the incidence of adverse events in the treated group was comparable to that of the placebo group. Cycle control was effective, and body weight/body composition did not vary between treated and untreated groups in most studies. The EE20/LNG100 combination shows mild or no effects on lipid and glucose metabolism. Lastly, EE20/LNG100 is associated with a low risk of venous thromboembolism (VTE). In conclusion, in the process of decision making for the individualization of EP choice, EE20/LNG100 should be considered for its favorable clinical profile.

---

## Body

## 1. Introduction

Estroprogestins (EPs) are pharmaceutical compounds containing estrogen and progestin. Existing progestogen compounds can be classified as first- (e.g., norethisterone, norethindrone, ethynodiol diacetate, and lynestrenol), second- (levonorgestrel and norgestrel), and third-generation (desogestrel, gestodene, and norgestimate).

Estrogen can decrease follicle-stimulating hormone and luteinizing hormone, although it was first added to progestin to reduce or avoid the symptoms that follow ovarian blockage and to improve cycle control. The roles of progestin are to decrease luteinizing hormone levels through a negative feedback mechanism, to thicken cervical mucus, and to decrease endometrial proliferation after estrogen mitotic stimulation. Notably, these activities of EPs are used not only for contraception but also as a tool to obtain a better health profile in many women, in the so-called "noncontraceptive use." In this view, EPs can be considered an important tool for women's health, due to their effects on menstrual pain, excessive menstrual bleeding, endometriosis, and polycystic ovary syndrome (PCOS), and the protection they afford against some cancers (ovary, endometrium, and colon) [1].

Different EPs can show different clinical effects and different risk profiles according to their specific pharmacological composition (i.e., type and dose of estrogen and progestin). One of the most used associations is ethinylestradiol (EE) 20 mcg + levonorgestrel (LNG) 100 mcg in a monophasic regimen (EE20/LNG100).

This review summarizes the clinical pharmacology, cycle control, and effects on lipid and glucose metabolism, coagulation, body weight/body composition, and acne of EE20/LNG100 used once daily for 21 days of a 28-day cycle (21/7 regimen).
## 2. Selection of Evidence

Key papers for inclusion in this review were collected by browsing MEDLINE using pertinent keywords (e.g., ethinylestradiol and levonorgestrel); papers included in the reference lists of the identified manuscripts could also be considered for inclusion, as well as relevant abstracts or papers from the authors' personal collections of literature. Papers were selected according to their relevance to the topic, in the authors' opinion.

## 3. EE20/LNG100: Clinical Pharmacology

### 3.1. Ethinylestradiol

EE is the most used estrogen in EPs. It is more potent than estradiol, due to the presence of the 17α-ethinyl group, which can prevent the oxidation of the 17β-hydroxy group (Figure 1). The 17α-ethinyl group can be oxidized, with the formation of an intermediate that inhibits the cytochrome P450 isoenzymes (e.g., CYP3A4) involved in estrogen metabolism. Thus, EE can reduce its own catabolism by inhibiting hydroxylation at C2 through the blockade of these specific CYP isoenzymes [2, 3].

Figure 1: Structural formula of the 17β-estradiol derivative ethinylestradiol (EE).

After oral administration, besides oxidative metabolism, EE undergoes glucuronidation and sulphatation by specific enzymes (e.g., glucuronyltransferase and sulphotransferase). The reduction of enzymatic inactivation results in the dose-dependent hepatic modulation of a series of activities, such as protein synthesis. For example, EE stimulates sex hormone binding globulin (SHBG), thyroxin binding globulin (TBG), and cortisol binding globulin (CBG), but also has effects on the production of haemostatic elements, lipids, and lipoproteins [4, 5]. The bioavailability of oral EE ranges from 38 to 48%, due to a high first-pass metabolism, which, in turn, determines an important interindividual variation in EE plasma levels [6]. Hepatic metabolism yields conjugated EE and metabolites that circulate in the bloodstream. Of the oral dose, about 1% circulates as EE and is bound by 98.5% to albumin, EE having no affinity for SHBG. Enterohepatic recirculation is important in EE pharmacokinetics (PK), and the metabolic passages are based on hydroxylation at C2 and C4, with formation of catechol estrogens, which can be metabolized into 2- and 4-methoxy-EE. EE metabolites are excreted in feces and urine.

### 3.2. Levonorgestrel

LNG (Figure 2) is rapidly absorbed when administered orally. Its bioavailability is about 100%, with no relevant first-pass effect, and the peak plasma level is reached between 1 and 3 hours after oral administration. LNG is bound to SHBG by 47.5% (this portion can be viewed as a sort of "reservoir" maintaining blood levels of LNG) and to serum albumin by 50% (more promptly available), while 2.5% is unbound. The half-life is about 15 hours [7], and levels of LNG are still detectable 48 hours after administration [8–10].

Figure 2: Structural formula of the progestin levonorgestrel.

LNG has a marked progestin activity, no mineralocorticoid or glucocorticoid effects, and an antiestrogenic action at the hepatic level. LNG also has a very high affinity for the uterine progesterone receptor [11]. The reduction of the Δ4-3-keto group and hydroxylation are important metabolic pathways for LNG [12]. LNG and its metabolites (glucurono- and sulphoconjugated) are excreted in urine and feces. The lowest ovulation-inhibiting dose of LNG is 50 mcg/day [13].
Data from animal models show that the dose stimulating a weight increase in the ventral prostate is >100-fold greater than the dose needed to inhibit ovulation; moreover, very little progestin is required to inhibit ovulation, even when used alone rather than in combination with an estrogen [14]. These results point to the important antigonadotropic effect exerted by LNG. Moreover, LNG has a very high relative binding affinity for the progesterone receptor [15], suggesting a strong progestin action.

One study [16] investigated the PK of EE20/LNG100 in 18 young, healthy women. Serum levels of EE and LNG were assayed after single and repeated daily oral doses during three cycles (21/7 regimen). The maximum serum concentration was reached, for both EE and LNG, between 1 and 2 hours after single and repeated daily administration. The serum concentration of EE increased after multiple daily administrations, with about twofold accumulation. In addition, serum concentrations of LNG increased following repeated administrations, with steady state being reached 11 days after the intake of the first tablet. Comparing the AUC 0–24 values after the first and the last tablet, LNG showed accumulation by a factor of 3 during a cycle of treatment. The steady-state PK of LNG was similar after the end of the first and the third cycles of administration, indicating no further accumulation over long-term intake. The clearance and distribution volume of LNG decreased, and the half-life increased, after multiple daily administration.

In conclusion, from a pharmacological point of view, LNG is a potent progestin with an important antiestrogenic action [17], also at the hepatic level (as shown by its effects on SHBG, e.g., the ability of LNG to partially counteract the EE-induced SHBG increase), and with high oral bioavailability due to the absence of a relevant first-pass effect, thus providing lower interindividual bioavailability variation.
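The reported accumulation is consistent with elementary repeated-dosing arithmetic. The sketch below (our illustration; the study does not state a PK model) uses the standard one-compartment accumulation ratio $R = 1/(1-e^{-k\tau})$ with elimination constant $k=\ln 2/t_{1/2}$ and once-daily dosing interval $\tau = 24$ h, and also inverts the formula to show which effective half-life would reproduce the observed factor of about 3.

```python
import math

TAU = 24.0  # once-daily dosing interval, hours

def accumulation_ratio(t_half):
    """One-compartment accumulation ratio R = 1 / (1 - e^{-k*tau})."""
    k = math.log(2) / t_half
    return 1.0 / (1.0 - math.exp(-k * TAU))

def half_life_for_ratio(r):
    """Invert R = 1 / (1 - e^{-k*tau}) for the effective half-life."""
    k = -math.log(1.0 - 1.0 / r) / TAU
    return math.log(2) / k

print(f"R for t1/2 = 15 h : {accumulation_ratio(15.0):.2f}")    # about 1.5
print(f"t1/2 for R = 3    : {half_life_for_ratio(3.0):.0f} h")  # about 41 h
```

The gap between the single-dose prediction (about 1.5) and the observed factor of 3 matches the study's finding that clearance fell and the half-life lengthened under repeated dosing.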
## 4. EE20/LNG100 and Ovulation Inhibition

Adequate suppression of ovarian activity with EE20/LNG100 was first shown in an ovulation inhibition study over three treatment cycles [18] with a highly sensitive study design [19].
Mean levels of LH, FSH, 17β-estradiol, and progesterone were suppressed during treatment, with normal ovulation restored in posttreatment cycles; these results were also confirmed by ultrasound examination. In another study, the rapid restoration of ovarian activity was confirmed by mean serum progesterone levels [20].

## 5. EE20/LNG100 Effects on Lipids

Total and HDL-cholesterol, high-density lipoprotein subfraction-2 (HDL-2), and apolipoprotein A-I did not significantly change from baseline during a 24-month study performed on 28 women (age range: 19–44 years) [21]. In addition, the HDL-2/HDL-3 ratio did not change significantly. In the same study, between cycles 3 and 18, there were statistically significant increases versus baseline in LDL-cholesterol (P ≤ 0.05), triglycerides (P ≤ 0.01), apolipoprotein-B (P ≤ 0.001), the LDL/HDL ratio (P ≤ 0.05), the total cholesterol/HDL ratio (P ≤ 0.05), and the apolipoprotein-B/apolipoprotein A-I ratio (P ≤ 0.05). Interestingly, even if single-subject values were occasionally reported outside the normal reference range, no subjects showed relevant alterations in the lipid pattern, and the changes in lipid profile were similar to those observed with other low-dose EPs. Lipid changes were no longer significant at 24 months.

Reisman et al. [22] compared EE20/LNG100 with a triphasic EP combination containing EE 35 mcg plus 500, 750, and 1000 mcg norethindrone (NET). While changes from baseline in triglyceride levels did not differ between the two EPs, the mean increase in cholesterol level was significantly lower in the EE20/LNG100 group (0.203 mmol/L) than in the EE35/NET 500-750-1000 group (0.475 mmol/L; P < 0.05).

An open-label, randomized study [23] compared EE20/LNG100 with EE30/LNG150 over a 1-year period of observation in 48 subjects, showing a decrease in HDL-2 cholesterol and lipoprotein(a) and an increase in LDL cholesterol, VLDL cholesterol, and total triglycerides in both groups from baseline to the 13th treatment cycle. Interestingly, the wide majority of lipid values remained within the normal reference range. Moreover, there was a trend toward smaller changes in the EE20/LNG100 group than in the EE30/LNG150 group.

Another study, by Endrikat et al. [24], compared the combination EE 20 mcg + LNG 100 mcg with EE 30 mcg + LNG 150 mcg in terms of effects on lipids, carbohydrates, and coagulation over a 13-cycle period. The lower dosage (EE20/LNG100) showed a milder impact on lipids and carbohydrates than EE30/LNG150. Overall, lipid changes were more favorable for EE20/LNG100 than for EE30/LNG150.

Thus, the impact of EE20/LNG100 on lipid levels is globally mild, with values usually within the normal range.

## 6. EE20/LNG100 and Carbohydrate Metabolism

Endrikat et al. [24] compared the effects of EE20/LNG100 and EE30/LNG150 on carbohydrate metabolism. Overall, carbohydrate metabolism was not significantly changed, and the variations were smaller in the EE20/LNG100 group than in the EE30/LNG150 group; in particular, fasting levels of glucose, insulin, and C-peptide were decreased with EE20/LNG100, and the 3-hour area under the curve (AUC 0–3 h) showed a decrease in the EE20/LNG100 group (−1635.0 pmol/L × min). On the other hand, there was an increase in the EE30/LNG150 group, with a significant difference between the two groups (P < 0.04).
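An AUC 0–3 h figure of this kind is, in the usual convention, a trapezoidal integral of the concentration-time curve during an oral glucose tolerance test; the studies do not spell out their numerical method, so the following is a minimal sketch under that assumption, with an invented insulin sampling profile purely for illustration.

```python
def auc_trapezoid(times_min, conc):
    """Area under a concentration-time curve by the trapezoidal rule.

    times_min in minutes, conc in pmol/L, result in pmol/(L min).
    """
    return sum((conc[i] + conc[i + 1]) / 2.0 * (times_min[i + 1] - times_min[i])
               for i in range(len(times_min) - 1))

# Hypothetical 3-hour OGTT insulin profile (sampling grid and values are invented).
t = [0, 30, 60, 90, 120, 180]            # minutes
insulin = [60, 420, 380, 260, 170, 80]   # pmol/L

print(f"AUC 0-3 h = {auc_trapezoid(t, insulin):,.0f} pmol/(L min)")
```

The result (tens of thousands of pmol/(L min)) sits in the same order of magnitude as the between-group AUC differences quoted above and below.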
Also the study by Skouby et al. [23] compared EE20/LNG100 and EE30/LNG150: the median fasting levels of insulin and C-peptide slightly increased or remained unchanged, while fasting glucose levels slightly decreased after 13 treatment cycles (% variation from baseline to cycle 13: −15.8 for EE20/LNG100 and −18.0 for EE30/LNG150). With regard to the AUC 0–3 h for glucose, the variation during the oral glucose tolerance test (OGTT) was similar in both groups (median absolute change from baseline to cycle 13: −59 mmol/(L min) for both groups), while the insulin AUC 0–3 h increased less in the EE20/LNG100 group (+4940 pmol/(L min)) than in the EE30/LNG150 group (+7373 pmol/(L min)). No significant differences between the treatment groups were disclosed for any of the carbohydrate metabolism variables.

## 7. EE20/LNG100, Body Weight, and Body Composition

Weight gain is one of the most common reasons for discontinuation of EPs [27–29]. Even if not confirmed by several studies, the perception of this problem persists in clinical practice among patients and, sometimes, among clinicians. Moreover, a patient's perception of weight gain (an actual weight gain or "a sensation of weight gain") can lead to decreased compliance, with a subsequently increased rate of misuse and discontinuation of EP use.

A study by Hite et al. [30] reported no weight change or a weight reduction in 75% of the subjects on EE20/LNG100 in a 6-cycle study.

Another interesting 6-cycle study [31], in which EE20/LNG100 was compared with placebo, showed no difference in body weight changes between the EP and placebo; in particular, there were no differences in the proportions of patients with weight gain (≥1 kg), no weight change (<1 kg), or weight loss (≥1 kg).

Lello et al. [25] evaluated the effects of EE20/LNG100 on body weight and body composition (the latter assessed by bioelectrical impedance) in a 6-month study on 47 subjects treated with this EP and 31 controls (no hormone intake). The EE20/LNG100 combination did not significantly change body weight, body mass index, or waist/hip ratio in comparison with nontreated subjects. More interestingly, in terms of body composition, the combination had no impact on fat mass, fat-free mass, total body water, intracellular water, or extracellular water versus baseline and versus lack of treatment (Figure 3).

Figure 3: Total body water (TBW) showed no significant differences between EP-treated and control subjects or within each group [25].

Endrikat et al. [24] reported that 87.9% of the women treated with EE20/LNG100 maintained a constant body weight (±3 kg), while 9.4% of patients had a loss of >3 kg of body weight.

In a randomized, multicenter, placebo-controlled trial using EE20/LNG100 as an active treatment for six cycles in moderate acne [32], changes in body weight were similar between the EE20/LNG100 and placebo groups.

## 8. EE20/LNG100: Cycle Control, Safety, and Tolerability

In a 6-cycle study [33] on 792 women (age range: 17–49 years), the effect of EE20/LNG100 on cycle control was evaluated. There was an incidence (% of cycles; total number of cycles valid for analysis: 7508) of 4.3% for breakthrough bleeding (BTB), 12.1% for spotting, 11% for combined BTB and spotting (BTB + S), and 2.6% for amenorrhea. The mean length of withdrawal bleeding was 4.8 days (between 3 and 7 days in 86% of cycles). The mean bleeding intensity was generally reported as mild.
In the same study, >97% of the women showed normal blood pressure (systolic ≤ 140 mmHg; diastolic ≤ 90 mmHg) at baseline and during the observation. With regard to the most common side effects considered possibly drug related, headache was reported in 14% of the subjects, metrorrhagia in 8%, dysmenorrhea in 7%, and nausea in 7%; moreover, abdominal pain was reported in 4%, breast pain in 4%, emotional lability in 3%, acne in 3%, depression in 2%, amenorrhea in 2%, and vaginal moniliasis in 2%. A total of 131 (8%) women reported an adverse event as a reason for discontinuation, the most frequent drug-related events being headache and metrorrhagia (1%); less frequent reasons for discontinuation (<1%) were amenorrhea, depression, emotional lability, hypertension, acne, menorrhagia, nausea, hypercholesterolemia, weight gain, dysmenorrhea, and flatulence. No relevant cardiovascular events were reported during the study. Thus, in this study EE20/LNG100 was well tolerated and showed overall good cycle control.

Endrikat et al. [26] compared cycle control and tolerability between two EP combinations containing EE 20 mcg associated with either 100 mcg LNG or 500 mcg norethisterone (NET); the results from these two preparations were compared with a standard preparation containing EE 30 mcg + LNG 150 mcg. In this study, while cycle control was good with the two LNG-containing combinations, a less favorable profile was obtained with the NET-containing EP. In particular, the proportion of women with spotting or BTB was significantly lower with EE20/LNG100 and EE30/LNG150 than with EE20/NET500 (Figure 4). Overall, spotting was present in 9.3% of the cycles with EE20/LNG100, in 21% of the cycles with EE20/NET500, and in 3.3% of the cycles with EE30/LNG150; the overall BTB incidence across all 13 cycles of observation was 4.1% for EE20/LNG100, 11.7% for EE20/NET500, and 1.0% for EE30/LNG150. As for intermenstrual bleeding (IMB), no IMB was reported in 87% of all cycles with EE20/LNG100, versus 67.6% with EE20/NET500 and 95.5% with EE30/LNG150. Moreover, the incidence of IMB in the EE20/LNG100 group decreased from 18.4% (baseline) to 7.7% at cycle 13. Amenorrhea was reported more frequently in the first cycles of observation and then decreased over time; the incidence over the study (13 cycles) was 7.1% with EE20/LNG100, 20.6% with EE20/NET500, and 0.9% with EE30/LNG150. Dysmenorrhea improved during this study, the incidence being highest at baseline and decreasing to 2.7% for EE20/LNG100, 5.1% for EE20/NET500, and 5.5% for EE30/LNG150, without significant differences among groups. Over 13 cycles of observation, a low incidence of drug-related side effects was reported for the EE20/LNG100 group, with headache, breast tension, and nausea being the most frequent; only 7% of women in the EE20/LNG100 group had discontinued treatment due to an adverse event by the end of the study. Blood pressure was not significantly modified during treatment with EE20/LNG100; 5.3% of the women taking EE20/LNG100 occasionally showed individual systolic blood pressure >140 mmHg, and 3.4% of subjects in the same group had diastolic blood pressure >90 mmHg. Thus, in this study too, the EE20/LNG100 combination showed good cycle control and tolerability. Good cycle control for EE20/LNG100 was also reported in other studies [22, 30, 34].

Figure 4: Analysis of cycle control parameters: percentage of subjects with no IMB, S episodes, or BTB. Modified from [26].
In the study by Coney et al. [31], a similar percentage of women in the EE/LNG (82.0%) and placebo (76.9%) groups reported one or more adverse events (P = 0.11). The percentage of women in the EE/LNG and placebo groups who experienced possibly estrogen-related side effects, such as headache, migraine, nausea, vomiting, breast pain, and weight gain, did not differ.

In a study on 1708 subjects (age range: 17–49 years) observed for 26,554 cycles, the most common adverse events reported as reasons for EE20/LNG100 discontinuation were headache (2% of subjects) and metrorrhagia (2%) [35].

Another study [34] on 805 women (age range: 18–36 years) treated for 4400 cycles reported no treatment-related serious adverse events, with headache reported by 17.3% of the women, breast tension by 11%, and nausea by 7.7%. No clinically relevant changes in laboratory findings, blood pressure, or body weight were reported.

## 9. EE20/LNG100, Hemostasis, and Venous Thromboembolism (VTE)

Archer et al. studied the effects of EE20/LNG100 on hemostasis in 30 healthy women (mean age: 29.9 ± 5.1 years) over a 12-cycle period of observation [36]. Factor X increased significantly from baseline during cycles 3 and 6 (P < 0.001) and cycle 12 (P < 0.01), whereas a significant (P < 0.05) decrease in factor VII concentration was seen at cycle 3. Notably, the coagulation activation marker thrombin-antithrombin (TAT) complex did not change significantly during the study. At cycles 3, 6, and 12, total protein S and antithrombin antigen levels decreased from baseline (P < 0.001). Protein S activity decreased from baseline at cycles 3 and 6 (P < 0.05) but was no longer different from baseline at cycle 12. Antithrombin activity and free protein S antigen did not show any significant changes from baseline during the study. Plasminogen antigen and activity levels increased significantly (P < 0.001) during the observation, while fibrinogen was not significantly modified. D-dimer increased significantly at cycles 3, 6, and 12, with a smaller increase at cycle 12. Altogether, only sporadic individual values (plasminogen antigen and activity) fell outside the normal reference range, and no subject showed clinically important variation in the hemostatic profile. The increase in plasminogen antigen and activity was interpreted in this study as a manifestation of increased fibrinolysis, and the general changes in haemostatic parameters were regarded by the authors as consistent with those of other low-dose oral EPs.

Endrikat et al. [24] compared EE20/LNG100 with EE30/LNG150 in terms of effects on hemostatic variables. In the EE20/LNG100 group, the median concentration of prothrombin fragment 1 + 2 increased slightly during the study, from a baseline value of 0.53 μg/L to 0.8 μg/L, remaining below the upper limit of the reference range (2.88 μg/L). D-dimer likewise showed no significant change between baseline and cycle 13. Plasminogen (a profibrinolytic marker) was increased by 31.1% after 13 cycles, while tPA antigen (a profibrinolytic marker) was reduced by 31.1 ng/mL. On the other hand, some procoagulatory markers were increased (fibrinogen +16.1%; F VII Ag +15.5%; F VIIa +68.8%), while some anticoagulatory factors were decreased (F VII Act −10%; F VIII −6.7%), thus indicating a new balance between coagulation and fibrinolysis.
Overall, hemostatic parameters showed minor changes, all within the normal range of variation, indicating only a new and different balance between coagulation and fibrinolysis. The median concentration of prothrombin fragments 1 + 2 (considered a marker of changes in coagulation) did not change significantly versus baseline, while D-dimer (a marker of fibrinolysis) did. There were no differences between the EE20/LNG100 and EE30/LNG150 groups, and the changes in all hemostatic variables remained within the normal ranges.

Overall, EE20/LNG100 seems to stimulate both coagulation and fibrinolysis, with no net effect on hemostatic balance.

From an epidemiological point of view, the problem of venous thromboembolism (VTE) became evident soon after the introduction of EPs into clinical use [37]. Reports since the 1990s suggested that EPs containing second-generation progestins (e.g., LNG) carried a lower risk of VTE than EPs containing third-generation progestins (e.g., desogestrel and gestodene) [38–40]. More recently, other important epidemiological data have shown LNG-containing EPs to be linked to a lower VTE risk than EPs containing other progestins such as gestodene, desogestrel, and drospirenone [41–45], also in comparison with nonoral routes of administration [46]. For instance, in a 6-year cohort study, the following odds ratios for VTE were reported: gestodene versus levonorgestrel, 1.86 (95% CI 1.59 to 2.18); desogestrel versus levonorgestrel, 1.82 (95% CI 1.49 to 2.22); and drospirenone versus levonorgestrel, 1.64 (95% CI 1.27 to 2.10) [42].

A possible explanation for this lower VTE risk with second-generation (LNG-containing) EPs is that LNG exerts a stronger antiestrogenic effect at the hepatic level than the other progestins for which a greater risk is reported, in line with the concept that the estrogenic component of EPs is the main driver of VTE risk, in a dose-dependent manner. In addition, the progestin component may counteract this increased risk with different efficacy according to progestin type (degree of residual androgenic and/or antiestrogenic effect of the progestin) [47]. Thus, third-generation and fourth-generation (drospirenone) progestins, having no androgenic or antiestrogenic action at the hepatic level, may not sufficiently counteract the estrogen-dependent prothrombotic effect [48].

On the other hand, other studies report different results. In the EURAS study [49] and the INGENIX study [50], the VTE risk with LNG-containing EPs did not differ from that carried by drospirenone-containing EPs. In the Transatlantic Active Surveillance on Cardiovascular Safety of Nuvaring [51], Dinger et al. reported that the etonogestrel-containing vaginal ring and other combined EPs (including those containing LNG) are associated with a similar risk of VTE; another study [52] showed no difference in VTE risk for the norelgestromin-containing patch and the etonogestrel-containing vaginal ring in comparison with low-dose estrogen comparators (including LNG-containing EPs).

Actually, VTE is one of the most serious side effects linked to the use of EPs, and even if rare, this condition can result in serious consequences (in about 1-2% of all VTE cases in women taking the pill) [53].

However, the absolute risk of venous thromboembolism in EP users is low: the baseline risk is five per 100,000 person-years, and this risk increases to about 15–25 per 100,000 person-years when taking the EP pill [42, 55].
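To put these relative figures in absolute terms, a back-of-the-envelope calculation (ours, using only the incidences quoted above) converts the baseline and on-pill rates into expected cases per year in a hypothetical cohort; the cohort size is an illustrative assumption.

```python
BASELINE = 5 / 100_000                    # VTE events per person-year (quoted above)
ON_PILL = (15 / 100_000, 25 / 100_000)    # quoted range for EP users

cohort = 1_000_000                        # hypothetical users followed for one year
low, high = (rate * cohort for rate in ON_PILL)
print(f"expected VTE cases: {low:.0f}-{high:.0f} per year "
      f"({low - BASELINE * cohort:.0f}-{high - BASELINE * cohort:.0f} "
      f"attributable to EP use)")
```

That is, roughly 100 to 200 excess cases per million users per year against a background of about 50.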
However, due to the very large number of EP users [56], even a small increase in this risk could affect a significant number of women. In any case, patients with a personal or family history of venous thromboembolism should not take combined EPs. Following the evaluation of epidemiological data, the European Medicines Agency (EMA) has issued several documents declaring that LNG-containing EPs have a lower VTE risk than EPs containing other progestins with the same dose of EE (in 2001, regarding third-generation progestins, in particular gestodene and desogestrel) [57]; later, in 2005, another EMEA document indicated the LNG-containing EP pill as the reference standard to use as a comparator in evaluating the VTE risk of new contraceptive agents [58]. In 2011, a third document [59] also reported drospirenone-containing combined oral EPs as having a higher VTE risk than levonorgestrel-containing (second-generation) EPs. More recently, the EMA [60] has again indicated LNG-containing EPs as being linked to a lower risk of VTE.

With respect to the risk of myocardial infarction and thrombotic stroke (arterial thromboembolism) linked to the use of EPs, it is well known that this risk is very rare among EP users. A recent work by Lidegaard et al. [61] reported no significant difference in myocardial infarction and thrombotic stroke risk according to the progestin component among different EPs; rather, this risk appeared to be related to the EE dosage, with the highest risk for EPs containing 50 mcg EE and the lowest risk for 20 mcg EE-containing EPs.

## 10. EE20/LNG100 and Acne

Thorneycroft et al. [54] evaluated the effects of EE20/LNG100 on androgen pattern and acne in 21 healthy women (age range: 18–28 years) in a 3-month study. EE20/LNG100 significantly decreased the levels of androgens (e.g., dehydroepiandrosterone sulphate, androstenedione, and total testosterone) in three compartments (adrenal, ovarian, and peripheral) and increased SHBG levels (Table 1). The total acne lesion count was reduced by the treatment with EE20/LNG100. Also in this population with signs of hyperandrogenism, the variation in body weight was not significant after 12 weeks of EE20/LNG100 administration, and blood pressure did not change: at baseline, mean systolic blood pressure (SBP) was 110 ± 11 mmHg and diastolic blood pressure (DBP) was 67 ± 9 mmHg, while at the end of the study, SBP was 113 ± 13 mmHg and DBP 68 ± 8 mmHg.

Table 1: Percent changes (baseline versus end of the study) in androgen and SHBG levels over a 3-month period of treatment with EE20/LNG100 (modified from [54]).

| Parameter | % change versus baseline | P (baseline versus end of treatment) |
|---|---|---|
| DHEAS (mcg/mL) | −18.9 ± 40.2 | <0.05 |
| Androstenedione (ng/mL) | −36.9 ± 26.7 | <0.05 |
| Total testosterone (ng/dL) | −27.0 ± 21.5 | <0.05 |
| 3-androstanediol glucuronide (ng/mL) | −38.8 ± 36.1 | <0.05 |
| SHBG (nmol/L) | +106 ± 89 | <0.05 |

A randomized, multicenter, placebo-controlled trial using EE20/LNG100 as the active treatment versus placebo [32] evaluated the efficacy of EE20/LNG100 in treating moderate acne over a 6-month period. Total, inflammatory, and noninflammatory lesion counts at cycle 6 with EE20/LNG100 were significantly lower than with placebo; moreover, the EE20/LNG100 group had a better evaluation from clinicians and a better self-evaluation from patients than the placebo group.

## 11. EE20/LNG100 and Sexuality

EP use has been linked to decreased sexual function [62].
It has been suggested that some women are more sensitive than others to a reduction in testosterone and free testosterone, showing a reduction in sexual interest [63, 64]. Moreover, a study by Coenen et al. [65] suggested that the decreased levels of free testosterone are due to the increased SHBG, which, in turn, is determined by the estrogenic component of the EPs. In addition, by blocking ovulation, EPs further decrease ovarian production of androgens. Thus, in premenopausal women, a low level of androgens (in particular, free and total testosterone) associated with a high level of SHBG is believed to reduce sexual function during EP administration in women who previously had normal sexual behavior [66, 67].

It is possible that an EP containing a low dose of estrogen and/or a progestin retaining partial residual androgenic (or antiestrogenic) activity, which counteracts the estrogen-induced SHBG increase, is less likely to decrease sexual function in women [68]. On the other hand, it is not uncommon for a woman with an EP-induced decrease in sexual function to be switched to an EP containing LNG as the progestin [69].

Interestingly, a study [70] compared two EPs containing EE and LNG that differed only in the dosage of the two components (EE30/LNG150 versus EE20/LNG100) with respect to their effects on plasma androgen levels and sexual function over six cycles of administration. Sexual function was evaluated at baseline and at the end of the study with the Female Sexual Function Index (FSFI); total testosterone (T) and SHBG were also measured at baseline and at the end of the study. The free androgen index (FAI) was calculated as T (nmol/L) × 100/SHBG (nmol/L); for instance, T = 1.5 nmol/L with SHBG = 60 nmol/L gives an FAI of 2.5. T and FAI decreased in both groups, while SHBG increased. T and FAI were higher in the EE20/LNG100 group than in the EE30/LNG150 group, and SHBG was lower. In particular, in the EE30/LNG150 group, testosterone and FAI decreased by 32% and 67%, respectively, while SHBG increased by 32% (P < 0.05); in the EE20/LNG100 group, T decreased by 20%, FAI decreased by 42%, and SHBG increased by 22%. The total FSFI score did not differ between the two groups but, over time, a significant improvement was reported only in the EE20/LNG100 group. These results could be explained by the low dose of EE and the small residual androgenic activity of LNG and may be important in the overall clinical judgment on a woman's health when EE20/LNG100 is administered.

## 12. Conclusions

EE20/LNG100 is a generally safe and well-tolerated combination [31, 35]. The AEs reported for EE20/LNG100 are similar to those reported for other low-dose EPs [71]. Interestingly, in the study by Coney et al. [31], the percentage of women reporting one or more AEs did not differ between the EE20/LNG100 and placebo groups. Moreover, cycle control is effective [22, 26, 30, 33, 34], and body weight and body composition did not display any significant variation in various studies [20, 25]. The combination shows a mild or no significant effect from a metabolic (i.e., lipid and glucose metabolism) point of view [22–24]. Lastly, EE20/LNG100 has a low VTE risk and is considered a gold standard by the European regulatory authorities in evaluating new EPs for this risk [58]. Overall, this favorable clinical profile of EE20/LNG100 can be considered in terms of safety, tolerability, and compliance in the process of individualizing the choice of EP.

---

*Source: 102184-2014-11-16.xml*
# Effects of Illuminance and Heat Rays on Photo-Controlled/Living Radical Polymerization Mediated by 4-Methoxy-2,2,6,6-Tetramethylpiperidine-1-Oxyl

**Authors:** Eri Yoshida

**Journal:** ISRN Polymer Science (2012)

**Publisher:** International Scholarly Research Network

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.5402/2012/102186

---

## Abstract

The effects of the illuminance and the heat rays released from the light source on the photo-controlled/living radical polymerization of methyl methacrylate were investigated with the aim of strict control of the molecular weight. The bulk polymerization was performed at room temperature using 4-methoxy-2,2,6,6-tetramethylpiperidine-1-oxyl as the mediator and (2RS,2′RS)-azobis(4-methoxy-2,4-dimethylvaleronitrile) as the initiator in the presence of (4-tert-butylphenyl)diphenylsulfonium triflate as the accelerator, with irradiation by a high-pressure mercury lamp. Polymerization by direct irradiation from the light source yielded polymers containing an uncontrolled high-molecular-weight polymer and having molecular weight distributions over 3. On the other hand, polymerization by indirect irradiation with reflective light using a mirror produced polymers with controlled molecular weights and comparatively narrow molecular weight distributions of ca. 1.4. Too high an illuminance caused an increase in the molecular weight distribution. During the polymerization, the monomer conversion increased as the illuminance increased. It was found that the elimination of heat rays from the illuminating light is indispensable for molecular weight control in the photo-controlled/living radical polymerization.

---

## Body

## 1. Introduction

Light is a desirable stimulus for manipulating the properties and functions of materials and living organisms without damage by heat, such as thermal expansion and deactivation. Photo-controlled systems offer the environmental advantage of utilizing solar energy, the possibility of local application, and the use of photo-specific reactions. A significant variety of photo-controlled systems has been created using reversible and also irreversible photoreactions. Examples include the photo-controlled mechanical motion of crystals through azobenzene photoisomerization [1], photo-controllable changes in the surface morphology of salt crystals by enantiospecific and enantioselective photocyclization of a benzophenone derivative [2], photo-induced wetting properties on an ultrathin ZnO-coated surface [3], photo-responsive loading and release of drugs on nanoparticle [4] and nanofiber surfaces [5], self-assembly of block copolymers induced by the irreversible photoreactions of photolysis [6–8], photo-rearrangement [9], and photo onium salt formation [10], size change of core-shell nanogel particles through the photodimerization and photocleavage of coumarin [11], magnetization of CdS-modified nanoparticles by photo-induced electron transfer from CdS to Prussian blue [12], DNA cleavage by the combination of photoactive Zn(II) cooperation and azobenzene photoisomerization [13], inhibition of telomerase activity by photo-cross-linking [14], and a photoswitch to induce paralysis in a living organism using the photocyclization of bis(pyridinium)-dithienylethene [15].

The photo-controlled/living radical polymerization is also a photo-controllable system, one that can regulate the molecular weight of a polymer.
Photo-living radical polymerization systems have been discovered using various mediators: dithiocarbamate derivatives [16–18], N,N,N′,N′-tetraethylthiuram disulfide [19], dibenzyl trithiocarbonate [20], 4-thiobenzoyl sulfanylmethyl benzoate [21], bis(2,4,6-trimethylbenzoyl)phenylphosphine oxide, bis(4-methoxybenzoyl)diethylgermanium [22], and a manganese complex [23]. In recent years, the photo-controlled/living radical polymerization mediated by 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) has been established for methacrylate monomers, to which thermal TEMPO-mediated polymerization cannot be applied [24–36] because of disproportionation termination at high temperature. This TEMPO-mediated photopolymerization was accelerated by the photo-acid generators diaryliodonium salts [24, 25] and triarylsulfonium salts [28] and produced polymers with comparatively narrow molecular weight distributions of ca. 1.4 even at high conversion. However, there has been no report concerning the effects of the illuminance and the heat released from the light source on the photo-controlled polymerization. It was found that the illuminance and the heat from the light source affected the polymerization rate and the molecular weight control. This paper describes the influences of the illuminance and the heat from the light source of a high-pressure mercury lamp on the TEMPO-mediated photo-controlled/living radical polymerization of methyl methacrylate (MMA).

## 2. Experimental

### 2.1. Instrumentation

The photopolymerization was carried out using an Ushio Optical Modulex BA-H502, an illuminator OPM2-502H with a high-illumination lens UI-OP2SL, and a 500 W super-high-pressure UV lamp (USH-500SC2, Ushio Co. Ltd.). The illuminance was measured using a Topcon IM-5 illuminance meter. Gel permeation chromatography (GPC) was performed using a Tosoh GPC-8020 instrument equipped with a DP-8020 dual pump, a CO-8020 column oven, and an RI-8020 refractometer. Three polystyrene gel columns, Tosoh TSKgel G2000HXL, G4000HXL, and G6000HXL, were used with tetrahydrofuran as the eluent at 40°C.

### 2.2. Materials

4-Methoxy-TEMPO (MTEMPO) was prepared as reported previously [37]. (2RS,2′RS)-Azobis(4-methoxy-2,4-dimethylvaleronitrile) (r-AMDV) was obtained by separation from a mixture of the racemic and meso forms of 2,2′-azobis(4-methoxy-2,4-dimethylvaleronitrile) [38]. Commercial-grade MMA was washed with 5 wt.% sodium hydroxide solution and water and distilled over calcium hydride. (4-tert-Butylphenyl)diphenylsulfonium triflate (tBuS) was purchased from Sigma-Aldrich and used as received. A heat-ray-absorbing filter (HA30) and a neutral density filter (ND-50) were purchased from Hoya Candeo Optronics Corporation.

### 2.3. Polymerization by Indirect Irradiation

MMA (936.0 mg, 9.35 mmol), r-AMDV (14.0 mg, 0.0454 mmol), MTEMPO (9.0 mg, 0.0483 mmol), and tBuS (11.0 mg, 0.0235 mmol) were placed in an ampoule. After degassing the contents, the ampoule was sealed under vacuum. The bulk polymerization was carried out at room temperature for 7 h with irradiation at 5.0 × 10⁵ lux by reflective light using a mirror with a 500 W high-pressure mercury lamp. The product was dissolved in dichloromethane (10 mL). The solution was concentrated by an evaporator to remove the dichloromethane and unreacted monomer and was freeze-dried with benzene (20 mL) at 40°C to obtain the product as a white powder (545.8 mg). The monomer conversion was estimated gravimetrically. The product was dissolved in dichloromethane (3 mL) and poured into hexane (500 mL).
The precipitate was collected by filtration and dried in vacuo for several hours before being subjected to GPC analysis.

## 3. Results and Discussion

The photo-controlled/living radical polymerization of MMA was performed at room temperature using the r-AMDV initiator and the MTEMPO mediator in the presence of the tBuS accelerator. The bulk polymerization was carried out at different illuminances by direct irradiation, with the reaction vessel in a water bath to avoid a rise in the temperature of the reaction system caused by the direct irradiation (Figure 1). The results are shown in Table 1. The polymerization proceeded rapidly under the direct irradiation and provided very broad molecular weight distributions of Mw/Mn > 3. The GPC analysis revealed that the resulting polymers contained uncontrolled high-molecular-weight polymers. As can be seen in Figure 2, the proportion of the high-molecular-weight polymer was reduced by decreasing the illuminance. In addition, no polymerization occurred under direct irradiation through a heat-ray-absorbing filter (HA30), which excludes rays with wavelengths over ca. 900 nm, combined with a neutral density filter (ND-50), which reduces the illuminance to 50%.

Table 1: The MMA polymerization with direct irradiation.

| Run No. | Filter | Illuminance (×10⁻⁵ lux) | Time (h) | Conversion (%) | Mn (theor) | Mn (obs)ᵃ | Mw/Mnᵃ |
|---|---|---|---|---|---|---|---|
| 1 | — | 32.5 | 2.5 | 70 | 13,600 | 12,500 | 5.04 |
| 2 | — | 25.7 | 2.5 | 66 | 12,900 | 12,600 | 3.35 |
| 3 | HA30 + ND50 | 11.6 | 6 | 0 | — | — | — |

MTEMPO/r-AMDV = 1.06, tBuS/MTEMPO = 0.486. ᵃEstimated by GPC based on poly(MMA) standards.

Figure 1: A schematic of the photopolymerization with direct irradiation.

Figure 2: GPC profiles of the polymers obtained by the polymerization with the direct irradiation.

In order to avoid the influence of heat rays on the polymerization, the polymerization was performed with indirect irradiation by reflective light using a mirror. This indirect irradiation can exclude the heat rays with wavelengths around 1,100 nm included in the light from the mercury lamp, because the heat rays are not reflected by a mirror. A schematic of the polymerization with the indirect irradiation is shown in Figure 3. The results of the polymerization are shown in Table 2.

Table 2: The MMA polymerization with indirect irradiation.

| Filter | Illuminance (×10⁻⁵ lux) | Time (h) | Conversion (%) | Mn (theor) | Mn (obs)ᵃ | Mw/Mnᵃ |
|---|---|---|---|---|---|---|
| — | 1.3 | 7 | 46 | 9,240 | 9,230 | 1.42 |
| — | 1.9 | 7 | 49 | 9,820 | 9,690 | 1.49 |
| — | 5.0 | 7 | 56 | 11,200 | 9,950 | 1.45 |
| — | 20.1 | 7 | 62 | 12,300 | 10,900 | 1.43 |
| — | 80.4 | 7 | 65 | 12,900 | 10,800 | 1.53 |
| HA30 | 4.4 | 7.5 | 15 | 3,230 | 3,260 | 1.60 |
| HA30 | 110.0 | 6 | 24 | 4,980 | 5,640 | 1.59 |

MTEMPO/r-AMDV = 1.06, tBuS/MTEMPO = 0.486. ᵃEstimated by GPC based on poly(MMA) standards.

Figure 3: A schematic of the photopolymerization with indirect irradiation by reflective light.

The indirect-irradiation polymerization produced polymers with controlled molecular weights, without any uncontrolled high-molecular-weight polymers. The experimental molecular weights, Mn (obs), were in good agreement with the theoretical molecular weights, Mn (theor). The GPC profile of the polymer obtained at an illuminance of 5.0 × 10⁵ lux is shown in Figure 4. Figure 5 shows the plots of the monomer conversion versus the illuminance. An increase in the illuminance accelerated the polymerization. However, too high an illuminance increased the molecular weight distribution and produced a deviation of the molecular weight from the theoretical value. The polymerization at too high an illuminance may have been influenced by the heat released from the light source, because the distance from the light source to the reaction vessel was quite short (12.5 cm). The elimination of heat rays using HA30 decelerated the polymerization. This deceleration can be accounted for by the fact that this heat-ray-absorbing filter eliminates not only rays over 900 nm but also rays below 270 nm. Considering that tBuS has a UV absorption at λmax = 238 nm, tBuS was not excited by the irradiation through HA30 and therefore could not serve as the accelerator. The irradiation through HA30 also caused an increase in the molecular weight distribution, probably owing to the deceleration of the initiation. This implies that the excited tBuS also accelerates the decomposition of the initiator. The proposed mechanism of the polymerization is shown in Figure 6.
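As a rough consistency check on Table 2, Mn (theor) for a nitroxide-mediated living polymerization is commonly estimated as conversion × (mass of monomer)/(moles of mediator) plus the mass of the chain-end groups, assuming one chain per MTEMPO. The short sketch below reproduces the tabulated values from the recipe in Section 2.3; the molar masses and the 300 g/mol end-group lump sum are our assumptions, not values stated in the paper.

```python
# Sanity check of Mn(theor) in Table 2, assuming one chain per MTEMPO:
# Mn(theor) ~ conversion * (mass of MMA) / (moles of MTEMPO) + end groups.
# Molar masses and the end-group correction are assumptions, not paper values.

m_MMA = 0.9360               # g of monomer charged (Section 2.3)
n_MTEMPO = 9.0e-3 / 186.27   # mol, from 9.0 mg; 186.27 g/mol assumed for MTEMPO
n_rAMDV = 14.0e-3 / 308.42   # mol, from 14.0 mg; 308.42 g/mol assumed for r-AMDV

# Reproduces the footnote ratio MTEMPO/r-AMDV = 1.06.
print(f"MTEMPO/r-AMDV = {n_MTEMPO / n_rAMDV:.2f}")

END_GROUPS = 300.0  # g/mol, assumed lump sum for initiator fragment + MTEMPO cap

def mn_theor(conversion):
    """Estimated number-average molecular weight at a fractional conversion."""
    return conversion * m_MMA / n_MTEMPO + END_GROUPS

# Selected rows of Table 2 (conversion, tabulated Mn(theor)); illuminances are
# listed there in units of 10^5 lux, e.g. the entry 5.0 means 5.0 x 10^5 lux.
for conv, mn_table in [(0.46, 9240), (0.56, 11200), (0.65, 12900)]:
    print(f"conversion {conv:.0%}: Mn(calc) = {mn_theor(conv):,.0f} "
          f"(Table 2: {mn_table:,})")
```

Under these assumptions the calculated values land within about 1% of the tabulated ones, consistent with each MTEMPO radical capping one living chain.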
Figure 4. The GPC profile of the polymer obtained by the polymerization with indirect irradiation at 5.0 × 10⁵ lux.

Figure 5. Plots of the monomer conversion versus the illuminance for the polymerization with indirect irradiation.

Figure 6. The mechanism of the MTEMPO-mediated photo-controlled/living polymerization in the presence of tBuS.

## 4. Conclusion

This is the first study clarifying the influence of heat rays on the photo-controlled/living radical polymerization mediated by MTEMPO. The heat rays caused uncontrolled polymerization during the MTEMPO-mediated photopolymerization. A decrease in the illuminance reduced the proportion of the polymer with an uncontrolled molecular weight. The exclusion of heat rays is indispensable for molecular weight control. However, a heat ray absorbent filter is ineffective for controlling the molecular weight when the accelerator absorbs in the UV below 270 nm, because the filter also eliminates the rays below this wavelength. The indirect irradiation by reflective light using a mirror effectively controlled the molecular weight in the TEMPO-mediated photopolymerization.

---
*Source: 102186-2012-07-19.xml*
# An Alternative Technique for Fabrication of Frameworks in an Immediate Loading Implant Fixed Mandibular Prosthesis

**Authors:** André Gustavo Paleari; Cristina Dupim Presoto; Juliano Alencar Vasconcelos; José Maurício dos Santos Nunes Reis; Lígia Antunes Pereira Pinelli; Regina Helena Barbosa Tavares da Silva; Cristiane Campos Costa Quishida

**Journal:** Case Reports in Dentistry (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102189

---

## Abstract

The oral rehabilitation of edentulous patients with immediate loading has become a safe procedure with high predictability. Success depends on the immediate fabrication of a passively fitting framework attached to the implants. Based on these considerations, this case report presents an alternative technique for mandibular rehabilitation with immediately loaded implants, in which the framework was fabricated using cylinders with internal reinforcement and precast pieces, electrowelding, and conventional welding, restoring esthetics and function to the patient in a short period of time.

---

## Body

## 1. Introduction

The oral rehabilitation of edentulous patients using immediately loaded implant-supported fixed prostheses is a safe procedure with high predictability [1–5]. Such treatment has the advantages of immediate restoration of function and aesthetics and emotional comfort to patients, especially in cases of dental extractions and immediate implant placement. Moreover, few clinical sessions are required, given the absence of a second surgical intervention for exposing the implants [1, 3, 6].

According to Romanos et al. [7], loading seems to initiate bone remodeling and to form new bone around immediately loaded implants, with better healing of the hard and soft tissues. In addition, implant-supported fixed complete prostheses are favored by the biomechanical arrangement of a rigid polyhedral framework connecting the implants, which improves the distribution of occlusal loads [8].

The rigid infrastructure prevents micromotion of the implants [9] and promotes primary stability and appropriate distribution of occlusal forces, since these are transmitted to the implants immediately after installation of the prosthesis [10]. Additionally, the fabrication of frameworks with passive adaptation to the implant abutments promotes the maintenance of the bone-implant interface [11–13]. The absence of passivity can cause serious complications such as bone loss and fracture of the abutments and screws [11, 12, 14].

Other factors that influence the success of rehabilitation with immediate loading are related to problems in casting, which may arise from negligence by the dental technician during the casting steps or from a lack of knowledge about the materials and equipment used. Modifications or negligence in the process of casting or welding can alter the mechanical and structural properties of any alloy used for fabrication of the frameworks [14]. Furthermore, inaccuracies in the impression procedure and incorporation of air bubbles in the impression material can interfere with the precision of the master cast, generating a misfit framework [15–17].

The aim of this study is to report a case of an implant-supported mandibular prosthesis using a fast and efficient technique for fabrication of the framework with cylinders with internal reinforcement and precast pieces, electrowelding, and conventional welding.

## 2. Case Report
The following clinical case presentation demonstrates the treatment of a 64-year-old female patient who attended the Fixed Partial Denture Clinic of the Araraquara Dental School, Univ. Estadual Paulista (UNESP), for dental treatment. The patient presented clinically with tooth mobility and bone resorption in the anterior mandibular arch. In addition, she was wearing old, ill-fitting removable partial dentures in the upper and lower arches and did not show any systemic disease. Given the clinical (Figure 1) and radiographic features, it was proposed to extract the remaining lower teeth and place an immediately loaded implant-fixed mandibular prosthesis. This decision was made considering the poor condition of the mandibular teeth, which did not allow rehabilitation with new removable partial dentures or another modality of dental prosthesis. For the upper arch, a provisional removable partial denture was proposed until the placement of a bone graft for subsequent placement of dental implants.

Figure 1. Initial clinical features.

In order to follow a protocol based on reverse planning, the intermaxillary relations were established and the casts were mounted in a semiadjustable articulator prior to implant placement. Thus, a multifunctional guide (Figure 2) was fabricated from a duplicate of the wax base, after the first clinical try-in, to maintain the esthetic and dimensional features obtained previously. After placement of the implants (Emfils-Indústria e Comércio de Produtos Odontológicos, Itu, SP, Brazil) with an insertion torque greater than 40 N·cm, the abutments were installed (Figure 3) and an impression was made using the multifunctional guide. Cylinders previously cast in Ni-Cr alloy (Fit Cast SB-Plus Ni-Cr without Beryllium, Talladium, Curitiba, PR, Brazil) were installed on the abutment replicas in the master cast. Fragments, earlier cast in Ni-Cr alloy from wax patterns, were inserted into the spaces between the cylinders (Figure 4). The multifunctional guide was used as a parameter to delimit the covering area to include the implants and to set limits on the extent of the two distal cantilever extensions. The initial fixation of the metal fragments to the cylinders was performed with an Electroweld unit (Kernit Mechatronics Ind. Ltd., Indaiatuba, Sao Paulo, Brazil) using orthodontic wire of 0.9 mm diameter. After primary stabilization, conventional welding in Ni-Cr alloy was carried out, without the need to invest any wax pattern.

Figure 2. Multifunctional guide.

Figure 3. Abutments installed on the implants.

Figure 4. Fragments previously cast in Ni-Cr alloy inserted into the spaces between the cylinders.

Finally, finishing of the framework with stones and disks and blasting with aluminum oxide (100 μm) was performed (Figure 5). Then the framework try-in on the abutments was performed by first tightening down one of the terminal screws completely (Figure 6). After clinical and radiographic verification, the screw was loosened and the procedure was repeated for the other terminal abutment [18]. After these procedures, a clinical evaluation of the teeth was performed and, after confirmation and approval by the patient, the prosthesis was installed within 12 hours after surgery (Figure 7).
Figures 8 and 9 show the radiographic and clinical features of the prosthesis 12 months after installation.

Figure 5. Framework after finishing.

Figure 6. Framework try-in.

Figure 7. Prosthesis immediately after installation.

Figure 8. Panoramic radiograph after 12 months.

Figure 9. Clinical aspect of the prosthesis after 12 months.

## 3. Discussion

Oral prosthetic rehabilitation using immediately loaded implants has been reported as a treatment protocol that benefits osseointegration of the implant and increases the comfort of the patient. Surgical and prosthetic protocols have been developed in order to reduce the time between surgery and installation of the prosthesis [5]. In addition, this treatment protocol has shown success rates similar to those of treatments performed conventionally [3–5].

This case report presents a technique for rapid fabrication of the metal framework in the treatment of fully edentulous patients when planning implant-supported prostheses with immediate loading. In this technique, the metallic framework is fabricated using cylinders and fragments previously cast in Ni-Cr alloy, electrowelding, and conventional welding to reduce the manufacturing time of the prosthesis, allowing it to be installed on the same day or within 24 hours after implant placement surgery. The speed of the treatment is important mainly for patients who initially present with natural teeth and are subjected to multiple extractions and procedures for implant placement with immediate loading [1]. Additionally, it has been reported that with immediately loaded implants patients may resume function quickly. Any reduction in the number of surgical procedures necessary, or a decrease in the healing period, is certainly very welcome to clinicians and patients [19].

In the present case, the patient was not subjected to a second surgical phase and did not use a conventional complete mandibular denture during the osseointegration period. Based on clinical and tomographic features, it was decided to fabricate an implant-supported prosthesis with immediate loading, using the technique with cylinders and fragments of precast framework, electrowelding, and conventional welding. In this procedure, it is not necessary to wax and invest the framework. Consequently, it can help to achieve a passive framework, which transmits lower loads to the implants and the peri-implant interface, thereby reducing the harmful forces that can lead to bone loss around the implant, since passively fitting frameworks are a prerequisite for long-lasting osseointegration of dental implants [20]. The adaptation and passivity of the framework were verified by means of clinical probing and dental floss and confirmed by periapical radiography. However, the in vitro studies agree that no implant framework fabrication approach or material can provide an absolutely passive fit [5, 11, 13, 21].

A panoramic radiograph taken 12 months after installation of the prosthesis demonstrated satisfactory adaptation of the framework and bone integrity around the implants. This technique represents an alternative to the conventional procedure for the treatment and rehabilitation of edentulous mandibles, providing satisfactory function and esthetics for the patient in a short period of time.
However, laboratory studies are needed to evaluate the adaptability and mechanical strength of frameworks fabricated using this technique compared with conventional techniques. Furthermore, prospective controlled clinical trials should be designed to assess the long-term success of implant-fixed prostheses fabricated using this technique.

---
*Source: 102189-2015-01-05.xml*
# IL-4 Gene Polymorphism May Contribute to an Increased Risk of Atopic Dermatitis in Children

**Authors:** Hong Shang; Xiu-Li Cao; Yu-Jie Wan; Jin Meng; Lu-Hong Guo

**Journal:** Disease Markers (2016)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2016/1021942

---

## Abstract

This study aimed to elucidate the associations between interleukin-4 (IL-4) single nucleotide polymorphisms (SNPs), 590C/T and 589C/T, serum IL-4 levels, and atopic dermatitis (AD) in children. Methods. A total of 82 children with AD were randomly selected as the case group and divided into a mild group (15 cases), a moderate group (46 cases), and a severe group (21 cases). Additionally, 100 healthy children were selected as the control group. Genotype frequencies of the IL-4 SNPs were detected by PCR-RFLP. Serum IL-4 levels were measured by ELISA. Results. Significant differences were shown in the genotype distributions and allele frequencies of 589C/T and in the allele frequencies of 590C/T (all P < 0.05). Serum IL-4 levels in the mild, moderate, and severe groups were significantly higher than those in the control group, and significant differences were found among these three groups with increasing severity of AD. Serum IL-4 levels of heterozygote and mutant homozygote carriers in the mild, moderate, and severe groups were higher than those of wild-type homozygote carriers in those three groups and in the control group (all P < 0.05). Conclusion. The 590T and 589T alleles of the IL-4 gene may be associated with high serum IL-4 levels, which may increase the risk of AD in children.

---

## Body

## 1. Introduction

Atopic dermatitis (AD), also known as atopic eczema, genetic allergic dermatitis, or constitutional prurigo, is the most common allergic inflammatory disease and shows an apparent tendency toward familial aggregation [1, 2]. AD is characterized by recurrence, pruritus, and inflammation, with clinically visible papules, erythema, exudation, erosion, incrustation, and lichenification, accompanied by intense pruritus [3]. It has been reported that nearly 50% of children with AD develop symptoms within the first 6 months of life and approximately 85% of individuals with eczema have onset of symptoms by the age of 5 years [4, 5]. There are substantial differences in the prevalence of AD between and within countries because of geographic distribution and economic development; the incidence of AD in developed countries is now up to 10% to 20% [6, 7]. Until now, the pathogenesis of AD has remained unclear, and previous genetic epidemiology studies showed that AD is a polygenic disease [2, 8].

It has been confirmed that immune dysfunction is implicated in the pathogenesis of AD; an imbalance in Th1/Th2 differentiation results in abnormal secretion of cytokines, which also plays an important role in the development and progression of AD [9]. Interleukin-4 (IL-4) is a cytokine with various biological functions, secreted mainly by activated T cells, monocytes, basophilic granulocytes, and mast cells [10, 11]. In skin lesions of acute AD, activated Th2 cells induce B cells to produce IgE by releasing cytokines such as IL-4 [9, 12]. As a characteristic Th2 cytokine, IL-4 can promote the occurrence and development of Th2-characterized inflammatory reactions [13].
A study showed that IL-4 gene polymorphism is associated with atopic asthma and allergic rhinitis, both of which are complex inflammatory disorders [14], but there have been few studies investigating the association between IL-4 gene polymorphism and AD. Therefore, to verify this hypothesis, we carried out an association study of IL-4 gene polymorphism and serum IL-4 levels with AD in children.

## 2. Materials and Methods

### 2.1. Subjects

A case-control design was adopted in this study. A total of 82 patients with AD, admitted to the Dermatological Department of Pediatrics in Jining No. 1 People's Hospital between January 2013 and May 2014, were randomly selected as the case group. They were divided into a mild group (score ≤ 20), a moderate group (score between 20 and 50), and a severe group (score ≥ 50) according to the scoring atopic dermatitis (SCORAD) index [15]. A patient was diagnosed with AD when he or she had a history of pruritus and three or more of the following characteristics: (1) history of flexural skin involvement, including the cubital fossa, popliteal fossa, anterior talocrural region, or paracervical region (the cheek was also included for children less than 10 years of age); (2) history of bronchial asthma or hay fever (or the occurrence of AD in first-degree relatives for children less than 4 years of age); (3) history of dry skin over the whole body; (4) visible flexural dermatitis; and (5) onset of AD before 2 years of age for patients more than four years of age. In addition, 100 healthy children in this hospital were selected as the control group.
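As a small illustration of the severity grouping just described, the sketch below (Python) maps a SCORAD score to the study's three groups. The boundary handling at exactly 20 and 50 is ambiguous in the text (mild ≤ 20, moderate 20–50, severe ≥ 50), so the middle band is taken as half-open here; this is an assumption, not something stated in the paper.

```python
def scorad_group(score: float) -> str:
    """Map a SCORAD score to the study's mild/moderate/severe groups.
    Scores of exactly 20 count as mild and exactly 50 as severe, an
    assumed resolution of the overlapping ranges in Section 2.1."""
    if score <= 20:
        return "mild"
    if score < 50:
        return "moderate"
    return "severe"

assert scorad_group(15) == "mild"
assert scorad_group(35) == "moderate"
assert scorad_group(60) == "severe"
```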
### 2.2. Specimen Collection

Elbow venous blood (10 mL) was collected from each subject after an overnight fast of 10 to 12 h, and 3 mL of each blood sample was transferred into vacuum tubes containing ethylenediaminetetraacetic acid (EDTA). Genomic DNA was then extracted using a whole blood genomic DNA extraction kit (Tiangen Biotech (Beijing) Co., Ltd.). The rest of each blood sample, collected without EDTA, was centrifuged for 10 min at 3000 rpm at room temperature to obtain serum, which was stored at −80°C.

### 2.3. Detection of the IL-4 590C/T and 589C/T Single Nucleotide Polymorphisms (SNPs)

The IL-4 SNPs, 590C/T and 589C/T, were identified by the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. The PCR primers were designed with Primer Premier 5.0 software and synthesized by Shanghai Sangon Biotech Company. The primer sequences are shown in Table 1. The PCR reaction was performed in a total volume of 25 μL containing 2.0 μL DNA, 1.25 U Taq DNA polymerase (Sangon Biotech, SK2492), 2.5 μL 10x PCR buffer, 2.0 μL 2.5 mmol/L dNTP mix, and 20 pmol of each primer, made up with sterilized double-distilled water. The PCR conditions were as follows: predenaturation at 94°C for 5 min; 35 cycles of denaturation at 94°C for 30 s, annealing at 61°C for 45 s, and extension at 72°C for 50 s; and a final extension at 72°C for 5 min. After the PCR reaction was terminated, 5 μL of the PCR products was checked by agarose electrophoresis. The PCR products were digested with MaeI and AvaII endonucleases, respectively, in water at 37°C for 16 h, followed by 2% agarose gel electrophoresis, from which the 590C/T and 589C/T genotypes were obtained.

Table 1. Primer sequences of the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T.

| Primer | Sequence |
|---|---|
| 590C/T For | 5′-GTAAGGACCTTATGGACC-3′ |
| 590C/T Rev | 5′-TACAAAGTTTCAGCATAGG-3′ |
| 589C/T For | 5′-TAAACTTGGGAGAACATGGT-3′ |
| 589C/T Rev | 5′-TGGGGAAAGATAGAGTAATA-3′ |

### 2.4. Detection of Serum IL-4 Levels

Enzyme-linked immunosorbent assay (ELISA) was used to measure the serum IL-4 level (reagent: Immundiagnostik AG, Bensheim) according to the manufacturer's instructions. The optical density (OD) of each well was measured at a wavelength of 450 nm. A linear regression equation for the standard curve was calculated from the concentrations and OD values of the standards, and the OD value of each specimen was substituted into the equation to calculate the corresponding concentration. All specimens were measured twice and the average was taken; the measurements conformed to the laboratory's quality control criteria.
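The standard-curve arithmetic in Section 2.4 amounts to a one-variable linear fit followed by substitution of the specimen OD. The sketch below (Python) illustrates it; the standard concentrations and OD values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical standard series (pg/mL) and ODs at 450 nm; these numbers
# are illustrative only, not taken from the paper.
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
std_od = np.array([0.05, 0.12, 0.21, 0.40, 0.78])

# Regress concentration on OD, as Section 2.4 describes: the specimen OD
# is substituted directly into the fitted equation.
slope, intercept = np.polyfit(std_od, std_conc, 1)

def od_to_conc(od_first_read, od_second_read):
    """Average the duplicate readings, then apply the standard-curve equation."""
    mean_od = (od_first_read + od_second_read) / 2
    return slope * mean_od + intercept

print(f"IL-4 ≈ {od_to_conc(0.35, 0.37):.1f} pg/mL")
```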
### 2.5. Statistical Methods

Statistical analysis was conducted using SPSS 19.0 software (SPSS Inc., IBM, Chicago, IL, USA). Continuous data are presented as mean ± standard deviation, and the t-test or analysis of variance was used for comparisons between groups. Categorical data are presented as percentages or ratios, and the chi-square test was applied for comparisons between groups. The Hardy-Weinberg equilibrium test was used to verify the representativeness of the subjects in the control group. Differences in genotype and allele frequencies between the case group and the control group are presented as odds ratios (OR) with 95% confidence intervals (CI). All P values were two-sided, and P < 0.05 was considered statistically significant.
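As a concrete instance of the OR-with-95%-CI calculation named above, the sketch below (Python) applies Woolf's logit method to the 590C/T allele counts reported later in Table 3; it reproduces the published OR of 1.78 (1.03–3.09).

```python
import math

def odds_ratio_ci(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed, z=1.96):
    """Odds ratio and Woolf (logit) 95% confidence interval for a 2x2 table."""
    or_ = (case_exposed * ctrl_unexposed) / (case_unexposed * ctrl_exposed)
    se = math.sqrt(1 / case_exposed + 1 / case_unexposed
                   + 1 / ctrl_exposed + 1 / ctrl_unexposed)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 590C/T allele counts from Table 3: T/C = 141/23 in cases, 155/45 in controls.
or_, lo, hi = odds_ratio_ci(141, 23, 155, 45)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.78 (95% CI 1.03-3.09)
```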
## 3. Results

### 3.1. Baseline Characteristics

Among the 82 children with AD, there were 56 males and 26 females, with a mean age of 5.09 ± 3.21 years; 15 patients were in the mild group, 46 in the moderate group, and 21 in the severe group. There was no significant difference in age or gender by severity of AD among these three groups (all P > 0.05) (Table 2). The control group consisted of 100 children who had normal findings on physical examination and had undergone surgical circumcision in our hospital, including 73 males and 27 females, with a mean age of 5.67 ± 3.38 years. No significant difference in age or gender was found between the case group and the control group (both P > 0.05).

Table 2. Severity distribution of the atopic dermatitis patients by age and gender.

| Variables | | Mild group (n = 15) | Moderate group (n = 46) | Severe group (n = 21) |
|---|---|---|---|---|
| Age (years) | 1–5 | 7 | 25 | 13 |
| | 5–10 | 6 | 16 | 6 |
| | 10–12 | 2 | 5 | 2 |
| Gender | M | 7 | 16 | 3 |
| | F | 8 | 30 | 18 |

Note: M, male; F, female; AD, atopic dermatitis.

### 3.2. Distribution of the IL-4 SNPs, 590C/T and 589C/T

The genotype and allele frequencies of the IL-4 SNPs, 590C/T and 589C/T, are shown in Table 3. Under the Hardy-Weinberg equilibrium test, all genotype and allele frequencies were in genetic balance, so the groups were representative. No statistical difference was found in the 590C/T genotype distribution between the case group and the control group (P > 0.05), while a statistical difference existed in the allele frequencies (P < 0.05). In addition, there were significant differences in both the genotype and allele frequencies of 589C/T between the case group and the control group (both P < 0.05). The T allele of 590C/T and the T allele of 589C/T increased the risk of AD (590C/T: OR = 1.78, 95% CI: 1.03–3.09, P < 0.05; 589C/T: OR = 2.30, 95% CI: 1.25–4.22, P < 0.01).

Table 3. Allele and genotype frequency distributions of the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T, in the case group and the control group.

| | Case group (n = 82) | Control group (n = 100) | P | OR (95% CI) |
|---|---|---|---|---|
| 590C/T CC | 3 | 6 | | Ref. |
| CT | 17 | 33 | | 1.03 (0.23–4.64) |
| TT | 62 | 61 | 0.111 | 3.03 (0.49–8.50) |
| CT + TT | 79 | 94 | | 1.68 (0.41–6.94) |
| C | 23 | 45 | | Ref. |
| T | 141 | 155 | 0.039 | 1.78 (1.03–3.09) |
| 589C/T CC | 1 | 3 | | Ref. |
| CT | 15 | 36 | | 1.25 (0.12–13.01) |
| TT | 66 | 61 | 0.017 | 3.25 (0.33–32.06) |
| CT + TT | 81 | 97 | | 2.51 (0.256–24.56) |
| C | 17 | 42 | | Ref. |
| T | 147 | 158 | 0.007 | 2.30 (1.25–4.22) |

Note: Ref., reference; OR, odds ratio; 95% CI, 95% confidence interval.

### 3.3. Association between the Severity of AD and the IL-4 SNPs, 590C/T and 589C/T

Significant differences in the genotype distributions and allele frequencies of 589C/T were exhibited across the control, mild, moderate, and severe groups overall (all P < 0.05). However, in pairwise comparisons, no significant difference in the genotype distributions of 589C/T was found between the control group and the mild, moderate, or severe group (all P > 0.05). As for the allele frequencies of 589C/T, there was also no significant difference between the moderate group and the control group, between the severe group and the control group, or between the mild group and the moderate group (all P > 0.05). No significant difference was found in the genotype distributions or allele frequencies of 590C/T among the mild, moderate, severe, and control groups (all P > 0.05) (Table 4).

Table 4. Severity of AD versus the allele frequencies and genotype distributions of the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T.

| | Control group (n = 100) | Mild group (n = 15) | Moderate group (n = 46) | Severe group (n = 21) |
|---|---|---|---|---|
| 590C/T CC | 6 | 0 | 1 | 2 |
| CT | 33 | 4 | 10 | 3 |
| TT | 61 | 11 | 35 | 16 |
| C | 45 | 4 | 12 | 7 |
| T | 155 | 26 | 80 | 35 |
| 589C/T CC | 3 | 0 | 0 | 1 |
| CT | 36 | 7 | 7 | 1 |
| TT | 61 | 8 | 39 | 19 |
| C | 42 | 7 | 7 | 3 |
| T | 158 | 23 | 85 | 39 |
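The Hardy-Weinberg verification mentioned in Sections 2.5 and 3.2 reduces to a one-degree-of-freedom goodness-of-fit test. The sketch below (Python) applies it to the control-group 590C/T genotype counts from Table 3; the statistic falls well below the 5% critical value, consistent with the paper's statement that the control genotypes were in equilibrium.

```python
def hwe_chi2(n_cc, n_ct, n_tt):
    """Chi-square goodness-of-fit statistic against Hardy-Weinberg
    proportions (1 degree of freedom for a biallelic locus)."""
    n = n_cc + n_ct + n_tt
    p = (2 * n_cc + n_ct) / (2 * n)  # frequency of the C allele
    expected = (p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n)
    observed = (n_cc, n_ct, n_tt)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = hwe_chi2(6, 33, 61)  # control group, 590C/T (Table 3)
# chi2 ≈ 0.29 < 3.84 (the 5% critical value at 1 df), so the control
# genotypes are consistent with Hardy-Weinberg equilibrium.
print(f"chi-square = {chi2:.2f}")
```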
### 3.4. Comparison of Serum IL-4 Levels

Serum IL-4 levels in the case group and the control group are shown in Table 5. Compared with the control group, serum IL-4 levels in the case group were markedly increased (85.34 ± 26.43 versus 38.66 ± 7.99 pg/mL, t = 5.228, P < 0.01). Serum IL-4 levels in the mild, moderate, and severe groups were all significantly higher than those in the control group (all P < 0.05). Statistically significant differences in serum IL-4 levels also existed among the mild, moderate, and severe groups, increasing with the severity of AD (F = 52.08, P < 0.05).

Table 5. Comparison of serum IL-4 levels between the case group and the control group.

| Groups | n | IL-4 (pg/mL) |
|---|---|---|
| Control group | 100 | 38.66 ± 7.99 |
| Mild group | 15 | 51.43 ± 8.94ᵃ |
| Moderate group | 46 | 86.60 ± 17.79ᵃᵇ |
| Severe group | 21 | 106.78 ± 26.69ᵃᵇᶜ |

Note: ᵃP < 0.05, compared with the control group; ᵇP < 0.05, compared with the mild group; ᶜP < 0.05, compared with the moderate group.

### 3.5. Comparison of Serum IL-4 Levels by 590C/T and 589C/T Genotype

Serum IL-4 levels of AD patients carrying CT + TT of 590C/T and CT + TT of 589C/T were higher than those of the control group (all P < 0.05). In the case group, serum IL-4 levels of 590C/T CT + TT carriers were higher than those of CC carriers (both P < 0.05); likewise, serum IL-4 levels of 589C/T CT + TT carriers were higher than those of CC carriers (both P < 0.05) (Table 6).

Table 6. Effect of the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T, on serum IL-4 levels in the case group and the control group.

| Genotypes | Case group | Control group | t | P |
|---|---|---|---|---|
| 590C/T CC | 37.27 ± 9.12 | 36.41 ± 7.32 | 0.154 | 0.882 |
| CT | 74.11 ± 19.28ᵃ | 37.90 ± 6.26 | 7.545 | 0.000 |
| TT | 90.82 ± 25.12ᵃᵇ | 39.30 ± 4.16 | 15.930 | 0.000 |
| 589C/T CC | 89.26 | 34.02 ± 6.43 | — | — |
| CT | 55.31 ± 15.82 | 36.52 ± 8.71 | 4.334 | 0.000 |
| TT | 91.59 ± 24.26ᵇ | 40.16 ± 7.37 | 16.420 | 0.000 |

Note: ᵃP < 0.05, compared with wild-type homozygotes; ᵇP < 0.05, compared with heterozygotes.

### 3.6. Association between the Severity of AD and 590C/T and 589C/T

Compared with the control group, serum IL-4 levels of CT and TT (590C/T) carriers and TT (589C/T) carriers rose significantly in the mild, moderate, and severe groups; serum IL-4 levels of CT (589C/T) carriers also rose notably in the mild and moderate groups (all P < 0.05). Serum IL-4 levels of 590C/T CT carriers increased markedly compared with those of CC carriers in the moderate group, and serum IL-4 levels of TT carriers were significantly higher than those of CT carriers in the moderate group and the severe group, respectively (all P < 0.05). Compared with 589C/T CT carriers, TT carriers in the moderate group and the severe group, respectively, had increased serum IL-4 levels (all P < 0.05). Serum IL-4 levels of AD patients carrying any given genotype of 590C/T or 589C/T increased with increasing severity of AD (all P < 0.05) (Table 7).

Table 7. Association of the severity of AD with serum IL-4 levels by IL-4 genotype, 590C/T and 589C/T.

| Genotypes | Control group | Mild group (n = 15) | Moderate group (n = 46) | Severe group (n = 21) |
|---|---|---|---|---|
| 590C/T CC | 36.4 ± 7.3 | — | 37.2 | 37.3 ± 12.9 |
| CT | 37.9 ± 6.2 | 49.3 ± 10.6ᵃ | 77.2 ± 12.0ᵃᵇ | 96.6 ± 9.9ᵃᵇᶜᵈᵉ |
| TT | 39.3 ± 8.9 | 52.2 ± 8.7ᵃ | 90.7 ± 16.3ᵃᵇᵉ | 117.4 ± 13.1ᵃᵇᶜᵈᵉ |
| 589C/T CC | 22.11 ± 2.6 | — | — | 89.26 |
| CT | 31.52 ± 3.07 | 42.85 ± 2.88ᵃ | 61.04 ± 10.72ᵃᵇ | 95.53 |
| TT | 43.69 ± 5.44 | 58.06 ± 6.71ᵃᵉ | 91.19 ± 14.65ᵃᵇᵉ | 108.30 ± 27.66ᵃᵇᶜ |

Note: ᵃP < 0.05, compared with the control group; ᵇP < 0.05, compared with the mild group; ᶜP < 0.05, compared with the moderate group; ᵈP < 0.05, compared with the wild-type homozygote; ᵉP < 0.05, compared with the heterozygote.
### 3.4. Comparison of Serum IL-4 Levels

Serum IL-4 levels in the case group and the control group are shown in Table 5. Compared with the control group, serum IL-4 levels of patients in the case group were notably increased (85.34 ± 26.43 versus 38.66 ± 7.99 pg/mL, t=5.228, P<0.01). Serum IL-4 levels in the mild, moderate, and severe groups were all significantly higher than those in the control group (all P<0.05), and they also differed significantly among the mild, moderate, and severe groups, increasing with the severity of AD (F=52.08, P<0.05).

Table 5: Comparison of serum IL-4 levels between the case group and the control group.

| Groups | n | IL-4 (pg/mL) |
| --- | --- | --- |
| Control group | 100 | 38.66 ± 7.99 |
| Mild group | 15 | 51.43 ± 8.94^a |
| Moderate group | 46 | 86.60 ± 17.79^a,b |
| Severe group | 21 | 106.78 ± 26.69^a,b,c |

Note: ^a P<0.05 compared with the control group; ^b P<0.05 compared with the mild group; ^c P<0.05 compared with the moderate group.

### 3.5. Comparison of Serum IL-4 Levels by 590C/T and 589C/T Genotype

Serum IL-4 levels of AD patients carrying the CT + TT genotypes of 590C/T and the CT + TT genotypes of 589C/T were higher than those of the control group (all P<0.05). Within the case group, serum IL-4 levels of 590C/T CT + TT carriers were higher than those of CC carriers (both P<0.05), and serum IL-4 levels of 589C/T CT + TT carriers were likewise higher than those of CC genotype carriers (both P<0.05) (Table 6).

Table 6: Effect of the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T, on serum IL-4 levels (pg/mL) in the case group and the control group.

| Genotypes | Case group | Control group | t | P |
| --- | --- | --- | --- | --- |
| **590C/T** |  |  |  |  |
| CC | 37.27 ± 9.12 | 36.41 ± 7.32 | 0.154 | 0.882 |
| CT | 74.11 ± 19.28^a | 37.90 ± 6.26 | 7.545 | 0.000 |
| TT | 90.82 ± 25.12^a,b | 39.30 ± 4.16 | 15.930 | 0.000 |
| **589C/T** |  |  |  |  |
| CC | 89.26 | 34.02 ± 6.43 | — | — |
| CT | 55.31 ± 15.82 | 36.52 ± 8.71 | 4.334 | 0.000 |
| TT | 91.59 ± 24.26^b | 40.16 ± 7.37 | 16.420 | 0.000 |

Note: ^a P<0.05 compared with wild-type homozygotes; ^b P<0.05 compared with heterozygotes.

### 3.6. Association between the Severity of AD and 590C/T and 589C/T

Compared with the control group, serum IL-4 levels of CT and TT (590C/T) carriers and of TT (589C/T) carriers were significantly elevated in the mild, moderate, and severe groups, and serum IL-4 levels of CT (589C/T) carriers were also notably elevated in the mild and moderate groups (all P<0.05). Among 590C/T genotypes, serum IL-4 levels of CT carriers were remarkably higher than those of CC carriers in the moderate group, and levels of TT carriers were significantly higher than those of CT carriers in the moderate and severe groups (all P<0.05). Among 589C/T genotypes, TT carriers in the moderate and severe groups had higher serum IL-4 levels than CT carriers (all P<0.05). For each genotype of 590C/T and 589C/T, serum IL-4 levels of AD patients increased with the severity of AD (all P<0.05) (Table 7).

Table 7: Association of the severity of AD with serum IL-4 levels (pg/mL) for the IL-4 single nucleotide polymorphisms, 590C/T and 589C/T.

| Genotypes | Control group | Mild group (n=15) | Moderate group (n=46) | Severe group (n=21) |
| --- | --- | --- | --- | --- |
| **590C/T** |  |  |  |  |
| CC | 36.4 ± 7.3 | — | 37.2 | 37.3 ± 12.9 |
| CT | 37.9 ± 6.2 | 49.3 ± 10.6^a | 77.2 ± 12.0^a,b | 96.6 ± 9.9^a,b,c,d,e |
| TT | 39.3 ± 8.9 | 52.2 ± 8.7^a | 90.7 ± 16.3^a,b,e | 117.4 ± 13.1^a,b,c,d,e |
| **589C/T** |  |  |  |  |
| CC | 22.11 ± 2.6 | — | — | 89.26 |
| CT | 31.52 ± 3.07 | 42.85 ± 2.88^a | 61.04 ± 10.72^a,b | 95.53 |
| TT | 43.69 ± 5.44 | 58.06 ± 6.71^a,e | 91.19 ± 14.65^a,b,e | 108.30 ± 27.66^a,b,c |

Note: ^a P<0.05 compared with the control group; ^b P<0.05 compared with the mild group; ^c P<0.05 compared with the moderate group; ^d P<0.05 compared with the wild-type homozygote; ^e P<0.05 compared with the heterozygote.
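The t statistics in Table 6 can be recovered from the reported means and standard deviations together with the per-genotype group sizes in Table 3, and they are consistent with an unequal-variance (Welch) t test. The sketch below is a minimal illustration under that assumption; the helper function is our own.

```python
from math import sqrt

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples, from summary statistics."""
    return (mean1 - mean2) / sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

# Case versus control serum IL-4 (pg/mL); group sizes from Table 3, values from Table 6.
print(f"590C/T CT: t = {welch_t(74.11, 19.28, 17, 37.90, 6.26, 33):.3f}")  # 7.542  (reported 7.545)
print(f"590C/T TT: t = {welch_t(90.82, 25.12, 62, 39.30, 4.16, 61):.3f}")  # 15.929 (reported 15.930)
print(f"589C/T CT: t = {welch_t(55.31, 15.82, 15, 36.52, 8.71, 36):.3f}")  # 4.334  (reported 4.334)
```

The genotype-stratified comparisons in Table 7 can be checked the same way once the per-severity genotype counts from Table 4 are substituted.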
## 4. Discussion

AD is a disease regulated by multiple genes, pathways, and factors. Because of its high incidence and the lack of specific treatment plans, an increasing number of researchers have studied its pathogenesis in recent years.
At present, the pathogenesis of AD is reported to involve mainly impaired epidermal barrier function, Th1/Th2 imbalance, and the effects of cytokines such as IL-21, IL-25, and IL-33 [16]. In addition, a previous study by Vafa et al. showed that the prevalence of Plasmodium falciparum infection was associated with the IL-4 -590 T allele in the Fulani ethnic group [17]. In another paper, the same authors demonstrated the impact of the IL-4 -590 C/T transition on the levels of Plasmodium falciparum-specific IgE, IgG, and IgG subclasses and of total IgE in two sympatric ethnic groups living in Mali [18]. Few studies, however, have focused on the association between IL-4 SNPs and the occurrence and development of AD.

Our study found that the serum IL-4 level of patients in the case group was significantly higher than that of the control group and that it rose with increasing severity of AD, suggesting that AD is closely related to the serum IL-4 level. IL-4 is the most characteristic cytokine of Th2 cells, and increased IL-4 indicates that the Th balance of AD patients has shifted towards Th2: decreased expression of Th1 cytokines and increased expression of Th2 cytokines disturb the balance and lead to acute AD skin lesions, and this elevation grows with increasing severity of AD [19, 20]. In addition, IL-4 is the strongest regulator of IgE [21]; it binds the interleukin-4 receptor (IL-4R) on the cell surface and activates various nonreceptor protein tyrosine kinases (PTKs) in the cytoplasm to carry out signal transduction, thereby exerting various biological effects [10]. Moreover, IL-4 plays a significant role in accelerating the synthesis of IgE, as it can induce T-cell proliferation and the isotype switch from IgM to IgE [22]. IL-4 can also promote mast cell degranulation, the differentiation of Th cells into Th2 cells, and the production of IL-4 and IL-5 [11]. A differentiation imbalance of Th1/Th2 results in abnormal secretion of cytokines, which plays an important part in the occurrence and development of AD [23].

In 1995, using single-stranded conformational polymorphism (SSCP) and DNA sequencing methods, Rosenwasser et al. first reported a C→T mutation at position -590 of the IL-4 gene promoter, close to the glucocorticoid response element region; it was related to the total IgE level, since -590T leads to increased IgE [24]. Kawashima et al. studied IL-4 gene polymorphism in 88 core families with AD in Japan and found that carriers of the T allele of 590C/T were predisposed to AD; their case-control analysis also suggested an association with the TT genotype [25]. Taken together with previous work, mutation to the T allele at 590C/T and 589C/T influences the expression of IL-4 and may eventually lead to the occurrence of AD [24–26]. Our study showed that the mutant T allele prevailed in both the case group and the control group and that carriers of the TT homozygous genotype were much more predisposed to AD than carriers of the CT heterozygous and CC homozygous genotypes, further verifying that the 590C/T and 589C/T T alleles can increase the risk of AD. The T allele can increase the activity of the IL-4 gene promoter, and IL-4 plays an important role in enhancing IgE synthesis, suggesting that high frequencies of the T allele also increase susceptibility to AD [22].
Collectively, these results indicate that IL-4 gene polymorphism is closely related to the occurrence and development of AD.

In conclusion, serum IL-4 levels affect the occurrence and development of AD, and the 590T and 589T alleles of the IL-4 gene are likely to be related to high serum IL-4 levels, which may increase the risk of AD in children, providing new directions and ideas for the treatment of AD. However, the specific mechanism of AD remains unclear, and more studies are required for verification and analysis.

---

*Source: 1021942-2016-04-24.xml*
# Diagnostic and Therapeutic Challenges of a Large Pleural Inflammatory Myofibroblastic Tumor

**Authors:** Judith Loeffler-Ragg; Johannes Bodner; Martin Freund; Michael Steurer; Christian Uprimny; Bettina Zelger; Christian M. Kähler
**Journal:** Case Reports in Pulmonology (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102196

---

## Abstract

We report a 48-year-old woman with a pleural pseudoneoplasm that required several different diagnostic and therapeutic strategies. After the initial presentation with increasing dyspnoea, fever, dry cough, and interscapular pain, diagnostic work-up showed a large mediastinal mass with marked pleural effusion and high metabolic activity on 18F-FDG-PET/CT. Extensive CT-guided biopsy of the tumor, which reached from the visceral pleura into the right upper lobe, revealed no malignancy but a marked inflammatory tissue reaction containing foam cells. Initial empiric antibiotic therapy was temporarily successful. In the further course, however, the mass relapsed and was resistant to antibiotics and a corticosteroid trial. Under the working hypothesis of an inflammatory myofibroblastic tumor, the patient underwent surgical tumor resection, which finally confirmed the suspected diagnosis. Because of residual disease, intravenous immunoglobulins were administered, leading to a sustained response. This case of a large inflammatory pseudotumor in a pleural localisation, responsive to immunomodulation after incomplete resection, extends the reported spectrum of thoracopulmonary manifestations of this rare entity.

---

## Body

## 1. Introduction

Inflammatory myofibroblastic tumor (IMT) is a rare non-neoplastic lesion of unknown pathogenesis, comprising less than one percent of all surgically resected lung tumors in adults [1]. IMTs can mimic malignant processes both clinically and radiologically, and a definitive preoperative diagnosis is often difficult to make. These tumors consist of a background proliferation of spindle-shaped mesenchymal cells associated with a variable infiltrate of inflammatory cells. IMT most commonly involves the lung and the orbit but has been reported to occur in nearly every site of the body [2]. Historical synonyms for the disease include inflammatory pseudotumor, plasma cell granuloma, inflammatory myofibrohistiocytic proliferation, histiocytoma, xanthoma, fibroxanthoma, xanthogranuloma, fibrous xanthoma, plasma cell histiocytoma complex, plasmocytoma, and solitary mast cell granuloma [3, 4]. The variety of terms reflects the heterogeneous histological patterns that fall under the category of IMT. In this paper we describe the diagnostic and therapeutic approach to a large pleural inflammatory pseudotumor.

## 2. Case Report

A 48-year-old woman presented to a peripheral hospital with a 14-day history of progressive shortness of breath on exertion, dry cough, and interscapular pain. On physical examination the patient displayed reduced breath sounds and a dull percussion note at the right lung base but was otherwise unremarkable. The initial radiologic work-up revealed a large mediastinal mass measuring 9 cm with concomitant marked pleural effusion (Figure 1(a)). The main differential diagnosis was considered to be a malignant disease.
Because of a history of breast cancer (invasive ductal carcinoma, ypT1bN1aM0), treated two years earlier with neoadjuvant chemotherapy, surgery, and radiation and followed by ongoing adjuvant hormonal therapy with Arimidex and zoledronate, the patient was transferred to a gynecological department for further diagnostics. In the following days, fever and high CRP levels (up to 27.96 mg/dL; normal range 0.0–0.7 mg/dL) required sequential antibiotic therapy with doxycycline, piperacillin/tazobactam, and moxifloxacin. Autoimmune parameters (ANA, ANCA) and infectious screening for tuberculosis (T-SPOT), EBV, and toxoplasmosis were negative. Cytology from thoracocentesis revealed no malignant cells. Ten CT-guided needle biopsies of the tumor, which reached from the visceral pleura into the right upper lobe (Figure 2), excluded a metastasis of breast cancer. Because of these indeterminate results the patient was referred to our department.

The CT-guided biopsies primarily contained fibrotic and infiltrated parts of pleura and only some normal lung parenchyma. Whereas the intraoperative frozen section, showing an infiltration with small monomorphic cells, was not definitely diagnostic, the initial H&E histology suggested a macrophage disorder because of a monomorphic proliferation of mainly macrophages, some lymphocytes and plasma cells, and single neutrophils. There were no overt signs of malignancy, no nuclear pleomorphism, only rare mitoses, and no necrosis. Immunohistochemistry ruled out an underlying neoplastic lesion: the tumorous area was completely negative for epithelial markers, namely the pankeratin markers AE1/AE3 and Cam5.2 as well as p63, CK5/6, CK7, and CK20. Calretinin, CD117, TTF-1, and the melanocytic markers S100, HMB45, and Melan A also stained negative. The lesion was macrophage-rich and KiM1p- and CD68-positive, with single CD4-positive T cells and some CD79a- and CD138-positive plasma cells. There were no signs of a specific infectious disease such as tuberculosis (microscopy and TBC PCR were negative). H&E morphology and immunophenotype suggested a xanthogranulomatous process and the diagnosis of an inflammatory pseudotumor. Because only limited material was available and it was not certain that it was representative of the whole lesion, a rebiopsy of the mediastinal mass was recommended. The microbiologic work-up of the fine needle aspirate was negative for bacteria, mycobacteria, and fungi.

Figure 1: Anteroposterior chest radiograph showing a large homogeneous right paramediastinal opacity and a right-sided pleural effusion. (a) Initial presentation. (b) Response to treatment with moxifloxacin four weeks after initial presentation.

Figure 2: Computed tomography (CT) scan of the chest with CT-guided needle biopsy of a right paramediastinal tumor. In total, 10 biopsies were taken from the 8.4 cm mass.

Two weeks after the start of moxifloxacin therapy the patient was afebrile, with a marked decrease in CRP (from 27.96 mg/dL to 3.5 mg/dL). Two further weeks later, along with the clinical improvement, a reduction of the size of the mediastinal mass by about one-half was noted (from 6.2 × 8.4 cm to 6.2 × 3.4 cm; Figure 1(b)). Because of this clinical course and the well-documented CT-guided needle biopsy routes (excluding a sampling error), the working hypothesis of an inflammatory pseudotumor was established. The response to a fourth-generation fluoroquinolone suggested an infectious trigger, which unfortunately could not be proven.
Three months later the patient experienced clinical worsening, with hemoptysis and an increase in CRP and tumor size. Therefore, empirical treatment with methylprednisolone (40 mg/day) was started. An 18F-FDG PET-CT scan at that time revealed very high metabolic activity within the tumor (SUVmax: 41.7), suspicious of a malignant disease (Figure 3(a)). Because of the lack of response to corticosteroid therapy within the following 6 weeks, the interdisciplinary board recommended surgical resection of the progressing mass, and this advice was followed. The intraoperative situs revealed a yellow, soft, poorly demarcated mass originating from the visceral pleura. The final histology of the pleural tumor and the adjacent pulmonary parts from a wedge resection showed a marked inflammatory reaction with foam cells, consistent with the suspected inflammatory myofibroblastic tumor (Figures 4(a) and 4(b)). There was no ALK expression in our case.

Figure 3: 18F-FDG PET-CT scan with high FDG uptake in a right paramediastinal mass (a). Postoperative scans showing residual FDG uptake one month after surgery (b), a decrease in intensity after immunomodulation with intravenous immunoglobulins 4 months later (c), and a sustained response at one-year follow-up (d).

Figure 4: Tumor biopsy showing a dense, highly vascularized, mixed inflammatory infiltrate ((a); H&E, 40x), composed of macrophages with foamy cytoplasm, so-called histiocytes, some lymphocytes, and plasma cells ((b); H&E, 400x).

The one-month postoperative 18F-FDG PET-CT scan revealed, in addition to postoperative alterations, markedly hypermetabolic lesions (SUVmax: 20.78) consistent with residual inflammatory disease (Figure 3(b)). Thus, for immunomodulation, a single course of intravenous immunoglobulins (IVIG) was administered at a total dose of 1 g/kg, divided into two 8-hour infusions within two days. Follow-up 18F-FDG PET/CT scans showed a good metabolic response, with decreasing metabolic activity after three months (SUVmax: 9.5) and almost absent glucose metabolism 12 months after therapy (SUVmax: 3.2); apart from that, the CRP values remained within the normal range (Figures 3(b) and 3(c)).

## 3. Discussion

In inflammatory pseudotumors the myofibroblast has been recognized as the principal cell type, prompting the more recent classification as inflammatory myofibroblastic tumor [5]. Varying degrees of inflammatory infiltrates, with mixed populations of lymphocytes, plasma cells, histiocytes, and occasional eosinophils, define three histological subtypes: an organizing pneumonia pattern, a fibrous histiocytic pattern, and a lymphohistiocytic pattern. In the lung, such lesions most commonly present as solitary intrapulmonary nodules but can also be locally invasive [6]. Pleural manifestation of IMT is rare. There are some cases with pulmonary nodules attached to the pleura and mediastinal lesions with pleural involvement, but to our knowledge such a large pleural lesion has not yet been described [7–10]. It is currently unclear whether IMTs represent a primary inflammatory process or an underlying low-grade malignancy with a prominent inflammatory response [2, 3]. Proposed mechanisms include dysregulated cytokine production in response to an infection, because a history of infection is found in a third of patients. Single cases with chronic persistent Eikenella corrodens infection, DNA expression of Epstein-Barr virus or human herpesvirus 8, and associations with mycobacteria, actinomycetes, nocardiae, and pseudomonads have so far been reported [11–13].
The observation that a chromosomal rearrangement at the site of the anaplastic lymphoma kinase (ALK) gene at band 2p23 is present in about two-thirds of cases supports the hypothesis of a low-grade malignancy with a secondary inflammatory component [5]. In particular, case reports with local invasion or metastasis, although very rare, might represent such subtypes. In the case presented, neither an infection nor ALK expression could be detected.

Approximately 70% of patients are asymptomatic [14], but, as in this case, some patients may complain of cough, dyspnea, chest pain, or hemoptysis. As we experienced, radiographic features are not reliable for differentiating IMT from other causes of pulmonary nodules; thus, the diagnosis is often delayed until a diagnostic resection of the indistinct lesion is performed. Because of its propensity to mimic a malignant disease clinically and radiologically, the lesion was named inflammatory pseudotumor by Umiker in 1954 [15]. Both CT and metabolic PET imaging are prone to false-positive results. It is well known that uptake of radiolabeled glucose is not specific to malignant neoplasms and may be observed in a variety of tissues with increased glucose consumption; markedly increased 18F-FDG uptake has already been reported as a feature of IMTs [16, 17]. The clinical course and laboratory findings also tend to be nonspecific or unremarkable, although serological evidence of inflammation rather suggests a nonmalignant disease. Mildly increased CRP levels and an elevated sedimentation rate have been described in about 50% of cases; the extraordinarily high levels in this case may reflect the high "tumor burden" indicated by metabolic imaging. Despite an initial response to antibiotics, the disease of our patient progressed. The CRP values were indicative of therapy response as well as relapse, but their value as a biomarker for disease monitoring remains uncertain.

Complete surgical resection is the treatment of choice for IMT that does not respond to empirical medical therapy [18]. Most patients can be cured by complete surgical resection, but some lesions become locally invasive and involve the mediastinum, diaphragm, chest wall, vertebral bodies, heart, and major vessels. For patients with progressive disease who cannot undergo complete surgical resection (e.g., poor surgical candidates, multiple nodules, or unresectable disease), glucocorticoids, radiotherapy, chemotherapy, and anti-inflammatory and immunomodulatory concepts have been used with variable success [7, 19, 20]. In the case presented, IVIGs were administered because of residual postoperative disease. This approach had previously been shown to be successful in a single patient with a resistant inflammatory pseudotumor of the orbit [21]. IVIGs are established modulators of the immune system, approved for use in various autoimmune and inflammatory diseases. The precise mechanism by which IVIGs suppress harmful inflammation has not yet been definitively established; known mechanisms of action include the formation of immune complexes, interaction with activating Fc receptors on immune cells such as dendritic cells, T and B cells, and monocytes, binding of abnormal host antibodies, activation of the complement system, regulation of macrophage phagocytosis, and reduction of the levels of tumor necrosis factor-alpha and interleukins 1, 1 beta, and 10 [22]. The low rate of adverse events may justify the off-label use of this substance in therapy-resistant IMT cases. Two years after therapy our patient is still in remission.
In general, long-term follow-up is recommended, as recurrent tumors have been reported as long as 11 years after resection [23]. In conclusion, pleural IMT is a rare condition that should be included in the differential diagnosis of patients with inconclusive inflammatory histology. Because of its rarity, treatment remains empirical, spanning trials from antibiotics and anti-inflammatory or antiproliferative drugs to surgery and even immunomodulation.

---

*Source: 102196-2012-12-31.xml*
# Designs of Biomaterials and Microenvironments for Neuroengineering

**Authors:** Yanru Yang; Yuhua Zhang; Renjie Chai; Zhongze Gu

**Journal:** Neural Plasticity (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1021969

---

## Abstract

Recent clinical research on neuroengineering is primarily focused on biocompatible materials, which can be used to provide electroactive and topological cues, regulate the microenvironment, and perform other functions. Novel biomaterials for neuroengineering, including graphene, photonic crystals, and organs-on-a-chip, have received much attention. Graphene, which combines high mechanical strength and chemical stability with unique electrochemical performance for electrical signal detection and transmission, has significant potential as a conductive scaffold in medicine. Photonic crystal materials, a novel class of nerve substrates, have opened a new avenue for neuroengineering research because of their unique ordered structure and spectral properties. "Organ-on-a-chip" systems show significant promise for nerve regeneration by mimicking the microenvironment of nerve tissue. This paper reviews current progress in the design of biomaterials and microenvironments and provides case studies of nervous system scaffolds built upon these biomaterials. In addition, we propose a conductive patterned composite biomaterial that could mimic the neuronal microenvironment by combining the advantages of these materials.

---

## Body

## 1. Introduction

Nerve lesions, which cause a great number of disabilities around the world, have a tremendous impact on patients' productivity and quality of life. In general, nerve regeneration is the prime hindrance to limb reattachment in clinical practice. In previous studies, neuroengineering research on the peripheral nervous system (PNS) concentrated primarily on alternatives to neurografts, whereas work on spinal cord damage focused on creating a permissive environment for functional recovery [1]. During embryogenesis, neuronal precursor cells (i.e., neuroblasts) divide and differentiate into the cellular components of the PNS and the central nervous system (CNS). They are driven toward specific cellular fates while migrating to predetermined destinations and ultimately develop into neurons and glial cells [2]. Nerve tissue engineering (NTE) is one of the most promising strategies for restoring CNS function in humans; in practice, the growth and distribution of cells within three-dimensional (3D) microporous scaffolds is of clinical significance for neuroengineering. Furthermore, NTE provides an attractive and promising platform for the management of PNS injury, both by mechanically bridging the gap between severed nerves and by inducing neuroregenerative mechanisms in a well-regulated environment that mimics the in vivo microenvironment of the damaged nerve types, so as to provide optimal clinical effectiveness [3]. Existing explorations of different bioderived materials have provided several novel possibilities for the treatment and recovery of nerve injuries.

Nerve scaffolds consist of natural biological materials and synthetic materials.
Thus far, many approaches have been introduced to derive these two kinds of materials. For example, in [4], the authors derived natural biological materials from autogenous nerves or other native tissues such as skeletal muscles or blood vessels, as well as from polyester materials such as polyhydroxyalkanoate. Another approach to natural biomaterials is to develop tissue-engineered nerve scaffolds by reconstituting nerve cell-derived extracellular matrix (ECM) on natural biomaterials; a protocol has been developed to prepare and characterize cultured Schwann cell-derived ECM [5]. In that protocol, silk fibroin fibers and a chitosan conduit are prepared, seeded with Schwann cells for deposition of ECM, and then subjected to decellularization. The result was confirmed to assemble into a Schwann cell-supported, chitosan/silk fibroin-based, ECM-coated scaffold, which was used to bridge a 10 mm gap in the rat sciatic nerve. On the other hand, the synthetic materials used in nerve scaffolds mainly include decalcified bone tubes, nylon fiber tubes, and polyurethane. Yang et al. developed microporous polymeric nanofibrous scaffolds from biodegradable poly(l-lactic acid) (PLLA) for two-dimensional (2D) nerve stem cell (NSC) culture. The PLLA scaffolds were produced by a liquid-liquid phase separation strategy, and their physicochemical features were fully characterized by scanning electron microscopy and differential scanning calorimetry [6].

The combination of materials and tissue engineering is a mature research field, and numerous materials have been applied in clinical therapies. In recent years, several novel materials have been exploited, mainly because their excellent chemical and physical features can be applied to neuroengineering. In particular, a clinical study performed to determine the feasibility and safety of the collagen scaffold NeuroRegen found that patients demonstrated improved autonomic nerve function together with recovery of motor- and sensory-evoked potentials [7]. The combination of biomaterials and neuroengineering has been widely researched all over the world; nevertheless, more novel materials need to be developed to provide more choices for clinical therapy.

Graphene consists of a single layer of carbon atoms and has high mechanical strength and chemical stability with unique electrochemical properties for electrical signal detection and transmission. This is important because the diagnosis and treatment of nerve diseases rely mainly on the stimulation and recording of nerve impulses. A great number of biomaterials have been adopted as nerve scaffolds, and organ-on-a-chip devices provide novel in vitro microenvironments that have progressed to the point where they can be used in the development and regeneration of nerve tissue. Indeed, the latest progress in microtechnology allows more realistic mimicking of the naturally occurring microenvironment, in which the behavior and physiology of neurons and NSCs responding to the physical environment are more faithfully reproduced. Neural interface biomaterials have become a topic of great interest; meanwhile, photonic materials are an emerging area in the production of scaffolds for NTE.
Ordered porous materials such as photonic crystals provide a surface platform for studying the behavior of NSCs. This review covers recent studies on the three classes of bioderived materials above and their neuroengineering applications. Exploring the application of composite biomaterials as neural interface materials could be useful for fabricating multifunctional neuronal scaffolds, which can serve not only in vitro studies but also therapeutic purposes. In addition, we combine the merits of these biomaterials in a composite design intended to further improve nerve cell growth.

## 2. The Growth of Nerve Tissue Guided with Graphene

Neurons are electrically active cells whose function is closely tied to electrical activity. By depolarizing excitable cell membranes, electrical stimulation can initiate a functional response in neurons. In theory, depolarization can be achieved by ionic flow between two or more electrodes, provided at least one electrode is close to the target tissue. In general, two categories of electrodes have been used for neural stimulation in neuroengineering research, distinguished by their charge-density and charge-per-phase thresholds and by their size: electrodes fixed on the target organ surface possess a geometric surface area (GSA) greater than approximately 100,000 μm² and show low charge-density but high charge-per-phase thresholds, whereas microelectrodes show the opposite pattern [8, 9] (an illustrative calculation of these charge quantities is given at the end of this section introduction). Researchers have confirmed that electrical charges can enhance nerve regeneration by altering protein adsorption during neuron interactions with electroconductive materials [10, 11]. Scaffolds designed for neuroengineering can thus emulate the electrical properties of neurons, and growth on conductive substrates has been shown to enhance neurite outgrowth under electrical stimulation [12–14].

In general, graphene is the strongest and thinnest known material and has received great attention since it was first isolated from graphite by Novoselov and Geim in 2004 [15, 16]. Graphene, also described as single-crystal graphite, is a 2D crystal consisting of a single layer of carbon atoms. Its large specific surface area, excellent thermal, mechanical, and optical properties, and outstanding electrical conductivity make graphene an obvious choice for guiding the growth of nerve tissue. The work of Fabbro et al. showed that untreated graphene can interface with neurons while maintaining the integrity of these active cells. This work was the first to demonstrate a key step toward graphene-based deep brain implants: graphene electrodes hold great promise for implantation in the brain, where they could help restore functional loss after amputation, reverse paralysis, and provide relief for patients with movement disorders such as Parkinson's disease [17]. Existing neuroengineering research focuses on how graphene sheets affect neuronal signal transmission, and many studies have shown that the diverse physical properties of graphene can influence the directional growth of neuronal axons and can be used to promote the growth and activity of NSCs.
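Returning to the electrode charge quantities mentioned at the start of this section: as a rough illustration (with assumed pulse parameters, not values from [8, 9]), the charge per phase of a rectangular current pulse and the resulting charge density follow directly from the pulse amplitude, the pulse width, and the GSA:

```latex
% Charge per phase of a rectangular current pulse and the resulting
% charge density on the electrode surface (illustrative numbers only):
Q = I\,t_{\mathrm{pulse}} = (1\,\mathrm{mA})(0.2\,\mathrm{ms}) = 0.2\,\mu\mathrm{C},
\qquad
\sigma = \frac{Q}{\mathrm{GSA}}
       = \frac{0.2\,\mu\mathrm{C}}{10^{5}\,\mu\mathrm{m}^{2}}
       = 200\,\mu\mathrm{C}/\mathrm{cm}^{2}.
```

Delivering the same assumed pulse through a microelectrode with a GSA of, say, 1,000 μm² would raise the charge density a hundredfold, which is why the two electrode classes face very different charge-density and charge-per-phase constraints.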
### 2.1. Graphene as a Two-Dimensional Substrate for Neurons

The combination of outstanding thermal stability, biocompatibility, mechanical strength, and high electrical conductivity makes 2D graphene promising for a host of bioengineering applications [18, 19]. Graphene oxide (GO) is superior to graphene for preparing homogeneous aqueous suspensions because of its oxygen-containing hydrophilic groups, which reduce agglomeration of the graphene sheets. Work has also shown that ginseng-reduced graphene oxide (rGO) sheets increase the differentiation efficiency of NSCs toward nerve cells. In one experiment, hydrophobic hydrazine-rGO films exhibited no toxicity against human neural stem cells (hNSCs), and the more biocompatible hydrophilic GO and ginseng-rGO films supported proliferation of the hNSCs after three days. In addition, the hydrazine-rGO and, in particular, the ginseng-rGO films exhibited greater differentiation of hNSCs into neurons (rather than into glial cells) than the GO film after three weeks. The higher electron-transfer capability of rGO films is thought to bring about the enhanced differentiation on such films [20]. However, compared with graphene and other 2D or quasi-two-dimensional nanostructures that manifest superior flexibility and conductivity, the rGO derivative exhibits poorer conductivity [21]. The 2D graphene used in work with nerve cells is mainly produced by chemical vapor deposition [22, 23]. Zhang et al. measured the cytotoxicity of graphene layers in neural phaeochromocytoma-derived PC12 cells and found that graphene induced strong metabolic activity at low concentrations, while the cell apoptosis marker caspase-3 was strongly activated when PC12 cells were exposed to graphene at a high concentration of 10 μg/mL [24]. Li's group has mainly researched the effects of 2D graphene films on the development of hippocampal neurons, demonstrating that graphene not only has favorable biocompatibility with neurons but also plays a significant role in promoting neurite sprouting and outgrowth of mouse hippocampal cells [25]. This work demonstrates the promise of graphene as a biomaterial for neural interfacing and offers insight into future bioengineering applications of graphene.

### 2.2. Graphene as a Three-Dimensional Nerve Scaffold Material

In vitro experiments on cell behavior in the presence of graphene usually involve 2D graphene films, which leads to discrepancies between the 3D in vivo environment and the artificial 2D environment. Compared with 2D scaffolds, 3D scaffolds more accurately mimic the chemical, physical, and biological properties of the in vivo environment [26–28]. Thanks to its interconnected porous structure and large specific surface area, 3D microporous graphene is an excellent scaffold material for regenerative medicine and tissue engineering and provides a biomaterial interaction platform for in vivo experiments in living organisms [29–31]. The existing literature demonstrates that topographical cues, including the sizes and patterns of biomaterials, strongly influence NSC behavior.
Indeed, these observations provide a better understanding of the different roles that mechanical transduction plays in stem cell fate, especially in terms of directional differentiation, and of how these dynamic cues can be used to advance stem cell therapy [32].

To develop practical applications for graphene, significant effort has gone into assembling 2D graphene sheets into 3D macroscopic structures that can serve as nerve scaffolds. The characteristics of 3D graphene are closely related to the size of such structures. Therefore, carefully controlling the size of the 3D preparation allows one to regulate the topographical cues of graphene, which can be used to meet different application requirements and provides the opportunity to better understand the mechanisms behind graphene's effects in different applications. Many researchers have shown that 3D graphene scaffolds can not only promote the proliferation of NSCs but also induce, to a certain degree, the selective differentiation of NSCs into functional neurons. For instance, porous three-dimensional graphene foam (3D-GF), a novel scaffold for NSCs, can not only maintain NSC growth but also keep the cells in an actively proliferating state, with upregulated Ki67 expression compared with 2D graphene films. It has also been shown that 3D-GFs can accelerate the differentiation of NSCs into astrocytes and neurons; moreover, the electrical coupling of 3D-GFs with differentiated NSCs demonstrated effective electrical stimulation of these cells [33].

Tang's [34] and Song's [35] research groups introduced a novel interconnected microporous 3D-GF scaffold for NSCs in vitro, which enables more in-depth study of the effects of 3D graphene on cells. Their studies found that microglial cells grow very well on 3D graphene and that the pattern of graphene/cell interaction influences the pro- and anti-inflammatory responses of microglial cells cultured on graphene films or 3D-GF. Graphene showed a remarkable ability to rescue LPS-induced neuroinflammation, most likely through the restriction of microglial morphological transformation by the topographical cues of the 3D-GF surface. It is worth mentioning that hydrogel-doped graphene possesses excellent flexibility, which has received great attention for improving the regeneration of the PNS. Furthermore, the wettability, swelling ratio, morphology, mechanical properties, composition, and degradation behavior of graphene oxide/polyacrylamide (GO/PAM) hydrogels have been well characterized, and GO/PAM hydrogels have a positive impact on Schwann cell adhesion and proliferation [30].

### 2.3. Graphene with Other Applications in Neuroengineering

Previous studies introduced a novel method of modulating synapses by fabricating nanometer-scale GO fragments. This approach mainly affects cell activity rather than inhibiting neuronal signaling and holds promise for the treatment of neurological diseases [18]. Given the superior properties of 3D graphene structures, such synaptic approaches may be applied in neuroengineering, NSC transplantation therapy, and other fields.

Graphene, functioning as an improved artificial graft, can be used to support nerve repair and regeneration. Its unique physical properties regulate cellular growth behavior and improve cell activity, function, and development.
In the future, a primary goal of biomedical engineering will be to address the potential applications of graphene as a supporting biomaterial in cell culture. For example, the conductive properties of graphene will allow directional electric currents to be applied to living tissues. In summary, graphene addresses quite a few challenging clinical applications of bioengineering and has great prospects in neuroengineering.
## 3. The Application of Photonic Crystals in Neuroengineering

In 1987, Yablonovitch and John put forward the new concept of "photonic crystals," which describes how periodic dielectric structures affect the way light propagates through certain crystalline materials. Photonic crystals consist of ordered arrays of two or more materials with different dielectric constants (refractive indices). The materials form periodic patterns of dielectric constants, which generate a range of "forbidden" frequencies referred to as the photonic bandgap; photons with energies in the bandgap cannot propagate through the material [36]. Although there are examples of photonic crystals in nature, such as opal, feathers, and butterfly wings, the vast majority of photonic crystals are of artificial design. A number of artificial fabrication techniques are currently available to achieve responsive photonic crystal patterning [37–40], and photonic crystal materials are an emerging research field for novel nerve scaffolds in neuroengineering.

### 3.1. The Guidance of Nerve Cells by Ordered Structure

Recently, the topological cues provided by biological scaffolds have been suggested to regulate cell behavior and stem cell fate [41–44]. These structures can be measured in micrometers, and much work has gone into determining their organization, assembly, molecular composition, and function [45]. Research has shown that substrates patterned with grooves or ridges can regulate cell adhesion and orientation [46]. In addition, the morphology and alignment of cells can be modified by culturing them on stretched polymer inverse opal films [47]. In support of the applicability of such technologies, studies have shown a substantial connection between NSC behavior and the nanotopography of the materials on which the cells grow [48, 49].

The development of photonic crystal microstructures has been a primary focus of research into tissue regeneration over the past thirty years, and these materials have found use in a multitude of tissue engineering applications, such as controlling the spatial arrangement of cells, guiding cell behavior, and differentiating stem cells. Specifically, the authors of [50] proposed simple stretched inverse opal structures for guiding the formation of cell orientation gradients and showed that tendon fibroblasts growing on such structures formed elongation gradients that matched the topographical cues of the ordered substrate [50].
Thus far, there have been reports of applying photonic crystal structures in neuroengineering, as shown in [51–53]. Nerve cell synapses can be guided by mechanical force in vivo, and the "random-to-aligned" cell gradients generated by such forces reproduce the region where neurons insert into connective tissue; this has significant potential for applications in neuroengineering. Photonic crystal materials such as ordered microporous silicon are promising electrode materials for nerve repair, mainly because they are biologically inert and highly biocompatible. Porous silicon has a large surface area, adheres firmly to tissues, and does not induce an inflammatory response, all of which suggest that it would make a good biomaterial for implantable electronic nerve devices [51]. Wang et al. developed a novel approach to create microporous tubular scaffolds from chitosan; these scaffolds have suitable mechanical properties and controllable inner structures and are therefore useful for neuroengineering. The material has highly porous inner matrices with a large network of interconnected pores and axially oriented microchannels. Experiments in living donor tissue showed that these scaffolds exhibit mechanical strength, swelling, porosity, and biodegradability that mimic the physical and chemical microenvironment of living organisms, and they therefore hold great potential for neuroengineering applications. Characterization of in vitro cell cultures on these chitosan scaffolds showed that differentiated Neuro-2a cells grew along the oriented microchannels and that the interconnected pores in the scaffold's interior were beneficial for both nutrient diffusion and cell ingrowth [52]. Patterned biomimetic materials can guide the growth and arrangement of cells [54], and ordered porous materials provide a surface platform for studying the behavior of nerve cells and NSCs.

### 3.2. Monitoring Nerve Cells on Ordered Porous Material

The photonic bandgap of periodic dielectric structures is the fundamental property of photonic crystals. The emergence of the photonic bandgap depends on the structure of the crystal, the ratio of the dielectric constants of the constituent materials, and the geometric configuration of the crystal. In general, if the difference in dielectric constant between the two materials of a photonic crystal is large enough, Bragg scattering occurs at the material interfaces; the greater the dielectric-constant ratio, the more strongly incident light is scattered and the more likely a photonic bandgap is to arise [55, 56]. The characteristic reflection peaks of the crystals are determined by the structural periodicity; hence, ordered porous crystals exhibit excellent optical inertness, avoiding chemical instabilities such as bleaching, quenching, or fading [57].

In addition to the physical features by which periodic dielectric structures can guide the growth of nerve cells, photonic crystals can be used in a number of applications that make use of the photonic bandgap. For example, their long-range ordered structures provide a stable optical code that can report on the growth of nerve cells through changes in the refractive index. Huang et al.
showed that lithographically patterned microporous silicon photonic crystals functionalized with different bioactive peptide-doped surfaces can provide spatial guidance for NSC differentiation, with NSCs spatially directed toward astrogenesis or neurogenesis as a function of peptide identity and surface properties [58]. In addition, these crystals have found applications in biomedical optics: adsorbing proteins onto the surface of a photonic crystal changes its refractive index, which can be exploited to detect neurotransmitters and neural markers. In particular, acetylcholinesterase-based organophosphate nerve agent-sensing photonic crystals have been widely studied in neuroengineering. These photonic crystals consist of polymerized crystalline colloidal arrays that can detect the organophosphorous compound parathion at ultratrace concentrations in aqueous solutions; the wavelength of the diffracted light red-shifts when the sensor detects the nerve agent [59–62].
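The diffraction readout described above can be made concrete with the Bragg-Snell relation commonly used for opal-type photonic crystal sensors; this is an illustrative sketch of the standard relation, not a formula taken from [59–62]:

```latex
% First-order Bragg-Snell condition for the reflection peak of an
% opal-type photonic crystal (d_{111}: lattice plane spacing,
% n_eff: effective refractive index, theta: angle of incidence):
\lambda_{\max} = 2\, d_{111} \sqrt{\,n_{\mathrm{eff}}^{2} - \sin^{2}\theta\,}
```

Analyte binding increases either the effective refractive index (e.g., through protein adsorption) or the lattice spacing (e.g., through swelling triggered by the sensing chemistry), and either change shifts the reflection peak to longer wavelengths, producing the red shift used as the readout.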
## 4. Organ-On-a-Chip as a Microsystem for Nerve Tissue

The engineering of cellular environments has been shown to be crucial for improving the in vitro viability and in vivo-like function of cells and tissues, because such environments more accurately mimic the situation in living organisms [63]. Organ-on-a-chip platforms combine microfluidics, microengineering technologies, and essential biomimetic principles to faithfully reproduce significant aspects of living tissue, including critical spatiotemporal cues, microarchitecture, cell-cell communication, and extracellular environments [64]. The development of organ-on-a-chip devices has yielded practical applications in drug screening and clinical research. In vitro organ-based experiments with this new technology carry on the historical tradition of medical techniques that have sought to reconstitute damaged organs or tissues, and this novel technology is especially relevant to research on the nervous system. Lundborg has proposed that the implantation of microfluidic chips in the nervous system might offer a novel interface between biology and technology, and that the concomitant development of genetic engineering might provide novel possibilities for the manipulation of nerve regeneration and degeneration [65]. Organ-on-a-chip technology overcomes many of the challenges traditionally associated with clinical studies of neurological disease, especially the complexity of neurological phenomena. The combination of neural engineering and chip research is at present mainly focused on axonal growth, the blood-brain barrier, neurospheres, and 3D or layered neural tissues [66–71].

### 4.1. Axonal Growth on a Chip

To successfully regenerate nerve tissue, axonal outgrowth from the proximal stumps must proceed without interference from the surrounding microenvironment while forming new connections with the distal stumps. To address this issue, Bryan et al. proposed a novel strategy to promote guided axonal sprouting through a spatial neuron guidance channel [72]. Studies have shown that microchannels or microgrooves ranging in width from a few dozen nanometers up to 10 microns can induce directional axon growth [73] and also promote the formation and development of axons [74, 75]. In addition, bioresorbable guide channels made of poly(lactic-co-glycolic) acid have been shown to greatly affect glial growth factors and Schwann cells during peripheral nerve regeneration [72]. Novel protocols for establishing CNS models on microplatforms have allowed axons to be visualized and quantified [76], and Hadlock et al. found that a polymer foam conduit composed of longitudinally aligned microchannels, 60 to 550 microns in diameter and cultured with Schwann cells, promoted peripheral nerve regeneration. These conduits offer a microenvironment that is permissive to axonal regeneration [77]. Kim et al.
created a neuron chip with a surface-printed microdot array to control axon branch formation and showed that the majority of collateral axon branches stemmed from axonal regions on a dot and terminated on neighboring microdots; branch length increased as the spacing between dots increased [8]. This approach was also used to identify connectability defects in nerve cells from a mouse model of 22q11.2 deletion syndrome (DiGeorge syndrome) by comparison with wild-type preparations. The results demonstrated the reliability and sensitivity of the on-chip connectability approach and validated it as a means of quickly assessing neuronal connectability defects in neuropsychiatric disease modeling [78]. The application of microchips in neuroengineering is appealing because they can maintain the cellular environment in both a spatial and a temporal manner [79]. Shin et al. designed a compartmentalized microfluidic device as a cell culture chamber and found that axons traversing the channels in the microchip could be separated from the somata, forming an arrangement comparable to dissociated primary neurons [80]. Axons are the functional units that connect neurons to each other; hence, technologies such as those described here need further development to provide the accurate guidance of axonal growth needed for neuroengineering applications.

### 4.2. Microfluidic BBBs-On-a-Chip

The blood-brain barrier (BBB) is derived from specialized endothelial cells and isolates the blood from brain tissue. Specifically, the BBB selectively hinders the access of many exogenous compounds to the CNS [81]. The BBB basically consists of three kinds of cells: endothelial cells lined by astrocytes and pericytes. The endothelial membrane forms large numbers of tight junctions between cells; consequently, compound permeability is tightly controlled, and barrier integrity is reflected in high levels of transendothelial electrical resistance (TEER). The BBB protects the brain from noxious compounds in the blood and offers a homeostatic environment for optimal neuronal function. Limited BBB permeability leads to low efficiency in the clinical drug treatment of CNS pathologies, and examining BBB function and dysfunction is crucial for biomedical studies and drug development [82].

The BBB-on-a-chip is a microfluidic platform that can be used to mechanically and biochemically modulate BBB function [83]. This technology enables the real-time monitoring of neurons in a designed physiological niche, for example, through the use of small chambers and fluid guides as well as attached sensors. Instances of BBBs-on-a-chip in the literature have demonstrated the feasibility of providing more accurate environments for research on organ-level functions [84–86]. These modular microenvironments recapitulate the roles of the "neurovascular unit" through a vertical stack of poly(dimethylsiloxane) neural parenchymal chambers separated from a vascular channel by a porous polycarbonate membrane.
Such microsystems will likely prove useful in studying neurodegenerative disorders and as screening tools in toxicology and neuroinfectious disease studies [87]. Microfluidic devices are gaining ground as novel automated microsystems for neuron culture and real-time monitoring, and the BBB-on-a-chip model provides an in vitro environment that mimics the natural forms and functions of the BBB and might be of great benefit in developing treatments for nerve disease [88]. For instance, recreating BBB structure and physiology on a chip, in the form of a neurovascular microfluidic bioreactor incorporating a brain chamber and a vascular chamber separated by a microporous membrane, allows for adequate cell aggregation to support real-time monitoring and systematic analysis [89]. In addition, organs-on-a-chip have opened up a novel avenue for researching the characteristics of neurons derived from Alzheimer's disease brains. A microfluidic chip based on 3D neuroaxonal spheroids imitates the living brain more accurately by supplying a constant flow of fluid, similar to what is seen in the interstitial space of the brain, and researchers have used such chips to study the influence of flow on neural networks, neurospheroid size, and neural stem cell differentiation [90]. Takeda et al. designed a three-chambered microfluidic platform modeling double-layered neurons to examine uptake and propagation in response to changes in tau levels, which occur in the interstitial fluid of the brains of tau transgenic mice and in the cortices of human Alzheimer's disease patients [91]. In summary, 3D microfluidic BBBs-on-a-chip with controllable size and shape are a potential in vitro model for studying nerve tissue disease.
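Tight-junction integrity in such BBB models is usually reported as an area-normalized TEER value. As a point of reference only, the sketch below shows the standard conversion of a raw on-chip resistance reading into TEER; the resistances and membrane area are hypothetical placeholders, not values taken from the cited studies:

```python
# Minimal TEER sketch; the readings and membrane area below are
# illustrative assumptions, not data from [82-89].

def teer_ohm_cm2(r_total_ohm: float, r_blank_ohm: float, area_cm2: float) -> float:
    """Area-normalized TEER: (measured resistance - cell-free blank) * membrane area."""
    return (r_total_ohm - r_blank_ohm) * area_cm2

# Hypothetical chip readings: 1900 ohm with the endothelial layer present,
# 150 ohm for the blank device, across a 0.1 cm^2 membrane.
print(teer_ohm_cm2(1900.0, 150.0, 0.1))  # -> 175.0 ohm*cm^2
```

Because the raw resistance scales inversely with membrane area, reporting the area-normalized value is what lets chips with very small membranes be compared against conventional Transwell-style cultures.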
## 5. Conclusions and Discussions

The unique morphology of nerves and neurons, with their distinct functional units, makes nerve repair and regeneration particularly challenging. The field of clinical medical materials has progressed significantly in the past several years and has given rise to the design and synthesis of functional biomaterials for therapeutic and diagnostic applications. The present challenges and future goals of such proof-of-concept research need to be emphasized as well. Electrical conductivity is the primary predictor of neural signal quality in nerve repair and regeneration, and conductive materials will be useful for a great number of applications. Communication between neurons and their downstream target cells takes place through the specificity of synapses, and information is transmitted between neuronal circuit elements by electrical or chemical signals. Neural signals based on electrical conduction are predicted to have a strong influence on nerve repair and regeneration, and neural components made of conductive materials are expected to have wide clinical potential. The development of graphene technology is a field at the frontier of biomedical research, and one of the primary focuses of such research is to find ways to manipulate the properties of graphene materials so as to modulate neuronal synapses and neuronal excitability. The directional morphology of cell culture scaffolds can promote various nerve cell behaviors, and thus another promising technology is the stimulation of nerve cell growth by topological cues and electrical signals through the use of photonic crystals. These crystals are relatively easy to make, highly conductive, and controllable. There is still significant room for the optimization of nerve conduits, because a number of the parameters that affect their clinical effects, as well as the underlying mechanisms of their use, are still not well understood [3]. Accordingly, there has been only limited exploitation of photonic crystal devices in neuroengineering. Microfluidic-based devices have emerged as new cell culture platforms for neurobiology research.
This is due to their excellent spatial and temporal control, easy assembly, reproducibility, flexibility, and amenability to imaging and biochemical analyses, as well as their high-throughput potential, and they are likely to play an increasingly important role in establishing physiologically relevant culture and tissue models [92]. The evolution of biomimetic ECMs is of great significance for nerve tissue repair and regeneration. Existing research has shown that electrostimulation of neurons in the absence of topological cues can guide axonal extension. During nerve repair and regeneration, the growth behavior of nerve cells is regulated by the directional morphology of a scaffold, the mechanical stretching of the scaffold, and the electrical signals in the scaffold. However, existing studies do not consider these factors together to regulate cell growth behavior. A technical problem for neuroengineering is how to develop a strategy that combines the advantages of all these factors to further improve nerve cell growth (Figure 1). Accordingly, it is necessary to address these issues as a way of identifying more efficient and cost-effective therapies in future research. Such hybrid biomaterials will find use with myocardial and other tissues such as muscle or bone and will be useful for the fabrication of tissues and cell constructs by providing conductive media, topographical cues, and biomimetic microenvironments.

Figure 1: Design of biomaterials and microenvironments based on graphene, photonic crystals, and organ-on-a-chip for NTE.

---

*Source: 1021969-2018-12-09.xml*
# Designs of Biomaterials and Microenvironments for Neuroengineering

**Authors:** Yanru Yang; Yuhua Zhang; Renjie Chai; Zhongze Gu

**Journal:** Neural Plasticity (2018)

**Category:** Biological Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2018/1021969
---

## Abstract

Recent clinical research on neuroengineering is primarily focused on biocompatible materials, which can be used to provide electroactive and topological cues, regulate the microenvironment, and perform other functions. Novel biomaterials for neuroengineering, including graphene, photonic crystals, and organs-on-chips, have received much attention in this field of research. Graphene, which has the advantages of high mechanical strength and chemical stability together with unique electrochemical performance for electrical signal detection and transmission, has significant potential as a conductive scaffold in the field of medicine. Photonic crystal materials, a novel concept in nerve substrates, have provided a new avenue for neuroengineering research because of their unique ordered structures and spectral attributes. “Organ-on-a-chip” systems have shown significant promise for developing solutions for nerve regeneration by mimicking the microenvironment of nerve tissue. This paper reviews current progress in the design of biomaterials and microenvironments and provides case studies in developing nervous system scaffolds based on these biomaterials. In addition, we propose a conductive, patterned, composite biomaterial that could mimic the neuronal microenvironment for neuroengineering by combining the advantages of these materials.

---

## Body

## 1. Introduction

Nerve lesions, which cause a great number of disabilities around the world, have a tremendous impact on patients' productivity and quality of life. In general, nerve regeneration is the prime hindrance to limb reattachment in clinical practice. In previous studies, neuroengineering research on the peripheral nervous system (PNS) has primarily concentrated on alternatives to neurografts, whereas work on spinal cord damage has primarily focused on creating a permissive environment for functional recovery [1]. During embryogenesis, neuron precursor cells (i.e., neuroblasts) divide and differentiate into the cellular components of the PNS and the central nervous system (CNS). They are driven towards specific cellular fates while migrating to predetermined destinations, ultimately developing into neurons and glial cells [2]. Nerve tissue engineering (NTE) is one of the most promising strategies for restoring CNS function in humans; indeed, the growth and distribution of cells within three-dimensional (3D) microporous scaffolds is of clinical significance for neuroengineering. Furthermore, NTE provides an attractive and promising platform for the competent management of PNS injury by mechanically bridging the gap between severed nerves and by inducing neuroregenerative mechanisms in a well-regulated environment that mimics the in vivo microenvironment of the specific nerve types that have been damaged, so as to provide optimal clinical effectiveness [3]. Existing explorations of different bioderived materials have provided several novel possibilities for the treatment and recovery of nerve injuries.

Nerve scaffolds consist of natural biological materials and synthetic materials, and many solutions have been introduced to derive both kinds of materials. For example, in [4], the authors derived natural biological materials from autogenous nerves or other native tissues, such as skeletal muscles or blood vessels, as well as from polyester materials, i.e., polyhydroxyalkanoate.
Another approach to natural biomaterials is to develop tissue-engineered nerve scaffolds by reconstituting nerve cell-derived extracellular matrix (ECM) using natural biomaterials, together with a protocol for preparing and characterizing cultured Schwann cell-derived ECM [5]. In that protocol, silk fibroin fibers and a chitosan conduit are prepared, seeded with Schwann cells for deposition of ECM, and then subjected to decellularization. This was confirmed to assemble into a Schwann cell-supported, chitosan/silk fibroin-mediated, ECM-coated scaffold, which was used to bridge a 10 mm gap in the rat sciatic nerve. On the other hand, the synthetic materials used in nerve scaffolds mainly include decalcified bone tubes, nylon fiber tubes, and polyurethane. Yang et al. developed microporous polymeric nanofibrous scaffolds from biodegradable poly(l-lactic acid) (PLLA) for two-dimensional (2D) neural stem cell (NSC) culture. The PLLA scaffolds were produced by a liquid-liquid phase separation strategy, and their physicochemical features were fully characterized by scanning electron microscopy and differential scanning calorimetry [6].

The combination of materials and tissue engineering is a mature research field, and numerous materials have been applied in clinical therapies. In recent years, several novel materials have been exploited, mainly because their excellent chemical and physical features suit the field of neuroengineering. In particular, a clinical study has been performed to determine the feasibility and safety of the collagen scaffold NeuroRegen, and patients demonstrated improved autonomic nerve function along with a recovery of motor- and sensory-evoked potentials from the soma [7]. The combination of biomaterials and neuroengineering has been widely researched all over the world; however, it is necessary to develop more novel materials to provide more choices for clinical therapy.

Graphene consists of a single layer of carbon atoms and offers high mechanical strength and chemical stability with unique electrochemical properties for electrical signal detection and transmission. This is important because the diagnosis and treatment of nerve disease rely mainly on the stimulation and recording of nerve impulses. A great number of biomaterials have been adopted as nerve scaffolds, and organ-on-a-chip devices provide novel in vitro microenvironments that have progressed to the point where they can be used in the development and regeneration of nerve tissue. Indeed, the latest progress in microtechnology allows more realistic mimicking of the naturally occurring microenvironment, in which the behavior and physiology of neurons and NSCs responding to the physical environment are more true to life. Neural interface biomaterials have become a topic of great interest; meanwhile, photonic materials are an emerging area in the production of stents in NTE. Ordered porous materials like photonic crystals provide a surface effect for studying the behavior of NSCs.

This literature review covers recent studies on the three kinds of bioderived materials above and their neuroengineering applications.
Exploring the application of compounded biomaterials in the field of neural interface materials could be serviceable in fabricating multifunctional neuronal scaffolds, which can be used not only for in vitro studies but also for therapeutic purposes. In addition, we combine the merits of such biomaterials in a composite design intended to further improve nerve cell growth.

## 2. The Growth of Nerve Tissue Guided with Graphene

Neurons are electrically active cells whose function is exceedingly closely related to electrical activity. By depolarizing excitable cell membranes, electrical stimulation can initiate a functional response in neurons. In theory, depolarization can be achieved by ionic flow between two or more electrodes, at least one of which is close to the target tissue. In general, two categories of electrodes have been used for neural stimulation in neuroengineering research: macroelectrodes, which are fixed on the surface of the target organ and possess a geometric surface area (GSA) greater than approximately 100,000 μm², and much smaller microelectrodes, whose charge-per-phase and charge-density thresholds differ accordingly [8, 9]. Researchers have confirmed that electrical charges are able to enhance nerve regeneration by altering protein adsorption during neuron interactions with electroconductive materials [10, 11]. Scaffolds designed for neuroengineering can simulate the electrical properties of neurons, and results show that growth on conductive substrates can enhance neurite outgrowth under electrical stimulation [12–14].

In general, graphene is the strongest and thinnest known material, and it has received great attention since being separated from graphite by Novoselov and Geim in 2004 [15, 16]. Graphene, a single-layer crystal of graphite, is a 2D crystal consisting of a single layer of carbon atoms. Its large specific surface area, excellent thermal, mechanical, and optical properties, and outstanding electrical conductivity make graphene an obvious choice for guiding the growth of nerve tissue. The work of Fabbro et al. has shown that untreated graphene can be interfaced with neurons while maintaining the integrity of the active cells. This work was the first to demonstrate that graphene can support the first step in creating deep brain implants, and graphene electrodes hold great promise for implantation in the brain to restore function lost after amputation, to reverse paralysis, and to provide relief for patients with movement disorders such as Parkinson's disease [17]. Existing neuroengineering research focuses on the impact of graphene sheets on neuronal signal transmission mechanisms, and many studies have shown that the diverse physical properties of graphene can affect the directional growth of neuronal axons and can be used to promote the growth and activity of NSCs.

### 2.1. Graphene as a Two-Dimensional Substrate for Neurons

The combination of outstanding thermal stability, biocompatibility, mechanical strength, and high electrical conductivity makes 2D graphene promising for a host of bioengineering applications [18, 19]. Graphene oxide (GO) is superior to graphene for the preparation of homogeneous aqueous suspensions because of its oxygen-containing hydrophilic groups, which reduce agglomeration of the graphene sheets.
There has also been work showing that ginseng-reduced graphene oxide (rGO) sheets increase the differentiation efficiency of NSCs towards nerve cells. In one experiment, hydrophobic hydrazine-rGO films exhibited no toxicity against human neural stem cells (hNSCs), and the more biocompatible hydrophilic GO and ginseng-rGO films supported proliferation of the hNSCs after three days. In addition, the hydrazine-rGO and, in particular, the ginseng-rGO films exhibited greater differentiation of hNSCs into neurons (rather than into glial cells) than the GO film after three weeks. The higher electron-transfer capability of the rGO films is thought to bring about the enhanced differentiation observed on them [20]. However, compared with graphene and other 2D or quasi-two-dimensional nanostructures that manifest superior flexibility and conductivity, the rGO derivative exhibits worse conductivity [21]. The 2D graphene used in work with nerve cells is mainly produced by chemical vapor deposition [22, 23]. Zhang et al. measured the cytotoxicity of graphene layers in neural phaeochromocytoma-derived PC12 cells and found that graphene induced strong metabolic activity at low concentrations, whereas the cell apoptosis marker caspase-3 was strongly activated when PC12 cells were exposed to graphene at high concentrations [24]. Li's group has mainly researched the effects of 2D graphene films on the development of hippocampal neurons, demonstrating that graphene not only has favorable biocompatibility with neurons but also plays a significant role in promoting neurite sprouting and outgrowth in mouse hippocampal cells [25]. This work demonstrates the promise of graphene as a biomaterial for neural interfacing and offers insight into future bioengineering applications of graphene.

### 2.2. Graphene as a Three-Dimensional Nerve Scaffold Material

In vitro experiments on cell behavior in the presence of graphene usually involve 2D graphene films, which leads to discrepancies between the 3D in vivo environment and the artificial 2D environment. Compared with 2D scaffolds, 3D scaffolds more accurately mimic the chemical, physical, and biological properties of in vivo environments [26–28]. Owing to their interconnected porous structure and larger specific surface area, the 3D micropores of graphene make it an excellent scaffold material for regenerative medicine and tissue engineering and for providing a biomaterial interaction platform during in vivo experiments [29–31]. The existing literature has demonstrated that topographical cues, including the sizes and patterns of biomaterials, have a great influence on NSC behavior.
Indeed, these observations provide a better understanding of the different roles that mechanical transduction plays in stem cell fate, especially in terms of directional differentiation, and of how these dynamic cues can be used to advance the field of stem cell therapy [32].

To develop practical applications for graphene, significant effort has gone into assembling 2D graphene sheets into 3D macroscopic structures that can serve as nerve scaffolds. The characteristics of 3D graphene are closely related to the size of such structures. Therefore, carefully controlling the size of the 3D preparation allows one to regulate the topographical cues of graphene, which can be used to meet different application requirements, and provides an opportunity to better understand the mechanisms behind graphene's effects in different applications. Many researchers have shown that 3D graphene scaffolds can not only promote the proliferation of NSCs but also induce the selective differentiation of NSCs into functional neurons to a certain degree. For instance, porous three-dimensional graphene foam (3D-GF), which acts as a novel scaffold for NSCs, can not only maintain NSC growth but also keep the cells in an actively proliferating state, with upregulated Ki67 expression compared with 2D graphene films. It has also been shown that 3D-GFs can accelerate the differentiation of NSCs into astrocytes and neurons, and the electrical coupling of 3D-GFs with differentiated NSCs demonstrated the effective electrical stimulation of these cells [33].

Tang's [34] and Song's [35] research groups introduced a novel interconnected microporous 3D-GF scaffold for NSCs in vitro, which can be used to carry out more in-depth studies of the effects of 3D graphene on cells. They found that microglial cells grow very well on 3D graphene and that the pattern of graphene/cell interactions influences the pro- and anti-inflammatory responses of microglial cells cultured on graphene films or 3D-GF. Graphene showed a remarkable ability to rescue LPS-induced neuroinflammation, most likely through the restriction of microglial morphological transformation by the topographical cues of the 3D-GF surface. It is also worth mentioning that graphene-doped hydrogels possess excellent flexibility and have received great attention for improving the regeneration of the PNS. The wettability, swelling ratio, morphology, mechanical properties, composition, and degradation behavior of graphene oxide/polyacrylamide (GO/PAM) hydrogels have been well characterized, and GO/PAM hydrogels have a positive impact on Schwann cell adhesion and proliferation [30].

### 2.3. Graphene with Other Applications in Neuroengineering

In previous studies, a novel method was introduced to inhibit synapses by fabricating nanometer-scale GO fragments; this approach mainly affects cell activity rather than inhibiting neuronal signaling and has potential in the treatment of neurological diseases [18]. Given the superior properties of 3D graphene structures, such approaches may find application in neuroengineering, NSC transplantation therapy, and other fields.

Graphene, functioning as an improved artificial graft, can be used to support nerve repair and regeneration; its unique physical properties regulate cellular growth behavior and improve cell activity, function, and development. In the future, a primary goal of biomedical engineering will be to address the potential applications of graphene as a supporting biomaterial in cell culture. For example, the conductive properties of graphene will allow us to apply directional electric currents to living tissues. In summary, graphene addresses quite a few challenging clinical applications of bioengineering and has great prospects in neuroengineering.
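To make the electrode size classes quoted at the start of Section 2 concrete, the short sketch below converts stimulation pulse parameters into charge per phase and charge density over a given GSA. It is an illustrative calculation only; the current, phase duration, and GSA values are hypothetical, not taken from [8, 9]:

```python
# Illustrative stimulation arithmetic; the pulse parameters and GSA
# below are hypothetical assumptions, not data from [8, 9].

UM2_PER_CM2 = 1e8  # 1 cm^2 = 1e8 square micrometers

def charge_per_phase_uC(current_uA: float, phase_ms: float) -> float:
    """Charge delivered in one pulse phase, in microcoulombs (uA * ms = nC)."""
    return current_uA * phase_ms / 1000.0  # nC -> uC

def charge_density_uC_cm2(charge_uC: float, gsa_um2: float) -> float:
    """Charge density over the electrode's geometric surface area (uC/cm^2)."""
    return charge_uC / (gsa_um2 / UM2_PER_CM2)

# A hypothetical 100 uA, 0.1 ms pulse on a 100,000 um^2 electrode:
q = charge_per_phase_uC(100.0, 0.1)          # 0.01 uC per phase
print(charge_density_uC_cm2(q, 100_000.0))   # -> 10.0 uC/cm^2
```

For this hypothetical pulse, the 100,000 μm² electrode sees 10 μC/cm² per phase; the same pulse delivered through a much smaller microelectrode would imply a proportionally higher charge density, which is why the two electrode classes face different safety thresholds.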
## 3. The Application of Photonic Crystals in Neuroengineering

In 1987, Yablonovitch and John put forward the new concept of “photonic crystals”, which describes the effect that periodic dielectric structures have on the way light propagates through certain crystalline materials. Photonic crystals consist of ordered arrays of two or more materials with different dielectric constants (refractive indices). The materials form periodic patterns of dielectric constants, which generate a range of “forbidden” frequencies referred to as the photonic bandgap; photons with energies in the bandgap cannot propagate through the material [36]. Although there are examples of photonic crystals in nature, such as opal, feathers, and butterfly wings, the vast majority of photonic crystals are of artificial design. A number of artificial fabrication techniques are currently available for responsive photonic crystal patterning [37–40], and photonic crystal materials are an emerging class of novel nerve scaffolds, which can be used as stents in neuroengineering.

### 3.1. The Guidance of Nerve Cells by Ordered Structure

Recently, the topological cues provided by biological scaffolds have been suggested to regulate cell behavior and stem cell fate [41–44]. These structures can be measured in micrometers, and much work has gone into determining their organization, assembly, molecular composition, and function [45]. Research has shown that substrates patterned with grooves or ridges can regulate cell adhesion and orientation [46]. In addition, it has been shown that the morphology and alignment of cells can be modified by culturing them on stretched polymer inverse opal films [47]. In support of the applicability of such technologies, studies have shown that there is a substantial connection between NSC behavior and the nanotopography of the materials upon which they grow [48, 49].

The development of photonic crystal microstructures has been a primary focus of research into tissue regeneration over the past thirty years, and these materials have found use in a multitude of tissue engineering applications, such as controlling the spatial arrangement of cells, guiding cell behavior, and differentiating stem cells. Specifically, the authors of [50] proposed the application of simple stretched inverse opal structures for guiding the formation of cell orientation gradients, and it was shown that tendon fibroblasts growing on such structures formed elongation gradients that matched the topographical cues of the ordered substrate [50]. Thus far, there have been reports of applying photonic crystal structures in neuroengineering, as shown in [51–53].

Nerve cell synapses can be easily guided by mechanical force in vivo, and the “random-to-aligned” cell gradients generated by such forces reproduce the part of the neuron that is inserted into connecting tissues and have significant potential for applications in neuroengineering.
Photonic crystal materials like ordered microporous silicon are promising electrode materials in the nerve repair setting, mainly because they are biologically inert and have excellent biocompatibility. Porous silicon has a large surface area, adheres firmly to tissues, and does not induce an inflammatory response, all of which suggest that it would make a good biomaterial for implantable electronic nerve devices [51]. Wang et al. developed a novel approach to create microporous tubular scaffolds from chitosan; these have suitable mechanical properties and controllable inner structures and are therefore useful for neuroengineering. The material has a highly porous inner matrix with a large network of interconnected pores and axially oriented microchannels. Experiments in living donor tissue showed that these scaffolds exhibit mechanical strength, swelling, porosity, and biodegradability that mimic the physical and chemical microenvironment in living organisms, and they therefore have great potential for applications in neuroengineering. Characterization of in vitro cell cultures on these chitosan scaffolds showed that differentiated Neuro-2α cells grew along the oriented microchannels and that the interconnected pores in the scaffold's interior were beneficial for both nutrient diffusion and cell ingrowth [52]. The adoption of patterned biomimetic materials can guide the growth and arrangement of cells [54], and ordered porous materials provide a surface effect for studying the behavior of nerve cells and NSCs.

### 3.2. Monitoring Nerve Cells on Ordered Porous Material

The photonic bandgap of periodic dielectric structures is the fundamental property of photonic crystals. The emergence of the photonic bandgap depends on the structure of the crystal, the ratio of the dielectric constants of the materials making up the crystal, and the geometric configuration of the crystal. In general, if the difference in dielectric constant between the two kinds of material in the photonic crystal is large enough, Bragg scattering occurs at the medium interfaces; the greater the dielectric constant ratio and the more strongly the incident light is scattered, the greater the possibility of generating a photonic bandgap [55, 56]. The characteristic reflection peaks of the crystals are determined by the structural periodicity; hence, ordered porous crystals exhibit excellent inertness in that they avoid chemical instabilities such as bleaching, quenching, or fading [57].

In addition to the significant physical features by which periodic dielectric structures can guide the growth of nerve cells, photonic crystals can be used in a number of applications that make use of the photonic bandgap. For example, their long-range ordered structures provide a stable code that tracks the growth of nerve cells through changes in the refractive index. Huang et al. showed that lithographically patterned microporous silicon photonic crystals functionalized with different bioactive peptide-doped surfaces could be used for the spatial guidance of NSC differentiation and that NSCs can be spatially directed to undergo astrogenesis or neurogenesis as a function of peptide identity and surface properties [58].
In addition, these crystals have found applications in the field of biomedical optics: adsorbing proteins to the surface of a photonic crystal changes the refractive index, which can be used to detect neurotransmitters and neural markers. In particular, acetylcholinesterase-based organophosphate nerve agent-sensing photonic crystals have been widely studied in neuroengineering. These photonic crystals consist of polymerized crystalline colloidal arrays that can detect the organophosphorous compound parathion at ultratrace concentrations in aqueous solutions, and the sensor produces a red shift in the wavelength of the diffracted light when it detects the nerve agent [59–62].
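The red shift reported by such colloidal array sensors can be read off the standard first-order Bragg-Snell relation for an ordered lattice (a textbook relation, not a formula specific to [59–62]):

```latex
% First-order Bragg-Snell diffraction from an ordered colloidal array.
% d_{111}: spacing of the (111) lattice planes; n_eff: effective
% refractive index of the composite; theta: angle of incidence.
\lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}
```

When the analyte binds and the hydrogel matrix of the array swells, the lattice spacing d₁₁₁ increases, so the diffracted wavelength λ_max moves toward the red end of the spectrum, which is the shift these sensors report.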
Experiments in living donor tissue showed that these scaffolds exhibit mechanical strength, swelling, porosity, and biodegradability, which mimic the physical and chemical microenvironment in living organisms, and therefore will be of great potential for applications in neuroengineering. Characterization of in vitro cell cultures on these chitosan scaffolds showed that differentiated Neuro-2α cells grew along with the oriented microchannels, and the interrelated pores in the scaffold’s interior were beneficial for both nutrient diffusion and cell ingrowth [52]. The adoption of patterned biomimetic materials can guide the growth and arrangement of cells [54], and ordered porous materials provide a surface effect for the study of nerve cells and the behavior and effect of NSCs. ## 3.2. Monitoring Nerve Cells on Ordered Porous Material The photonic bandgap of periodic dielectric structures is the fundamental property of photonic crystals. The emergence of the photonic bandgap relies on the structure of the crystals, the ratio of the dielectric constants of the materials making up the crystal, and the geometric configuration of the crystal. In general, if the difference in the dielectric constant between the two kinds of material in the photonic crystal is obvious enough, then Bragg scattering will occur at the medium interface, and the dielectric constant ratio will become greater. In addition, the stronger the incident light is scattered, the greater the possibility to generate a photonic bandgap [55, 56]. The characteristic reflection peaks of the crystals is determined by the structural periodicity, herewith, the ordered porous crystals exhibit a perfect inertness because they can avoid chemical instability such as bleaching, quenching, or fading [57].In addition to the significant physical features with which periodic dielectric structures can guide the growth of nerve cells, photonic crystals can be used in a number of applications, which make use of the photonic bandgap. For example, their long-range ordered structures provide a stable code that can direct the growth of nerve cells according to changes in the refractive index. Huang et al. presented that lithographically patterned microporous silicon photonic crystals, which are functionalized with different bioactive peptide-doped surfaces, could be used as a spatial guidance for NSC differentiation and that NSCs can be spatially specified to suffer astrogenesis or neurogenesis as a multifunction of peptide identity as well as surface properties [58]. In addition, these crystals have found applications in the field of biomedical optics, and it has been shown that adsorbing proteins to the surface of a photonic crystal changes the refractive index which can be used to detect neurotransmitters and neural markers. To be specific, acetylcholinesterase-based organophosphate nerve agent-sensing photonic crystals have been widely studied in neuroengineering. These photonic crystals consist of polymerized crystalline colloidal arrays that can detect the organophosphorous compound parathion at ultratrace concentrations in aqueous solutions, and the sensor will cause a red shift in the wavelength of the diffracted light if it detects the nerve agent [59–62]. ## 4. 
Organ-On-a-Chip as a Microsystem for Nerve Tissue The engineering of cellular environments has been shown to be crucial for improving the in vitro viability and in vivo-like function of cells and tissues, which is due to the fact that such environments are more accurate to mimic the situation in living organism [63]. Organs-on-chip platforms, including microfluidic, microengineering technologies, and essential bionic principles to faithfully describe the significant aspect of tissues in living organism, consist of critical spatiotemporal, microarchitecture cell-cell communications, and extracellular environments [64]. The improvement of organ-on-a-chip devices has yielded practical applications in drug screening and clinical research. The in vitro organ-based experiments with this new technology carry on the historical tradition of medical techniques that have sought to reconstitute damaged organs or tissues, and this novel technology is especially relevant to the research on the nervous system. Lundborg has proposed that the implantation of microfluidic chips in the nervous system might offer a novel interface between biology and technology, the concomitant development of gene engineering might provide novel possibilities for the manipulation of nerve regeneration and degeneration [65]. Organ-on-a-chip technology overcomes many of the challenges traditionally associated with clinical studies of neurological disease, especially when it comes to the complexity of neurological phenomena. The combination of neural engineering and chip research at present is mainly focused on axonal growth, the blood-brain barrier, neurospheres, and 3D or layered neural tissues [66–71]. ### 4.1. Axonal Growth on a Chip To successfully regenerate nerve tissue, axonal outgrowth from the proximal stumps requires growth without interference from the surrounding microenvironment, while it requires the formation of new connections with distal stumps. To address the above issue, Bryan et al. proposed a novel strategy to improve axonal sprouting in a guided way through a spatial neuron guidance channel [72]. Studies have shown that microchannels or microgrooves ranging in size from a few dozen nanometers up to 10 microns in width can induce directional axon growth [73] and meanwhile have a promoting effect on the formation and development of axons [74, 75]. In addition, bioresorbable guide channels, which are made of poly(lactic-co-glycolic) acid, have been shown to greatly affect the glial growth factors and Schwann cells during peripheral nerve regeneration [72]. Novel protocols for establishing CNS models on microplatforms have allowed axons to be visualized and quantified [76], and Hadlock et al. found that a polymer foam conduit comprised of some microchannels aligned longitudinally, which the diameters range from 60 to 550 microns and cultured with Schwann cells, promoted the regeneration of peripheral nerve. These conduits offer a microenvironment that is permissive to axonal regeneration [77].Kim et al. created the neuron chellop through a surface-printed microdot array to control axon branch formation and showed that the majority of collateral axon branches stemmed from axonal regions on a dot and terminated on neighboring microdots. In that study, the results showed that the length of branches increased as the spacing between dots increased [8]. 
This approach was also used to identify connectability defects in nerve cells from mouse model of 22q11.2 deletion syndrome/DiGeorge syndrome, by comparing the applications of channel guides to wild-type preparations. The results of that experiment demonstrated the reliability and sensitivity of the on-chip connectability approach and validated that tackling measures for quick assessment of neuron connectability defects in neuropsychiatric disease modeling [78]. The application of microchips in neuroengineering is appealing on account of their ability of maintaining the cellular environment in both a spatial and temporal manner [79]. Shin et al. designed a compartmental microfluidic device as a cell culture chamber and found that axons traversed the channels in microchips, which could be separated from the somata, thus forming an arrangement comparable to dissociated primary neurons [80]. The axons are the functional units that connect neurons to each other, and hence, the existing technologies such as those described here need to be further developed to provide the accurate guidance of axonal growth, which is needed for neuroengineering applications. ### 4.2. Microfluidic BBBs-On-a-Chip The blood-brain barrier (BBB) is derived from specialized endothelial cells, which isolates the blood from the brain tissue. Specifically, the BBB hinders the access of many exogenous compounds to the CNS selectively [81]. It is worth mentioning that the BBB is basically consists of three kinds of cells, where endothelial cells lined along astrocytes and pericytes. The membrane forms large numbers of tight junctions among endothelial cells. Therefore, the compound permeability can be directly controlled by maintaining high levels of transendothelial electrical resistance. The BBB protects the brain from noxious compounds in the blood and offers a homeostatic environment for optimal neuronal function. Limited BBB permeability leads to low efficiency in the clinical drug treatment of CNS pathologies, and examining BBB dysfunction and function is crucial for biomedical studies and drug development [82].The BBB-on-a-chip is a microfluidic platform, which can be used to mechanically and biochemically modulate BBB function [83]. This technology enables the real-time monitoring of neurons in a designed physiological niche, for example, through the use of small chambers and fluid guides as well as the attachment of sensors. Instances of BBBs-on-a-chip in the literature have demonstrated the feasibility of providing more accurate environments for research on organ-level functions [84–86]. These modular microenvironments recapitulate the roles of the “neurovascular unit” through a vertical stack of poly(dimethylsiloxane) neural parenchymal chambers, which are mainly separated from a vascular channel made of a porous polycarbonate membrane. Such microsystems will likely prove useful in studying neurodegenerative disorders and in toxicology and neuroinfectious disease studies as a screening tool [87].Microfluidic devices are gaining ground as novel automated microsystems for neuron culture and real-time monitoring, and the BBB-on-a-chip model provides an in vitro environment to mimic the natural forms and functions of the BBB and might be of great benefit in developing methods for nerve disease and new clinical treatments [88]. 
For instance, recreating the BBB structure and physiology on a chip—which is a neurovascular microfluidic bioreactor incorporating both a brain chamber and a vascular chamber separated by a microporous membrane—allows for adequate cell aggregation to support real-time monitoring and systematic analysis [89]. In addition, organs-on-a-chip have opened up a novel avenue for researching the characteristics of neurons derived from Alzheimer’s disease brains. A microfluidic chip based on 3D neuroaxonal spheroids is more accurate to imitate the brains in living organisms by supplying a constant quantity of fluid, which is similar to what is seen in the interstitial space of the brain. Furthermore, researchers have used such chips to study the influence of flow on neural networks, neurospheroid size, and nerve stem cell differentiation [90]. Takeda et al. designed a three-chambered microfluidic platform for modeling double-layered neurons to examine the ingestion and proliferation in response to changes in tau values, which occur in the interstitial fluid in the brains of tau transgenic mice and in the cortices of human Alzheimer’s disease patients [91]. In summary, 3D microfluidic BBBs-on-a-chip with controllable size and shape are a potential in vitro model for studying nerve tissue disease. ## 4.1. Axonal Growth on a Chip To successfully regenerate nerve tissue, axonal outgrowth from the proximal stumps requires growth without interference from the surrounding microenvironment, while it requires the formation of new connections with distal stumps. To address the above issue, Bryan et al. proposed a novel strategy to improve axonal sprouting in a guided way through a spatial neuron guidance channel [72]. Studies have shown that microchannels or microgrooves ranging in size from a few dozen nanometers up to 10 microns in width can induce directional axon growth [73] and meanwhile have a promoting effect on the formation and development of axons [74, 75]. In addition, bioresorbable guide channels, which are made of poly(lactic-co-glycolic) acid, have been shown to greatly affect the glial growth factors and Schwann cells during peripheral nerve regeneration [72]. Novel protocols for establishing CNS models on microplatforms have allowed axons to be visualized and quantified [76], and Hadlock et al. found that a polymer foam conduit comprised of some microchannels aligned longitudinally, which the diameters range from 60 to 550 microns and cultured with Schwann cells, promoted the regeneration of peripheral nerve. These conduits offer a microenvironment that is permissive to axonal regeneration [77].Kim et al. created the neuron chellop through a surface-printed microdot array to control axon branch formation and showed that the majority of collateral axon branches stemmed from axonal regions on a dot and terminated on neighboring microdots. In that study, the results showed that the length of branches increased as the spacing between dots increased [8]. This approach was also used to identify connectability defects in nerve cells from mouse model of 22q11.2 deletion syndrome/DiGeorge syndrome, by comparing the applications of channel guides to wild-type preparations. The results of that experiment demonstrated the reliability and sensitivity of the on-chip connectability approach and validated that tackling measures for quick assessment of neuron connectability defects in neuropsychiatric disease modeling [78]. 
The application of microchips in neuroengineering is appealing on account of their ability to maintain the cellular environment in both a spatial and a temporal manner [79]. Shin et al. designed a compartmental microfluidic device as a cell culture chamber and found that axons traversed the channels in microchips and could be separated from the somata, thus forming an arrangement comparable to dissociated primary neurons [80]. Axons are the functional units that connect neurons to each other; hence, the existing technologies such as those described here need to be further developed to provide accurate guidance of axonal growth, which is needed for neuroengineering applications. ### 4.2. Microfluidic BBBs-On-a-Chip The blood-brain barrier (BBB) is derived from specialized endothelial cells and isolates the blood from the brain tissue. Specifically, the BBB selectively hinders the access of many exogenous compounds to the CNS [81]. It is worth mentioning that the BBB basically consists of three kinds of cells: endothelial cells lined along astrocytes and pericytes. Large numbers of tight junctions form among the endothelial cells. Therefore, compound permeability can be directly controlled by maintaining high levels of transendothelial electrical resistance. The BBB protects the brain from noxious compounds in the blood and offers a homeostatic environment for optimal neuronal function. Limited BBB permeability leads to low efficiency in the clinical drug treatment of CNS pathologies, and examining BBB function and dysfunction is crucial for biomedical studies and drug development [82]. The BBB-on-a-chip is a microfluidic platform that can be used to mechanically and biochemically modulate BBB function [83]. This technology enables the real-time monitoring of neurons in a designed physiological niche, for example, through the use of small chambers and fluid guides as well as the attachment of sensors. Instances of BBBs-on-a-chip in the literature have demonstrated the feasibility of providing more accurate environments for research on organ-level functions [84–86]. These modular microenvironments recapitulate the roles of the “neurovascular unit” through a vertical stack of poly(dimethylsiloxane) neural parenchymal chambers, which are separated from a vascular channel by a porous polycarbonate membrane. Such microsystems will likely prove useful in studying neurodegenerative disorders and as a screening tool in toxicology and neuroinfectious disease studies [87]. Microfluidic devices are gaining ground as novel automated microsystems for neuron culture and real-time monitoring, and the BBB-on-a-chip model provides an in vitro environment that mimics the natural form and function of the BBB and might be of great benefit in developing methods for treating nerve disease and new clinical therapies [88]. For instance, recreating the BBB structure and physiology on a chip—a neurovascular microfluidic bioreactor incorporating both a brain chamber and a vascular chamber separated by a microporous membrane—allows for adequate cell aggregation to support real-time monitoring and systematic analysis [89]. In addition, organs-on-a-chip have opened up a novel avenue for researching the characteristics of neurons derived from Alzheimer’s disease brains.
A microfluidic chip based on 3D neuroaxonal spheroids imitates the brain of living organisms more accurately by supplying a constant flow of fluid, similar to what is seen in the interstitial space of the brain. Furthermore, researchers have used such chips to study the influence of flow on neural networks, neurospheroid size, and nerve stem cell differentiation [90]. Takeda et al. designed a three-chambered microfluidic platform for modeling double-layered neurons to examine ingestion and proliferation in response to changes in tau levels, which occur in the interstitial fluid in the brains of tau transgenic mice and in the cortices of human Alzheimer’s disease patients [91]. In summary, 3D microfluidic BBBs-on-a-chip with controllable size and shape are a potential in vitro model for studying nerve tissue disease. ## 5. Conclusions and Discussions The unique morphology of nerves and neurons—with their distinct functional units—makes nerve repair and regeneration particularly challenging. The field of clinical medical materials has progressed significantly in the past several years and has given rise to the design and synthesis of functional biomaterials for therapeutic and diagnostic applications. The present challenges and the future goals of such proof-of-concept research need to be emphasized as well. Electrical conductivity is the primary predictor of neural signal quality in nerve repair and regeneration, and conductive materials will be useful for a great number of applications. It is worth mentioning that the communication between neurons and their downstream target cells takes place through the specificity of synapses, and information is transmitted between neuronal circuit elements by electrical or chemical signals. Neural signals based on electric conduction are predicted to have a strong influence on nerve repair and regeneration, and neural components made of conductive material are expected to have wide clinical potential. The development of graphene technology is a field at the frontier of biomedical research, and one of the primary focuses of such research is to find ways to manipulate the properties of graphene materials so as to modulate neuronal synapses and neuronal excitability. The directional morphology of cell culture scaffolds can promote various nerve cell behaviors, and thus another promising technology is the stimulation of nerve cell growth by topological cues and electrical signals through the use of photonic crystals. These crystals are relatively easy to fabricate and, specifically, are highly conductive and controllable. There is still significant room for the optimization of nerve conduits, because a number of the parameters that affect their clinical effects, as well as the underlying mechanisms of their use, are still not well understood [3]. Accordingly, there has been only limited exploitation of photonic crystal devices in neuroengineering. Microfluidic-based devices have emerged as new cell culture platforms for neurobiology research. This is due to their excellent spatial and temporal control capacities, easy assembly, reproducibility, flexibility, and amenability to imaging and biochemical analyses, as well as their high-throughput potential; such platforms are likely to play an increasingly important role in establishing physiologically relevant culture/tissue models [92]. The evolution of biomimetic ECMs is of great significance in neural tissue repair and regeneration.
Existing research has shown that electrostimulation of neurons in the absence of topological characteristics can guide axonal extension. During nerve repair and regeneration, the growth behavior of nerve cells is regulated by the directional morphology of a scaffold, the mechanical stretching of the scaffold, and the electrical signals in the scaffold. However, the existing studies do not consider these factors together to regulate cell growth behavior. A technical problem for neuroengineering is how to develop a strategy that combines the advantages of all these factors to further improve nerve cell growth (Figure 1). Accordingly, it is necessary to address these issues as a way of identifying more efficient and cost-effective therapies in future research. Such hybrid biomaterials will find use with myocardial and other tissues such as muscle or bone and will be useful for the fabrication of tissues and cell constructs by providing conductive media, topographical cues, and biomimetic microenvironments. Figure 1: Design of biomaterial and microenvironment by graphene, photonic crystals, and organ-on-a-chip for NTE. --- *Source: 1021969-2018-12-09.xml*
2018
# A Modified Homotopy Perturbation Transform Method for Transient Flow of a Third Grade Fluid in a Channel with Oscillating Motion on the Upper Wall **Authors:** Mohammed Abdulhameed; Rozaini Roslan; Mahathir Bin Mohamad **Journal:** Journal of Computational Engineering (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102197 --- ## Abstract A new analytical algorithm based on the modified homotopy perturbation transform method is employed for solving the transient flow of a third grade fluid in a porous channel generated by an oscillating upper wall. The method incorporates He’s polynomials into the HPM, combined with the Laplace transform. Comparison with HPM and OHAM analytical solutions reveals that the proposed algorithm is highly accurate. This proves the validity and great potential of the proposed algorithm as a new kind of powerful analytical tool for transient nonlinear problems. Graphs representing the solutions are discussed, and appropriate conclusions are drawn. --- ## Body ## 1. Introduction The equations describing the motion of non-Newtonian fluids are strongly nonlinear and of higher order than the Navier-Stokes equations for Newtonian fluids. These nonlinear equations form a very complex structure, with a small number of exact solutions. Numerical methods have largely been used to handle these equations. The class of problems with known exact solutions is related to flows over an infinite flat plate. The related studies in recent years are as follows: Fakhar et al. [1] examined the exact unsteady flow of an incompressible third grade fluid along an infinite porous plane plate. They obtained results by applying a translational type of symmetries combined with a finite difference method. Danish and Kumar [2] analysed the steady flow of a third grade fluid between two parallel plates using a similarity transformation. Abdulhameed et al. [3] considered an unsteady viscoelastic fluid of second grade over an infinite plate. They applied the Laplace transform together with regular perturbation techniques to obtain the exact solution. Ayub et al. [4] analysed the problem of steady flow of a third grade fluid over an infinite porous plate using the homotopy analysis method (HAM). The homotopy perturbation method developed by He [5] for solving linear and nonlinear initial-boundary value problems merges two techniques: perturbation and the standard homotopy. Recently, the homotopy perturbation method has been modified by some scientists to obtain more accurate results and rapid convergence and also to reduce the amount of computation. Ghorbani [6] introduced He’s polynomials, based on the homotopy perturbation method, for nonlinear differential equations. The homotopy perturbation transform method (HPTM) introduced by Khan and Wu [7] is a combination of the homotopy perturbation method and the Laplace transform method that is used to solve various types of linear and nonlinear systems of partial differential equations. The modified homotopy perturbation transform method (MHPTM) by Khan and Smarda [8] is based on the application of the Laplace transform to solve the third-order boundary layer equation on a semi-infinite domain. Nazari-Golshan et al.
[9] developed a modified homotopy perturbation Fourier transform method for nonlinear and singular Lane-Emden equations. The goal of the present work is to present an algorithm based on MHPTM to handle the problem of transient flow of a third grade fluid in a channel with an oscillating upper wall, to study the fluid behaviour in particular, and to examine the effect of the viscoelastic parameter on the velocity field. ## 2. New Analytical Algorithm Consider the following differential equation: (1) $F(u(t,z)) = 0$, $t \ge 0$, $z \ge 0$. Usually the operator $F$ can be decomposed into two parts, a linear part $R$ and a nonlinear part $N$: (2) $R(u(t,z)) + N(u(t,z)) = g(z)$. We construct a homotopy as follows: (3) $R(u(t,z)) + pN(u(t,z)) = g(z)$, where $p \in [0,1]$ is an embedding parameter. Taking the Laplace transform of both sides of (2) we obtain (4) $L\{R(u(t,z))\} + L\{N(u(t,z))\} = L(g(z))$. Considering the linear operator $R$ in (3), the concept of the homotopy perturbation method with embedding parameter $p$ is used to generate a series expansion for $R$ as follows: (5) $R(u(t,z)) = R\left(\sum_{i=0}^{\infty} p^i v_i\right)$, and for the nonlinear operator $N$ in (3) we follow the concept of He’s polynomials $H_n$: (6) $N(u(t,z)) = \sum_{n=0}^{\infty} p^n H_n$, where He’s polynomials (Ghorbani [6]) are defined as (7) $H_n = \frac{1}{n!}\left.\frac{d^n}{dp^n} N\left(\sum_{i=0}^{n} p^i u_i\right)\right|_{p=0}$. Substituting (5) and (6) into (4) we obtain (8) $L\left\{R\left(\sum_{i=0}^{\infty} p^i v_i\right)\right\} + L\left\{\sum_{i=0}^{\infty} p^{i+1} H_i\right\} = L(g(z))$. The first few of He’s polynomials, for example, are given by (9) $H_0(u) = N(u_0)$, $H_1(u) = \left.\frac{d}{dp} N\left(\sum_{i=0}^{1} p^i u_i\right)\right|_{p=0}$, $H_2(u) = \left.\frac{1}{2!}\frac{d^2}{dp^2} N\left(\sum_{i=0}^{2} p^i u_i\right)\right|_{p=0}$, $H_3(u) = \left.\frac{1}{3!}\frac{d^3}{dp^3} N\left(\sum_{i=0}^{3} p^i u_i\right)\right|_{p=0}$. Equation (8) can be rewritten in the form (10) $\sum_{i=0}^{\infty} p^i L\{R(v_i)\} + \sum_{i=0}^{\infty} p^{i+1} L\{H_i\} = L\{g\}$. Using (10) we introduce the recursive relation (11) $L\{R(v_0)\} = L\{g\}$, such that (12) $\sum_{i=1}^{\infty} p^i L\{R(v_i)\} + \sum_{i=0}^{\infty} p^{i+1} L\{H_i\} = 0$. The recursive equations deduced from (12) can be rewritten as (13) $p^0: L\{R(v_0)\} = L\{g\}$; $p^1: L\{R(v_1)\} + L\{H_0\} = 0$; $p^2: L\{R(v_2)\} + L\{H_1\} = 0$; $p^3: L\{R(v_3)\} + L\{H_2\} = 0$; $\ldots$; $p^k: L\{R(v_k)\} + L\{H_{k-1}\} = 0$. ## 3. Model of the Problem Consider the unsteady flow of a third grade viscoelastic fluid between two porous, infinite, vertical, parallel plane walls. The distance between the walls, that is, the channel width, is $2h$. The lower plate is stationary and the upper plate oscillates with periodic velocity $u_w(t)$. The lower and the upper plates are, accordingly, located in the planes $z = -h$ and $z = +h$ of an orthogonal coordinate system with the $x$-axis in the direction of flow. The $z$-axis is orthogonal to the channel walls, and the origin of the axes is such that the positions of the channel walls are $z = -h$ and $z = +h$, respectively. The fluid velocity vector $V = [u(t,z), -v_w]$ is assumed to be parallel to the $x$-axis, so that only the $x$-component $u$ of the velocity vector does not vanish, while the transpiration cross-flow velocity $v_w$ remains constant, where $v_w < 0$ is the velocity of blowing and $v_w > 0$ is the velocity of suction. Initially, both the channel walls and the fluid are at rest.
The external pressure gradient is zero and the fluid velocity $u(z,t)\,\mathbf{i}$ is described by the governing equation (14) $\mu \frac{\partial^2 u}{\partial z^2} + \alpha_1 \frac{\partial^3 u}{\partial z^2 \partial t} - \alpha_1 v_w \frac{\partial^3 u}{\partial z^3} + 6\beta_3 \left(\frac{\partial u}{\partial z}\right)^2 \frac{\partial^2 u}{\partial z^2} - \rho\left(\frac{\partial u}{\partial t} - v_w \frac{\partial u}{\partial z}\right) = 0$, where $\rho$ is the fluid density, $\mu$ is the coefficient of viscosity, $\alpha_1$ is the viscoelastic parameter for a second grade fluid, and $\beta_3$ is the viscoelastic parameter for a third grade fluid. The initial and boundary conditions are (15) $u(0,z) = 0$, $z \ge 0$, (16) $u(t,-h) = 0$, $t > 0$, (17) $u_w(t) = u(t,+h) = u_0 \exp(i\omega t)$, $t > 0$, where $u_0$ is the amplitude of wall oscillations, $\omega > 0$ is the frequency of the wall velocity, and $i$ is the imaginary unit. Using the wall velocity $u_w(t)$ given in (17), the cosine and sine oscillations can be obtained by taking the real and imaginary parts of the velocity field $u(t,z)$. Consider the following set of nondimensional variables: (18) $z^* = \frac{u_0}{\nu h} z$, $u^* = \frac{u}{u_0}$, $t^* = \frac{u_0^2 t}{\nu}$, $\omega^* = \frac{\omega \nu}{u_0^2}$, $\xi^* = \frac{b v_w}{u_0}$, $\beta^* = \frac{6\beta_3}{\mu}\left(\frac{u_0}{h}\right)^2$, $\alpha^* = \frac{\alpha_1}{\rho}\left(\frac{u_0}{h}\right)^2$. We obtain the nondimensional initial-boundary value problem (dropping the $*$ notation) (19) $\frac{\partial^2 u}{\partial z^2} + \alpha \frac{\partial^3 u}{\partial z^2 \partial t} - \alpha\xi \frac{\partial^3 u}{\partial z^3} + \beta\left(\frac{\partial u}{\partial z}\right)^2 \frac{\partial^2 u}{\partial z^2} + \xi \frac{\partial u}{\partial z} - \frac{\partial u}{\partial t} = 0$, (20) $u(0,z) = 0$, $z \ge 0$, (21) $u(t,-1) = 0$, $t > 0$, (22) $u_w(t) = u(t,+1) = \exp(i\omega t)$, $t > 0$. ## 4. Solution Technique ### 4.1. Application of New Algorithm To solve the problem formulated in the previous section, we apply the new algorithm formulated in Section 2. By applying the Laplace transform with respect to time $t$ to (19)–(22) we get the following problem: (23) $(1 + \alpha s)\frac{\partial^2 \bar{u}}{\partial z^2} - \alpha\xi \frac{\partial^3 \bar{u}}{\partial z^3} + \beta\left(\frac{\partial \bar{u}}{\partial z}\right)^2 \frac{\partial^2 \bar{u}}{\partial z^2} + \xi \frac{\partial \bar{u}}{\partial z} - s\bar{u} = 0$, (24) $\bar{u}(s,-1) = 0$, (25) $\bar{u}(s,+1) = \frac{1}{s - i\omega}$, where $\bar{u}(s,z) = \int_0^\infty u(t,z) e^{-st}\,dt$ is the Laplace transform of the function $u(t,z)$. Substituting the recursion (12) into (23) leads to the following equation: (26) $\sum_{n=0}^{\infty} p^n \left(\alpha s \frac{\partial^2 \bar{u}_n}{\partial z^2} - s\bar{u}_n\right) = \sum_{n=0}^{\infty} p^{n+1} H_n(s)$. The recursive equations deduced from (26) can be written as follows: (27) $p^0: \alpha s \frac{\partial^2 \bar{u}_0}{\partial z^2} - s\bar{u}_0 = 0$, $\bar{u}_0(s,-1) = 0$, $\bar{u}_0(s,+1) = \frac{1}{s - i\omega}$; $p^1: \alpha s \frac{\partial^2 \bar{u}_1}{\partial z^2} - s\bar{u}_1 = H_0(\bar{u}(s))$, $\bar{u}_1(s,-1) = 0$, $\bar{u}_1(s,+1) = 0$.
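As a quick sanity check on this recursion, the following SymPy sketch (our illustration, not part of the original paper; the symbol names are ours, and $\alpha > 0$ is assumed) computes the first He's polynomials of (7) for the cubic nonlinearity $\beta(\partial \bar{u}/\partial z)^2\,\partial^2 \bar{u}/\partial z^2$ of (23), and solves the $p^0$ boundary value problem of (27) directly. By uniqueness of the linear boundary value problem, the solution produced this way must coincide with the closed form (28) below.

```python
import sympy as sp

z, p, s = sp.symbols('z p s')
alpha, beta, omega = sp.symbols('alpha beta omega', positive=True)

# (i) He's polynomials of (7) for N(u) = beta * (u_z)^2 * u_zz, as in (23).
u0, u1, u2 = [sp.Function(f'u{i}')(z) for i in range(3)]
v = u0 + p*u1 + p**2*u2                       # truncated series sum_i p^i u_i
N = lambda w: beta * sp.diff(w, z)**2 * sp.diff(w, z, 2)
H = [sp.diff(N(v), p, n).subs(p, 0) / sp.factorial(n) for n in range(2)]
print(H[0])                  # H_0 = beta*u0'^2*u0'': depends on u0 only, as required
print(sp.expand(H[1]))       # H_1 is linear in u1 and its derivatives

# (ii) p^0 problem of (27): alpha*s*u'' - s*u = 0 with (24)-(25). The general
# solution is A*exp(z/sqrt(alpha)) + B*exp(-z/sqrt(alpha)); fit A, B to the BCs.
A, B = sp.symbols('A B')
u0bar = A*sp.exp(z/sp.sqrt(alpha)) + B*sp.exp(-z/sp.sqrt(alpha))
consts = sp.solve([u0bar.subs(z, -1),
                   u0bar.subs(z, 1) - 1/(s - sp.I*omega)], [A, B])
print(sp.simplify(u0bar.subs(consts)))   # matches (28) after rearrangement
```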
The solutions of the recursion (27) can be compactly written as (28) $\bar{u}_0(s,z) = \dfrac{e^{(1-z)/\sqrt{\alpha}}\left(e^{2(1+z)/\sqrt{\alpha}} - 1\right)}{\left(e^{4/\sqrt{\alpha}} - 1\right)(s - i\omega)}$, together with a first-order correction of the form $\bar{u}_1(s,z) = \dfrac{1}{8 s\, \alpha^{5/2}\left(e^{4/\sqrt{\alpha}} - 1\right)^4 (s - i\omega)^3}\,\{\cdots\}$, whose bracketed factor is a finite sum of exponentials $e^{c/\sqrt{\alpha}}$, with $c$ linear in $z$, whose coefficients are polynomials in $s$, $z$, $\omega$, $\beta$, $\xi$, and $\sqrt{\alpha}$. Using the Maple symbolic code, the inverse Laplace transform of (28) is (29) $u_0(t,z) = \dfrac{e^{(1-z)/\sqrt{\alpha}}\left(e^{2(1+z)/\sqrt{\alpha}} - 1\right)}{e^{4/\sqrt{\alpha}} - 1}\left\{\cos(\omega t) + i\sin(\omega t)\right\}$.
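The closed form (28) can be verified independently. The following SymPy snippet (again our own check, not the authors' Maple code) confirms that $\bar{u}_0$ satisfies the $p^0$ equation and boundary conditions of (27), and that the pole factor $1/(s - i\omega)$ inverts to the oscillatory time dependence seen in (29):

```python
import sympy as sp

z, s, t = sp.symbols('z s t')
alpha, omega = sp.symbols('alpha omega', positive=True)

u0bar = sp.exp((1 - z)/sp.sqrt(alpha)) * (sp.exp(2*(1 + z)/sp.sqrt(alpha)) - 1) \
        / ((sp.exp(4/sp.sqrt(alpha)) - 1) * (s - sp.I*omega))

# ODE residual of (27): alpha*s*u'' - s*u should vanish identically.
print(sp.simplify(alpha*s*sp.diff(u0bar, z, 2) - s*u0bar))          # -> 0
# Boundary conditions (24)-(25).
print(sp.simplify(u0bar.subs(z, -1)),
      sp.simplify(u0bar.subs(z, 1) - 1/(s - sp.I*omega)))           # -> 0, 0
# Inverse Laplace of the pole factor gives the time dependence of (29).
print(sp.inverse_laplace_transform(1/(s - sp.I*omega), s, t))
# -> exp(I*omega*t)*Heaviside(t), i.e. cos(omega*t) + i*sin(omega*t) for t > 0
```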
The corresponding first-order term is $u_1(t,z) = \dfrac{1}{16 \alpha^{5/2}\left(e^{4/\sqrt{\alpha}} - 1\right)^4 \omega^3}\left\{ e^{(1-3z)/\sqrt{\alpha}}\left(\cos\dfrac{\omega t}{2} + i\sin\dfrac{\omega t}{2}\right)[\,\cdots\,]\right\}$, where the bracketed factor is again a finite sum of exponentials in $z/\sqrt{\alpha}$ whose coefficients are polynomials in $t$, $z$, $\omega$, $\beta$, $\xi$, and $\sqrt{\alpha}$ multiplied by $\cos(\omega t/2)$ or $\sin(\omega t/2)$. Consequently, the first-order approximate analytical solution of (19) is given by (30) $u(t,z) = u_0(t,z) + u_1(t,z)$, with $u_0$ and $u_1$ as above. ### 4.2. Application of Homotopy Perturbation Method (HPM) Rewrite (19) as (31) $\frac{\partial}{\partial t}\left(u - \alpha \frac{\partial^2 u}{\partial z^2}\right) = \frac{\partial^2 u}{\partial z^2} - \alpha\xi \frac{\partial^3 u}{\partial z^3} + \beta\left(\frac{\partial u}{\partial z}\right)^2 \frac{\partial^2 u}{\partial z^2} + \xi \frac{\partial u}{\partial z}$. Integrating (31) with respect to $t$ over the interval $[0,t]$ gives (32) $u - \alpha \frac{\partial^2 u}{\partial z^2} = \int_0^t \left[\frac{\partial^2 u}{\partial z^2} - \alpha\xi \frac{\partial^3 u}{\partial z^3} + \beta\left(\frac{\partial u}{\partial z}\right)^2 \frac{\partial^2 u}{\partial z^2} + \xi \frac{\partial u}{\partial z}\right] \partial t$. According to the HPM technique by He [5], we construct a homotopy $v(t,z;p): R \times [0,1] \to R$ which satisfies the following relation: (33) $L(v) - L(u_0) + p\left[L(u_0) + N(v)\right] = 0$, where $L(v) = v - \alpha(\partial^2 v/\partial z^2)$, $N(v) = \int_0^t \left[(\partial^2 v/\partial z^2) - \alpha\xi(\partial^3 v/\partial z^3) + \beta(\partial v/\partial z)^2(\partial^2 v/\partial z^2) + \xi(\partial v/\partial z)\right]\partial t$, and $u_0$ is an initial approximation to the transient solution $u(t,z)$. Taking $p$ as a small parameter, we assume a power series solution of (33) in the form (34) $v(t,z;p) = \sum_{k=0}^{\infty} p^k v_k(t,z)$, where the $v_k(t,z)$ are unknown functions of $t$ and $z$. Now letting $p \to 1$, (34) yields the approximate solution of $u(t,z)$ in the following form: (35) $u(t,z) = \lim_{p \to 1} v(t,z;p) = \sum_{k=0}^{\infty} v_k(t,z)$. We now substitute (34) into (33) and the initial and boundary conditions (20)–(22) and equate the coefficients of like powers of $p$ to obtain the zeroth- and first-order problems: (36) $p^0: L(v_0) - L(u_0) = 0$, $v_0(t,-1) = 0$, $v_0(t,+1) = e^{i\omega t}$; $p^1: L(v_1) - L(u_0) + \int_0^t \left[\frac{\partial^2 v_0}{\partial z^2} - \alpha\xi \frac{\partial^3 v_0}{\partial z^3} + \beta\left(\frac{\partial v_0}{\partial z}\right)^2 \frac{\partial^2 v_0}{\partial z^2} + \xi \frac{\partial v_0}{\partial z}\right] \partial t = 0$, $v_1(t,-1) = 0$, $v_1(t,+1) = 0$. We can now solve these problems to find $v_0$ and $v_1$: (37) $v_0 = \dfrac{e^{(1-z)/\sqrt{\alpha}}\left(e^{2(1+z)/\sqrt{\alpha}} - 1\right)}{e^{4/\sqrt{\alpha}} - 1}\, e^{i\omega t}$.
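The zeroth-order HPM solution can be checked in the same way. The short SymPy snippet below (our own, illustrative) confirms that this $v_0$ satisfies $L(v_0) = v_0 - \alpha\,\partial^2 v_0/\partial z^2 = 0$ together with the boundary conditions of (36):

```python
import sympy as sp

t, z = sp.symbols('t z')
alpha, omega = sp.symbols('alpha omega', positive=True)

v0 = sp.exp((1 - z)/sp.sqrt(alpha)) * (sp.exp(2*(1 + z)/sp.sqrt(alpha)) - 1) \
     / (sp.exp(4/sp.sqrt(alpha)) - 1) * sp.exp(sp.I*omega*t)

print(sp.simplify(v0 - alpha*sp.diff(v0, z, 2)))                # -> 0, so L(v0) = 0
print(sp.simplify(v0.subs(z, -1)), sp.simplify(v0.subs(z, 1)))  # -> 0, exp(I*omega*t)
```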
The first-order term $v_1$ is again a lengthy closed-form expression, $v_1 = \dfrac{1}{8 \alpha^{5/2}\left(e^{4/\sqrt{\alpha}} - 1\right)^4}\left\{ e^{(1-3z)/\sqrt{\alpha} + i\omega t}\, t\, [\,\cdots\,]\right\}$, whose bracketed factor is a finite sum of exponentials in $z/\sqrt{\alpha}$ and $i\omega t$ with coefficients polynomial in $z$, $\beta$, $\xi$, and $\sqrt{\alpha}$. The first-order approximate solution of (19) by the HPM is (38) $u(t,z) = v_0 + v_1$, with $v_0$ and $v_1$ as above. ### 4.3. Application of Optimal Homotopy Asymptotic Method (OHAM) By means of the OHAM proposed by Marinca and Herişanu [10], we construct an optimal homotopy $\phi(t,z;p): R \times [0,1] \to R$ which satisfies the following relation: (39) $L(\phi(t,z;p)) = H(z;p)\left(L(\phi(t,z;p)) + N(\phi(t,z;p))\right)$. We have great freedom in choosing the auxiliary function $H(t,z;p)$; here we take (40) $H(t,z;p) = C_1 + C_2 e^{-z} + C_3 t e^{-2z}$, where $C_1$, $C_2$, and $C_3$ are auxiliary constants to be determined. Let us consider the solutions of (39) in the form (41) $\phi(t,z,p;K_i) = u_0(t,z) + \sum_{k=1}^{\infty} u_k(t,z;K_i)\, p^k$, $i = 1,2,\ldots$. Substituting (41) into (39) and equating the coefficients of like powers of $p$, we obtain the governing equations of the $u_k(t,z)$; that is, (42) $L(u_k(t,z)) - L(u_{k-1}(t,z)) = K_k N_0(u_0(t,z)) + \sum_{i=1}^{k-1} K_i\left[L(u_{k-i}(t,z)) + N_{k-i}\left(u_0(t,z), u_1(t,z), \ldots, u_{k-i}(t,z)\right)\right]$.
The first-order approximate solution of the problem is (43) $u(t,z) = u_0(t,z) + u_1(t,z)$, where the zeroth-order and first-order problems from (42) are (44) $u_0 - \alpha \frac{\partial^2 u_0}{\partial z^2} = 0$, $u_0(t,-1) = 0$, $u_0(t,+1) = e^{i\omega t}$, and $u_1 - \alpha \frac{\partial^2 u_1}{\partial z^2} = \left(C_1 + C_2 e^{-z} + C_3 t e^{-2z}\right)\left\{\int_0^t \left[\frac{\partial^2 u_0}{\partial z^2} - \alpha\xi \frac{\partial^3 u_0}{\partial z^3} + \beta\left(\frac{\partial u_0}{\partial z}\right)^2 \frac{\partial^2 u_0}{\partial z^2} + \xi \frac{\partial u_0}{\partial z}\right] \partial t\right\}$, $u_1(t,-1) = 0$, $u_1(t,+1) = 0$. Equation (44) can be solved using the widely available symbolic computation software Maple. The values of the constants $C_1$, $C_2$, and $C_3$ are obtained using the collocation method.
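To make this last step concrete, the following toy sketch illustrates the collocation idea on a simpler analogue (our own example with made-up basis functions, not the authors' computation): a trial function that already satisfies the boundary conditions is corrected by three basis functions with unknown constants, and the equation residual is forced to vanish at three interior collocation points, which yields a linear system for the constants.

```python
import numpy as np

# Toy problem: u - u'' = 0 on [-1, 1], u(-1) = 0, u(1) = 1
# (exact solution: sinh(z + 1)/sinh(2)).
# Trial: u = base + C1*phi1 + C2*phi2 + C3*phi3, with phi_i(+-1) = 0.
def base(z):  return (z + 1)/2.0                       # satisfies the BCs
def phis(z):  return np.array([1 - z**2, z*(1 - z**2), z**2*(1 - z**2)])
def phis_zz(z): return np.array([-2.0, -6.0*z, 2.0 - 12.0*z**2])

pts = np.array([-0.5, 0.0, 0.5])                       # collocation points
# Residual R(z) = u(z) - u''(z); base'' = 0, so R = base + sum_i C_i*(phi_i - phi_i'').
A = np.array([phis(z) - phis_zz(z) for z in pts])
b = -np.array([base(z) for z in pts])
C = np.linalg.solve(A, b)                              # enforce R = 0 at the points

exact  = lambda z: np.sinh(z + 1)/np.sinh(2)
approx = lambda z: base(z) + C @ phis(z)
print(C, abs(approx(0.3) - exact(0.3)))                # constants and pointwise error
```

In the OHAM setting the same idea is applied to the residual of (19) evaluated with the approximate solution (43), giving the values of $C_1$, $C_2$, and $C_3$ quoted in Tables 1 and 2.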
## 5. Analysis of Result In Figure 1 and Table 1 we show that the transient approximate solutions obtained using the new technique are in better agreement with the OHAM solutions than with the HPM solutions, for small values of time $t$ and different values of all the non-Newtonian parameters and constants. This shows that the solution obtained by the new method is accurate for small values of the time $t$. However, as Figure 2 and Table 2 show for large values of time $t$, the transient approximate solutions obtained using the new method diverge from the approximate solutions obtained using HPM and OHAM, which shows that the accuracy of the new method degrades as time $t$ grows.

Table 1: Numerical comparison of the new algorithm with HPM and OHAM for a small value of time $t$ ($t = 0.1$), when $\alpha = 0.5$, $\beta = 0.5$, $\xi = -0.5$, $\omega = 0.5$, $C_1 = 0.15419514$, $C_2 = -5.54965331$, and $C_3 = 4.10010429$.

| $z$ | New algorithm | HPM method | OHAM method | Absolute error (HPM) | Absolute error (OHAM) |
|---|---|---|---|---|---|
| −1 | −1.43639 × 10⁻¹⁶ | 3.03836 × 10⁻¹⁷ | 4.44089 × 10⁻¹⁶ | 2.89467 × 10⁻¹⁷ | 5.87728 × 10⁻¹⁶ |
| −0.8 | 0.0401522 | 0.0269116 | 0.0315324 | 0.0132406 | 0.0086198 |
| −0.6 | 0.0829870 | 0.0565498 | 0.0698092 | 0.0264372 | 0.0131778 |
| −0.4 | 0.1313580 | 0.0919104 | 0.1180830 | 0.0394476 | 0.0132750 |
| −0.2 | 0.1884740 | 0.1365570 | 0.1783360 | 0.051917 | 0.0101380 |
| 0 | 0.2580990 | 0.1949830 | 0.2527330 | 0.063116 | 0.0053660 |
| 0.2 | 0.3447890 | 0.2730870 | 0.3443530 | 0.071702 | 0.0004360 |
| 0.4 | 0.4541750 | 0.3788420 | 0.4576250 | 0.075333 | 0.0034500 |
| 0.6 | 0.5933060 | 0.5232930 | 0.5986640 | 0.070013 | 0.0053580 |
| 0.8 | 0.7710760 | 0.7221790 | 0.7755630 | 0.048897 | 0.0044870 |
| 1 | 0.9987500 | 0.9987500 | 0.9987500 | 0 | 0 |
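The error columns of Table 1 can be reproduced directly from the first three columns (up to floating-point noise in the boundary rows); a one-line check of our own arithmetic on the $z = 0$ row:

```python
# Each "absolute error" entry should equal |new - HPM| and |new - OHAM| row by row.
new, hpm, oham = 0.2580990, 0.1949830, 0.2527330   # the z = 0 row of Table 1
print(abs(new - hpm), abs(new - oham))             # 0.063116, 0.005366, as tabulated
```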
Table 2: Numerical comparison of the new algorithm with HPM and OHAM for a large value of time $t$ ($t = 2$), when $\alpha = 0.5$, $\beta = 0.5$, $\xi = -0.5$, $\omega = 0.5$, $C_1 = 0.15419514$, $C_2 = -5.54965331$, and $C_3 = 4.10010429$.

| $z$ | New method | HPM method | OHAM method | Absolute error (HPM) | Absolute error (OHAM) |
|---|---|---|---|---|---|
| −1 | −6.41703 × 10⁻¹⁶ | 2.16947 × 10⁻¹⁶ | −1.33227 × 10⁻¹⁵ | 8.5865 × 10⁻¹⁶ | 6.550257 × 10⁻¹⁶ |
| −0.8 | 0.1309460 | −0.0313209 | −0.0275524 | 0.1622669 | 0.1584984 |
| −0.6 | 0.2630750 | −0.0593798 | −0.0138010 | 0.3224548 | 0.2768760 |
| −0.4 | 0.3968690 | −0.0802786 | 0.0436640 | 0.4771476 | 0.3532050 |
| −0.2 | 0.5312480 | −0.0888351 | 0.1294880 | 0.6200831 | 0.4017600 |
| 0 | 0.6623540 | −0.0779706 | 0.2239000 | 0.7403246 | 0.4384540 |
| 0.2 | 0.7816650 | −0.0383387 | 0.3090160 | 0.8200037 | 0.4726490 |
| 0.4 | 0.8728400 | 0.041187 | 0.3730220 | 0.8316530 | 0.4998180 |
| 0.6 | 0.9061440 | 0.170557 | 0.4152560 | 0.7355870 | 0.4908880 |
| 0.8 | 0.8279990 | 0.349146 | 0.4542990 | 0.4788530 | 0.3737000 |
| 1 | 0.5403020 | 0.540302 | 0.5403020 | 0 | 0 |

Figure 1: Comparison of the present results with HPM and OHAM for a small value of time $t$. Figure 2: Comparison of the present results with HPM and OHAM for a large value of time $t$. To see the physical impact of the oscillating wall on the third grade fluid flow field, we have plotted the velocity profiles given by (30). Figure 3 shows the variation for different values of the wall oscillation frequency $\omega$ in the cases of cosine and sine oscillation; it is noted that the results are physically consistent for different values of $\omega$ for both cosine and sine excitation of the upper wall. Figure 4 shows the effect of the third grade viscoelastic parameter, in the blowing case, on the velocity profile for small values of time $t$; it is clearly seen that, by increasing $\beta$, the velocity increases across the channel, and this increase is rapid near the upper wall. This abrupt change in velocity near the upper wall is due to the oscillatory nature of the wall boundary, which generates depressive harmonic waves in the velocity field. From this figure we can also compare the velocity field of the third grade fluid with the corresponding velocity field of a second grade fluid ($\beta = 0$). For both cosine and sine oscillations, the third grade fluid flows faster than the second grade fluid across the channel. It is noted from Figure 5 that, as we increase the value of the time $t$, the influence of the third grade parameter $\beta$ on the fluid motion is not as significant as in Figure 4 for small values of time $t$. Figure 3: Plot of the velocity field $u$ for varying values of $\omega$ for small time $t = 1$ and $\alpha = 0.5$, $\xi = -0.7$, and $\beta = 0.5$. Figure 4: Plot of the velocity field $u$ for varying values of $\beta$ for a small value of time $t = 1$ and $\alpha = 0.5$ and $\xi = -0.7$. Figure 5: Plot of the velocity field $u$ for varying values of $\beta$ for a large value of time $t = 2.5$ and $\alpha = 0.5$ and $\xi = -0.7$. In Figure 6 the wall stress ($\tau_w = \partial u(t,z)/\partial z$) is plotted against the dimensionless space coordinate $z$. For cosine oscillations, it is observed that the wall stress increases near the lower plate, whereas it decreases near the upper plate. This is because of the simultaneous suction and blowing phenomena at the lower and upper plates, respectively. The effects are reversed in the case of sine oscillations. Figure 6: Effect of blowing on the wall stress $\tau_w$ when $\omega = 0.5$, $\alpha = 0.5$, $t = 0.2$, and $\beta = 0.5$ are fixed. ## 6. Concluding Remarks In this paper, a new technique is proposed to obtain analytical solutions of the transient flow of a viscoelastic third grade fluid. We applied the new technique to our problem, which was also solved using HPM and OHAM. We obtained an explicit analytic solution of the 2D laminar transient third grade flow in a vertical channel with oscillating motion on a transpiration wall.
This explicit analytic solution is valid in the whole transient region $0 < t < 1.5$. The results obtained using HPM and OHAM are compared with the solutions of the new technique. We observed that the new method gives accurate results, as confirmed by comparison with the OHAM results. The results have also shown that the influence of the third grade parameter $\beta$ on the fluid motion is significant only for small values of time $t$. This approach seems to be useful and can be used to obtain analytical solutions for other moving-boundary transient equations in fluid mechanics, such as those of Fan et al. [11] and Noghrehabadi et al. [12]. --- *Source: 102197-2014-07-17.xml*
102197-2014-07-17_102197-2014-07-17.md
42,233
A Modified Homotopy Perturbation Transform Method for Transient Flow of a Third Grade Fluid in a Channel with Oscillating Motion on the Upper Wall
Mohammed Abdulhameed; Rozaini Roslan; Mahathir Bin Mohamad
Journal of Computational Engineering (2014)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102197
102197-2014-07-17.xml
--- ## Abstract A new analytical algorithm based on modified homotopy perturbation transform method is employed for solving the transient flow of third grade fluid in a porous channel generated by an oscillating upper wall. This method incorporates the He’s polynomial into the HPM, combined with Laplace transform. Comparison with HPM and OHAM analytical solutions reveals that the proposed algorithm is highly accurate. This proves the validity and great potential of the proposed algorithm as a new kind of powerful analytical tool for transient nonlinear problems. Graphs representing the solutions are discussed, and appropriate conclusions are drawn. --- ## Body ## 1. Introduction The equations describing the motion of non-Newtonian fluids are strongly of nonlinear higher order than the Navier-Stokes equation for Newtonian fluids. These nonlinear equations form a very complex structure, with a small number of exact solutions. Mostly, numerical methods have largely been used to handle these equations. The class of problems with known exact solution is related to the problem for infinite flat plate. The related studies in the recent years are as follows: Fakhar et al. [1] examine the exact unsteady flow of an incompressible third grade fluid along an infinite plane porous plate. They obtained results by applying a translational type of symmetries combined with finite difference method. Danish and Kumar [2] analysed a steady flow of a third grade between two parallel plates using similarity transformation. Abdulhameed et al. [3] consider an unsteady viscoelastic fluid of second grade for an infinite plate. They applied Laplace transform together with the regular perturbation techniques to obtain the exact solution. Ayub et al. [4] analysed the problem of steady flow of a third grade fluid for an infinite plate porous plate using homotopy analysis method (HAM).Homotopy perturbation method developed by He [5] for solving linear and nonlinear initial-boundary value problem merges two techniques, the perturbation and standard homotopy. Recently, the homotopy perturbation method has been modified by some scientists to obtain more accurate results and rapid convergence and also to reduce the amount of computation. Ghorbani [6] introduced He’s polynomials based on homotopy perturbation method for nonlinear differential equations. The homotopy perturbation transform method (HPTM) introduced by Khan and Wu [7] is a combination of the homotopy perturbation method and Laplace transform method that is used to solve various types of linear and nonlinear systems of partial differential equations. The modified homotopy perturbation transform method (MHPTM) by Khan and Smarda [8] is based on the application of Laplace transform to solve the third-order boundary layer equation on semi-infinite domain. Nazari-Golshan et al. [9] developed a modified homotopy perturbation Fourier transform method for nonlinear and singular Lane-Emden equations.The goal of the present work is to present an algorithm base on MHPTM to handle the problem of transient flow of third grade fluid in a channel with oscillating upper wall, study the fluid behaviour in particular, and examine the effect of viscoelastic parameter on velocity field. ## 2. New Analytical Algorithm Consider the following differential equation:(1) F ( u ( t , z ) ) = 0 , t ≥ 0 , z ≥ 0 . Usually the operators F can be decomposed into two parts, a linear part R and a nonlinear part N: (2) R ( u ( t , z ) ) + N ( u ( t , z ) ) = g ( z ) . 
We construct a homotopy as follows: (3) R ( u ( t , z ) ) + p N ( u ( t , z ) ) = g ( z ) , where p ∈ [ 0 , 1 ] is an embedding parameter. Taking the Laplace transform of both sides of (2) we obtain (4) L { R ( u ( t , z ) ) } + L { N ( u ( t , z ) ) } = L ( g ( z ) ) . Considering the linear operator R in (3), the concept of the homotopy perturbation method with embedding parameter p is used to generate a series expansion for R as follows: (5) R ( u ( t , z ) ) = R ( ∑ i = 0 ∞ p i v i ) , and for the nonlinear operator N in (3), follow the concept of He’s polynomial, H n, as follows: (6) N ( u ( t , z ) ) = ∑ n = 0 ∞ p n H n , where He’s polynomials (Ghorbani [6]), H n, are defined as (7) H n = 1 n ! d n d p n N ( ∑ i = 0 n p i u i ) p = 0 . Substituting (5) and (6) into (4) we obtain (8) L { R ( ∑ i = 0 ∞ p i v i ) } + L { ∑ i = 0 ∞ p i + 1 H i } = L ( g ( z ) ) . The first few components of He’s polynomials, for example, are given by (9) H 0 ( u ) = N ( u 0 ) , H 1 ( u ) = d d p N ( ∑ i = 0 1 p i u i ) p = 0 , H 2 ( u ) = 1 2 ! d 2 d p 2 N ( ∑ i = 0 2 p i u i ) p = 0 , H 3 ( u ) = 1 3 ! d 3 d p 3 N ( ∑ i = 0 3 p i u i ) p = 0 . However, (8) can be rewritten in the form (10) ∑ i = 0 ∞ p i L { R ( v i ) } + ∑ i = 0 ∞ p i + 1 L { H i } = L { g } . Using (10) we introduce the recursive relation (11) L { R ( v 0 ) } = L { g } , such that (12) ∑ i = 1 ∞ p i L { R ( v i ) } + ∑ i = 0 ∞ p i + 1 L { H i } = 0 . The recursive equation deduced from (12) can be rewritten as (13) p 0 : L { R ( v 0 ) } = L { g } , p 1 : L { R ( v 1 ) } + L { H 0 } = 0 , p 2 : L { R ( v 2 ) } + L { H 1 } = 0 , p 3 : L { R ( v 3 ) } + L { H 2 } = 0 , ⋮ p k : L { R ( v k ) } + L { H k - 1 } = 0 . ## 3. Model of the Problem Consider a third grade viscoelastic fluid which is unsteady flows between two porous infinite vertical and parallel plane walls. The distance between the walls, that is, the channel width, is2 h. The lower plate is stationary and the upper plate is oscillating with periodic velocity u w ( t ). The lower and the upper plates are, accordingly, located in the planes z = - h and z = + h of an orthogonal coordinate system with x-axis in the direction of flow. The z-axis is orthogonal to the channel walls, and the origin of the axes is such that the positions of the channel walls are z = - h and z = + h, respectively.The fluid velocity vectorV = [ u ( t , z ) , - v w ] is assumed to be parallel to the x-axis, so that only the x-component u of the velocity vector does not vanish but the transpiration cross-flow velocity v w remains constant, where v w < 0 is the velocity of blowing and v w > 0 is the velocity of suction. Initially, both the channel walls and the fluid are at rest. The external pressure gradient is zero and the fluid velocity u ( z , t ) = u ( z , t ) i is described by the governing equation: (14) μ ∂ 2 u ∂ z 2 + α 1 ∂ 3 u ∂ z 2 ∂ t - α 1 v w ∂ 3 u ∂ z 3 + 6 β 3 ( ∂ u ∂ z ) 2 ∂ 2 u ∂ z 2 - ρ ( ∂ u ∂ t - v w ∂ u ∂ z ) = 0 , where ρ is the fluid density, μ is the coefficient of viscosity, α 1 is the viscoelastic parameter for a second grade fluid, and β 3 is the viscoelastic parameter for a third grade fluid.The initial and boundary conditions are(15) u ( 0 , z ) = 0 , z ≥ 0 , (16) u ( t , - h ) = 0 , t > 0 , (17) u w ( t ) = u ( t , + h ) = u 0 exp ⁡ ⁡ ( i ω t ) , t > 0 , where u 0 is the amplitude of wall oscillations, ω > 0 is the frequency of the wall velocity, and i is the imaginary unit. 
Using the wall velocity u w ( t ) given in the expression (17), the cosine and sine oscillations can be obtained by taking the real and imaginary parts of the velocity field u ( t , z ).Consider the following set of nondimensional variables:(18) z * = u 0 ν h z , u * = u u 0 , t * = u 0 2 t ν , ω * = ω ν u 0 2 , ξ * = b v w u 0 , β * = 6 β 3 μ ( u 0 h ) 2 , α * = α 1 ρ ( u 0 h ) 2 . We obtain the nondimensional initial-boundary values problem (dropping the * notation) (19) ∂ 2 u ∂ z 2 + α ∂ 3 u ∂ z 2 ∂ t - α ξ ∂ 3 u ∂ z 3 + β ( ∂ u ∂ z ) 2 ∂ 2 u ∂ z 2 + ξ ∂ u ∂ z - ∂ u ∂ t = 0 , (20) u ( 0 , z ) = 0 , z ≥ 0 , (21) u ( t , - 1 ) = 0 , t > 0 , (22) u w ( t ) = u ( t , + 1 ) = exp ⁡ ⁡ ( i ω t ) , t > 0 . ## 4. Solution Technique ### 4.1. Application of New Algorithm To solve the problem formulated in the previous section, we apply the new algorithm formulated in Section2.By applying the Laplace transform with respect to timet of (19)–(22) we get the following problem: (23) ( 1 + α s ) ∂ 2 u - ∂ z 2 - α ξ ∂ 3 u - ∂ z 3 + β ( ∂ u - ∂ z ) 2 ∂ 2 u - ∂ z 2 + ξ ∂ u - ∂ z - s u - = 0 , (24) u - ( s , - 1 ) = 0 , (25) u - ( s , + 1 ) = 1 s - i ω , where u - ( s , z ) = ∫ 0 ∞ u ( t , z ) e - s t d t is the Laplace transform of the function u ( t , z ). Substituting the recursive (12) into (23) leads to the following equation: (26) ∑ n = 0 ∞ p n ( α s ∂ 2 u - n ∂ z 2 - s u - n ) = ∑ n = 0 ∞ p n + 1 H n ( s ) . The recursive equation deduced from (26) can be written as follows: (27) p 0 : α s ∂ 2 ∂ z 2 u - 0 ( s ) - s u - 0 = 0 , u - 0 ( s , - 1 ) = 0 , u - 0 ( s , + 1 ) = 1 s - i ω , p 1 : α s ∂ 2 ∂ z 2 u - 1 ( s ) - s u - 1 = H 0 ( u ( s ) ) , u - 1 ( s , - 1 ) = 0 , u - 1 ( s , + 1 ) = 0 . The solutions of the recursive (27) can be compactly written as (28) u - 0 ( s , z ) = e ( 1 - z ) / α ( - 1 + e 2 ( 1 + z ) / α ) ( e 4 / α - 1 ) ( s - i ω ) , u - 1 ( s , z ) = 1 8 s α 5 / 2 ( e 4 / α - 1 ) 4 ( s - i ω ) 3 × { - ( 5 + 3 z ) α ω 2 ) β α e 2 / α + β α e 6 / α  ]  e ( 1 - 3 z ) / α } e ( 1 - 3 z ) / α h h h × - ( 5 + 3 z ) α ω 2 ) β α e 2 / α + β α e 6 / α  ]  e ( 1 - 3 z ) / α } [ - ( 5 + 3 z ) α ω 2 ) β α e 2 / α + β α e 6 / α  ]  e ( 1 - 3 z ) / α } - β α e 2 / α + β α e 6 / α h h h h h h - β α e 6 ( 2 + z ) / α h h h h h h + β α e ( 8 + 6 z ) / α + e 2 z / α h h h h h h × ( 4 s 2 ( - 1 + z ) α + β α h h h h h h h h h + 8 s i ( - 1 + z ) α ω - 4 ( - 1 + z ) α ω 2 ) h h h h h h + e 2 ( 7 + 2 z ) / α ( - 4 s 2 ( - 1 + z ) α + β α h h h h h h h h h h h h h h h h h h + 8 s i ( - 1 + z ) α ω h h h h h h h h h h h h h h h h h h i + 4 ( - 1 + z ) α ω 2 ) - 3 ξ h h h h h h + e 2 + 4 z / α ( 4 s 2 ( 3 + z ) α - β α h h h h h h h h h h h h h h h h h - 8 s i ( 3 + z ) α ω - 4 ( 3 + z ) α ω 2 ) h h h h h h + e 2 ( 6 + z ) / α ( - 4 s 2 ( 3 + z ) α - β α h h h h h h h h h h h h h h h h h + 8 s i ( 3 + z ) α ω + 4 ( 3 + z ) α ω 2 ) h h h h h h - 4 e 2 ( 2 + z ) / α ( s 2 ( α + 3 z α ) + β - z β h h h h h h h h h h h h h h h h h h - 2 s i ( 1 + 3 z ) α ω - ( 1 + 3 z ) α ω 2 ) h h h h h h + 4 e 2 ( 4 + z ) / α ( s 2 ( 5 + 3 z α ) α - ( 3 + z ) β h h h h h h h h h h h h h h h h h h - 2 s i ( 5 + 3 z ) α ω - ( 5 + 3 z ) α ω 2 ) h h h h h h h - 4 e 6 + 4 z / α ( s 2 ( 5 + 3 z α ) α - ( 3 + z ) β h h h h h h h h h h h h h h h h h i - 2 s i ( 5 + 3 z ) α ω h h h h h h h h h h h h h h h h h h - ( 5 + 3 z ) α ω 2 ) β α e 2 / α + β α e 6 / α  ]  e ( 1 - 3 z ) / α } . 
Using Maple symbolic computation, the inverse Laplace transform of (28) is
$$u_{0}(t,z)=\frac{e^{(1-z)/\sqrt{\alpha}}\bigl(e^{2(1+z)/\sqrt{\alpha}}-1\bigr)}{e^{4/\sqrt{\alpha}}-1}\bigl(\cos\omega t+i\sin\omega t\bigr),$$
$$\begin{aligned} u_{1}(t,z)={}&\frac{e^{(1-3z)/\sqrt{\alpha}}}{16\,\alpha^{5/2}\bigl(e^{4/\sqrt{\alpha}}-1\bigr)^{4}\,\omega^{3}}\Bigl(\cos\frac{\omega t}{2}+i\sin\frac{\omega t}{2}\Bigr)\\ &\times\Bigl[\,it\bigl(-4e^{2(2+z)/\sqrt{\alpha}}(-1+z)+4e^{2(5+2z)/\sqrt{\alpha}}(-1+z)+4e^{2(4+z)/\sqrt{\alpha}}(3+z)-4e^{(6+4z)/\sqrt{\alpha}}(3+z)\\ &\qquad\quad+\alpha e^{2/\sqrt{\alpha}}-\alpha e^{6/\sqrt{\alpha}}-\alpha e^{2z/\sqrt{\alpha}}+\alpha e^{6(2+z)/\sqrt{\alpha}}+\alpha e^{2(6+z)/\sqrt{\alpha}}-\alpha e^{2(7+2z)/\sqrt{\alpha}}-3\xi\\ &\qquad\quad+\alpha e^{2(2+4z)/\sqrt{\alpha}}-\alpha e^{(8+6z)/\sqrt{\alpha}}\bigr)\\ &\quad+\Bigl(\beta\omega(2i+t\omega)\cos\frac{\omega t}{2}-\beta\alpha e^{2/\sqrt{\alpha}}\,\Theta+\beta\alpha e^{6/\sqrt{\alpha}}\,\Theta-\beta\alpha e^{6(2+z)/\sqrt{\alpha}}\,\Theta+\beta\alpha e^{(8+6z)/\sqrt{\alpha}}\,\Theta\\ &\qquad+\alpha e^{(2+4z)/\sqrt{\alpha}}\bigl(16(3+z)\alpha\omega^{2}-\beta\,\Theta\bigr)+\alpha e^{2(7+2z)/\sqrt{\alpha}}\bigl(-16(-1+z)\alpha\omega^{2}+\beta\,\Theta\bigr)\\ &\qquad+\alpha e^{2z/\sqrt{\alpha}}\bigl(16(-1+z)\alpha\omega^{2}+\beta\,\Theta\bigr)-\alpha e^{2(6+z)/\sqrt{\alpha}}\bigl(16(3+z)\alpha\omega^{2}+\beta\,\Theta\bigr)\\ &\qquad+4e^{2(2+z)/\sqrt{\alpha}}\bigl(-4(1+3z)\alpha\omega^{2}+(-1+z)\beta\,\Theta\bigr)-4e^{2(5+2z)/\sqrt{\alpha}}\bigl(-4(1+3z)\alpha\omega^{2}+(-1+z)\beta\,\Theta\bigr)\\ &\qquad-4e^{2(4+2z)/\sqrt{\alpha}}\bigl(-4(5+3z)\alpha\omega^{2}+(3+z)\beta\,\Theta\bigr)+4e^{(6+z)/\sqrt{\alpha}}\bigl(-4(5+3z)\alpha\omega^{2}+(3+z)\beta\,\Theta\bigr)\Bigr)\sin\frac{\omega t}{2}\Bigr], \end{aligned} \tag{29}$$
with the shorthand $\Theta=-4+2it\omega+t^{2}\omega^{2}$. Consequently, the first-order approximate analytical solution of (19) is given by
$$u(t,z)=u_{0}(t,z)+u_{1}(t,z), \tag{30}$$
with $u_{0}$ and $u_{1}$ as given in (29).

### 4.2. Application of Homotopy Perturbation Method (HPM)

Rewrite (19) as
$$\frac{\partial}{\partial t}\Bigl(u-\alpha\frac{\partial^{2}u}{\partial z^{2}}\Bigr)=\frac{\partial^{2}u}{\partial z^{2}}-\alpha\xi\frac{\partial^{3}u}{\partial z^{3}}+\beta\Bigl(\frac{\partial u}{\partial z}\Bigr)^{2}\frac{\partial^{2}u}{\partial z^{2}}+\xi\frac{\partial u}{\partial z}. \tag{31}$$
Integrating (31) with respect to $t$ over the interval $[0,t]$ gives
$$u-\alpha\frac{\partial^{2}u}{\partial z^{2}}=\int_{0}^{t}\Bigl[\frac{\partial^{2}u}{\partial z^{2}}-\alpha\xi\frac{\partial^{3}u}{\partial z^{3}}+\beta\Bigl(\frac{\partial u}{\partial z}\Bigr)^{2}\frac{\partial^{2}u}{\partial z^{2}}+\xi\frac{\partial u}{\partial z}\Bigr]\,dt. \tag{32}$$
According to the HPM technique of He [5], we construct a homotopy $v(t,z;p) : \mathbb{R}\times[0,1]\to\mathbb{R}$ which satisfies
$$L(v)-L(u_{0})+p\bigl[L(u_{0})+N(v)\bigr]=0, \tag{33}$$
where $L(v)=v-\alpha(\partial^{2}v/\partial z^{2})$, $N(v)=\int_{0}^{t}[(\partial^{2}v/\partial z^{2})-\alpha\xi(\partial^{3}v/\partial z^{3})+\beta(\partial v/\partial z)^{2}(\partial^{2}v/\partial z^{2})+\xi(\partial v/\partial z)]\,dt$, and $u_{0}$ is an initial approximation to the transient solution $u(t,z)$.

Taking $p$ as a small parameter, we assume a power series solution of (33) in the form
$$v(t,z;p)=\sum_{k=0}^{\infty}p^{k}v_{k}(t,z), \tag{34}$$
where the $v_{k}(t,z)$ are unknown functions of $t$ and $z$. Letting $p\to 1$, (34) yields the approximate solution of $u(t,z)$ in the form
$$u(t,z)=\lim_{p\to 1}v(t,z;p)=\sum_{k=0}^{\infty}v_{k}(t,z). \tag{35}$$
We now substitute (34) into (33) and the initial and boundary conditions (20)–(22) and equate the coefficients of like powers of $p$ to obtain the zeroth- and first-order problems:
$$\begin{aligned} p^{0}&:\ L(v_{0})-L(u_{0})=0,\quad v_{0}(t,-1)=0,\quad v_{0}(t,+1)=e^{i\omega t},\\ p^{1}&:\ L(v_{1})-L(u_{0})+\int_{0}^{t}\Bigl[\frac{\partial^{2}v_{0}}{\partial z^{2}}-\alpha\xi\frac{\partial^{3}v_{0}}{\partial z^{3}}+\beta\Bigl(\frac{\partial v_{0}}{\partial z}\Bigr)^{2}\frac{\partial^{2}v_{0}}{\partial z^{2}}+\xi\frac{\partial v_{0}}{\partial z}\Bigr]dt=0,\quad v_{1}(t,-1)=0,\quad v_{1}(t,+1)=0. \end{aligned} \tag{36}$$
We can now solve these problems to find $v_{0}$ and $v_{1}$:
$$v_{0}=\frac{e^{2(1+z)/\sqrt{\alpha}}-1}{e^{4/\sqrt{\alpha}}-1}\,e^{(1-z)/\sqrt{\alpha}+i\omega t}. \tag{37}$$
$$\begin{aligned} v_{1}={}&\frac{t\,e^{(1-3z)/\sqrt{\alpha}+i\omega t}}{8\,\alpha^{5/2}\bigl(e^{4/\sqrt{\alpha}}-1\bigr)^{4}}\Bigl[-4e^{2z/\sqrt{\alpha}}(-1+z)\alpha+4e^{2(7+2z)/\sqrt{\alpha}}(-1+z)\alpha+4e^{2(6+z)/\sqrt{\alpha}}(3+z)\alpha\\ &\quad-4e^{(2+4z)/\sqrt{\alpha}}(3+z)\alpha-3\xi+4e^{2(2+z)/\sqrt{\alpha}}(1+3z)\alpha-4e^{2(5+2z)/\sqrt{\alpha}}(1+3z)\alpha\\ &\quad-4e^{2(4+z)/\sqrt{\alpha}}(5+3z)\alpha+4e^{(6+4z)/\sqrt{\alpha}}(5+3z)\alpha\\ &\quad-4e^{2(2+z)/\sqrt{\alpha}+2i\omega t}(-1+z)\beta+4e^{2(5+2z)/\sqrt{\alpha}+2i\omega t}(-1+z)\beta+4e^{2(4+z)/\sqrt{\alpha}+2i\omega t}(3+z)\beta\\ &\quad-4e^{(6+4z)/\sqrt{\alpha}+2i\omega t}(3+z)\beta+e^{2/\sqrt{\alpha}+2i\omega t}\alpha\beta-e^{6/\sqrt{\alpha}+2i\omega t}\alpha\beta-e^{2z/\sqrt{\alpha}+2i\omega t}\alpha\beta\\ &\quad+e^{2(6+4z)/\sqrt{\alpha}+2i\omega t}\alpha\beta-e^{2(7+2z)/\sqrt{\alpha}+2i\omega t}\alpha\beta+e^{2(6+3z)/\sqrt{\alpha}+2i\omega t}\alpha\beta\\ &\quad+e^{(2+4z)/\sqrt{\alpha}+2i\omega t}\alpha\beta-e^{(8+6z)/\sqrt{\alpha}+2i\omega t}\alpha\beta\Bigr]. \end{aligned}$$
The first-order approximate solution of (19) by the HPM is
$$u(t,z)=v_{0}+v_{1}, \tag{38}$$
with $v_{0}$ and $v_{1}$ as given in (37).

### 4.3. Application of Optimal Homotopy Asymptotic Method (OHAM)

By means of the OHAM proposed by Marinca and Herişanu [10], we construct an optimal homotopy $\phi(t,z;p) : \mathbb{R}\times[0,1]\to\mathbb{R}$ which satisfies
$$L(\phi(t,z;p))=H(z;p)\bigl(L(\phi(t,z;p))+N(\phi(t,z;p))\bigr). \tag{39}$$
We have great freedom in choosing the auxiliary function $H(t,z;p)$; here we take
$$H(t,z;p)=C_{1}+C_{2}e^{-z}+C_{3}te^{-2z}, \tag{40}$$
where $C_{1}$, $C_{2}$, and $C_{3}$ are constants to be determined.

Let us consider the solutions of (39) in the form
$$\phi(t,z,p;K_{i})=u_{0}(t,z)+\sum_{k=1}^{\infty}u_{k}(t,z;K_{i})\,p^{k},\quad i=1,2,\ldots. \tag{41}$$
Substituting (41) into (39) and equating the coefficients of like powers of $p$, we obtain the governing equations of the $u_{k}(t,z)$; that is,
$$L(u_{k}(t,z))-L(u_{k-1}(t,z))=K_{k}N_{0}(u_{0}(t,z))+\sum_{i=1}^{k-1}K_{i}\bigl[L(u_{k-i}(t,z))+N_{(k-i)}\bigl(u_{0}(t,z),u_{1}(t,z),\ldots,u_{k-i}(t,z)\bigr)\bigr]. \tag{42}$$
The first-order approximate solution of the problem is
$$u(t,z)=u_{0}(t,z)+u_{1}(t,z), \tag{43}$$
where the zeroth- and first-order problems from (42) are
$$\begin{aligned} &u_{0}-\alpha\frac{\partial^{2}u_{0}}{\partial z^{2}}=0,\quad u_{0}(t,-1)=0,\quad u_{0}(t,+1)=e^{i\omega t},\\ &u_{1}-\alpha\frac{\partial^{2}u_{1}}{\partial z^{2}}=\bigl(C_{1}+C_{2}e^{-z}+C_{3}te^{-2z}\bigr)\int_{0}^{t}\Bigl[\frac{\partial^{2}u_{0}}{\partial z^{2}}-\alpha\xi\frac{\partial^{3}u_{0}}{\partial z^{3}}+\beta\Bigl(\frac{\partial u_{0}}{\partial z}\Bigr)^{2}\frac{\partial^{2}u_{0}}{\partial z^{2}}+\xi\frac{\partial u_{0}}{\partial z}\Bigr]dt,\\ &u_{1}(t,-1)=0,\quad u_{1}(t,+1)=0. \end{aligned} \tag{44}$$
The problems in (44) can be solved using widely available symbolic computation software such as Maple. The values of the constants $C_{1}$, $C_{2}$, and $C_{3}$ are obtained using the collocation method.
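The paper does not spell out the collocation step, so the following is only a schematic sketch of its mechanics: `residual_of_19` is a hypothetical placeholder for the residual obtained by substituting $u_{0}+u_{1}(C_{1},C_{2},C_{3})$ into (19), and the node count and initial guess are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def residual_of_19(C, z_pts, t=0.1):
    """Hypothetical placeholder: evaluate the residual of (19) at the
    collocation nodes after substituting u0 + u1 (which depends on C1..C3).
    A real implementation would paste in the closed-form residual here."""
    C1, C2, C3 = C
    # Stand-in expression so the sketch runs; NOT the true residual of (19).
    return C1 + C2 * np.exp(-z_pts) + C3 * t * np.exp(-2 * z_pts) - 1.0

z_pts = np.linspace(-1.0, 1.0, 7)          # collocation nodes across the channel
fit = least_squares(residual_of_19, x0=[0.1, -1.0, 1.0], args=(z_pts,))
C1, C2, C3 = fit.x                          # constants minimizing the residual
print(C1, C2, C3)
```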
## 5. Analysis of Results

In Figure 1 and Table 1 we show that, for small values of time $t$ and various values of the non-Newtonian parameters and constants, the transient approximate solutions obtained using the new technique are in better agreement with the OHAM solutions than with the HPM solutions. This indicates that the new method is accurate for small values of the time $t$. However, Figure 2 and Table 2 show that, for large values of time $t$, the approximate solutions obtained using the new method diverge from those obtained using HPM and OHAM, so the accuracy of the new method deteriorates as $t$ grows.

Table 1: Comparison of the new algorithm with HPM and OHAM for a small value of time $t$ ($t = 0.1$), with $\alpha = 0.5$, $\beta = 0.5$, $\xi = -0.5$, $\omega = 0.5$, $C_{1} = 0.15419514$, $C_{2} = -5.54965331$, and $C_{3} = 4.10010429$.

| z | New algorithm | HPM method | OHAM method | Absolute error (HPM) | Absolute error (OHAM) |
|---|---|---|---|---|---|
| −1 | −1.43639 × 10⁻¹⁶ | 3.03836 × 10⁻¹⁷ | 4.44089 × 10⁻¹⁶ | 2.89467 × 10⁻¹⁷ | 5.87728 × 10⁻¹⁶ |
| −0.8 | 0.0401522 | 0.0269116 | 0.0315324 | 0.0132406 | 0.0086198 |
| −0.6 | 0.0829870 | 0.0565498 | 0.0698092 | 0.0264372 | 0.0131778 |
| −0.4 | 0.1313580 | 0.0919104 | 0.1180830 | 0.0394476 | 0.0132750 |
| −0.2 | 0.1884740 | 0.1365570 | 0.1783360 | 0.0519170 | 0.0101380 |
| 0 | 0.2580990 | 0.1949830 | 0.2527330 | 0.0631160 | 0.0053660 |
| 0.2 | 0.3447890 | 0.2730870 | 0.3443530 | 0.0717020 | 0.0004360 |
| 0.4 | 0.4541750 | 0.3788420 | 0.4576250 | 0.0753330 | 0.0034500 |
| 0.6 | 0.5933060 | 0.5232930 | 0.5986640 | 0.0700130 | 0.0053580 |
| 0.8 | 0.7710760 | 0.7221790 | 0.7755630 | 0.0488970 | 0.0044870 |
| 1 | 0.9987500 | 0.9987500 | 0.9987500 | 0 | 0 |

Table 2: Comparison of the new algorithm with HPM and OHAM for a large value of time $t$ ($t = 2$), with $\alpha = 0.5$, $\beta = 0.5$, $\xi = -0.5$, $\omega = 0.5$, $C_{1} = 0.15419514$, $C_{2} = -5.54965331$, and $C_{3} = 4.10010429$.
| z | New method | HPM method | OHAM method | Absolute error (HPM) | Absolute error (OHAM) |
|---|---|---|---|---|---|
| −1 | −6.41703 × 10⁻¹⁶ | 2.16947 × 10⁻¹⁶ | −1.33227 × 10⁻¹⁵ | 8.58650 × 10⁻¹⁶ | 6.550257 × 10⁻¹⁶ |
| −0.8 | 0.1309460 | −0.0313209 | −0.0275524 | 0.1622669 | 0.1584984 |
| −0.6 | 0.2630750 | −0.0593798 | −0.0138010 | 0.3224548 | 0.2768760 |
| −0.4 | 0.3968690 | −0.0802786 | 0.0436640 | 0.4771476 | 0.3532050 |
| −0.2 | 0.5312480 | −0.0888351 | 0.1294880 | 0.6200831 | 0.4017600 |
| 0 | 0.6623540 | −0.0779706 | 0.2239000 | 0.7403246 | 0.4384540 |
| 0.2 | 0.7816650 | −0.0383387 | 0.3090160 | 0.8200037 | 0.4726490 |
| 0.4 | 0.8728400 | 0.0411870 | 0.3730220 | 0.8316530 | 0.4998180 |
| 0.6 | 0.9061440 | 0.1705570 | 0.4152560 | 0.7355870 | 0.4908880 |
| 0.8 | 0.8279990 | 0.3491460 | 0.4542990 | 0.4788530 | 0.3737000 |
| 1 | 0.5403020 | 0.5403020 | 0.5403020 | 0 | 0 |

Figure 1: Comparison of the present results with HPM and OHAM for a small value of time $t$.

Figure 2: Comparison of the present results with HPM and OHAM for a large value of time $t$.

To see the physical impact of the oscillating wall on the third grade fluid flow field, we have plotted the velocity profiles given by (30). Figure 3 shows the variation of the velocity for different values of the wall oscillation frequency $\omega$ in the cases of cosine and sine oscillation; the results are physically consistent for the different values of $\omega$ for both cosine and sine excitation of the upper wall. Figure 4 shows the effect of the third grade viscoelastic parameter, in the blowing case, on the velocity profile for small values of time $t$; it is clearly seen that increasing $\beta$ increases the velocity across the channel, and this increase is rapid near the upper wall. This abrupt change in velocity near the upper wall is due to the oscillatory nature of the wall boundary, which sends damped harmonic waves into the velocity field. From this figure we can also compare the velocity field of the third grade fluid with the corresponding velocity field of a second grade fluid ($\beta = 0$): for both cosine and sine oscillations, the third grade fluid flows faster than the second grade fluid across the channel. Figure 5 shows that, as the value of the time $t$ increases, the influence of the third grade parameter $\beta$ on the fluid motion becomes less significant than in Figure 4 for small values of time $t$.

Figure 3: Velocity field $u$ for varying values of $\omega$ at small time $t = 1$, with $\alpha = 0.5$, $\xi = -0.7$, and $\beta = 0.5$. (a) (b)

Figure 4: Velocity field $u$ for varying values of $\beta$ at a small value of time $t = 1$, with $\alpha = 0.5$ and $\xi = -0.7$. (a) (b)

Figure 5: Velocity field $u$ for varying values of $\beta$ at a large value of time $t = 2.5$, with $\alpha = 0.5$ and $\xi = -0.7$. (a) (b)

In Figure 6 the wall stress ($\tau_{w} = \partial u(t,z)/\partial z$) is plotted against the dimensionless space coordinate $z$. For cosine oscillations, the wall stress increases near the lower plate and decreases near the upper plate, because of the simultaneous suction and blowing at the lower and upper plates, respectively. The effects are reversed in the case of sine oscillations.

Figure 6: Effect of blowing on the wall stress $\tau_{w}$, with $\omega = 0.5$, $\alpha = 0.5$, $t = 0.2$, and $\beta = 0.5$ fixed. (a) (b)

## 6. Concluding Remarks

In this paper, a new technique is proposed to obtain analytical solutions of the transient flow of a viscoelastic third grade fluid. We applied the new technique to our problem, which was also solved using HPM and OHAM. We obtained an explicit analytic solution of the two-dimensional laminar transient third grade flow in a vertical channel with an oscillating upper wall and wall transpiration.
This explicit analytic solution is valid in the whole transient region $0 < t < 1.5$. The results obtained using HPM and OHAM were compared with the solutions of the new technique; the comparison with the OHAM results shows that the new method gives accurate results. The results also show that the influence of the third grade parameter $\beta$ on the fluid motion is significant only for small values of time $t$. This approach appears to be useful and can be applied to obtain analytical solutions of other transient moving-boundary-layer equations in fluid mechanics (see Fan et al. [11] and Noghrehabadi et al. [12]). --- *Source: 102197-2014-07-17.xml*
2014
# Comparison of Blood Pressure Variability between 24 h Ambulatory Monitoring and Office Blood Pressure in Diabetics and Nondiabetic Patients: A Cross-Sectional Study

**Authors:** Ana Lídia Rouxinol-Dias; Marta Lisandra Gonçalves; Diogo Ramalho; Jose Silva; Loide Barbosa; Jorge Polónia
**Journal:** International Journal of Hypertension (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1022044

---

## Abstract

Background. Evidence regarding blood pressure (BP) variability (BPV) and its independent association with adverse outcomes has grown. Diabetic patients might have increased BPV, but there is still an evidence gap regarding the relation between BPV and type 2 diabetes beyond mean BP values. Objective. To examine the relationship between 24 h ambulatory BP monitoring (ABPM, short-term variability) and visit-to-visit in-office BP (OBP, long-term variability) in diabetics (D) and nondiabetics (ND), and to explore the relation of BPV with the estimated glomerular filtration rate (eGFR) and pulse wave velocity (PWV) as indicators of target organ lesion. Materials and Methods. We conducted a single-center cross-sectional study in an outpatient BP unit, including adult patients consecutively admitted from 1999 to 2019. Multivariate analysis was performed to compare BPV between D and ND, adjusted for clinical variables. Pearson's correlation was performed to evaluate the relation of BPV with eGFR and PWV. Results. A total of 1123 patients with ABPM and OBP measurements were included. Values of eGFR and PWV were worse in D than in ND. Measurements of OBPV did not differ between the groups. Of the ABPM BPV indices, the coefficient of variation and standard deviation of daytime systolic BP were higher in D than in ND, but only in ND did BPV correlate with both eGFR and PWV. Conclusion. We found that diabetes is associated with higher variability of daytime BP than in nondiabetics, along with worse damage of vascular and renal function. However, in contrast to nondiabetics, in diabetics eGFR and PWV may not be dependent on BP variability, suggesting that other mechanisms might explain more rigorously the greater damage of target organ lesion markers.

---

## Body

## 1. Introduction

High blood pressure (BP) is a well-known risk factor for cardiovascular mortality and morbidity and the main contributor to global disease burden [1]. In addition to the mean of office blood pressure measurements, and with the widespread use of ABPM in clinical practice, evidence regarding BP variability (BPV) and its independent association with adverse outcomes has grown [2].

BPV can be classified into short-term, mid-term, and long-term variability. Serial office BP (OBP) over months and years may be considered long-term BPV, serial home BP measurement (HBPM) over a week mid-term BPV, and 24 h ambulatory BP monitoring (ABPM) short-term BPV; all have been related to adverse cardiovascular outcomes [3, 4].

In diabetic patients, with either type 1 or type 2 diabetes, atherosclerosis and microvascular diseases, such as nephropathy, are signs of a poorly managed condition [5, 6]. Hypertension's impact on organ damage is incremental to that of diabetes [7], and these patients might have increased BPV through different mechanisms, including increased arterial stiffness and the development of autonomic dysfunction [2].
Evidence regarding the relation between BPV and type 2 diabetes beyond mean BP values remains to be clarified. Thus, our aim was to examine the relationship of short-term and long-term BPV with diabetes and the interaction of target organ lesion indicators (estimated glomerular filtration rate (eGFR) and pulse wave velocity) in this relationship.

## 2. Materials and Methods

We performed a cross-sectional study in the outpatient clinic of the Blood Pressure Unit, Hospital Pedro Hispano, Matosinhos, Portugal, an Excellence Center of the European Society of Hypertension [8]. The study was carried out in full accordance with the guidelines of the Declaration of Helsinki, all subjects followed the routine clinical procedures and gave their informed consent, and all data collection was approved by the local Hospital Ethical Committee. Patients included were Caucasian, aged between 18 and 75 years, and admitted to the Blood Pressure Unit, Hospital Pedro Hispano, Matosinhos, Portugal, from 1999 to 2019.

Patients underwent demographic and clinical baseline data collection, either by questionnaire at the first appointment or from clinical files: age, gender, weight and height, family history of cardiovascular risk and adverse outcomes, and calculated body mass index (BMI). Clinical analyses, collected within 3 months of the first appointment, included glycated hemoglobin (HbA1C), fasting plasma glucose (FPG), and 24 h urinary sodium and potassium, and, as indicators of target organ lesion, the estimated glomerular filtration rate according to the MDRD formula (eGFR) and pulse wave velocity (PWV). Patients were excluded if they had a significant inflammatory disease, if they had a change in their ongoing therapy in the last 3 months, or if they were pregnant, critically ill, or had a life expectancy under 3 months. Patients were examined under their stable chronic therapies and habitual dietary and physical activity habits.

Diabetes mellitus was defined by two fasting plasma glucose values ≥ 126 mg/dL, a 2 h post-load plasma glucose ≥ 200 mg/dL, HbA1C ≥ 6.5%, use of antidiabetic agents, or a personal history of diabetes [6, 9]. Pulse wave velocity (PWV), as an indicator of target organ lesion (atherosclerosis), was automatically calculated (as the ratio between distance and transit time) from two Doppler pulse flow wave recordings obtained simultaneously at the level of the right common carotid and right femoral arteries, as reported previously [8], using a validated noninvasive device (Complior; Colson, Garges les Gonesse, France). PWV was only available for 37.9% of patients. Patients were categorized into four circadian patterns according to the nocturnal SBP fall, assessed as the continuous night-to-day ratio (NDR), transformed into the percent reduction from daytime values: normal dippers (NDR ∈ ]0.8; 0.9]), extreme dippers (NDR ≤ 0.8), reduced dippers (NDR ∈ ]0.9; 1.0]), and reverse dippers (NDR > 1.0).

Serial OBP and in-office heart rate (measured by the arterial peripheral pulse) measurements were collected at 3 consecutive clinical appointments in the unit, within a 6-month interval of each other. ABPM and OBP measurements were taken as reported in our previous work and performed according to the American Heart Association 2018 recommendations [7, 8]. ABPM monitoring was carried out using Spacelabs 90207 and 90217 devices (Spacelabs, Redmond, Washington, USA), and OBP recordings were measured using automatic sphygmomanometers OMRON models 705-IT and M4-I (Omron Healthcare, Hoofddorp, The Netherlands).
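For concreteness, here is a minimal Python sketch of two of the derived quantities above: the night-to-day-ratio dipping categories as defined in this section, and the commonly used 4-variable (re-expressed) MDRD estimate of GFR. The function names and example values are illustrative only, not part of the study's code.

```python
def dipping_category(night_sbp: float, day_sbp: float) -> str:
    """Circadian profile from the night-to-day SBP ratio (NDR), per this section."""
    ndr = night_sbp / day_sbp
    if ndr <= 0.8:
        return "extreme dipper"
    if ndr <= 0.9:
        return "dipper"
    if ndr <= 1.0:
        return "reduced dipper"
    return "reverse dipper"

def egfr_mdrd(creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool = False) -> float:
    """4-variable re-expressed MDRD eGFR in mL/min/1.73 m^2."""
    egfr = 175.0 * creatinine_mg_dl**-1.154 * age_years**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(dipping_category(night_sbp=120, day_sbp=135))   # NDR ~ 0.89 -> "dipper"
print(round(egfr_mdrd(0.9, 60, female=True), 1))
```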
ABPM data were divided into daytime and night-time periods according to patients' reports, in order to compare the different time sets of ABPM (mean measurements during daytime, night-time, and 24 h) and account for circadian variations of BP.

BPV was measured by the following parameters: delta systolic/diastolic blood pressure (DSBP/DDBP; the absolute difference between the maximum and minimum systolic/diastolic BP values, respectively); the coefficient of variation (CV; SD/mean pressure × 100%); the standard deviation (SD); the average real variability (ARV), computed as the average of the absolute differences between consecutive BP readings, reflecting reading-to-reading, within-subject variability in BP or pulse levels; and the "weighted" 24-hour SD (wSD; the average of the day and night SDs, weighted by their respective durations, as reported in Bilo et al. [10]), which can minimize the effect of nocturnal dipping without discarding information about BPV.

Statistical analysis was performed using IBM SPSS software (version 26; SPSS Inc., Chicago). Most of the continuous variables had a non-normal distribution. After visual analysis and the Kolmogorov–Smirnov test, only age, 24 h urinary sodium, daytime/night-time/24 h pulse rate, in-office SD DBP, night-time SD DBP, daytime DBP, and eGFR presented a normal distribution (P>0.05); the other BPV variables were right-skewed. To compare nondiabetic (ND) and diabetic (D) patients, a significance level (α) of 0.05 was used, and Pearson's chi-square and Mann–Whitney rank sum tests were applied. We then performed generalized linear regression analysis (gamma distribution with log link function, with maximum likelihood as the estimation method) for the BPV variables significantly associated with diabetes in univariate analysis (Table 1), adjusted for the clinical variables significant in univariate analysis (Table 2) and the respective mean BP. Spearman's correlation coefficients (Rs) were calculated for the relationship between target organ lesion indicators (creatinine clearance and pulse wave velocity) and the BPV variables that remained significant after adjustment. Correlations were described as negligible, weak, moderate, strong, and very strong, as reported by Prion and Haerling [11].
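The BPV indices defined above are simple to compute from raw ABPM readings. A minimal sketch follows; the 16 h/8 h day and night durations used as defaults in the wSD helper are an assumption for the example (the study used patient-reported periods), and the arrays are illustrative values.

```python
import numpy as np

def bpv_metrics(sbp: np.ndarray) -> dict:
    """Reading-to-reading BPV indices for one ABPM period (e.g., daytime SBP)."""
    sd = sbp.std(ddof=1)
    return {
        "delta": sbp.max() - sbp.min(),          # DSBP: maximum minus minimum
        "SD": sd,                                # standard deviation
        "CV_percent": 100.0 * sd / sbp.mean(),   # coefficient of variation
        "ARV": np.abs(np.diff(sbp)).mean(),      # average real variability
    }

def weighted_sd(day_sbp: np.ndarray, night_sbp: np.ndarray,
                day_hours: float = 16.0, night_hours: float = 8.0) -> float:
    """Weighted 24 h SD: day/night SDs weighted by their durations (Bilo et al. [10])."""
    return (day_sbp.std(ddof=1) * day_hours
            + night_sbp.std(ddof=1) * night_hours) / (day_hours + night_hours)

day = np.array([132.0, 138.0, 129.0, 141.0, 135.0, 144.0])
night = np.array([118.0, 112.0, 121.0, 115.0])
print(bpv_metrics(day), round(weighted_sd(day, night), 2))
```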
Table 1: Short-term BPV (SD and CV) comparison between ND and D across 24 h, daytime, and night-time.

| | Nondiabetics | Diabetics | P value between groups | Adjusted P value |
|---|---|---|---|---|
| **Delta (mmHg)** | | | | |
| DSBP 24 h | 46.0 (37.0–55.0) | 48.0 (38.0–59.0) | 0.006* | 0.006* |
| DDBP 24 h | 36.0 (30.0–42.0) | 33.0 (27.0–40.0) | 0.001* | 0.758 |
| **SD systolic (mmHg)** | | | | |
| 24 h | 13.6 (11.4–16.3) | 14.4 (12.6–17.6) | <0.001* | 0.104 |
| Daytime | 11.9 (9.9–14.4) | 13.6 (11.1–16.1) | <0.001* | 0.042* |
| Night-time | 10.0 (7.9–12.4) | 11.1 (8.7–13.7) | <0.001* | 0.112 |
| **SD diastolic (mmHg)** | | | | |
| 24 h | 10.6 (9.1–12.5) | 10.2 (8.6–12.1) | 0.010* | 0.687 |
| Daytime | 8.9 (7.6–10.5) | 8.8 (7.4–10.4) | 0.661 | |
| Night-time | 8.1 (6.4–10.1) | 8.4 (6.6–10.4) | 0.598 | |
| **CVS (%)** | | | | |
| 24 h | 10.6 (8.9–12.4) | 10.5 (9.1–12.5) | 0.410 | |
| Daytime | 8.8 (7.4–10.7) | 9.5 (8.0–10.9) | 0.001* | 0.040* |
| Night-time | 8.4 (6.6–10.5) | 8.6 (6.6–10.7) | 0.424 | |
| **CVD (%)** | | | | |
| 24 h | 13.7 (11.6–16.1) | 13.3 (11.2–15.4) | 0.021* | 0.967 |
| Daytime | 11.0 (9.2–13.1) | 10.9 (9.3–12.8) | 0.962 | |
| Night-time | 11.7 (9.2–14.6) | 11.9 (9.3–14.4) | 0.900 | |
| ARVS | 8.4 (6.9–10.4) | 9.3 (7.7–12.0) | <0.001 | 0.211 |
| ARVD | 6.5 (5.5–7.8) | 6.5 (5.3–8.0) | 0.541 | |
| wSD systolic 24 h | 8.6 (7.1–10.2) | 9.3 (7.7–12.0) | <0.001 | 0.179 |

DSBP: delta (maximum minus minimum) systolic blood pressure; DDBP: delta diastolic blood pressure; SD: standard deviation; CVS: coefficient of variation of systolic blood pressure; CVD: coefficient of variation of diastolic blood pressure; CVP: coefficient of variation of arterial peripheral pulse; ARVS/D/P: average real variability of systolic/diastolic/arterial peripheral pulse; ND: nondiabetics; D: diabetics. Variables are presented as medians (interquartile range: 25th–75th percentile), and comparisons between ND and D were tested with the Mann–Whitney rank sum test. A GLM model was computed to adjust for age, BMI, dyslipidemia, family history of diabetes and hypertension, and the mean BP value.

Table 2: Clinical characteristics of nondiabetics and diabetics.

| | Nondiabetics | Diabetics | Total | P value |
|---|---|---|---|---|
| N (%) | 851 (76) | 272 (24) | 1123 | |
| Age (years) | 48 (36–61) | 60 (53–68) | 53 (39–64) | <0.001 |
| Male | 346 (41) | 141 (52) | 487 (43) | 0.001 |
| BMI (kg/m²) | 27.4 (24.5–31.0) | 29.1 (26.0–32.5) | 27.9 (24.7–31.4) | <0.001 |
| Smokers | 124 (15) | 37 (14) | 161 (14) | 0.692 |
| Dyslipidemia | 298 (35) | 165 (61) | 463 (41) | <0.001 |
| **Family history, N (%)** | | | | |
| Hypertension | 293 (34) | 69 (25) | 362 (32) | 0.005 |
| Stroke | 57 (7) | 11 (4) | 68 (6) | 0.110 |
| Coronary artery disease | 90 (11) | 25 (9) | 115 (10) | 0.512 |
| Diabetes | 122 (14) | 62 (23) | 184 (16) | 0.001 |
| **Clinical analysis** | | | | |
| Fasting glucose (mg/dL) | 93 (85–101) | 135 (119–167) | 98 (87–114) | <0.001 |
| HbA1C (%) | 5.6 (5.3–5.8) | 6.8 (6.0–7.7) | 5.9 (5.4–6.7) | <0.001 |
| Creatinine (mg/dL) | 0.80 (0.70–1.00) | 0.90 (0.80–1.10) | 0.80 (0.70–1.00) | <0.001 |
| eGFR (mL/min/1.73 m²) | 86.6 (70.7–100.7) | 75.3 (60.0–94.4) | 84.2 (67.6–98.9) | <0.001 |
| 24 h urinary sodium (mEq/24 h) | 183 (140–244) | 201 (158–269) | 189 (142–251) | 0.015 |
| 24 h urinary potassium (mEq/24 h) | 68 (53–85) | 91 (70–109) | 73 (57–95) | <0.001 |
| PWV (m/s) (a) | 10.1 (8.8–12.0) | 11.8 (10.0–13.0) | 10.5 (9.0–12.2) | <0.001 |
| **BP analysis** | | | | |
| OBP systolic/diastolic (mmHg) | 147 (136–160)/92 (83–101) | 160 (145–178)/90 (81–99) | 150 (137–165)/92 (82–100) | <0.001/0.108 |
| 24 h-ABPM SBP/DBP (mmHg) | 129 (121–138)/78 (71–85) | 139 (129–150)/78 (70–85) | 131 (122–141)/78 (71–85) | <0.001/0.386 |
| Daytime SBP/DBP (mmHg) | 133 (125–143)/82 (74–90) | 143 (132–155)/81 (74–88) | 135 (126–146)/82 (74–89) | <0.001/0.107 |
| Night-time SBP/DBP (mmHg) | 118 (110–128)/69 (62–76) | 129 (118–142)/70 (63–77) | 120 (111–131)/69 (63–76) | <0.001/0.154 |
| **Circadian profile (b)** | | | | |
| Dipper | 368 | 101 | 469 | |
| Non-dipper | 346 | 111 | 457 | 0.003 |
| Reverse dipper | 57 | 37 | 94 | |
| Extreme dipper | 73 | 20 | 93 | |

BMI: body mass index; HbA1c: glycated hemoglobin; eGFR: estimated glomerular filtration rate; PWV: pulse wave velocity; OBP: in-office blood pressure (mean of 3 measurements at baseline); 24 h-ABPM: 24 h ambulatory blood pressure monitoring; SBP: systolic blood pressure; DBP: diastolic blood pressure.
Continuous variables are presented as medians (interquartile range: 25th–75th percentile) and categorical variables as absolute frequency (%). Comparisons were tested with the Mann–Whitney rank sum test and the chi-square test. (a) Data available for only 37.9% of patients, with nondifferential missing data between diabetics and nondiabetics; (b) missing data for 10 patients.

## 3. Results

### 3.1. Population Characteristics

A total of 1123 patients (851 nondiabetics and 272 diabetics) were included. The demographic characteristics, previous medical history, family history, and clinical analyses are presented in Table 2. In the diabetic group, 52% of patients were male and 61% had dyslipidemia, and patients were older and had a higher BMI. Family history of diabetes was significantly more frequent and family history of hypertension significantly less frequent in the D group. All variables measured in the clinical analyses were significantly different between the groups. In particular, PWV values were higher and eGFR values lower in diabetics than in nondiabetics. All mean systolic blood pressure values (in-office and 24 h-ABPM) were significantly higher in diabetics than in nondiabetics, but no differences were found for mean diastolic blood pressure values.

### 3.2. Long-Term BPV

As shown in Table 3, values of long-term BP variability (in-office BP) were not significantly different between the groups.

Table 3: Long-term (in-office) BP variability.

| | Nondiabetics | Diabetics | Total | P value between groups |
|---|---|---|---|---|
| DSBP (mmHg) | 16 (8–27) | 17 (10–29) | 16 (9–28) | 0.295 |
| DDBP (mmHg) | 11 (6–18) | 12 (6–17) | 11 (6–17) | 0.803 |
| SD systolic (mmHg) | 12.8 (7.8–19.8) | 13.0 (8.3–21.0) | 12.8 (8.0–20.1) | 0.074 |
| SD diastolic (mmHg) | 7.8 (5.2–11.4) | 8.5 (5.5–11.7) | 8.0 (5.3–11.6) | 0.318 |
| SD pulse (bpm) | 7.8 (4.4–12.1) | 6.4 (3.8–11.3) | 7.6 (4.2–12.0) | 0.055 |
| CVS (%) | 8.7 (5.3–13.4) | 8.75 (5.72–13.46) | 8.8 (5.4–13.4) | 0.806 |
| CVD (%) | 9.2 (5.9–13.1) | 9.86 (6.39–13.65) | 9.3 (6.0–13.3) | 0.141 |
| CVP (%) | 10.14 (5.9–15.7) | 9.0 (5.1–14.4) | 9.9 (5.7–15.3) | 0.050 |
| ARVS | 6.0 (3.0–11.3) | 6.3 (3.3–13.0) | 6.3 (3.0–11.7) | 0.118 |
| ARVD | 3.7 (1.7–6.2) | 3.3 (1.5–6.0) | 3.5 (1.7–6.0) | 0.632 |
| ARVP | 5.0 (2.5–9.5) | 4.5 (2.0–9.0) | 5.0 (2.5–9.0) | 0.322 |

SD: standard deviation; DSBP: delta (maximum minus minimum) systolic blood pressure; DDBP: delta diastolic blood pressure; CVS/D/P: coefficient of variation of systolic blood pressure/diastolic blood pressure/arterial peripheral pulse; ARVS/D/P: average real variability of systolic/diastolic/arterial peripheral pulse; ND: nondiabetics; D: diabetics. Variables are presented as medians (interquartile range: 25th–75th percentile), and comparisons between ND and D were tested with the Mann–Whitney rank sum test.

### 3.3. Short-Term BPV

Table 1 shows the results for short-term BP variability (24 h-ABPM). Diabetic patients showed higher values of daytime systolic BP variability than nondiabetics, and this was the only significant difference in 24 h-ABPM variability indices between the groups.

There was also a differential distribution of circadian profile between diabetics and nondiabetics (Pearson chi-square P value 0.003), with double the prevalence of the reverse dipper profile in diabetic patients: 37 (14%) of diabetics were reverse dippers vs. 57 (7%) of nondiabetics (Table 2).

Multivariate generalized linear regression analysis was performed for each BPV variable significantly related to diabetes in univariate analysis, to evaluate its association with diabetes.
Adjustments were performed for age, BMI, dyslipidemia, and family history of diabetes and hypertension; the respective mean BP value was also included in the model (for ARVS and wSD, 24 h SBP was used). After adjustment, only the 24 h delta SBP and the daytime systolic BPV variables (SD and CVS) remained independently associated with diabetes (P=0.042 and P=0.040) (Table 1).

### 3.4. Correlation with Target Organ Lesion Indicators

In the overall population, significant correlations (P<0.001) were observed between age and both PWV (Rs = 0.51) and eGFR (Rs = −0.49). PWV and eGFR were negatively correlated with each other (Rs = −0.29, P<0.001). Daytime systolic SD and CVS were significantly correlated with both PWV (Rs = 0.39 and 0.26, respectively; P<0.001) and eGFR (Rs = −0.19 and −0.18, respectively; P<0.001).

### 3.5. Age

Age was significantly correlated with all systolic BPV variables, with moderate correlations for daytime BPV, weak for 24 h BPV, and negligible for night-time BPV. In Figure 1, daytime systolic SD and CVS are plotted to evaluate the interaction of diabetes and age. Age remained significantly correlated with daytime systolic SD and CVS, with moderate correlations in nondiabetics and negligible-to-weak correlations in diabetic patients, with significant differences between the diabetic and nondiabetic slopes for the correlation between age and daytime systolic SD (P=0.04).

Figure 1: Evaluation of the interaction of diabetes and age. *P<0.05, Pearson's correlation test. SD: standard deviation; CVS: coefficient of variation of systolic blood pressure.

### 3.6. Short-Term BPV

The relationships of daytime systolic SD and CVS with target organ lesion indicators (estimated glomerular filtration rate and pulse wave velocity), by group, are plotted in Figure 2. As shown in Figure 2, in nondiabetics daytime systolic BP variability correlated significantly positively with PWV and negatively with eGFR, but no such correlations were found in diabetic patients.

Figure 2: Relationship between BPV and indicators of target organ lesion. *P<0.001; (a) data available for only 37.9% of patients, with nondifferential missing data between diabetics and nondiabetics; Pearson's correlation test. DSBP: delta (maximum minus minimum) systolic blood pressure; SD: standard deviation; CVS: coefficient of variation of systolic blood pressure.
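The modelling pipeline of Sections 3.3 and 3.4 (a gamma GLM with log link, followed by rank correlations) is straightforward to outline. A minimal sketch on synthetic stand-in data follows; all arrays, effect sizes, and variable names are illustrative assumptions, not the study data or the study's SPSS code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 300
# Synthetic stand-ins for per-patient values (illustrative only)
diabetes = rng.integers(0, 2, n)
age = rng.normal(55, 12, n)
bmi = rng.normal(28, 4, n)
mean_sbp = rng.normal(135, 12, n)
day_sd = rng.gamma(shape=8.0, scale=1.5, size=n) + 0.8 * diabetes  # right-skewed BPV
pwv = 9.0 + 0.05 * age + rng.normal(0, 1.5, n)

# Gamma GLM with log link: daytime systolic SD vs diabetes, adjusted for covariates
X = sm.add_constant(np.column_stack([diabetes, age, bmi, mean_sbp]))
fit = sm.GLM(day_sd, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)

# Spearman correlation of BPV with a target organ lesion indicator
rs, p = spearmanr(day_sd, pwv)
print(f"Rs = {rs:.2f}, P = {p:.3g}")
```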
## 4. Discussion

In the present study, we assessed BP variability in nondiabetic and diabetic patients with 24 h-ABPM and serial OBP (from 3 consecutive outpatient evaluations). We applied different formulas for assessing variability, to evaluate the association of BPV with diabetes and also the association of BPV with vascular and renal target organ lesion indicators (PWV and eGFR, respectively). Our population of diabetics was older, weighed more, and had a worse clinical condition than the nondiabetics, as expressed by higher PWV and lower eGFR. Although no differences in long-term BP variability were observed between the groups, variability of daytime systolic BP was higher in diabetics than in nondiabetics, even after adjustment for the other significant clinical variables, as expected [9]; in univariate analysis, diabetic patients had higher values of systolic BP variability, and after adjustment for age, BMI, dyslipidemia, family history of diabetes and hypertension, and the respective mean BP value, only daytime systolic SD and CVS remained significantly higher in diabetics. These results are in agreement with those of Casali et al. [4].

### 4.1. Long-Term BPV Differences between Diabetics and Nondiabetics

Previous evidence suggested that increasing values of long-term BPV predict the development and progression of diabetic target organ lesions, such as nephropathy, including correlations with PWV and urinary albumin excretion, another indicator of kidney lesion [1, 12]. Yet the mechanism explaining its contribution as a predictor is still debatable, with a relevant contribution of behavioral influences and even seasonal climatic changes [1]. In our study, we did not find significant differences in long-term BPV between diabetics and nondiabetics, suggesting that long-term BPV does not contribute to the different odds of cardiovascular complications in diabetic vs. nondiabetic hypertensive patients.

### 4.2. Short-Term BPV Differences between Diabetics and Nondiabetics

Regarding short-term BPV (on 24 h-ABPM), there is at present no standard way of measuring BPV; one of the strengths of our study is the use of different parameters to measure it, allowing a more comprehensive evaluation of BPV and comparison with further research [3].

The prevalence of reverse dippers was significantly higher in diabetics than in nondiabetics. This circadian pattern has been related to increased cardiovascular risk, and its association with diabetes has been explored before [13].

Independently of the measurement, systolic BPV has been increasingly associated with cardiovascular outcomes in diabetic and nondiabetic patients, an impact that might go beyond that of BP [3, 14]. Chiriacò et al.
suggested that it could be a relevant factor to include in adverse-outcome risk prediction for diabetic patients [3], taking into account measurement limitations and each measurement's association with diabetes itself. As proposed by Parati et al., long-term and short-term BPV probably translate the action of different physiological mechanisms [8]. Considering that only the latter differed significantly between diabetics and nondiabetics, short-term BPV may more accurately reflect the impact of diabetic aggression, through either autonomic modulation or atherosclerosis-enhanced augmentation [1, 15].

### 4.3. Correlation of Short-Term BPV and Target Organ Lesion

In our study, short-term BPV was higher in diabetics than in nondiabetics. However, short-term BPV was significantly correlated with increased PWV and lower eGFR in nondiabetics, whereas in diabetics there was no correlation between these markers of target organ lesion and BPV. Looking closer at the results of Chiriacò et al., significant heterogeneity across studies was present, and adverse outcomes were composite measures rather than specific assessments such as PWV and eGFR (as an estimation of creatinine clearance) [3]. Thus, in our study, although BP variability was higher in diabetic patients, its relationship with target organ damage was only observed in nondiabetic patients. These results lead us to speculate that, as proposed by Bell and colleagues, although arterial stiffness and the deterioration of renal function are worse in diabetics than in nondiabetics, BPV may not add clinical usefulness beyond routinely measured predictors, such as mean blood pressure, in diabetics [16].

BPV may indeed be a marker of vascular aging, which may be accelerated in diabetics, but other factors may contribute on a larger scale to its adverse outcomes [16]. In our view, unlike in nondiabetics, the diabetic condition may cause structural renal and vascular damage through specific diabetic abnormalities, independently of BP variability and likely of other cardiovascular risk factors. In support of this conjecture, we found that age was significantly correlated with all systolic BPV variables, but its correlation with systolic BPV was significantly higher in nondiabetics than in diabetics, becoming negligible to weak in diabetic patients. It is well established that cardiovascular risk is naturally higher in diabetics than in nondiabetics, which is confirmed in our study. However, such higher risk may be related to complex structural and metabolic abnormalities, such as inflammation, endothelial dysfunction, oxidative stress, fibrosis, accumulation of AGEs, and atherosclerosis [17–19], most of which escape the dependence on BP variability. In other words, our data suggest that diabetic patients exhibit greater BP variability and more severe organ damage, but less dependence on BP variability and probably on other usual anthropometric variables such as age and gender.

### 4.4. Strengths and Limitations

This cross-sectional study's strengths include a large cohort of patients with and without type 2 diabetes. The comparison between diabetics and nondiabetics allowed a distinction of the BPV impact beyond diabetes.
Although a subgroup analysis of patients on BP-lowering drugs within the diabetic population has already been reported [16], most of the existing literature focused only on the overall diabetic population [12, 15]; as our results show, there seems to be a relevant differential association of BPV with PWV and eGFR in nondiabetics vs. diabetics. As mentioned above, we have also computed several different parameters to measure BPV, exploring both simple and more complex measurements and enhancing comparability with further literature. The main limitation of our study is its observational cross-sectional nature and the fact that it was conducted in a single center. Although we have considered several potential confounders, other characteristics, such as diabetes duration, antihypertensive drugs, and diabetes treatment, could be considered. We performed a complete-case analysis, considering that missing data were not differential between diabetics and nondiabetics. PWV was only available for 37.9% of patients. Considering that this is a cross-sectional study dealing with observational data collected from patients' clinical registries, missing data imputation was not performed [20]. For the GLM, SPSS automatically excludes cases with system-missing values for any of the variables on the GLM variable list, and user-missing values were treated as valid (data from 146 patients, 13%, were excluded from the GLM). As stated by STROBE, this approach to missing data management may still be biased [20]; therefore, further large-scale studies are needed to explore the relation between BPV and PWV.
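The BPV parameters referred to in Section 4.2 and above are simple functions of the ABPM reading series. The following is a minimal sketch of the short-term indices as defined in the methods (delta, SD, CV, ARV, and weighted 24 h SD); the readings array and the day/night split are hypothetical, and reading counts stand in for period durations in the wSD weighting.

```python
# Minimal sketch of the short-term BPV indices defined in the methods.
# `sbp` holds systolic readings (mmHg); `is_day` flags daytime readings.
# Values are hypothetical illustrations, not study data.
import numpy as np

sbp = np.array([128.0, 135.0, 142.0, 131.0, 124.0, 118.0, 121.0, 127.0])
is_day = np.array([True, True, True, True, False, False, False, True])

delta = sbp.max() - sbp.min()        # DSBP: maximum - minimum
sd = sbp.std(ddof=1)                 # standard deviation
cv = sd / sbp.mean() * 100.0         # CV = SD / mean x 100%
arv = np.abs(np.diff(sbp)).mean()    # ARV: mean absolute reading-to-reading change

# Weighted 24 h SD: day and night SDs averaged, weighted here by the
# number of readings in each period as a proxy for their durations.
day, night = sbp[is_day], sbp[~is_day]
wsd = (day.std(ddof=1) * day.size + night.std(ddof=1) * night.size) / sbp.size

print(f"DSBP={delta:.1f} SD={sd:.1f} CV={cv:.1f}% ARV={arv:.1f} wSD={wsd:.1f}")
```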
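The adjusted comparisons discussed in this section (and the GLM missing-data handling noted above) correspond to a gamma generalized linear model with a log link, fitted by maximum likelihood. A minimal sketch with statsmodels follows; the synthetic data and column names are hypothetical, and only a subset of the covariates named in the text is included for brevity.

```python
# Minimal sketch of the adjustment model described in the methods:
# gamma GLM with log link for a right-skewed BPV index.
# Synthetic stand-in data; not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "diabetic": rng.integers(0, 2, n),
    "age": rng.normal(55, 12, n),
    "bmi": rng.normal(28, 4, n),
    "daytime_sbp_mean": rng.normal(135, 12, n),
})
df["daytime_sbp_sd"] = rng.gamma(shape=9.0, scale=1.5, size=n)  # right-skewed outcome

model = smf.glm(
    "daytime_sbp_sd ~ diabetic + age + bmi + daytime_sbp_mean",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
)
print(model.fit().summary())  # maximum likelihood, as in the methods
```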
## 5. Conclusion

In conclusion, we found that diabetes is associated with higher daytime BP variability than in nondiabetics, along with worse vascular and renal function. However, in contrast to nondiabetics, in diabetics eGFR and PWV may not be dependent on BP variability, suggesting that other mechanisms might better explain the greater damage reflected by target organ lesion markers.

---
*Source: 1022044-2022-06-21.xml*
# Abnormal Variations of the Key Genes in Osteoporotic Fractures

**Authors:** Bin Wang; Caiyuan Mai; Lei Pan
**Journal:** Emergency Medicine International (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1022078

---

## Abstract

Objective. The classical osteoporotic signaling pathways include four key genes (LRP5, Runx2, Osterix, and RANKL) influencing the regulation of osteogenesis and osteoclastogenesis. This study investigates the expression of these four genes, associated with bone remodeling, during fracture healing. Methods. Ovariectomized rats, as the osteoporotic group, were randomly divided into three groups: A, B, and C. Nonosteoporotic rats, as the control group, were likewise divided into three groups, A0, B0, and C0, using the same method. The rats were killed on the third day after fracture in groups A and A0, on the seventh day in groups B and B0, and on the fourteenth day in groups C and C0. Bone specimens were taken from the femoral fracture site, and the expression level of each gene in the bone specimens was detected using RT-qPCR, Western blotting, and immunohistochemistry. Results. LRP5, Runx2, and Osterix expressions were decreased in osteoporotic rat fractures and then increased over time. The expression of RANKL was elevated in osteoporotic rat bone specimens and decreased thereafter. Conclusion. The expression of the four genes varied with time after fracture, which could be associated with the various stages of bone repair. The four genes can inform practice on ideal interventions for the prevention and management of osteoporosis.

---

## Body

## 1. Introduction

Osteoporosis is a metabolic disorder associated with systemic bone aging and degradation. It is characterized by decreased bone mass, structural degradation, increased brittleness, and susceptibility to fractures [1]. Worldwide, one in five men over the age of 50 years sustained an osteoporotic fracture (OPF) in 2020 [2]. Fracture is the most serious outcome of osteoporosis. Postmenopausal osteoporosis (PMOP) occurs in women after menopause due to estrogen deficiency, resulting in bone loss and bone structure changes. Osteoporosis is three times more common in postmenopausal women than in non-postmenopausal women [3]. PMOP can significantly increase disability and mortality rates and result in great family and socioeconomic burdens [4, 5]. The development of osteoporosis involves bone resorption mediated by osteoclasts and bone formation mediated by osteoblasts, which keep bone in a state of continuous remodeling. Under physiological conditions, bone resorption and formation remain balanced; osteoporosis develops when this balance is disrupted and the bone resorption rate exceeds the formation rate [6, 7]. Osteogenesis and osteoclastogenesis have been the subjects of interest in osteoporosis. The classical Wnt/β-catenin, BMP-2/Osterix, and RANKL/RANK signaling pathways include the key genes influencing the regulation of osteogenesis and osteoclastogenesis. LRP5, Runx2, and Osterix are key osteogenic genes associated with the Wnt/β-catenin and BMP-2/Osterix signaling pathways, while RANKL is a key gene associated with the RANKL/RANK signaling pathway [8, 9]. However, the evolving expression of these genes in osteoporotic fracture remains unclear.
Therefore, the overall aim of the present study was to investigate the genes associated with the development and healing stages of osteoporotic fractures based on histological and molecular analyses of the fracture healing stages. Specifically, this study aimed to investigate the variations of LRP5, Runx2, Osterix, and RANKL in bone specimens using RT-qPCR, Western blotting, and immunohistochemistry.

## 2. Materials and Methods

Following the Animal Experimentation Ethics Guidelines, the Ethics Committee of Foshan Sanshui District People's Hospital approved the procedures of the study (Medical Research in Guangdong Province 2019003). A total of 100 Sprague–Dawley female rats (12 weeks old, weighing 200–220 g) were purchased from Guangzhou Medical University. The rats were housed in a specific pathogen-free (SPF) facility (temperature 22 ± 2°C; humidity 50 ± 10%) with a 12/12-h light/dark cycle, and standard laboratory animal food and tap water were available ad libitum. First, the rat model of ovariectomy (OVX)-induced osteoporosis was established by removing the ovaries bilaterally; after 12 weeks, we selected 90 rats weighing 280–300 g. Another 90 rats of the same age and weight were purchased as a control group (without being subjected to sham procedures). A micro-CT scanner for small animals (PerkinElmer, China) was used to determine femoral bone mineral density (BMD) in all rats and confirm that the ovariectomized rats were osteoporotic. Under anesthesia, an osteotomy was performed with an oscillating sagittal saw at the left proximal femur in all rats, and the incision was then closed. Ninety osteoporotic rats were randomly divided into groups A, B, and C, with 30 in each group. Ninety nonosteoporotic rats were randomly divided into groups A0, B0, and C0, with 30 in each group. The rats were killed on the third day after fracture in groups A and A0, on the seventh day in groups B and B0, and on the fourteenth day in groups C and C0. A bone specimen was taken from the fractured femur when the rats were killed. Specimens (bone tissue of at least 400 mg, with the intercepted bone mass greater than 0.8 cm × 0.3 cm × 0.3 cm) were collected from the fracture sites and stored immediately in liquid nitrogen.

### 2.1. RT-qPCR

Briefly, 100 mg of bone tissue was first ground in liquid nitrogen. Total RNA was then extracted from the bone tissues using the RNAiso Plus kit (Duoyang, China). cDNA was synthesized from the RNA using PrimeScript RT Master Mix (TaKaRa, Japan). The cDNA was amplified by real-time quantitative PCR (RT-qPCR) with the SYBR Premix kit (TaKaRa, Japan). Primers used in this research were synthesized by Thermo Fisher Scientific. PCR conditions were as follows: initial denaturation at 94°C for 5 minutes; 30 cycles of denaturation at 94°C for 30 seconds, annealing at 58°C for 30 seconds, and extension at 72°C for 40 seconds; and a final extension at 72°C for 10 minutes. GAPDH was used as the internal control. Relative expression was calculated with the 2^−ΔΔCt method (see the worked sketch after Section 2.4).

### 2.2. Western Blotting (WB)

As with the aforementioned method, 200 mg of bone tissue was ground. Total protein was extracted from bone tissues using RIPA lysis buffer (Beyotime, China). The protein concentration was determined using a BCA assay kit (Beyotime, China). A total of 30 µg protein/well was resolved using 10% SDS-PAGE and transferred to a PVDF membrane.
Subsequently, 5% nonfat milk was used to block the membrane at 37°C for 1 h, followed by incubation at room temperature for 1 h with primary antibodies (Abcam, UK; LRP5: no. ab223203; Runx2: no. ab92336; Osterix: no. ab209484; RANKL: no. ab239607; GAPDH: no. ab8245). The membrane was then incubated with a horseradish peroxidase-conjugated anti-rabbit IgG secondary antibody (Abcam, UK; ab150113) at room temperature for 2 h. The bands were visualized using an enhanced chemiluminescence (ECL) reagent kit (Yeasen, China) and semiquantified with ImageJ software.

### 2.3. Immunohistochemical Analysis

Intercepted bone mass was thawed, fixed in 10% neutral formalin for 48 hours, and embedded in paraffin after decalcification in 10% EDTA solution (Zhongshan Jinqiao, China). The specimens were then cut into 5 μm thick sections and treated with 3% hydrogen peroxide for 10 min. Afterward, the sections were rinsed with phosphate-buffered saline, incubated sequentially with primary and secondary antibodies (Abcam, UK; the same as for WB), and exposed to DAB (TaKaRa, Japan). The sections were counterstained with hematoxylin solution (TaKaRa, Japan). Ten visual fields were randomly selected and observed under a high-magnification microscope (Olympus, Japan), and the number of positively stained cells was counted.

### 2.4. Data Analysis

All experiments were performed at least three times. Continuous data were expressed as mean ± standard deviation. A one-way analysis of variance was used for multigroup comparison, and differences between two groups were determined by the LSD t-test. P<0.05 was considered statistically significant. Data were analyzed using GraphPad Prism 5 and SPSS v22.0 (Chicago, USA).
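As referenced in Section 2.1, relative expression was computed with the 2^−ΔΔCt method. The worked sketch below illustrates the calculation; the Ct values are hypothetical placeholders, not study measurements, with GAPDH as the internal control and the control group as the calibrator, as in the methods.

```python
# Minimal worked example of the 2^-ddCt relative-expression method.
# Ct values are hypothetical placeholders, not study data.
target_ct_op, ref_ct_op = 24.1, 18.0      # e.g., LRP5 and GAPDH, osteoporotic sample
target_ct_ctrl, ref_ct_ctrl = 22.8, 18.1  # same genes, control sample

d_ct_op = target_ct_op - ref_ct_op        # normalize target to GAPDH
d_ct_ctrl = target_ct_ctrl - ref_ct_ctrl
dd_ct = d_ct_op - d_ct_ctrl               # calibrate to the control group

fold_change = 2 ** (-dd_ct)               # <1 means lower expression than control
print(f"ddCt = {dd_ct:.2f}, fold change = {fold_change:.2f}")
```

With these placeholder values, ΔΔCt = 1.4 and the fold change is about 0.38, that is, roughly a 2.6-fold reduction relative to the control.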
## 3. Results

### 3.1. Weight and BMD of Rats

None of the rats died during the experiment. The weights (g) of groups A, B, C, A0, B0, and C0 were 285.3 ± 2.5, 287.3 ± 3.6, 283.6 ± 7.4, 284.6 ± 6.2, 287.3 ± 8.9, and 286.5 ± 8.0, respectively, and no significant differences existed between any two groups (P>0.05). The weights (g) of the whole osteoporotic and control groups were 286.3 ± 6.4 and 286.0 ± 2.6, respectively, without a significant difference (P>0.05). The BMD values (g/cm2) of groups A, B, C, A0, B0, and C0 were 0.42 ± 0.25, 0.44 ± 0.06, 0.43 ± 0.14, 0.21 ± 0.02, 0.23 ± 0.09, and 0.22 ± 0.08, respectively. There were no significant differences between any two of groups A, B, and C, nor between any two of groups A0, B0, and C0 (P>0.05). The BMD values (g/cm2) of the whole osteoporotic group and the control group were 0.43 ± 0.18 and 0.22 ± 0.05, respectively, and the difference between them was significant (P<0.05).

### 3.2. Expressions of LRP5, Runx2, Osterix, and RANKL in the Osteoporotic and Control Groups Using RT-qPCR

RT-qPCR results revealed that the levels of LRP5, Runx2, and Osterix in the bone tissues were lower in the osteoporotic group than in the control group (P<0.05), while the level of RANKL was higher in the osteoporotic group than in the control group (P<0.05) (Figure 1).

Figure 1 Relative mRNA expression of LRP5, Runx2, Osterix, and RANKL in bone tissues of the control and osteoporotic groups. #P<0.05.

### 3.3. Expressions of LRP5, Runx2, Osterix, and RANKL in the Groups Using RT-qPCR and WB

The levels of LRP5, Runx2, Osterix, and RANKL differed significantly between groups A0 and A, B0 and B, and C0 and C (P<0.05). However, the differences among groups A0, B0, and C0 were insignificant across the four genes (P>0.05). The expressions of LRP5 and Runx2 were lowest in group A (the third day) and higher in groups B (the seventh day) and C (P<0.05); Osterix expression was highest in group C (the fourteenth day) compared with groups A and B (P<0.05); and RANKL expression was highest in group A (the third day) compared with groups B and C (P<0.05).
The differences in LRP5 and Runx2 between groups B and C, in Osterix between groups A and B, and in RANKL between groups B and C were not significant (P>0.05) (Figures 2–4).

Figure 2 Relative mRNA expression of LRP5, Runx2, Osterix, and RANKL in bone tissues of groups A0, B0, C0, A, B, and C. #P<0.05.

Figure 3 Western blotting of LRP5, Runx2, Osterix, and RANKL protein expression in bone tissues of groups A0, B0, C0, A, B, and C. #P<0.05.

Figure 4 Relative protein expression of LRP5, Runx2, Osterix, and RANKL in bone tissues of groups A0, B0, C0, A, B, and C. #P<0.05.

### 3.4. Immunohistochemical Staining of LRP5, Runx2, Osterix, and RANKL Proteins

Representative immunohistochemical staining is shown in Figures 5 and 6. The number of positively stained cells was counted, and the levels of LRP5, Runx2, Osterix, and RANKL showed the same pattern of variation as described above.

Figure 5 Immunohistochemical staining of LRP5, Runx2, Osterix, and RANKL in bone specimens of groups A0, B0, C0, A, B, and C. Scale bar = 50 μm.

Figure 6 Histogram of the quantitative analysis of immunohistochemical staining of LRP5, Runx2, Osterix, and RANKL in bone specimens (groups A0, B0, C0, A, B, and C). #P<0.05.
## 4. Discussion

Patients with osteoporosis are prone to OPF [10]. Roughly 20 percent of patients die from complications of osteoporosis, and about 20 percent of OPF patients suffer recurrent fractures of the proximal femur [11]. In China, ∼83.9 million people are estimated to suffer from osteoporosis, and this number, when osteopenia is included, is projected to increase to ∼212 million by 2050 [12]. Osteoporosis and postmenopausal osteoporotic fracture (PMOPF) have become critical public health problems worldwide. The present study investigated the mRNA and protein expressions of LRP5, Runx2, Osterix, and RANKL in bone specimens of osteoporotic rats. Osteogenesis and osteoclastogenesis are regulated by the Wnt/β-catenin, BMP-2/Runx2/Osterix, and RANKL/RANK signaling pathways [8, 9, 13–17]. Based on histological and molecular analyses, the early stage of fracture healing can be divided into the early inflammatory response stage (one day after fracture), the nonspecific bone anabolic stage (three days after fracture), the nonspecific catabolic stage (three days to one week after fracture), and the specific bone anabolic stage (more than one week after fracture). The entire fracture healing phase can be divided into three stages: hematoma organization, original callus formation, and callus reconstruction and molding. The hematoma organization stage is typically completed within 2 weeks after fracture [18, 19]. Consequently, we stratified the osteoporotic rats into groups reflecting the healing stages: group A (the third day), group B (the seventh day), and group C (the fourteenth day). The control group was divided into groups A0 to C0 using the same method. Our results showed that LRP5, Runx2, and Osterix expressions decreased in osteoporotic rat fractures and then increased over time, whereas RANKL expression was elevated in osteoporotic rat bone specimens and decreased thereafter. In the Wnt/β-catenin signaling pathway, the Wnt, LRP5/6, and FZD complexes recruit Dvl and degradative complexes, which inhibit the phosphorylation of β-catenin by GSK-3β. Nonphosphorylated β-catenin accumulating in the nucleus activates Runx2 and other downstream genes, resulting in osteogenesis [8, 20].
LRP5 exists on the surface membranes of numerous cells [21]. Inhibition of LRP5 impairs the proliferation of osteoblasts, affecting bone formation [22]. Glinka et al. [23] revealed that LGR5 regulates embryonic patterning and stem cell proliferation as a receptor for R-spondin, an agonist of Wnt/β-catenin signaling. In the present study, we observed substantial underexpression of LRP5 in the bone specimens of osteoporotic rats, which is consistent with this theory. Runx2 is a highly specific biomarker of osteogenesis, and its expression is essential for bone formation and development. In particular, Runx2 upregulates the transcription of several mineralization-related genes in osteoblasts [24, 25]. The Wnt/β-catenin pathway directly regulates Runx2, strengthens osteogenic differentiation, and accelerates fracture healing [24]. In the present study, we observed significant underexpression of Runx2 in osteoporotic rats, which reflects inadequate osteogenesis. LRP5 regulates osteoblastogenesis and bone formation by activating the expression of Runx2 [26]. The expression levels of LRP5 and Runx2 were simultaneously lowest in group A and then rose synchronously in groups B and C, suggesting that the variations of LRP5 and Runx2 were concordantly correlated with the osteogenic stage. BMP-2 modulates transcription in the BMP-2/Runx2/Osterix pathway by activating the expression of Smads [27]. Smads regulate the transcription of several target genes and induce the expression of Runx2. Osterix is a key osteogenic gene downstream of Runx2 [28], and Runx2 can upregulate the expression of Osterix [29]. Osterix was underexpressed in osteoporotic rats, suggesting that it is a critical downstream osteogenic gene influencing the healing of osteoporotic fractures. A study using mouse models revealed that cartilage and bone formation commences seven days after fracture and is sustained until the tenth day [30]. Osterix was generally expressed in the osteoblasts adjacent to the injury site fourteen days after fracture, which promoted the hardening of cartilage at the injured site; furthermore, numerous studies have demonstrated that BMP exerts a unique osteogenic effect that consolidates the fibrous junction within two weeks after fracture [8, 9]. In the present study, Osterix expression was highest in group C (the fourteenth day) compared with groups A and B, consistent with these findings. The OPG/RANKL/RANK pathway is essential for regulating the differentiation of osteoclasts and involves RANKL and RANK (on their respective cell membranes) and OPG (a decoy receptor). Given the high affinity between OPG and RANKL, OPG can competitively inhibit the interaction of RANKL and RANK, thereby disrupting the differentiation of osteoclasts [10]. The differentiation and maturation of osteoclasts are stimulated exclusively by RANKL [31]. We observed that RANKL was overexpressed in osteoporotic fractures, consistent with previous findings [32].
RANKL expression was highest in group A (the third day after fracture) and decreased thereafter, reflecting the role of RANKL in osteoclast activity during the healing of osteoporotic fractures. Although we did not investigate the mechanisms underlying the abnormal variations of LRP5, Runx2, Osterix, and RANKL in the bone specimens of osteoporotic rats, our findings provide strong evidence that the Wnt/β-catenin, BMP-2/Runx2/Osterix, and RANKL/RANK pathways regulate osteogenesis and osteoclastogenesis in osteoporotic fractures of rats. The expressions of the four genes associated with bone remodeling during fracture healing varied with time after fracture, which could be associated with the various stages of bone repair. The characteristic variations in the expression of the four genes may inform future interventions for preventing PMOP and managing PMOPF. --- *Source: 1022078-2022-10-29.xml*
# Analysis of Factors Related to Adolescents’ Physical Activity Behavior Based on Multichannel LSTM Model **Authors:** Guiling Chang; Jinfeng Liu **Journal:** Computational Intelligence and Neuroscience (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1022421 --- ## Abstract The health problems of teenagers are closely related to their sports behavior. To understand the factors relevant to teenagers’ sports behavior, we use a variety of research methods to give a brief theoretical analysis of these factors and analyze the impact of the model on teenagers’ sports behavior at different levels. The model analyzes the factors affecting youth sports behavior, reveals the relationships between these factors, and suggests corresponding intervention strategies for developing youth sports practice. Based on this analysis of the relevant factors, this paper puts forward a multichannel LSTM model and shows that it is effective in analyzing the factors affecting teenagers’ sports behavior. --- ## Body ## 1. Introduction The obesity rate in China has doubled in the last 30 years, according to The Times, UK. The number of obese and overweight adults in developing countries increased from 250 million in 1980 to 904 million in 2008, and China now has a quarter of the obese and overweight population [1]. This study analyzes the factors influencing the physical activity behavior of adolescent students to provide a theoretical reference for physical education programs aimed at improving adolescent exercise behavior and promoting adolescent health [2]. At this stage, various factors influence adolescent physical activity behavior. Individual factors are the primary factors of adolescent physical exercise behavior and are necessary for exercise behavior: individual physiological factors determine exercise behavior, and psychological factors promote the development of individual motor behavior [3]. Individual physiological factors are among the important factors influencing adolescent physical exercise behavior, and factors such as height, weight, physical health status, and body size are common research variables. For example, a study on the formation of exercise habits among college students [4] pointed out that the motivation to lose weight and build physical beauty can promote changes in college students’ exercise behavior, and that individual athletic ability, influenced by congenital genetic factors, can also affect exercise behavior. The amount of physical activity of adolescents has been studied in terms of gender, age, height, and weight, along with the influence of physiological, psychological, sociocultural, and environmental factors [5]. As far as psychological factors are concerned, there is a relationship between individual achievement and individual personality. The study in [6] pointed out that adolescents who are adventurous and challenge-seeking may prefer competitive sports programs, where exercise provides satisfaction and pleasure, and physical activity gives individuals the opportunity to demonstrate their level of athletic ability, strengthening their self-esteem and pride.
Motivation, which energizes and sustains behavior and determines its intensity, is an essential factor in individuals’ participation in sports. Family factors are both physical environmental factors that provide the conditions and environment for adolescents’ exercise and internal psychological factors that promote individual adolescents’ intrinsic motivation and increase their level of exercise motivation [7]. Parents’ exercise awareness and behavior, cultural level, and socioeconomic status are important influences. School physical education is the main form of physical activity behavior of young students and the main channel of physical activity for young people [8]. A reasonable physical education program can stimulate students’ motivation to exercise, induce behavior change, and satisfy students’ self-actualization and sense of accomplishment, while also explaining the causes of behavior change and its internal mechanism of action [9]. However, a common phenomenon in primary and secondary schools in China is that the number of physical education classes per week, the number of recesses, and participation in school sports all decrease with grade level, and school sports have become the main venue for students who have neither the awareness nor the ability to exercise [10]. Physical education teachers are the implementers of physical education and the organizers of extracurricular physical activities, and the amount of physical activity of young students in schools is closely related to their physical education teachers. Issues such as the number of games and competitions organized in school, the physical culture of the school, and exercise peers are the main factors that limit students’ exercise behavior [11]. The squeezing out of physical education classes is an important factor currently restricting the development of physical education in schools: “exam-oriented education” has seriously skewed educational values, with many schools unilaterally pursuing the promotion rate and arbitrarily squeezing and diverting students’ physical education classes and extracurricular physical activities. Social support, the community exercise environment, and safety issues in the community are the main social-level factors that govern youth physical activity behavior. Parental support, sibling support, support from school teachers, and support from peers and friends are the main components of social support. Adolescents’ perceived social support is closely related to their physical activity behavior and has a direct or indirect effect on self-efficacy. Researchers have compared parent-child communication, parental support, and psychological risk factors in obese and normal-weight adolescents and found that parent-child communication and parental support had a significant impact on normal-weight adolescents, while parental support had a more significant impact on obese adolescents. The lack of sports facilities is the main objective factor affecting individual exercise behavior. Studies on community exercise behavior have found that the lack of public sports facilities in the community is the main factor affecting the physical exercise behavior of urban adolescents.
The convenience of sports facilities and field equipment around the home, the quality of the exercise environment, and the availability of sports activities all influence the form of adolescent physical exercise behavior, the choice of sports, exercise attitude, exercise awareness, exercise behavior, effort, and persistence. ## 2. Related Work The factors influencing adolescent children’s physical activity behaviors have been studied in terms of parents’ subjective beliefs, objective material conditions, and the work-life environment [12]. The study showed that parents’ physical activity behaviors play a major role in promoting the formation of adolescents’ sports perceptions, and that parents’ actual exercise behaviors and exercise perceptions can contribute to adolescent children’s internal motivation to exercise [13]. The stronger the family members’ beliefs about exercise, the better the family exercise climate, the stronger the family members’ motivation to exercise, and the greater the likelihood of physical activity behaviors. In terms of family socioeconomic status, a study of family cultural level [14] explored the effects of different levels of parental education on youth physical activity behavior, and the results showed that the effect of family cultural level on youth exercise perceptions reached significant levels. The higher the literacy level of parents, the greater the likelihood of actively acquiring family education knowledge and the more scientific the attitude toward educating their children. Regarding economic status, [15] concluded that parents’ exercise commitment was positively associated with adolescents’ exercise behavior, but the level of parental exercise commitment perceived by adolescents was low, and adolescent children’s exercise behavior would develop better if the intensity of commitment were increased. Parental encouragement and support are particularly important for the physical activity behaviors of 13- to 16-year-olds, and physical activity behaviors need to be sustained with parental material support. Safety issues and the safety of school sports equipment are key factors that limit physical activity and have a direct impact on youth physical activity behaviors. Studies by foreign scholars on safety have focused on whether relevant sports equipment meets national safety standards, whether schools regularly inspect sports equipment, the safe sports environment created by the joint efforts of schools and society, and the convenience of access to drinking water and rest areas during individual adolescent physical activity. The study in [16] found that low levels of community safety were strongly associated with physical activity levels and that residents of communities with lower levels of safety had higher rates of obesity and larger BMI indices. Factor 1 mainly reflects motivation, interest in physical activity, sense of achievement in sports, enjoyable experience of sports, positive expectation of sports outcomes, positive self-evaluation, and attitude toward sports knowledge. Among them, the sense of achievement in sports and interest in sports were the most highly correlated with F1, indicating that they are the more important factors in explaining the physical activity behavior of adolescents [17]. The sense of achievement in sports can lead to psychological tendencies of pleasure and success, which stem from the individual’s active participation in sports.
Sports behavior formed by sports interest can help individuals gain more knowledge about sports and health, improve motor skills, promote healthy physical and mental development, and produce pleasant emotional experiences. In addition, sports motivation is a factor that should not be neglected: as shown in a study by the authors of [18], motivation for physical activity is positively correlated with exercise adherence, and extrinsic motivation motivates people to participate in physical activity. Attitudes toward physical education knowledge had a slightly lower correlation with factor 1, suggesting less importance in comparison with other factors. The study in [1] showed that students’ perceived value and role of physical activity correlated highly (r=0.87) with long-term adherence to physical activity, indicating that the perception of the value and role of physical education itself is an important factor influencing students’ adherence. This shows that the attitude factor of physical education knowledge plays somewhat different roles in two different stages of physical activity behavior. Factor 2 concentrates on the individual’s physical quality, health status, and the degree of importance attached to them. Among these, an individual’s health status and physical fitness are highly correlated, and it can be said that physical fitness and health status are necessary prerequisites for realizing a variety of psychological factors and form the physical basis of physical exercise behavior. Therefore, physical health factors and psychological factors complement each other: physical health factors are the basis of psychological factors, and psychological factors play a role in promoting physical health [19].

## 3. Methods

Our single-channel LSTM-based method for analyzing factors related to youth physical activity behavior builds on the LSTM architecture: LSTMs use memory units to avoid vanishing and exploding gradients during backpropagation, can learn long-term dependencies, and make full use of historical information. The LSTM has been improved and extended, making it widely used in natural language processing, speech recognition, and other fields. The LSTM unit is shown in Figure 1.

Figure 1 LSTM unit.

The updating process of the LSTM unit at time $t$ is as follows:

$$
\begin{aligned}
i_t &= \sigma\left(W_i x_t + U_i h_{t-1} + V_i c_{t-1}\right),\\
\tilde{c}_t &= \tanh\left(W_c x_t + U_c h_{t-1}\right),\\
f_t &= \sigma\left(W_f x_t + U_f h_{t-1} + V_f c_{t-1}\right),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\\
o_t &= \sigma\left(W_o x_t + U_o h_{t-1} + V_o c_t\right),\\
h_t &= o_t \odot \tanh\left(c_t\right),
\end{aligned}
\tag{1}
$$

where $x_t$ is the input to the memory unit, $\sigma$ is the logistic sigmoid function, and $\odot$ denotes the elementwise product between vectors. $i_t$, $o_t$, $f_t$, and $c_t$ are the values of the input gate, output gate, forget gate, and memory cell at time $t$, respectively; $\tilde{c}_t$ is the candidate memory state of the cell, and $h_t$ is the output of the LSTM unit at time $t$.
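For concreteness, here is a minimal NumPy sketch of the update in equation (1); the dimensions follow Table 3, the peephole weights V are implemented as elementwise (diagonal) vectors as is common, and the random initialization is purely illustrative rather than the authors' implementation.

```python
# Minimal NumPy sketch of the LSTM update in equation (1).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One time step; P maps names to the weight arrays W*, U*, V*."""
    i = sigmoid(P["Wi"] @ x_t + P["Ui"] @ h_prev + P["Vi"] * c_prev)  # input gate
    f = sigmoid(P["Wf"] @ x_t + P["Uf"] @ h_prev + P["Vf"] * c_prev)  # forget gate
    c_cand = np.tanh(P["Wc"] @ x_t + P["Uc"] @ h_prev)                # candidate state
    c = f * c_prev + i * c_cand                                       # new memory cell
    o = sigmoid(P["Wo"] @ x_t + P["Uo"] @ h_prev + P["Vo"] * c)       # output gate
    return o * np.tanh(c), c                                          # (h_t, c_t)

d_in, d_hid = 120, 128  # input length and LSTM output dimension from Table 3
rng = np.random.default_rng(0)
P = {f"W{g}": 0.01 * rng.standard_normal((d_hid, d_in)) for g in "ifco"}
P.update({f"U{g}": 0.01 * rng.standard_normal((d_hid, d_hid)) for g in "ifco"})
P.update({f"V{g}": 0.01 * rng.standard_normal(d_hid) for g in "ifo"})  # peepholes
h_t, c_t = lstm_step(rng.standard_normal(d_in), np.zeros(d_hid), np.zeros(d_hid), P)
```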
For the analysis of the factors associated with youth physical activity behavior, we obtain a balanced sample of the factors associated with each behavior and then use a single-channel LSTM as the classification method, as shown in Figure 2. The input feature vectors are passed through the LSTM layer to obtain high-dimensional vectors, which learn deeper features that better describe the samples. The fully connected layer receives all the outputs from the previous layer, weights and sums these output vectors, and propagates the weighted outputs through the excitation function to the Dropout layer. In this experiment, the excitation function is the ReLU, shown in the following equation:

$$g(x) = \max(0, x), \tag{2}$$

where $x$ is the output vector; the ReLU function sets all values less than 0 to 0 and encourages moderate sparsity. The Dropout layer is given by the following equation:

$$g = h^{*} \odot D(p), \tag{3}$$

where $h^{*}$ is the output of the fully connected layer and $D(p)$ is a random binary mask governed by the dropout parameter $p$.

Figure 2 Single-channel LSTM classifier framework.

Finally, the output of the single-channel LSTM model is passed to a Softmax output layer, which classifies the samples:

$$\mathrm{label}_{\mathrm{pred}} = \arg\max_{i} P\left(Y = i \mid x, W, U, V\right), \tag{4}$$

where $x$ is the output vector of the previous layer, $i$ ranges over the candidate labels, $W, U, V$ are the coefficient matrices in the LSTM update equations, and $\mathrm{label}_{\mathrm{pred}}$ is the predicted label with the highest posterior probability. Applying random undersampling with a single-channel LSTM to this analysis has an obvious drawback: undersampling selects only some samples from the majority classes and discards a large number of unselected samples. Our method instead undersamples the imbalanced data several times to obtain multiple sets of balanced samples, trains an LSTM on each set of balanced samples, and uses a Merge layer to jointly learn the multiple LSTMs. The multichannel LSTM classifier framework is shown in Figure 3.

Figure 3 Multichannel LSTM classifier framework.

In the process of model training, we minimize

$$J = -\sum_{i=1}^{N}\sum_{l=1}^{m} \mathbf{1}\{t_i = l\}\log y_l + \frac{\lambda}{2N}\sum_{k=1}^{n}\left(\sum_{\varepsilon\in\omega}\left\|W_{\varepsilon}^{k}\right\|_F^2 + \sum_{\varepsilon\in\mu}\left\|U_{\varepsilon}^{k}\right\|_F^2 + \sum_{\varepsilon\in\nu}\left\|V_{\varepsilon}^{k}\right\|_F^2\right). \tag{5}$$

In the loss function, in addition to minimizing the negative log-likelihood, L2 regularization of $W, U, V$ is added: because the parameters of the Softmax function are redundant, the minima are not unique, and the regularization term makes the minimum unique. The penalty factor $\lambda$ regulates the weight of the regularization term; the larger its value, the greater the penalty on large parameters.
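The following is a hedged Keras sketch of this multichannel design, not the authors' exact implementation: it assumes each input is treated as a length-120 sequence of scalar features (the paper does not specify the tensor layout), realizes the Merge layer as a concatenation, and uses a kernel L2 penalty as a stand-in for the regularization in equation (5). Layer sizes follow Table 3.

```python
# Sketch of a 5-channel LSTM classifier with a merge (concatenate) layer.
# Tensor layout, optimizer, and regularization strength are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

N_CHANNELS, SEQ_LEN, N_CLASSES = 5, 120, 7

inputs, branches = [], []
for _ in range(N_CHANNELS):                      # one branch per balanced sample set
    x_in = layers.Input(shape=(SEQ_LEN, 1))
    h = layers.LSTM(128, kernel_regularizer=regularizers.l2(1e-4))(x_in)
    h = layers.Dense(64, activation="relu")(h)   # fully connected + ReLU, eq. (2)
    h = layers.Dropout(0.5)(h)                   # Dropout, eq. (3)
    inputs.append(x_in)
    branches.append(h)

merged = layers.concatenate(branches)            # joint learning across channels
out = layers.Dense(N_CLASSES, activation="softmax")(merged)  # Softmax, eq. (4)

model = tf.keras.Model(inputs=inputs, outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # negative log-likelihood
              metrics=["accuracy"])
model.summary()
```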
## 4. Experiments

The analysis covered the family’s lifestyle, the way of interacting with the children, the parents’ sports knowledge structure, sports habits, sports awareness and behavior, the level of support from relatives, and family recreation, according to the different influencing factors. The reliability statistics under the different influencing factors are given in Table 1.

Table 1 Reliability statistics for each subscale of the influencing factors of youth physical activity behavior.

| Scale level | Cronbach α coefficient | Standardized α coefficient | # Subscale question items |
| --- | --- | --- | --- |
| Individual level | 0.878 | 0.879 | 12 |
| Family level | 0.915 | 0.916 | 11 |
| School level | 0.902 | 0.903 | 9 |
| Community level | 0.918 | 0.919 | 13 |
| Policy level | 0.939 | 0.939 | 10 |

The factors with a high correlation with factor 1 include family sports atmosphere, family lifestyle, and tutoring style. This indicates that these factors exert an important influence on adolescent physical exercise behavior: such behavior is often closely linked to the sports atmosphere of the family, and parents’ sports ideology, attitude, awareness, and behavioral habits all have a subtle influence on their children. The results in Table 2 show that exercisers and nonexercisers differ significantly in the dimensions of physical exercise attitudes.

Table 2 Summary of the scores of adolescents’ attitudes toward physical exercise.

| Dimension | M (Exercisers, N = 100) | SD (Exercisers) | M (Nonexercisers, N = 251) | SD (Nonexercisers) | T | P |
| --- | --- | --- | --- | --- | --- | --- |
| Behavioral attitudes | 31.0500 | 4.759 | 26.4548 | 4.915 | 7.968 | 0.0001 |
| Target attitudes | 48.6500 | 8.042 | 45.7812 | 6.051 | 3.678 | 0.0001 |
| Behavioral cognition | 29.0600 | 3.555 | 27.9525 | 3.309 | 0.878 | 0.004 |
| Behavioral habits | 37.1500 | 6.305 | 31.1995 | 6.405 | 7.885 | 0.0001 |
| Behavioral intention | 27.3200 | 3.502 | 24.1875 | 4.018 | 6.815 | 0.0001 |
| Emotional experience | 38.1800 | 5.855 | 34.2235 | 5.845 | 5.715 | 0.0001 |
| Behavioral control | 25.1500 | 4.935 | 20.9845 | 4.161 | 7.989 | 0.0001 |
| Subjective standards | 18.3800 | 4.295 | 20.7575 | 3.999 | −4.865 | 0.0001 |

Based on the above correlation analysis of each factor of exercise attitudes, and of exercise attitudes and physical behavior, the path relationships among the variables were further explored in conjunction with the theory on which this study relies. The examined path relationships were those of behavioral intentions on physical activity behaviors, and of behavioral habits, behavioral attitudes, and sense of behavioral control on physical activity behaviors; path diagrams were constructed on this basis. Behavioral attitudes can affect sport behavior only through the intermediate variable of behavioral perceptions; see Figure 4.

Figure 4 Path diagram of the influence of exercise attitude on physical behavior.

The specific parameters we used are shown in Table 3.

Table 3 Parameter settings in the LSTM.

| Parameter | Value |
| --- | --- |
| Input vector length | 120 |
| LSTM layer output dimension | 128 |
| Fully connected layer output dimension | 64 |
| Dropout parameter | 0.5 |
| # iterations | 20 |

Accuracy and the geometric mean (G-mean) were used to measure classification effectiveness. The geometric mean is calculated as shown in the following equation:

$$G\text{-}\mathrm{mean} = \left(\prod_{i=1}^{n} \mathrm{Recall}_i\right)^{1/n}, \tag{6}$$

where $\mathrm{Recall}_i$ denotes the recall of category $i$ and $n$ is the number of categories; $n$ is taken as 7 in this experiment.
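A minimal sketch of this metric, computed from per-class recalls with scikit-learn on hypothetical labels and predictions:

```python
# G-mean of equation (6): n-th root of the product of per-class recalls.
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1, 0, 2])  # hypothetical labels
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 1, 2, 2])  # hypothetical predictions

recalls = recall_score(y_true, y_pred, average=None)  # one recall per class
g_mean = float(np.prod(recalls) ** (1.0 / len(recalls)))
print(f"Recalls: {recalls}, G-mean: {g_mean:.3f}")
```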
In the experiment, we implemented the following methods for the analysis of factors related to youth physical activity behavior:

(1) Full training + maximum entropy (FullT + Maxent): all the remaining samples of each class are used as training samples, and the maximum entropy classifier is used.

(2) Random oversampling + maximum entropy (OverS + Maxent): let the maximum number of remaining samples over the classes (the preference class) be $n_{\max}$; random oversampling is used to draw $n_{\max}$ samples from the remaining samples of each class as training samples, and the maximum entropy classifier is used.

(3) Random undersampling + maximum entropy (UnderS + Maxent): let the number of remaining samples of the second smallest class (the surprise class) be $n_{\min}$; $n_{\min}$ samples from the remaining samples of each class are drawn via random undersampling as training samples, and the maximum entropy classifier is used.

(4) Random undersampling + single-channel LSTM (UnderS + LSTM): the sampling method in (3) is used to obtain the training samples, and the classifier is a single-channel LSTM.

(5) Random undersampling + single-channel CNN (UnderS + CNN): the sampling method in (3) is used to obtain the training samples, and the classifier is a single-channel CNN.

(6) Random undersampling + integrated learning (Ensemble-Maxent): multiple sets of training samples (5 sets in this experiment) are obtained using the sampling method in (3), and multiple base classifiers are built. Integrated learning is then performed by fusing the results of these base classifiers, the base classifier being the maximum entropy classifier.

(7) Random undersampling + multichannel LSTM (Multi-LSTM): multiple sets of training samples (5 sets in this experiment) are obtained as in (6), and the classifier is a multichannel (5-channel) LSTM neural network [20–22].

(8) Random undersampling + multichannel CNN (Multi-CNN): multiple sets of training samples (5 sets in this experiment) are obtained as in (6), and the classifier is a multichannel (5-channel) CNN.

Figure 5 compares the classification performance of the full-training, random-oversampling, and random-undersampling methods in the analysis of factors related to youth physical activity behavior. The random undersampling method is clearly better than the other two, and its advantage is particularly pronounced in G-mean. The main reason is that under full training and random oversampling, the classifier is strongly biased toward the classes with more samples, at the expense of the classes with fewer samples.

Figure 5 Comparison of classification performance of traditional methods for analyzing factors related to youth physical activity behavior.

The error accumulation results of this method are shown in Figure 6.

Figure 6 Error accumulation results.

Next, we compare the classification performance of maximum entropy and the LSTM under random undersampling. Figure 7 shows that the single-channel LSTM outperforms maximum entropy, improving accuracy by 1.8% and G-mean by 1.2%. The main reason, in our analysis, is that the LSTM can use historical information and learn long-term dependencies between samples. In addition, we implemented a convolutional neural network (CNN)-based classification method. From Figure 7 we can see that the performance of the LSTM and the CNN is comparable, with the LSTM slightly ahead in accuracy and the CNN slightly ahead in G-mean.

Figure 7 Comparison of classification performance of maximum entropy and neural networks.

For the analysis of factors related to youth physical activity behavior, the undersampling-based integrated learning approach performs better because it uses all labeled samples while keeping the training samples balanced. Next, we compare the classification performance of the undersampling-based integrated learning approach with our proposed multichannel LSTM approach, as shown in Figure 8.

Figure 8 Comparison of classification performance between integrated learning and multichannel neural networks.

The results in Figure 8 show that, when the hidden-layer features are fused using sum, the multichannel LSTM-based classification method improves on the integrated learning method by 1.5% in accuracy and 2.8% in G-mean; when the hidden features are fused using concatenate, it improves by 1.0% in accuracy and 2.1% in G-mean (Figure 9).
These results indicate that the multichannel LSTM-based classification method is very effective for analyzing factors related to youth physical activity behavior.

Figure 9 The importance of different factors.

## 5. Conclusion

The factors affecting adolescents' physical activity behavior include individuals' internal factors as well as the family, school, and social factors closely related to the individual. The family sports environment and atmosphere, together with parents' sports awareness, directly influence adolescents' physical activity behavior; reliable and effective educational measures should therefore be provided for the new generation of young parents, strengthening parents' or guardians' educational interventions on the importance of regular physical activity for adolescents, in keeping with the country's particular conditions. We propose a multichannel LSTM-based analysis of the factors related to adolescents' physical activity behavior and find the method to be very effective in the analysis of these factors. It is hoped that the subsequent trinity of school, family, and society, under the guidance of relevant policies, will promote the physical fitness of adolescents and improve their health behaviors.

---

*Source: 1022421-2022-07-04.xml*
# Prognostic Modeling of Lung Adenocarcinoma Based on Hypoxia and Ferroptosis-Related Genes

**Authors:** Chang Liu; Yan-Qin Ruan; Lai-Hao Qu; Zhen-Hua Li; Chao Xie; Ya-Qiang Pan; Hao-Fei Li; Ding-Biao Li

**Journal:** Journal of Oncology (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1022580

---

## Abstract

Background. It is well known that hypoxia and ferroptosis are intimately connected with tumor development. The purpose of this investigation was to determine whether they yield a prognostic signature. To this end, genes related to hypoxia and ferroptosis scores were investigated using bioinformatics analysis to stratify the risk of lung adenocarcinoma. Methods. Hypoxia and ferroptosis scores were estimated from The Cancer Genome Atlas (TCGA) database-derived cohort transcriptome profiles via the single sample gene set enrichment analysis (ssGSEA) algorithm. Candidate genes associated with hypoxia and ferroptosis scores were identified using weighted correlation network analysis (WGCNA) and differential expression analysis. The prognostic genes were discovered using the Cox regression (CR) model in conjunction with the LASSO method and were then utilized to create a prognostic signature. The efficacy, accuracy, and clinical value of the prognostic model were evaluated using an independent validation cohort, Receiver Operating Characteristic (ROC) curves, and a nomogram. Analyses of function and immune cell infiltration were also carried out. Results. Here, we identified 152 differentially expressed candidate genes related to hypoxia and ferroptosis for prognostic modeling in The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD) cohort, and these genes were further validated in the GSE31210 cohort. We found that the 14-gene prognostic model, utilizing MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A, performed well in predicting prognosis in lung adenocarcinoma. ROC and nomogram analyses showed that risk scores based on the prognostic signature provided desirable predictive accuracy and clinical utility. Moreover, gene set variation analysis showed differential enrichment of 33 hallmark gene sets between the risk groups. Additionally, our results indicated that a higher risk score is associated with more fibroblasts and activated CD4 T cells but fewer myeloid dendritic cells, endothelial cells, eosinophils, immature dendritic cells, and neutrophils. Conclusion. Our research identified a 14-gene signature and established a nomogram that accurately predicted prognosis in patients with lung adenocarcinoma. Clinical decision-making and therapeutic customization may benefit from these results, which may serve as a valuable reference in the future.

---

## Body

## 1. Introduction

Lung cancer is one of the most frequent malignancies, with high mortality and poor prognosis [1, 2]; 80% of diagnosed lung malignancies are non-small cell lung cancer (NSCLC) [3]. Lung adenocarcinoma (LUAD) accounts for nearly 40% of NSCLC cases [4, 5], and its incidence is continually increasing [6]. In recent years, several therapeutic advances have been made, including targeted therapies and emerging immunotherapy [7, 8]. Although both methods are effective in a restricted range of lung cancer subtypes, the survival rate for LUAD is still poor [9]: only 18% of patients survive longer than 5 years [10].
As a result, the search for valid biomarkers might lead to individualized diagnosis and therapy for LUAD patients [11]. Cancer tissue has many characteristic features, including an accelerated cell cycle, genomic alterations, increased cell mobility and invasive growth, inability to undergo normal apoptosis, and loss of normal cell functions. These physiological and pathological characteristics make tumors difficult to treat.

Ferroptosis has recently been studied as a relatively new type of cell death, often accompanied by significant iron buildup and lipid peroxidation in dying cells [12]. It can be distinguished from apoptosis, necrosis, and autophagy by certain key characteristics: it is iron-dependent and induced by the buildup of harmful lipid reactive oxygen species, and polyunsaturated fatty acids are consumed during the process [12]. With rapid progress in understanding the role of iron ions in cancer, new prospects have emerged for their use in cancer therapy [13]. The expression of ferroptosis suppressor protein 1 (FSP1) in lung cancer cell lines is related to resistance to ferroptosis, suggesting that overexpression of FSP1 may be a route of ferroptosis escape [14]. In addition, MAPK pathway activation is associated with susceptibility to ferroptosis triggered by cystine deprivation in NSCLC cell lines [15]. Alvarez et al. [16] recently found that inhibiting the iron–sulfur cluster biosynthesis enzyme NFS1 induced ferroptosis in vitro and slowed tumor development in LUAD. Additionally, Liu et al. [17] discovered that brusatol, an inhibitor of NRF2, increased the response rate of cystine deprivation-triggered ferroptosis through the FOCAD-FAK signaling pathway in NSCLC cell lines. More strikingly, the combination of brusatol and erastin demonstrated a superior therapeutic effect on NSCLC. These prior findings suggest that ferroptosis is quite important for lung cancer treatment. Based on the above research, we hypothesized that ferroptosis is connected with the prognosis of LUAD and thus that ferroptosis-related genes may function as prognostic biomarkers.

Hypoxia, or oxygen deprivation, is a feature of most solid tumors, because tumor growth requires a large amount of oxygen. As rapid tumor growth outstrips the oxygen supply, an imbalance forms between decreased oxygen supply and increased oxygen demand. This is a typical feature of the tumor microenvironment (TME) that increases the aggressiveness of many tumors and causes abnormal blood vessel formation due to impaired blood supply, leading to poorer clinical outcomes [18–20]. Many transcription factors are active in tumor cells under hypoxia, and they regulate cell proliferation, motility, and apoptosis via a variety of downstream signaling mechanisms [21]. This leads to an immunosuppressive TME that reduces the effectiveness of immunotherapy [22] and upregulates the expression of PD-L1, further supporting cancer escape [23, 24].
Although several studies have shown that intratumoral hypoxia and HIF1A expression affect overall survival (OS) in LUAD [25–27], hypoxia-based markers alone cannot identify patients at high risk early enough. According to recent research, HIF1A may influence lipid metabolism and cause lipids to be stored in droplets, which reduces peroxidation-mediated endosomal damage and limits cellular ferroptosis [28]. Additionally, HIF-2α has been reported to activate hypoxia-inducible lipid droplet-associated (HILPDA) expression and selectively enrich polyunsaturated lipids, thus promoting cellular ferroptosis [29]. Furthermore, increased ferritin heavy chain under hypoxic conditions can protect HT1080 tumor cells from ferroptosis [30]. These findings suggest a potential relationship between ferroptosis and hypoxia, but more research is needed to investigate how the two interact and how they affect LUAD patients' prognosis.

A variety of models have been created to predict prognosis in LUAD from the TME [31], ferroptosis [32], hypoxia [33], and tumor immunology [34]. However, to our knowledge, no prognostic role of combined hypoxia- and ferroptosis-related features in LUAD has been reported. To fill this gap and broaden the diagnostic and therapeutic potential for LUAD, we performed a comprehensive analysis using TCGA and the Gene Expression Omnibus (GEO), aiming to identify a minimal set of prognostic genes for LUAD. Finally, a signature based on hypoxia- and ferroptosis-related genes was constructed to assess its prognostic value in LUAD patients.

## 2. Materials and Methods

### 2.1. Data Source

Transcriptomic data from 593 samples (59 normal and 534 LUAD) from the TCGA database were used in this study. A total of 476 LUAD samples had available survival data. The GSE31210 dataset [35, 36] (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE31210), containing transcriptomic data and survival information for 226 LUAD patients, was obtained from the GEO database to validate the established model.

### 2.2. Single Sample Gene Set Enrichment Analysis

The hallmark hypoxia gene set, consisting of 200 genes, was obtained from MSigDB (https://www.gsea-msigdb.org/gsea/msigdb/). A total of 259 ferroptosis-related genes were gathered from the FerrDb database (https://www.zhounan.org/ferrdb/). The expression profiles of these genes were matched in the TCGA-LUAD database. The ssGSEA method (from the R package GSVA) was applied to all samples, and the hypoxia and ferroptosis scores for each sample were then calculated [37].
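The scoring itself is done with the GSVA R package [37]. For intuition only, here is a from-scratch Python sketch of the ssGSEA statistic (a rank-weighted ECDF difference, after Barbie et al.), not the package's exact implementation; the genes x samples layout of `expr` and the weighting exponent are assumptions.

```python
import numpy as np
import pandas as pd

def ssgsea_score(expr: pd.DataFrame, gene_set: set, alpha: float = 0.25) -> pd.Series:
    """Simplified per-sample enrichment score for one gene set.

    expr: genes x samples expression matrix; gene_set must overlap expr.index.
    """
    n = expr.shape[0]
    scores = {}
    for sample in expr.columns:
        # Order genes from highest to lowest expression in this sample.
        order = expr[sample].sort_values(ascending=False).index
        in_set = np.fromiter((g in gene_set for g in order), dtype=bool, count=n)
        # Rank weights: highly expressed genes contribute most (exponent alpha).
        weights = np.where(in_set, np.arange(n, 0, -1, dtype=float) ** alpha, 0.0)
        p_in = np.cumsum(weights) / weights.sum()        # weighted ECDF of set genes
        p_out = np.cumsum(~in_set) / (n - in_set.sum())  # ECDF of the other genes
        scores[sample] = float(np.sum(p_in - p_out))     # integrated gap between them
    return pd.Series(scores)

# e.g. hypoxia_score = ssgsea_score(expr, hallmark_hypoxia_genes)
```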
### 2.3. Coexpression Network Construction

The TCGA-LUAD transcriptome data were used to build gene coexpression networks with the R package WGCNA [38]. Hypoxia and ferroptosis scores were used as phenotypic traits. To assess the correlation of all samples in the TCGA-LUAD database, we performed a cluster analysis to ensure the completeness of the samples. As shown in Supplementary Figure 1(a), TCGA-44-3917-01A-01R-A278-07 was identified as an outlier and was therefore excluded from the subsequent analysis. During network construction, the soft thresholding power β was chosen so that the scale-free topology fit index exceeded 0.90. A dendrogram of all genes was established by clustering on the dissimilarity measure (1 − TOM) (Supplementary Figure 1(b)). We set 30 as the minimum module size; modules with similar gene expression were clustered and displayed in a tree diagram with color assignments according to the dynamic tree-cutting algorithm. To identify the modules associated with hypoxia and ferroptosis scores, a heatmap of module–trait relationships with correlation coefficients and P-values was drawn. Modules strongly correlated with both scores were identified as modules of interest, and the genes in these modules were defined as hub genes.

### 2.4. Analysis of Differentially Expressed Genes (DEGs)

Transcriptome data from 59 normal and 534 LUAD samples were used as the basis for comparison. DEGs were identified using the R package limma, with |log2 fold change (FC)| > 1 and P < 0.05 as significance thresholds.

### 2.5. Overlap Analysis

Overlap analysis was used to identify the common genes between the identified hub genes and the DEGs, which were defined as DE-hypoxia and ferroptosis score-related genes for the subsequent analysis.

### 2.6. Functional Enrichment

Functional enrichment of the DE-hypoxia and ferroptosis score-related genes was examined using Metascape (https://metascape.org) [39], with P < 0.05 as the significance threshold. Active signaling was analyzed using gene set variation analysis (GSVA) [37], which computes per-sample gene set enrichment using a Kolmogorov–Smirnov-like rank statistic. In the present study, GSVA was used to compute t scores for the activity of 50 hallmark gene signatures in the high- and low-risk groups, and the values were then compared, with the cutoff set to |t| > 2.

### 2.7. Identification and Establishment of the Gene Signature

TCGA's 476 LUAD cases were randomly separated into two groups using a 7 : 3 split ratio, one for training and one for testing. The DE-hypoxia and ferroptosis score-related genes associated with OS were identified in the TCGA training set. Characteristics related to LUAD prognosis were determined using univariate Cox regression (UCR) analysis, with P < 0.05 considered significant. LASSO-penalized Cox regression (LCR) with 10-fold cross-validation was then applied to select the predictive panel. Risk scores were generated from the prognostic gene signature.
According to the appropriate cutoff of the risk score, patients from the TCGA training and test sets, as well as GSE31210, were split into two groups. The AUC of the ROC curve and Kaplan–Meier (KM) analyses were applied. External validation was performed using the GSE31210 dataset.

### 2.8. Nomogram Construction and Validation

To identify whether the risk model is influenced by clinical factors, UCR and multivariate Cox regression (MCR) analyses were performed with the survival R package. A nomogram was then built from the MCR coefficients of the risk score and clinical variables in the TCGA cohort. Calibration curves were created to determine whether the predicted one-, three-, and five-year OS was consistent with the actual outcomes (bootstrap-based resampling validation with 1000 iterations). These analyses were based on the R package rms.

### 2.9. Immune Cell Infiltration (ICI)

ICI in the two risk groups was determined using the ssGSEA method and the R software [40]. Only values with P < 0.05 were considered. Violin diagrams illustrating the differences in ICI between the two groups were drawn with the ggplot2 package.

### 2.10. Patients and Tissue Samples

We performed experimental validation on specimens from five LUAD patients who underwent surgery at Yan'an Affiliated Hospital, Kunming Medical University, to validate the expression status of the 14 hypoxia and ferroptosis score-related signature genes in LUAD and adjacent normal tissues (ANT). ANTs were used as controls. All procedures followed the standards of the institutional and national research committees as well as the Helsinki Declaration, and the hospital's Ethics Committee approved the procedures before they were carried out (Permit No. 2017-014-01). All patients who took part in the trial gave informed consent before participation.

### 2.11. RNA Isolation and qRT-PCR

The 20 tissues were dissociated using TRIzol Reagent (Life Technologies); total RNA was then collected and its concentration determined using a NanoDrop 2000FC-3100 (Thermo Fisher Scientific). Prior to qRT-PCR, the SureScript First-strand cDNA synthesis kit (GeneCopoeia) was used for the reverse transcription reaction. The qRT-PCR reaction mix was as follows: 4 μL of reverse transcription product, 2 μL of 5× BlazeTaq qPCR Mix (GeneCopoeia, Guangzhou, China), 0.5 μL of primers, and 3 μL of ddH2O. A BIO-RAD CFX96 Touch PCR detection system (Bio-Rad Laboratories, Inc., USA) was used with the following program: 95°C for 30 s, then 40 cycles of 95°C for 10 s, 60°C for 20 s, and 72°C for 30 s. The primers used were synthesized by Servicebio (Servicebio Co., Ltd., Guangzhou, China) as follows: for KLK11: 5′-AGGGCTTGTAGGGGGAGA-3′, 5′-TGGGGAGGCTGTTGTTGA-3′; for MAPK4: 5′-TCAAGATTGGGGATTTCG-3′, 5′-TATGGGCTCATGTAGGGG-3′; for ITGA2: 5′-ATCAGGCGTCTCTCAGTTTC-3′, 5′-GTTTTCTTCTTGGCTTTCAC-3′; for WFDC2: 5′-CAGGCACAGGAGCAGAGAAG-3′, 5′-TCATTGGGCAGAGAGCAGAA-3′; for TNS4: 5′-GGGGCTTTTGTCATAAGGG-3′, 5′-TTTGAAGTGGACCACGGTG-3′; for LAMA3: 5′-GGTTTTGGTCCGTGTTCT-3′, 5′-ACTGCCCCGTCATCTCTT-3′; for SMAD9: 5′-GGAGATGAAGAGGAAAAGTGG-3′, 5′-GAAAGAGTCAGGATAGGTGGC-3′. GAPDH was used as the internal control, and the $2^{-\Delta\Delta Ct}$ method was used to calculate the relative expression levels of the hub genes [41]. The experiment was repeated in triplicate on independent occasions.

### 2.12. Statistical Analysis

Statistical analysis was performed using R 3.4.3 and GraphPad Prism V9. A P-value < 0.05 indicates a significant difference. To evaluate survival, both UCR and MCR analyses were used; hazard ratios (HRs) and 95% confidence intervals (CIs) were computed to identify genes related to OS. Paired t-tests for statistical differences were performed using GraphPad Prism V9.
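Sections 2.7 and 2.12 describe an R workflow (UCR screening, then LCR with 10-fold cross-validation). The two steps can be sketched in Python with the lifelines package as below; the fixed `penalizer` stands in for the cross-validated penalty, and the `time`/`event` column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# `df`: one row per patient, with gene-expression columns plus
# "time" (overall survival) and "event" (1 = death).

def univariate_screen(df, genes, alpha=0.05):
    """UCR step: keep genes whose univariate Cox P-value is below alpha."""
    kept = []
    for g in genes:
        cph = CoxPHFitter()
        cph.fit(df[[g, "time", "event"]], duration_col="time", event_col="event")
        if cph.summary.loc[g, "p"] < alpha:
            kept.append(g)
    return kept

def lasso_cox_risk_scores(df, genes, penalizer=0.1):
    """LCR step: L1-penalized Cox fit; risk score = sum_i beta_i * expression_i."""
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure LASSO penalty
    cph.fit(df[genes + ["time", "event"]], duration_col="time", event_col="event")
    coefs = cph.params_[cph.params_.abs() > 1e-8]         # genes retained by LASSO
    risk = df[list(coefs.index)].mul(coefs).sum(axis=1)   # linear predictor per patient
    return coefs, risk
```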
## 3. Results

### 3.1. Filtering for Hypoxia Score- and Ferroptosis Score-Related Genes in the TCGA-LUAD Database

A total of 200 hypoxia-related and 259 ferroptosis-related genes were obtained from MSigDB and FerrDb, respectively. The expression profiles of these genes in 593 samples (normal: 59, LUAD: 534) were matched and used as the basis for ssGSEA to derive the hypoxia and ferroptosis scores in the TCGA database. Detailed score results from the ssGSEA are given in Supplementary Table 1.

WGCNA was performed using the obtained hypoxia and ferroptosis scores as phenotypic data. After excluding the outlier sample, we constructed a sample-clustering tree (Figure 1(a)). A scale-free network was built with β = 3 as the soft threshold parameter (Figure 1(b)). In total, 23 modules were identified by the dynamic tree-cutting algorithm and labeled with different colors (Figure 1(c)). The turquoise module was the most strongly correlated with the ferroptosis score (cor = −0.69, P = 3e−10) and the hypoxia score (cor = −0.63, P = 8e−68), and the red module also correlated strongly with both the ferroptosis score (cor = −0.47, P = 6e−34) and the hypoxia score (cor = −0.49, P = 2e−36) (Figure 1(d)). These two modules were therefore identified as the modules of interest. Collectively, 8,314 genes (Supplementary Table 2) and 660 genes (Supplementary Table 3) were identified as hub genes and considered hypoxia and ferroptosis score-related genes for subsequent analysis.

Figure 1 (a) Sample-clustering dendrogram with feature heatmap. (b) Network topology analysis with different soft threshold powers. (c) Cluster dendrogram of genes based on topological overlap dissimilarity, with module colors assigned. (d) Heatmap of the relationships between gene modules and phenotypic traits: each row corresponds to a module eigengene and each column to a trait; each cell shows the correlation coefficient (colored from red to green in decreasing magnitude), with the correlation P-value in parentheses.

### 3.2. Identification of LUAD-Related DEGs

Differential expression analysis of the TCGA transcriptome data (59 normal and 534 LUAD samples) was performed with the R package limma. Comparing LUAD samples with normal samples yielded a total of 1,969 eligible DEGs, of which 906 were significantly increased and 1,063 significantly decreased in LUAD samples (Figure 2(a); Supplementary Table 4).

Figure 2 (a) Volcano map of significant DEGs. Red spots: upregulated genes; blue spots: downregulated genes; gray: genes with no change in expression. (b) Venn diagram showing the genes shared between the DEGs and the WGCNA hub genes. (c, d) Functional analysis of DE-hypoxia and ferroptosis score-related genes using Metascape.

### 3.3. Analysis of DE-Hypoxia and Ferroptosis Score-Related Genes

Based on the overlap analysis, we identified 152 common genes between the list of 8,974 hypoxia and ferroptosis score-related genes and the list of 1,969 LUAD-related DEGs; these were defined as DE-hypoxia and ferroptosis score-related genes (Figure 2(b)).
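The filtering and intersection behind these 152 genes (Sections 2.4 and 2.5) amount to a simple filter-and-intersect; a pandas sketch, assuming a limma-style result table with `logFC` and `P.Value` columns:

```python
import pandas as pd

def de_hf_genes(limma_table: pd.DataFrame, hub_genes: set,
                lfc_cut: float = 1.0, p_cut: float = 0.05) -> pd.Index:
    """Apply the DEG thresholds (|log2FC| > 1, P < 0.05), then intersect
    the surviving genes with the WGCNA hub genes."""
    degs = limma_table[(limma_table["logFC"].abs() > lfc_cut)
                       & (limma_table["P.Value"] < p_cut)].index
    return degs.intersection(pd.Index(sorted(hub_genes)))
```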
### 3.3. Analysis of DE-Hypoxia and Ferroptosis Score-Related Genes

Based on the overlap analysis, we identified 152 common genes between the list of 8,974 hypoxia and ferroptosis score-related genes and the list of 1,969 LUAD-related DEGs; these were defined as DE-hypoxia and ferroptosis score-related genes (Figure 2(b)). In LUAD, 86 of these genes were upregulated, while 66 were downregulated. The expression patterns of the DE-hypoxia and ferroptosis score-related genes in the TCGA-LUAD database are described in Supplementary Table 5.

Functional annotations obtained from Metascape indicated that the DE-hypoxia and ferroptosis score-related genes were mainly enriched in “transcriptional misregulation in cancer,” “spermatogenesis,” and “positive regulation of cell projection organization” (Figures 2(c) and 2(d)).

### 3.4. Establishment of the Hypoxia and Ferroptosis Score-Related Signature

In the TCGA training set (n = 334), the association of the 152 identified DE-hypoxia and ferroptosis score-related genes with survival in LUAD patients was analyzed using UCR. As shown in Table 1, only 17 of the 152 genes met the significance threshold of P<0.05. The HRs of SMAD9, SNX30, STK32A, WFDC2, KLK11, and CTD.2589M5.4 were all <1, indicating potential protective factors for LUAD. In contrast, ANGPTL4, LAMA3, VGLL3, ITGA2, TNS4, KCNQ3, PHLDB2, FAM83A.AS1, SLC16A3, FSTL3, and MAPK4, all with HR >1, were possible oncogenes. We then performed LCR (LASSO-penalized Cox regression) analysis on these 17 variables in the TCGA training set (Figures 3(a) and 3(b)) to obtain the best genes for constructing the prognostic signature. Ultimately, the hypoxia and ferroptosis score-related signature comprised 14 genes: MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A. We estimated the risk score of each individual in the TCGA set from the coefficient of each gene (Figure 3(c); Supplementary Table 6).

Table 1: UCR analysis of the 152 identified DE-hypoxia and ferroptosis score-related genes identifies 17 genes associated with LUAD patient survival.

| Gene | z | HR | HR 95% lower | HR 95% upper | P-value |
|---|---|---|---|---|---|
| MAPK4 | 4.211 | 1.467 | 1.228 | 1.754 | 2.54E−05 |
| TNS4 | 4.021 | 1.299 | 1.144 | 1.476 | 5.79E−05 |
| WFDC2 | −3.799 | 0.812 | 0.729 | 0.904 | 0.000146 |
| FSTL3 | 3.683 | 1.405 | 1.173 | 1.684 | 0.000230 |
| FAM83A.AS1 | 3.192 | 1.380 | 1.133 | 1.683 | 0.00141 |
| ITGA2 | 3.130 | 1.263 | 1.091 | 1.461 | 0.00175 |
| KLK11 | −2.891 | 0.817 | 0.713 | 0.937 | 0.00384 |
| SLC16A3 | 2.691 | 1.383 | 1.092 | 1.752 | 0.00713 |
| PHLDB2 | 2.676 | 1.368 | 1.087 | 1.722 | 0.00746 |
| VGLL3 | 2.486 | 1.246 | 1.048 | 1.482 | 0.0129 |
| SNX30 | −2.413 | 0.718 | 0.549 | 0.940 | 0.0158 |
| KCNQ3 | 2.233 | 1.344 | 1.037 | 1.743 | 0.0256 |
| SMAD9 | −2.203 | 0.660 | 0.456 | 0.955 | 0.0276 |
| ANGPTL4 | 2.171 | 1.158 | 1.014 | 1.323 | 0.0299 |
| LAMA3 | 2.059 | 1.183 | 1.008 | 1.389 | 0.0394 |
| CTD.2589M5.4 | −1.982 | 0.832 | 0.693 | 0.998 | 0.0475 |
| STK32A | −1.973 | 0.789 | 0.623 | 0.998 | 0.0485 |

Figure 3: (a–c) LCR was used to determine the optimal penalty (a, b) and the gene coefficients (c). (d) Distribution of risk scores based on the hypoxia and ferroptosis score-related prognostic signature. (e) K-M survival curves. (f) ROC curves showing that the hypoxia and ferroptosis score-related signature can be used to predict OS in the TCGA training set.
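The UCR-then-LCR selection in Section 3.4 can be sketched with the glmnet package as below; `x` (expression of the 17 UCR-significant genes), `time`, and `status` are placeholders, and the choice of `lambda.min` is our assumption, as the paper does not state which penalty value was used.

```r
library(glmnet)
library(survival)

# x: patients x 17 genes matrix; time/status: OS in the TCGA training set.
y <- Surv(time, status)   # recent glmnet versions accept Surv for family = "cox"

set.seed(1)                                  # CV folds are random
cvfit <- cv.glmnet(x, y, family = "cox", nfolds = 10)
beta  <- coef(cvfit, s = "lambda.min")       # sparse coefficients; 14 nonzero here

selected   <- rownames(beta)[as.numeric(beta) != 0]
risk_score <- as.numeric(x[, selected] %*% beta[selected, ])  # per-patient score
```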
The LUAD patients in the TCGA training set were separated into two groups using a cutoff value of 1.0803 (Supplementary Table 7). The distribution of risk scores is shown in Figure 3(d). Association analyses revealed a significant correlation (P<0.05) between T stage and risk group in the TCGA training set (Table 2). The Kaplan–Meier survival curves showed a significant association between a high risk score and a poor outcome (P<0.0001; Figure 3(e)). ROC curves indicated that the hypoxia and ferroptosis score-related signature could be used to predict OS in the TCGA training group (Figure 3(f)). Additionally, the heatmap indicated that the expression levels of KCNQ3, ITGA2, ANGPTL4, TNS4, FSTL3, LAMA3, MAPK4, PHLDB2, and VGLL3 increased with increasing risk score, whereas the expression levels of KLK11, SMAD9, WFDC2, SNX30, and STK32A decreased. T stage was also associated with the expression of these genes in individuals with LUAD (Figure 4).

Table 2: Association analysis of clinical characteristics with risk group in the TCGA training set.

| Characteristic | Total (N = 309) | High (N = 129) | Low (N = 180) | P-value |
|---|---|---|---|---|
| **Gender** | | | | 0.582 |
| Female | 165 (53.4%) | 66 (51.2%) | 99 (55.0%) | |
| Male | 144 (46.6%) | 63 (48.8%) | 81 (45.0%) | |
| **Age (years)** | | | | 0.979 |
| ≥60 | 229 (74.1%) | 95 (73.6%) | 134 (74.4%) | |
| <60 | 80 (25.9%) | 34 (26.4%) | 46 (25.6%) | |
| **Pathologic stage** | | | | 0.0815 |
| Stage I | 168 (54.4%) | 62 (48.1%) | 106 (58.9%) | |
| Stage II | 74 (23.9%) | 30 (23.3%) | 44 (24.4%) | |
| Stage III | 50 (16.2%) | 28 (21.7%) | 22 (12.2%) | |
| Stage IV | 17 (5.5%) | 9 (7.0%) | 8 (4.4%) | |
| **T stage** | | | | 0.0128 |
| T1 | 102 (33.0%) | 31 (24.0%) | 71 (39.4%) | |
| T2 | 166 (53.7%) | 76 (58.9%) | 90 (50.0%) | |
| T3 | 29 (9.4%) | 13 (10.1%) | 16 (8.9%) | |
| T4 | 10 (3.2%) | 8 (6.2%) | 2 (1.1%) | |
| TX | 2 (0.6%) | 1 (0.8%) | 1 (0.6%) | |
| **M stage** | | | | 0.47 |
| M0 | 198 (64.1%) | 82 (63.6%) | 116 (64.4%) | |
| M1 | 16 (5.2%) | 9 (7.0%) | 7 (3.9%) | |
| MX | 95 (30.7%) | 38 (29.5%) | 57 (31.7%) | |
| **N stage** | | | | 0.16 |
| N0 | 202 (65.4%) | 75 (58.1%) | 127 (70.6%) | |
| N1 | 54 (17.5%) | 26 (20.2%) | 28 (15.6%) | |
| N2 | 46 (14.9%) | 25 (19.4%) | 21 (11.7%) | |
| N3 | 1 (0.3%) | 0 (0%) | 1 (0.6%) | |
| NX | 6 (1.9%) | 3 (2.3%) | 3 (1.7%) | |

Figure 4: Heatmap of the relationship between the expression of the 14 hypoxia and ferroptosis score-related genes and clinicopathological features in the (a) TCGA training set, (b) TCGA test set, and (c) GSE31210 dataset.

### 3.5. Validation of the 14-Gene Prognostic Signature

We used the same algorithm to compute risk scores for the patients in the TCGA test cohort (n = 142; Supplementary Table 8) and the GSE31210 dataset (n = 226; Supplementary Table 9). Patients were separated into two risk groups according to cutoff values determined for each dataset. The results corroborated those from the TCGA training set. Figures 5(a) and 5(d) show that deaths were concentrated among patients with high risk scores, and Figures 5(b) and 5(e) show that high-risk patients had a considerably poorer outcome in both validation datasets. The 14-gene prognostic signature performed well in both datasets: the AUCs of the risk score for 1-, 3-, and 5-year OS prediction were 0.666, 0.652, and 0.637, respectively, in the TCGA test set (Figure 5(c)), and 0.741, 0.648, and 0.677, respectively, in the GSE31210 dataset (Figure 5(f)). The distribution of LUAD patients across risk groups according to each clinical feature in the TCGA test set is shown in Table 3. Association analyses revealed a significant (P<0.01) correlation between clinical stage and risk group in the GSE31210 dataset (Table 4).
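A sketch of the risk grouping, KM comparison, and time-dependent AUCs reported above follows; the survival and survivalROC packages are our choices (the paper does not name its ROC implementation), and `df` is a placeholder data frame with time (days), status, and risk_score columns.

```r
library(survival)
library(survivalROC)

# Split by the training-set cutoff quoted above (1.0803)
df$group <- ifelse(df$risk_score > 1.0803, "high", "low")

# Kaplan-Meier curves and log-rank test between risk groups
km <- survfit(Surv(time, status) ~ group, data = df)
plot(km, col = c("red", "blue"), xlab = "Days", ylab = "OS probability")
survdiff(Surv(time, status) ~ group, data = df)

# Time-dependent AUC at 1, 3, and 5 years (cf. the AUCs reported above)
for (t in c(1, 3, 5) * 365) {
  roc <- survivalROC(Stime = df$time, status = df$status,
                     marker = df$risk_score, predict.time = t, method = "KM")
  cat(sprintf("AUC at %d days: %.3f\n", t, roc$AUC))
}
```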
Figure 5: (a, d) Distributions of risk scores. (b, e) K-M survival curves showing that a high risk score was associated with shorter OS. ROC curves showing that the hypoxia and ferroptosis score-related signature can be used to predict OS in the (c) TCGA test set and (f) GSE31210 dataset.

Table 3: Association analysis of clinical characteristics with risk group in the TCGA test set.

| Characteristic | Total (N = 135) | High (N = 56) | Low (N = 79) | P-value |
|---|---|---|---|---|
| **Gender** | | | | 0.532 |
| Female | 73 (54.1%) | 28 (50.0%) | 45 (57.0%) | |
| Male | 62 (45.9%) | 28 (50.0%) | 34 (43.0%) | |
| **Age (years)** | | | | 0.0658 |
| ≥60 | 97 (71.9%) | 35 (62.5%) | 62 (78.5%) | |
| <60 | 38 (28.1%) | 21 (37.5%) | 17 (21.5%) | |
| **Pathologic stage** | | | | 0.159 |
| Stage I | 73 (54.1%) | 25 (44.6%) | 48 (60.8%) | |
| Stage II | 31 (23.0%) | 13 (23.2%) | 18 (22.8%) | |
| Stage III | 23 (17.0%) | 13 (23.2%) | 10 (12.7%) | |
| Stage IV | 8 (5.9%) | 5 (8.9%) | 3 (3.8%) | |
| **T stage** | | | | 0.506 |
| T1 | 48 (35.6%) | 16 (28.6%) | 32 (40.5%) | |
| T2 | 68 (50.4%) | 31 (55.4%) | 37 (46.8%) | |
| T3 | 13 (9.6%) | 7 (12.5%) | 6 (7.6%) | |
| T4 | 5 (3.7%) | 2 (3.6%) | 3 (3.8%) | |
| TX | 1 (0.7%) | 0 (0%) | 1 (1.3%) | |
| **M stage** | | | | 0.381 |
| M0 | 91 (67.4%) | 35 (62.5%) | 56 (70.9%) | |
| M1 | 8 (5.9%) | 5 (8.9%) | 3 (3.8%) | |
| MX | 36 (26.7%) | 16 (28.6%) | 20 (25.3%) | |
| **N stage** | | | | 0.094 |
| N0 | 86 (63.7%) | 31 (55.4%) | 55 (69.6%) | |
| N1 | 27 (20.0%) | 13 (23.2%) | 14 (17.7%) | |
| N2 | 18 (13.3%) | 11 (19.6%) | 7 (8.9%) | |
| N3 | 1 (0.7%) | 1 (1.8%) | 0 (0%) | |
| NX | 3 (2.2%) | 0 (0%) | 3 (3.8%) | |

Table 4: Association analysis of clinical characteristics with risk group in the GSE31210 dataset.

| Characteristic | Total (N = 226) | High (N = 106) | Low (N = 120) | P-value |
|---|---|---|---|---|
| **Gender** | | | | 0.385 |
| Female | 121 (53.5%) | 53 (50.0%) | 68 (56.7%) | |
| Male | 105 (46.5%) | 53 (50.0%) | 52 (43.3%) | |
| **Age (years)** | | | | 0.505 |
| ≥60 | 130 (57.5%) | 58 (54.7%) | 72 (60.0%) | |
| <60 | 96 (42.5%) | 48 (45.3%) | 48 (40.0%) | |
| **Pathologic stage** | | | | <0.001 |
| I | 168 (74.3%) | 65 (61.3%) | 103 (85.8%) | |
| II | 58 (25.7%) | 41 (38.7%) | 17 (14.2%) | |
| **Smoking** | | | | 0.147 |
| Ever-smoker | 111 (49.1%) | 58 (54.7%) | 53 (44.2%) | |
| Never-smoker | 115 (50.9%) | 48 (45.3%) | 67 (55.8%) | |

### 3.6. Correlation Analysis of Risk Score with Clinical Characteristics of LUAD

We examined the distribution of patient risk scores according to different clinical characteristics. Interestingly, the distribution of risk scores was closely related to patient stage. Risk scores of patients in stage III were higher than those of patients in stage I (P<0.05; Figure 6(a)). In terms of T stage (Figure 6(b)), patients with LUAD in T4 had the highest risk scores, significantly higher than in T1 and T2 but comparable to T3; patients in T3 had higher risk scores than those in T1 (P<0.01). In terms of N stage (Figure 6(c)), patients in N2 had higher risk scores than those in N0 (P<0.01). Although the risk score in N3 was lower than in N2 (P<0.05), the N3 sample size was too small for this comparison to be considered reliable. Subsequently, the impact of clinical characteristics on OS in LUAD patients was investigated using KM survival analysis. In the analysis stratified by stage (Figure 6(d)), patients with a lower stage were more likely to have a better prognosis, consistent with the distribution of risk scores. In the analysis stratified by T stage (Figure 6(e)), T1 patients had better OS, whereas T3 and T4 patients exhibited a poor prognosis; the worst prognosis, in T4 patients, was consistent with the earlier finding that T4 patients had the highest risk scores. In terms of N stage (Figure 6(f)), the N3 stage contained only one LUAD sample, so its impact on prognosis was disregarded; patients in N0 had the longest survival time, whereas those in N2 had the shortest. The distributions of risk scores and stratified prognoses according to other clinical characteristics, including age, sex, and M stage, are detailed in Supplementary Figure 2.
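The stage-wise risk-score comparisons in Figure 6 reduce to Wilcoxon rank-sum tests, sketched below; `df` is a placeholder, and the BH adjustment in the pairwise call is our choice rather than a method stated in the paper.

```r
# df: per-patient risk_score and pathologic stage (factor with four levels)
df$stage <- factor(df$stage,
                   levels = c("Stage I", "Stage II", "Stage III", "Stage IV"))

# e.g., stage III vs. stage I, as reported above (P < 0.05)
s13 <- droplevels(subset(df, stage %in% c("Stage I", "Stage III")))
wilcox.test(risk_score ~ stage, data = s13)

# All pairwise comparisons, with multiplicity adjustment (our choice: BH)
pairwise.wilcox.test(df$risk_score, df$stage, p.adjust.method = "BH")
```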
Figure 6: Wilcoxon analysis of risk score distributions across (a) stages, (b) T stages, and (c) N stages in the TCGA-LUAD cohort. K-M survival curves of patients with different (d) stages, (e) T stages, and (f) N stages. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001.

### 3.7. Subgroup Analysis of the Prognostic Signature

After establishing a correlation between the hypoxia and ferroptosis score-related gene signature and the aforementioned clinicopathological traits, we assessed whether the model's prognostic efficacy holds within clinically defined subgroups. Patients were stratified by age, sex, pathological tumor stage, pathological T stage, pathological N stage, and pathological M stage. The hypoxia and ferroptosis score-related gene signature was able to differentiate prognoses in all subgroups except the T3-T4 and M1 subgroups, implying a clinically and statistically significant prognostic value (Figure 7).

Figure 7: K-M survival analysis of the fourteen-gene risk score in subgroups: (a) younger than 60 years and 60 years or older, (b) male and female, (c) stages I-II and stages III-IV, (d) T1-2 and T3-4, (e) N0 and N+, and (f) M0 and M1.

### 3.8. Independent Prognostic Role of the Risk Score

We investigated whether the risk score is an independent prognostic factor in LUAD patients using UCR and MCR. Based on the TCGA dataset, UCR analyses showed that risk score, stage, T stage, and N stage were significantly related to LUAD prognosis (Figure 8(a)). The variables with P<0.05 were then subjected to MCR analysis, which identified the hypoxia and ferroptosis score-related gene signature (risk score) and stage as two independent prognostic factors in LUAD patients (Figure 8(b)).

Figure 8: (a) Forest plot of the UCR analysis in LUAD. (b) Forest plot of the MCR analysis in LUAD. (c) A prognostic nomogram predicting OS in LUAD. (d) Calibration plots of the nomogram for predicting OS in the TCGA-LUAD dataset.

OS of LUAD patients was predicted using a compound nomogram incorporating the risk score and stage, developed to provide a more accurate prediction tool for clinical practice (Figure 8(c)). The calibration plots showed that the prognostic nomogram predicted patient survival accurately, with only slight divergence from the actual outcomes (Figure 8(d)).
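A minimal rms sketch of the nomogram and bootstrap calibration described in Section 3.8 and the methods follows; `df` is a placeholder, and details such as the KM group size `m = 100` are illustrative assumptions.

```r
library(rms)
library(survival)

# df: time (days), status, risk_score, and stage (factor) for the TCGA cohort
dd <- datadist(df); options(datadist = "dd")

f <- cph(Surv(time, status) ~ risk_score + stage, data = df,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)

# Nomogram mapping risk score and stage to 1-/3-/5-year OS (cf. Figure 8(c))
s   <- Survival(f)
nom <- nomogram(f,
                fun = list(function(x) s(365, x),
                           function(x) s(3 * 365, x),
                           function(x) s(5 * 365, x)),
                funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nom)

# Bootstrap calibration with 1000 resamples at 1 year (cf. Figure 8(d));
# u must match time.inc in the cph() call
cal <- calibrate(f, cmethod = "KM", method = "boot", u = 365, m = 100, B = 1000)
plot(cal)
```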
### 3.9. Differences in Hallmark Gene Sets between the Two Risk Groups

Analysis of the hallmark gene sets showed that signaling pathways converging on numerous biological processes differed between the two groups. Notably, hypoxia, TNFα signaling via NF-κB, mitotic spindle, and glycolysis were decreased in the low-risk group, whereas the other group was preferentially associated with bile acid metabolism, pancreatic beta cells, and KRAS signaling (Figure 9 and Supplementary Table 10).

Figure 9: Gene set variation analysis. Differences in hallmark gene set activities scored by GSVA between the two groups. t values were computed using a linear model, with |t| > 2 as the cutoff.

### 3.10. TME Infiltration Pattern of LUAD Based on Risk Score

The ssGSEA algorithm was applied to investigate how risk scores relate to TME components. According to the heatmaps and Wilcoxon tests performed on the TCGA-LUAD dataset, the infiltration of several TME components, such as eosinophils and immature dendritic cells, was increased in the low-risk group, whereas the infiltration of activated CD4 T cells and other cell types was higher in the high-risk group (Figure 10).

Figure 10: (a, b) Heatmaps illustrating the distributions of immune cell subsets, fibroblasts, and endothelial cells assessed via the MCP-counter (a) and ssGSEA (b) algorithms in the TCGA-LUAD cohort. (c, d) Wilcoxon analysis of the differing TME subtype distributions between the two groups in the TCGA-LUAD cohort. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001.

### 3.11. Validation of Seven Selected Prognostic Genes by qRT-PCR

According to the expression profiles of the identified DEGs (Supplementary Table 5), TNS4, WFDC2, and ITGA2 were all highly expressed, while MAPK4, SMAD9, KLK11, and LAMA3 were all downregulated in LUAD samples from the TCGA dataset. As shown in Figure 11, the high expression of TNS4, WFDC2, and ITGA2 and the low expression of MAPK4, SMAD9, KLK11, and LAMA3 in LUAD tissues (n = 10) were confirmed relative to the expression levels in the ANTs (n = 10).

Figure 11: The high expression of TNS4 (a), WFDC2 (b), and ITGA2 (c) and the low expression of MAPK4 (d), SMAD9 (e), KLK11 (f), and LAMA3 (g) in LUAD tissues were confirmed relative to the paracancerous tissues.
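Before turning to the discussion, the hallmark GSVA contrast of Section 3.9 can be outlined as follows; this sketch uses the classic `gsva(expr, sets, method = ...)` interface (newer GSVA releases wrap these arguments in a parameter object), and all object names are placeholders.

```r
library(GSVA)
library(limma)

# expr: TCGA-LUAD expression matrix; hallmark_sets: list of the 50 MSigDB
# hallmark gene sets; risk_group: "high"/"low" per patient. All assumed given.
scores <- gsva(expr, hallmark_sets, method = "gsva")  # hallmark x sample matrix

design <- model.matrix(~ factor(risk_group, levels = c("low", "high")))
fit <- eBayes(lmFit(scores, design))
tt  <- topTable(fit, coef = 2, number = Inf)

# Hallmarks differing between risk groups at the |t| > 2 cutoff used above
tt[abs(tt$t) > 2, c("logFC", "t", "P.Value")]
```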
## 4. Discussion

Lung cancer is one of the most common malignancies globally. Nearly 80% of lung cancer patients have NSCLC, and nearly 50% of these have LUAD [42]. LUAD is a malignant tumor of the lungs with a poor prognosis [43], and although there have been breakthroughs in the treatment of patients with LUAD, the OS rate in these individuals remains low.

Ferroptosis is a particular kind of programmed cell death [17]. Ferroptosis-related research on lung cancer has mostly focused on the identification of related biomarkers that could induce ferroptosis [16, 44–46]. Hypoxia is also related to high proliferation rates in tumor cells [47]; tumor hypoxia has a broad range of consequences, affecting a variety of biological systems, including metabolic changes, angiogenesis, and metastasis [48–50]. Numerous hypoxia-associated genes are associated with lung adenocarcinoma [51, 52]. However, no high-throughput study had previously explored their possible prognostic value in LUAD.

Here, the ferroptosis and hypoxia Z-scores of each sample were estimated as clinical features based on the expression of the ferroptosis- and hypoxia-related genes identified in each sample. We obtained 23 modules; the turquoise module was strongly correlated with the ferroptosis score (cor = −0.69, P=3e−10) and the hypoxia score (cor = −0.63, P=8e−68), while the red module also correlated with both scoring phenotypes.
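For readers retracing the scoring-and-modules pipeline recapped above, a compact sketch follows; it assumes the classic GSVA function interface and, apart from the reported β = 3 and a minimum module size of 30, default WGCNA settings, with all object names as placeholders.

```r
library(GSVA)
library(WGCNA)

# expr: TCGA-LUAD expression matrix (genes x samples); hypoxia_genes and
# ferroptosis_genes: the 200- and 259-gene sets from MSigDB and FerrDb.
sets   <- list(hypoxia = hypoxia_genes, ferroptosis = ferroptosis_genes)
scores <- gsva(expr, sets, method = "ssgsea")    # 2 x samples score matrix

# Co-expression modules; power = 3 and minModuleSize = 30 as reported
net <- blockwiseModules(t(expr), power = 3, minModuleSize = 30)

# Module-trait relationships, as in Figure 1(d): correlate module eigengenes
# (samples x modules) with the two per-sample scores
cor(net$MEs, t(scores))
```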
We then identified 152 common genes between the list of 8,974 hypoxia and ferroptosis score-related genes and the 1,969 LUAD-related DEGs, which were defined as DE-hypoxia and ferroptosis score-related genes.

Functional annotations obtained from Metascape indicated that the DE-hypoxia and ferroptosis score-related genes were mainly enriched in "transcriptional misregulation in cancer," "endopeptidase inhibitor activity," and "positive regulation of cell projection organization." Recent research has shown that overexpression of oncogenic transcription factors can alter cells' core autoregulatory circuitry, and mutations in transcription factor genes have long been recognized to induce tumorigenesis [53]. Therefore, intervening in this pathway may help prevent the development of LUAD.

Of the 152 DE-hypoxia and ferroptosis score-related genes, 11.2% (17/152) were associated with prognosis in univariate Cox analysis, which identified six genes as protective markers and 11 genes as risk factors for patients with LUAD. Fourteen genes (MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A) were selected by LASSO Cox regression to construct the prognostic gene signature and to develop prognostic models classifying LUAD patients into two risk groups, in which lower-risk patients tended to live longer. Additionally, we built a nomogram using MCR analysis and demonstrated its predictive ability using ROC curves, calibration plots, and decision curves.

MAPK4 overexpression promotes LUAD progression [54]. Tensin 4 (TNS4) is involved in MET-induced cell motility and is connected to the GPCR signaling pathway; one study found that increased TNS4 expression leads to poor treatment outcomes in gastric cancer patients [55]. WFDC2 is upregulated in lung cancer [56–58], and its clinical application as a serum tumor marker in the early diagnosis and efficacy monitoring of lung cancer has accordingly been recognized [59]. In addition, in a study of individuals with LUAD, Song et al. [34] reported that WFDC2 was substantially related to the TNM stage of LUAD and to patient prognosis. Recent studies have reported substantial overexpression of FSTL3 in a subset of cancers [60–62]; in patients with NSCLC and thyroid carcinoma, FSTL3 expression is substantially linked to lymph node metastasis and poor prognosis [60, 61]. ITGA2 overexpression is essential for tumor development, metastasis, and motility, and this molecule activates the STAT3 signaling pathway, thereby promoting tumor progression [63]. KLK11 protein levels are higher in NSCLC serum, although KLK11 mRNA levels are lower in cancerous lung tissues than in ANTs [64]; leakage of these secreted proteins into the systemic circulation due to disruption of lung structure during angiogenesis or development may explain this discrepancy between low mRNA levels and elevated serum protein levels in lung cancer [65]. PHLDB2 has been well studied in connection with a variety of malignancies [66, 67]; its primary role is to control migration through interactions with CLASPs, Prickle1, and liprin 1 [68, 69]. According to Ge et al., patients with lower PHLDB2 expression have a better prognosis [70]. VGLL3 is a unique Ets1-interacting partner that inhibits adipocyte differentiation and controls trigeminal nerve development [71].
VGLL3 also acts as a coactivator of mammalian TEF (transcription enhancer factor) family transcription factors and is implicated in many kinds of cancers, including breast, colon, and lung cancers [72, 73]. Methylation, phosphorylation [74], and dephosphorylation of SMAD9 may function in the progression of lung cancer [75]. Tumor cell-derived human angiopoietin-like protein 4 (ANGPTL4) has been shown to disrupt vascular endothelial cell junctions, enhance pulmonary capillary permeability, and facilitate tumor cell protrusion through the vascular endothelium, processes involved in lung cancer [76]. Through the synergistic action of AP-1 binding sites [77], an epithelial enhancer mediates the production of laminin subunit alpha 3 (LAMA3), which is associated with tumor progression. Xu et al. [78] reported that inhibition of LINC00628 decreased LUAD cell proliferation and drug resistance by lowering the methylation of the LAMA3 promoter. STK32A is important in cellular homeostasis, transcription factor phosphorylation, and cell cycle regulation, and its overexpression enhances NSCLC cell progression and NF-κB p65 phosphorylation while inhibiting apoptosis [79]. SNX30 encodes sorting nexin-30, a member of the sorting nexin family, a large class of cytoplasmic proteins with membrane-binding potential conferred by a phospholipid-binding domain [80]. KCNQ3 encodes a protein that regulates neuronal excitability, and GCSH encodes a mitochondrial protein of the glycine cleavage system [81]; however, research on the mechanisms of action of these genes in cancer is lacking.

Following this assessment, KM survival analyses demonstrated that the 14 prognosis-associated genes may contribute to the initiation and development of LUAD in certain individuals. Strikingly, risk scores from the 14-gene prognostic profile were strongly correlated with OS in LUAD patients in the two cohorts split from TCGA and in one GEO validation cohort. We also found that the prognostic gene profile was linked with LUAD clinical variables (T, N, and M stage, overall stage, sex, and age). Furthermore, the nomogram of independent risk factors, which included the risk score model, had good predictive value and might assist clinicians in making optimal treatment choices to enhance the OS of patients with LUAD. These results suggest that hypoxia- and ferroptosis-related genes were indispensable in constructing prognostic models for LUAD and may have potential as OS biomarkers.

Our findings also suggest that the signaling pathways converging on various biological processes differ between the two groups: hypoxia, TNFα signaling via NF-κB, mitotic spindle, and glycolysis were significantly downregulated in the low-risk group. Additionally, the 14 prognosis-related genes in LUAD, including the hypoxia-related gene ANGPTL4, were significantly expressed in tumor tissues. This finding reflects the dependence of LUAD on hypoxia and the heterogeneity of hypoxia responses between the low- and high-risk groups; such heterogeneity indicates a role for hypoxia in promoting phenotypic variety among cancer cells in the TME, which in turn promotes metastasis and therapeutic resistance. Li et al. [82] demonstrated that suppressing NLRP2 boosted cell proliferation through NF-κB signaling activation, resulting in an EMT phenotype in LUAD cells.
Therefore, the regulatory pathways involving NF-κB also function in the progression of LUAD. This evidence implies that LUAD pathogenesis is a complicated biological process involving multiple genes, and dysregulation of multiple genes may contribute to LUAD progression through a variety of distinct processes. The differences in GSVA signatures and prognostic genes between the two groups warrant more in-depth study; in general, these findings may open new avenues for investigating additional molecular mechanisms of LUAD for researchers and physicians.

This study also revealed significant differences in immune infiltrating cell types between the two groups. Interestingly, the enrichment fraction of activated CD4 T cells and neutrophils was enhanced in the high-risk group, whereas the enrichment fraction of eosinophils and immature dendritic cells was higher in the low-risk group. Tumor-infiltrating neutrophils, called TANs, also play a role in antitumor immunity: in lung cancer, TANs stimulate T cell responses rather than exerting an immunosuppressive effect [83]. In LUAD, overexpression of bridging granule genes is associated with a significant enhancement of infiltration by activated CD4 and CD8 T cells [84]. We hypothesize that the inflammatory response induced by immune cells may accelerate tumor cell mutations, which in turn may affect patient prognosis. The specific mechanisms by which the tumor immune microenvironment affects prognosis remain to be explored.

Here, a generally applicable prognostic model of LUAD was successfully developed and validated based on hypoxia and ferroptosis. In addition, we performed experiments to validate the 14 molecules in the model; of these, seven were confirmed by qRT-PCR to differ significantly between tumor and paracancerous tissues. However, our study has some limitations. Owing to the scarcity of studies on hypoxia and ferroptosis in tumors, the information provided by the MSigDB and FerrDb websites may be inaccurate, as the references were manually curated from previous studies; more studies are needed to validate the hypoxia- and ferroptosis-regulatory roles of these prognostic genes in LUAD [3]. Moreover, both cohorts (TCGA-LUAD and one GEO cohort) used to construct and validate the predictive signature are retrospective; this hypoxia- and ferroptosis-based predictive signature would be more reliable if examined in a prospective clinical trial cohort at our research center.

## 5. Conclusion

Hypoxia and ferroptosis are two major mechanisms associated with lung adenocarcinoma development. In this research, candidate genes associated with hypoxia and ferroptosis scores were identified; as a result, we found a 14-gene signature and developed a predictive nomogram that accurately predicts OS in individuals with LUAD. These results may facilitate medical decision-making and the personalization of therapeutic interventions.

---

*Source: 1022580-2022-09-19.xml*
--- ## Abstract Background. It is well known that hypoxia and ferroptosis are intimately connected with tumor development. The purpose of this investigation was to identify whether they have a prognostic signature. To this end, genes related to hypoxia and ferroptosis scores were investigated using bioinformatics analysis to stratify the risk of lung adenocarcinoma. Methods. Hypoxia and ferroptosis scores were estimated using The Cancer Genome Atlas (TCGA) database-derived cohort transcriptome profiles via the single sample gene set enrichment analysis (ssGSEA) algorithm. The candidate genes associated with hypoxia and ferroptosis scores were identified using weighted correlation network analysis (WGCNA) and differential expression analysis. The prognostic genes in this study were discovered using the Cox regression (CR) model in conjunction with the LASSO method, which was then utilized to create a prognostic signature. The efficacy, accuracy, and clinical value of the prognostic model were evaluated using an independent validation cohort, Receiver Operator Characteristic (ROC) curve, and nomogram. The analysis of function and immune cell infiltration was also carried out. Results. Here, we appraised 152 candidate genes expressed not the same, which were related to hypoxia and ferroptosis for prognostic modeling in The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD) cohort, and these genes were further validated in the GSE31210 cohort. We found that the 14-gene-based prognostic model, utilizing MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A, performed well in predicting the prognosis in lung adenocarcinoma. ROC and nomogram analyses showed that risk scores based on prognostic signatures provided desirable predictive accuracy and clinical utility. Moreover, gene set variance analysis showed differential enrichment of 33 hallmark gene sets between different risk groups. Additionally, our results indicated that a higher risk score will lead to more fibroblasts and activated CD4 T  cells but fewer myeloid dendritic cells, endothelial cells, eosinophils, immature dendritic cells, and neutrophils. Conclusion. Our research found a 14-gene signature and established a nomogram that accurately predicted the prognosis in patients with lung adenocarcinoma. Clinical decision-making and therapeutic customization may benefit from these results, which may serve as a valuable reference in the future. --- ## Body ## 1. Introduction Lung cancer is one of the most frequent malignancies with high mortality and poor prognosis [1, 2]; 80% of lung malignancies diagnosed were NSCLC [3]. LUAD accounts for nearly 40% of NSCLC cases [4, 5], and its incidence is continually increasing [6]. In recent years, several therapeutic advances have been made, including targeted therapies and emerging immunotherapy [7, 8]. Although both methods are effective in a restricted range of lung cancer subtypes, the rate of survival for LUAD is still poor [9]. According to statistics, LUAD has a poor prognosis that only 18% could survive longer than 5 years [10]. As a result, the search for valid biomarkers might lead to the establishment of individualized diagnosis and therapy for LUAD patients [11]. The cancer tissue has many specific characteristics, including accelerated cell cycle, alterations of the genome, increase in cell mobility and invasive growth of the cells, incapable of going through normal apoptosis process, and depletion of normal cell functions. 
Because of these physiological and pathological characteristics, it is difficult for tumors to be treated.Recently, it has been studied that ferroptosis is a relatively new type of cell death. This process is often accompanied by significant iron buildup and lipid peroxidation in dying cells [12]. It can be distinguished from apoptosis, necrosis, and autophagy by certain key characteristics. Firstly, it is iron-dependent and is induced by the buildup of harmful lipid reactive oxygen species. In addition, polyunsaturated fatty acids are consumed during the process [12]. With the rapid development of the role of iron ions in cancer, new prospects have emerged for their use in cancer therapy [13]. The expression of the S100 calcium-binding protein A4 (FSP1) in lung cancer cell lines is related to resistance to ferroptosis, suggesting that overexpression of FSP1 may be a method for ferroptosis escape [14]. In addition, MAPK pathway activation is associated with the susceptibility to ferroptosis triggered by cystine deprivation in NSCLC cell lines [15]. Alvarez et al. [16] recently found that inhibiting the iron-sulfur cluster biosynthesis enzyme NFS1 induced ferroptosis in vitro and slowed tumor development in LUAD. Additionally, Liu et al. [17] discovered that brusatol, an inhibitor of NRF2, increased the response rate of cystine deprivation-triggered ferroptosis through the FOCAD-FAK signaling pathway in NSCLC cell lines. What is more surprising is that the merger of brusatol and erastin demonstrated a superior therapeutic effect on NSCLC. The findings in these prior studies suggest that ferroptosis is quite important for lung cancer treatment. Based on the above research, we made the following hypothesis that ferroptosis is connected with the prognosis of LUAD, and thus ferroptosis-related genes may function as prognostic biomarkers.Hypoxia or oxygen deprivation is a feature of most solid tumors because the growth of a tumor requires a large amount of oxygen. As the rapid tumor growth outstrips the supply of oxygen, an imbalance between decreased oxygen supply and increased oxygen demand was formed. This is a typical feature observed in the tumor microenvironment (TME) that increases the aggressiveness of many tumors and also causes abnormal blood vessel formation due to impaired blood supply, leading to poorer clinical outcomes [18–20]. Many transcription factors are active in tumor cells when the environment is hypoxic, and these transcription factors regulate cell proliferation, motility, and apoptosis via a variety of downstream signaling mechanisms [21]. This leads to an immunosuppressive TME that reduces the effectiveness of immunotherapy [22] and upregulates the expression of PD-L1, further supporting cancer escape [23, 24]. Although several studies have shown that intratumoral hypoxia and HIF1A expression affect overall survival (OS) in LUAD [25–27], hypoxia-based cannot be used to estimate who are at a high risk very early.According to recent research, HIF1A may influence lipid metabolism and cause lipids to be stored in droplets, which reduces peroxidation-mediated endosomal damage and limits cellular ferroptosis [28]. Additionally, HIF-2α has been reported to activate hypoxia-inducible lipid droplet-associated (HILPDA) expression and selectively enrich polyunsaturated lipids, thus promoting cellular ferroptosis [29]. Furthermore, increased ferritin heavy chains under hypoxic conditions can protect HT1080 tumor cells from ferroptosis [30]. 
These findings suggest a potential relationship between ferroptosis and hypoxia. But more research is needed to further investigate how ferroptosis and hypoxia interact with each other and how they can affect LUAD patients’ prognosis.A variety of models have been created to predict the prognostication in LUAD according to the TME [31], ferroptosis [32], hypoxia [33], and tumor immunology [34]. However, to our knowledge, there is no reported prognostic role of hypoxia and ferroptosis-interrelated features in LUAD. To fill the gap and broaden the diagnostic and therapeutic potential of LUAD, we performed a comprehensive analysis using TCGA and Gene Expression Omnibus (GEO), aiming to endorse the least prognostic genes for LUAD. Finally, a signature on hypoxia- and ferroptosis-interrelated genes was constructed to know the prognostic value in LUAD patients. ## 2. Materials and Methods ### 2.1. Data Source Transcriptomic data from 593 samples, composed of 59 normal and 534 LUAD, from TCGA database were used in this study. A total of 476 LUAD samples had available survival data. The GSE31210 dataset [35, 36] (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE31210), containing transcriptomic data and survival information for 226 LUAD patients, was obtained from the GEO database to validate the established model. ### 2.2. Single Sample Gene Set Enrichment Analysis The MSigDB (https://www.gsea-msigdb.org/gsea/msigdb/) was performed to acquire the hallmark gene sets of hypoxias, which consisted of 200 genes. The results show that there are 259 genes related to ferroptosis in total, which were gathered from the FerrDb database (https://www.zhounan.org/ferrdb/). The TCGA-LUAD database matched the expression patterns of the aforementioned genes. The ssGSEA method (from the R package GSVA) was performed to analyze all samples, and the hypoxia and ferroptosis scores for each sample were then calculated [37]. ### 2.3. Coexpression Network Construction The TCGA-LUAD transcriptome data were selected for the establishment of gene coexpression networks using theR package WGCNA [38]. Hypoxia and ferroptosis scores were used as phenotypic characteristics. To assess the correlation of all samples in the TCGA-LUAD database, we performed a cluster analysis to ensure the completeness of the samples. As shown in Supplementary Figure 1(a), TCGA-44-3917-01A-01R-A278-07 was identified as an outlier and therefore was not included in this section of the subsequent analysis. During the network construction phase, the soft thresholding power β was obtained above 0.90, with the fit index of the scale-free topology. A dendrogram of all genes was established using the dissimilarity measure to group them together (1-TOM) (Supplementary Figure 1(b)). We set 30 as the minimum module size, and modules with similar gene expressions were clustered and displayed in a tree diagram with color assignments according to the dynamic tree-cutting algorithm. To identify the modules associated with hypoxia and ferroptosis scores, a heatmap of module-feature relationships with correlation coefficients and P-values was drawn. Modules that had a strong dependency on both scores were identified as modules of interest, and the genes in these modules of interest were defined as hub genes. ### 2.4. Analysis of Differentially Expressed Genes (DEGs) Transcriptome data from 53 normal and 539 LUAD samples were used as the foundation for comparison to analyze genes expressed differently. 
### 2.3. Coexpression Network Construction
The TCGA-LUAD transcriptome data were used to build gene coexpression networks with the R package WGCNA [38], with the hypoxia and ferroptosis scores serving as phenotypic traits. To assess all samples in the TCGA-LUAD database, we first performed a cluster analysis to check sample integrity. As shown in Supplementary Figure 1(a), TCGA-44-3917-01A-01R-A278-07 was identified as an outlier and was excluded from this part of the subsequent analysis. During network construction, the soft-thresholding power β was chosen so that the scale-free topology fit index exceeded 0.90. A dendrogram of all genes was built from the topological overlap dissimilarity (1 − TOM) (Supplementary Figure 1(b)). With a minimum module size of 30, modules of similarly expressed genes were clustered and displayed in a tree diagram with color assignments according to the dynamic tree-cutting algorithm. To identify the modules associated with the hypoxia and ferroptosis scores, a heatmap of module-trait relationships with correlation coefficients and P-values was drawn. Modules strongly correlated with both scores were selected as modules of interest, and the genes in these modules were defined as hub genes.
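A condensed sketch of these WGCNA steps is given below, assuming a samples x genes matrix `datExpr` and a data frame `traits` holding the two ssGSEA scores (hypothetical names; parameter choices other than the minimum module size of 30 are illustrative):

```r
library(WGCNA)

# Pick the soft-thresholding power: the smallest power whose scale-free
# topology fit index exceeds 0.90.
sft <- pickSoftThreshold(datExpr, powerVector = 1:20)

# Build the network and detect modules with the dynamic tree cut
# (minimum module size 30, as in the paper).
net <- blockwiseModules(datExpr,
                        power = sft$powerEstimate,
                        TOMType = "unsigned",
                        minModuleSize = 30)

# Correlate module eigengenes with the hypoxia/ferroptosis scores.
MEs  <- moduleEigengenes(datExpr, colors = net$colors)$eigengenes
corM <- cor(MEs, traits, use = "p")
pM   <- corPvalueStudent(corM, nSamples = nrow(datExpr))
```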
### 2.4. Analysis of Differentially Expressed Genes (DEGs)
Transcriptome data from the 59 normal and 534 LUAD samples were used to identify differentially expressed genes. DEGs were determined with the R package limma, using |log2 fold change (FC)| > 1 and P<0.05 as significance thresholds.

### 2.5. Overlap Analysis
Overlap analysis was used to identify the genes common to the hub genes and the DEGs; these were defined as DE-hypoxia and ferroptosis score-related genes for the subsequent analysis.
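The DEG screen and the overlap step can be sketched as follows (a minimal sketch with hypothetical object names; `expr` is the gene x sample matrix, `group` a normal/LUAD factor, and `hub_genes` the WGCNA output):

```r
library(limma)

# Differential expression: LUAD versus normal.
design <- model.matrix(~ 0 + group)        # group: factor with levels "normal", "LUAD"
colnames(design) <- levels(group)
contrast <- makeContrasts(LUAD - normal, levels = design)

fit <- eBayes(contrasts.fit(lmFit(expr, design), contrast))
tab <- topTable(fit, coef = 1, number = Inf)

# Thresholds used in the paper: |log2 FC| > 1 and P < 0.05.
degs <- rownames(tab)[abs(tab$logFC) > 1 & tab$P.Value < 0.05]

# Overlap analysis (Section 2.5): genes shared by DEGs and hub genes.
de_hf_genes <- intersect(degs, hub_genes)
```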
### 2.6. Functional Enrichment
Functional enrichment of the DE-hypoxia and ferroptosis score-related genes was assessed with Metascape (https://metascape.org) [39], with P<0.05 as the significance threshold. Pathway activity was analyzed with gene set variation analysis (GSVA) [37], which computes per-sample gene set enrichment using a Kolmogorov–Smirnov-like rank statistic. GSVA was used to score the activity of the 50 hallmark gene sets, t-statistics were computed to compare activity between the high- and low-risk groups, and |t| > 2 was used as the cutoff.
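The hallmark comparison can be sketched as below (hypothetical names: `expr`, a list `hallmark_sets` of the 50 hallmark gene sets, and a `risk_group` factor); the paper reports t values from a linear model, and a moderated t-statistic via limma is one standard way to obtain them:

```r
library(GSVA)
library(limma)

# Per-sample activity scores for the 50 hallmark gene sets.
gsva_scores <- gsva(expr, hallmark_sets, method = "gsva")

# Linear model comparing activity between risk groups.
design <- model.matrix(~ risk_group)   # risk_group: factor low/high
fit <- eBayes(lmFit(gsva_scores, design))
res <- topTable(fit, coef = 2, number = Inf)

# Hallmark sets passing the |t| > 2 cutoff used in the paper.
sig_sets <- res[abs(res$t) > 2, ]
```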
### 2.7. Identification and Establishment of the Gene Signature
The 476 TCGA LUAD cases were randomly separated into a training group and a test group at a 7 : 3 ratio. DE-hypoxia and ferroptosis score-related genes associated with OS were identified in the TCGA training set by univariate Cox regression (UCR) analysis, with P<0.05 considered significant. LASSO-penalized Cox regression (LCR) with 10-fold cross-validation was then applied to these candidates to build the predictive panel, and a risk score was generated for each patient from the resulting prognostic gene signature. Using the appropriate risk score cutoff, patients in the TCGA training and test sets, as well as in GSE31210, were split into two groups. Kaplan–Meier (KM) analysis and the AUC of the ROC curve were used for evaluation, and external validation was performed with the GSE31210 dataset.
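A minimal sketch of the LCR step, assuming `x` is a patients x candidate-genes expression matrix and `time`/`status` hold the survival data (hypothetical names):

```r
library(glmnet)
library(survival)

# 10-fold cross-validated LASSO Cox regression.
y <- Surv(time, status)
cvfit <- cv.glmnet(x, y, family = "cox", nfolds = 10)

# Genes with nonzero coefficients at the selected penalty define the signature.
coefs <- as.matrix(coef(cvfit, s = "lambda.min"))
signature <- coefs[coefs[, 1] != 0, , drop = FALSE]
signature
```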
### 2.8. Nomogram Construction and Validation
To determine whether the risk model is influenced by clinical factors, UCR and multivariate Cox regression (MCR) analyses were performed with the R package survival. A nomogram was then built from the MCR coefficients of the risk score and the clinical variables in the TCGA cohort. Calibration curves were created to check whether the predicted one-, three-, and five-year OS agreed with the actual outcomes (bootstrap validation with 1000 resamples). These analyses were based on the R package rms.
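A sketch of the nomogram and calibration steps with rms, assuming a data frame `df` with `time` (days), `status`, `riskScore`, and `stage` (hypothetical names; the exact covariates are those retained by the MCR):

```r
library(rms)
library(survival)

dd <- datadist(df); options(datadist = "dd")

# Cox model on the MCR-retained variables.
fit <- cph(Surv(time, status) ~ riskScore + stage, data = df,
           x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)

# Nomogram mapping covariates to 1-, 3-, and 5-year OS probabilities.
sv  <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) sv(365, x),
                           function(x) sv(1095, x),
                           function(x) sv(1825, x)),
                funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nom)

# Bootstrap calibration (1000 resamples) at one year.
cal <- calibrate(fit, method = "boot", B = 1000, u = 365)
plot(cal)
```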
### 2.9. Immune Cell Infiltration (ICI)
The immune cell infiltration of the two risk groups was estimated with the ssGSEA method in R [40]; only results with P<0.05 were considered. Violin plots of the differences in ICI between the groups were drawn with the ggplot2 package.

### 2.10. Patients and Tissue Samples
For experimental validation, we used specimens from five LUAD patients who underwent surgery at Yan'an Affiliated Hospital, Kunming Medical University, to examine the expression of the 14 hypoxia and ferroptosis score-related signature genes in LUAD tissues and adjacent normal tissues (ANT); the ANTs served as controls. All procedures followed the standards of the institutional and national research committees and the Declaration of Helsinki, and the hospital's Ethics Committee approved the study before any procedures were carried out (Permit No. 2017-014-01). All participating patients gave informed consent before enrollment.

### 2.11. RNA Isolation and qRT-PCR
The 20 tissue samples were dissociated with TRIzol Reagent (Life Technologies); total RNA was then collected, and its concentration was determined on a NanoDrop 2000FC-3100 (Thermo Fisher Scientific). Reverse transcription was carried out with the SureScript First-strand cDNA synthesis kit (GeneCopoeia). The qRT-PCR reaction mix contained 4 μL of reverse transcription product, 2 μL of 5× BlazeTaq qPCR Mix (GeneCopoeia, Guangzhou, China), 0.5 μL of primers, and 3 μL of ddH2O. PCR was run on a BIO-RAD CFX96 Touch detection system (Bio-Rad Laboratories, Inc., USA) as follows: 95°C for 30 s, then 40 cycles of 95°C for 10 s, 60°C for 20 s, and 72°C for 30 s. The primers, synthesized by Servicebio (Servicebio Co., Ltd., Guangzhou, China), were as follows: KLK11: 5′-AGGGCTTGTAGGGGGAGA-3′, 5′-TGGGGAGGCTGTTGTTGA-3′; MAPK4: 5′-TCAAGATTGGGGATTTCG-3′, 5′-TATGGGCTCATGTAGGGG-3′; ITGA2: 5′-ATCAGGCGTCTCTCAGTTTC-3′, 5′-GTTTTCTTCTTGGCTTTCAC-3′; WFDC2: 5′-CAGGCACAGGAGCAGAGAAG-3′, 5′-TCATTGGGCAGAGAGCAGAA-3′; TNS4: 5′-GGGGCTTTTGTCATAAGGG-3′, 5′-TTTGAAGTGGACCACGGTG-3′; LAMA3: 5′-GGTTTTGGTCCGTGTTCT-3′, 5′-ACTGCCCCGTCATCTCTT-3′; SMAD9: 5′-GGAGATGAAGAGGAAAAGTGG-3′, 5′-GAAAGAGTCAGGATAGGTGGC-3′. GAPDH served as the internal control, and the 2^−ΔΔCt method was used to calculate the relative expression of the hub genes [41]. The experiment was repeated in triplicate on independent occasions.

### 2.12. Statistical Analysis
Statistical analyses were performed with R 3.4.3 and GraphPad Prism V9, with P<0.05 indicating a significant difference. UCR and MCR analyses were used to evaluate survival, and hazard ratios (HRs) with 95% confidence intervals (CIs) were computed to identify genes related to OS. Paired t-tests (GraphPad Prism V9) were used to test for differences between paired tissues.
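The relative expression calculation and the paired comparison can be illustrated as follows (a toy sketch with invented Ct values; in the paper, ΔΔCt is computed against GAPDH and the ANT controls):

```r
# Hypothetical Ct values for one gene in five paired tumor/ANT samples.
ct_gene_tumor  <- c(24.1, 23.8, 24.5, 23.9, 24.2)
ct_gapdh_tumor <- c(18.0, 17.9, 18.2, 18.1, 18.0)
ct_gene_ant    <- c(26.0, 25.7, 26.3, 25.9, 26.1)
ct_gapdh_ant   <- c(18.1, 18.0, 18.2, 18.0, 18.1)

# dCt: normalize each tissue to GAPDH; ddCt: normalize tumor to ANT.
d_ct_tumor <- ct_gene_tumor - ct_gapdh_tumor
d_ct_ant   <- ct_gene_ant   - ct_gapdh_ant
rel_expr   <- 2^-(d_ct_tumor - mean(d_ct_ant))   # 2^-ddCt per tumor sample

# Paired t-test on the dCt values, as in Section 2.12.
t.test(d_ct_tumor, d_ct_ant, paired = TRUE)
```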
## 3. Results

### 3.1. Filtering for Hypoxia Score- and Ferroptosis Score-Related Genes in the TCGA-LUAD Database
A total of 200 hypoxia-related and 259 ferroptosis-related genes were obtained from MSigDB and FerrDb, respectively. The expression of these genes in 593 samples (normal: 59, LUAD: 534) was then matched and used as the basis for ssGSEA to derive the hypoxia and ferroptosis scores in the TCGA database. The detailed ssGSEA score results are shown in Supplementary Table 1. WGCNA was then performed with the hypoxia and ferroptosis scores as phenotypic data. After excluding the outlier sample, we constructed a sample-clustering tree (Figure 1(a)). A scale-free network was built with β = 3 as the soft-threshold parameter (Figure 1(b)). Twenty-three modules were identified by the dynamic tree-cutting algorithm and labeled with different colors (Figure 1(c)). The turquoise module was the most strongly (negatively) correlated with the ferroptosis score (cor = −0.69, P=3e−10) and the hypoxia score (cor = −0.63, P=8e−68), and the red module was also strongly correlated with both the ferroptosis score (cor = −0.47, P=6e−34) and the hypoxia score (cor = −0.49, P=2e−36) (Figure 1(d)). These two modules were therefore selected as the modules of interest. Collectively, 8314 genes (Supplementary Table 2) and 660 genes (Supplementary Table 3) were identified as hub genes and treated as hypoxia and ferroptosis score-related genes in the subsequent analysis.

Figure 1: (a) Sample-clustering dendrogram with trait heatmap. (b) Network topology analysis for different soft-threshold powers. (c) Gene dendrograms based on topological overlap dissimilarity, with assigned module colors. (d) Heatmap of the relationships between gene modules and phenotypic traits; each row corresponds to a module eigengene and each column to a trait. Each cell shows the correlation coefficient (color-coded in decreasing magnitude from red to green) with the correlation P-value in parentheses.

### 3.2. Identification of LUAD-Related DEGs
Transcriptome data from TCGA (59 normal and 534 LUAD samples) were subjected to differential expression analysis with the R package limma. Comparing LUAD with normal samples yielded 1,969 eligible DEGs, of which 906 were significantly upregulated and 1,063 significantly downregulated in LUAD (Figure 2(a); Supplementary Table 4).

Figure 2: (a) Volcano plot of significant DEGs (red: upregulated; blue: downregulated; gray: unchanged). (b) Venn diagram of the genes shared by the DEGs and the WGCNA hub genes. (c, d) Functional analysis of the DE-hypoxia and ferroptosis score-related genes using Metascape.

### 3.3. Analysis of DE-Hypoxia and Ferroptosis Score-Related Genes
Based on the overlap analysis, we identified 152 genes common to the 8,974 hypoxia and ferroptosis score-related genes and the 1,969 LUAD-related DEGs; these were defined as DE-hypoxia and ferroptosis score-related genes (Figure 2(b)). Of these, 86 were upregulated and 66 downregulated in LUAD. Their expression patterns in the TCGA-LUAD database are described in Supplementary Table 5. Functional annotation with Metascape indicated that the DE-hypoxia and ferroptosis score-related genes were mainly enriched in "transcriptional misregulation in cancer," "spermatogenesis," and "positive regulation of cell projection organization" (Figures 2(c) and 2(d)).
### 3.4. Establishment of the Hypoxia and Ferroptosis Score-Related Signature
In the TCGA training set (n = 334), the association of the 152 identified DE-hypoxia and ferroptosis score-related genes with survival of LUAD patients was analyzed by UCR. As shown in Table 1, 17 of the 152 genes met the significance threshold of P<0.05. The HRs of SMAD9, SNX30, STK32A, WFDC2, KLK11, and CTD.2589M5.4 were all <1, indicating potential protective factors for LUAD; in contrast, ANGPTL4, LAMA3, VGLL3, ITGA2, TNS4, KCNQ3, PHLDB2, FAM83A.AS1, SLC16A3, FSTL3, and MAPK4, all with HR >1, were possible oncogenes. We then performed LCR analysis on these 17 variables in the TCGA training set (Figures 3(a) and 3(b)) to obtain the best genes for the prognostic signature. Ultimately, the hypoxia and ferroptosis score-related signature comprised 14 genes: MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A. The risk score of each individual in the TCGA set was estimated from the coefficient of each gene (Figure 3(c); Supplementary Table 6).

Table 1: UCR analysis of the 152 identified DE-hypoxia and ferroptosis score-related genes identifies 17 genes associated with LUAD patient survival.

| ID | z | HR | HR 95% lower | HR 95% upper | P-value |
|---|---|---|---|---|---|
| MAPK4 | 4.21113029756985 | 1.46745295679297 | 1.22755540646437 | 1.75423298130611 | 2.54E−05 |
| TNS4 | 4.02119788604733 | 1.29938376610491 | 1.14367045280752 | 1.47629779843681 | 5.79E−05 |
| WFDC2 | −3.79860720374219 | 0.811653740106402 | 0.728800938850636 | 0.903925555951741 | 0.000145511488661 |
| FSTL3 | 3.68304906615447 | 1.40517139515741 | 1.17250302799022 | 1.68400985126077 | 0.000230460779083 |
| FAM83A.AS1 | 3.19219681770566 | 1.38043585405413 | 1.13252613465791 | 1.68261295597717 | 0.00141195086371 |
| ITGA2 | 3.1297146270081 | 1.26268315773328 | 1.0910870928145 | 1.46126626125743 | 0.001749761967554 |
| KLK11 | −2.89081539413551 | 0.817085437051864 | 0.712500352475736 | 0.937022149003037 | 0.003842437575419 |
| SLC16A3 | 2.69059352882516 | 1.38307857948644 | 1.09206101692107 | 1.75164787259545 | 0.007132503883482 |
| PHLDB2 | 2.67561048467087 | 1.36825660397289 | 1.08747876280116 | 1.72152891472853 | 0.007459328240577 |
| VGLL3 | 2.48631904817044 | 1.24621454130366 | 1.04770025500438 | 1.48234256462046 | 0.012907219145633 |
| SNX30 | −2.41280356238469 | 0.718205289102291 | 0.548879152164541 | 0.939767588658339 | 0.015830348860021 |
| KCNQ3 | 2.23280002150132 | 1.34420966442955 | 1.03680712106375 | 1.74275386929437 | 0.025562134874828 |
| SMAD9 | −2.20299987655253 | 0.660110772189965 | 0.456177215306126 | 0.955212616809054 | 0.02759475734267 |
| ANGPTL4 | 2.17133207596654 | 1.15838922218355 | 1.01441557907091 | 1.32279670951033 | 0.029906079314469 |
| LAMA3 | 2.05948665187877 | 1.18322521780815 | 1.00816336324203 | 1.38868557130958 | 0.039447642418345 |
| CTD.2589M5.4 | −1.98163055774902 | 0.831910477060721 | 0.693468992502609 | 0.997989887544674 | 0.047520604494006 |
| STK32A | −1.97258211072512 | 0.789000052327525 | 0.623465515365524 | 0.998485188403512 | 0.048543192818321 |

Figure 3: (a–c) LCR analysis: selection of the optimal penalty (a, b) and the coefficient profiles (c). (d) Distribution of risk scores based on the hypoxia and ferroptosis score-related prognostic signature. (e) K-M survival curves. (f) ROC curves showing that the hypoxia and ferroptosis score-related signature can be used to predict OS in the TCGA training set.

LUAD patients in the TCGA training set were separated into two groups at the cutoff value of 1.0803 (Supplementary Table 7). The distribution of risk scores is shown in Figure 3(d). Association analyses revealed a significant correlation (P<0.05) between T stage and risk group in the TCGA training set (Table 2). The Kaplan–Meier survival curves showed a significant association between a high risk score and poor outcome (P<0.0001; Figure 3(e)), and the ROC curves indicated that the signature can be used to predict OS in the TCGA training group (Figure 3(f)). The heatmap further showed that the expression of KCNQ3, ITGA2, ANGPTL4, TNS4, FSTL3, LAMA3, MAPK4, PHLDB2, and VGLL3 increased with the risk score, whereas the expression of KLK11, SMAD9, WFDC2, SNX30, and STK32A decreased. In individuals with LUAD, T stage was also related to the expression of these genes (Figure 4).

Table 2: Association of clinical characteristics with risk group in the TCGA training set.

| Characteristic | Total (N = 309) | High (N = 129) | Low (N = 180) | P-value |
|---|---|---|---|---|
| Gender: Female | 165 (53.4%) | 66 (51.2%) | 99 (55.0%) | 0.582 |
| Gender: Male | 144 (46.6%) | 63 (48.8%) | 81 (45.0%) | |
| Age ≥60 years | 229 (74.1%) | 95 (73.6%) | 134 (74.4%) | 0.979 |
| Age <60 years | 80 (25.9%) | 34 (26.4%) | 46 (25.6%) | |
| Stage I | 168 (54.4%) | 62 (48.1%) | 106 (58.9%) | 0.0815 |
| Stage II | 74 (23.9%) | 30 (23.3%) | 44 (24.4%) | |
| Stage III | 50 (16.2%) | 28 (21.7%) | 22 (12.2%) | |
| Stage IV | 17 (5.5%) | 9 (7.0%) | 8 (4.4%) | |
| T1 | 102 (33.0%) | 31 (24.0%) | 71 (39.4%) | 0.0128 |
| T2 | 166 (53.7%) | 76 (58.9%) | 90 (50.0%) | |
| T3 | 29 (9.4%) | 13 (10.1%) | 16 (8.9%) | |
| T4 | 10 (3.2%) | 8 (6.2%) | 2 (1.1%) | |
| TX | 2 (0.6%) | 1 (0.8%) | 1 (0.6%) | |
| M0 | 198 (64.1%) | 82 (63.6%) | 116 (64.4%) | 0.47 |
| M1 | 16 (5.2%) | 9 (7.0%) | 7 (3.9%) | |
| MX | 95 (30.7%) | 38 (29.5%) | 57 (31.7%) | |
| N0 | 202 (65.4%) | 75 (58.1%) | 127 (70.6%) | 0.16 |
| N1 | 54 (17.5%) | 26 (20.2%) | 28 (15.6%) | |
| N2 | 46 (14.9%) | 25 (19.4%) | 21 (11.7%) | |
| N3 | 1 (0.3%) | 0 (0%) | 1 (0.6%) | |
| NX | 6 (1.9%) | 3 (2.3%) | 3 (1.7%) | |

Figure 4: Heatmap of the relationship between the expression of the 14 hypoxia and ferroptosis score-related genes and clinicopathological features in (a) the TCGA training set, (b) the TCGA test set, and (c) the GSE31210 dataset.
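Given the LCR coefficients, the risk score is a weighted sum of the signature genes' expression. A minimal sketch, with `expr14` (patients x 14 genes) and `lasso_coef` as hypothetical names and the training-set cutoff of 1.0803 from above:

```r
# Risk score: sum over the 14 signature genes of (LASSO coefficient x expression).
risk_score <- as.numeric(expr14 %*% lasso_coef)

# Dichotomize at the cutoff used for the TCGA training set.
risk_group <- factor(ifelse(risk_score > 1.0803, "high", "low"),
                     levels = c("low", "high"))
table(risk_group)
```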
### 3.5. Validation of the 14-Gene Prognostic Signature
We used the same algorithm to compute risk scores for the patients in the TCGA test cohort (n = 142; Supplementary Table 8) and the GSE31210 dataset (n = 226; Supplementary Table 9), and patients were separated into two risk groups at the cutoff values determined for each dataset. The results corroborated those from the TCGA training set. Figures 5(a) and 5(d) show that deaths were concentrated in the high-risk score range, and Figures 5(b) and 5(e) show that high-risk patients had considerably poorer outcomes in both validation datasets. The 14-gene prognostic signature performed well in both datasets: the AUCs of the risk score for 1-, 3-, and 5-year OS prediction were 0.666, 0.652, and 0.637 in the TCGA test set (Figure 5(c)), and 0.741, 0.648, and 0.677 in the GSE31210 dataset (Figure 5(f)). The distribution of LUAD patients across risk groups by clinical feature in the TCGA test set is shown in Table 3. Association analysis revealed a significant (P<0.01) correlation between clinical stage and risk group in the GSE31210 dataset (Table 4).

Figure 5: (a, d) Distributions of risk scores. (b, e) K-M survival curves showing that a high risk score was related to shorter OS. (c, f) ROC curves showing that the hypoxia and ferroptosis score-related signature can be used to predict OS in the TCGA test set (c) and the GSE31210 dataset (f).

Table 3: Association of clinical characteristics with risk group in the TCGA test set.

| Characteristic | Total (N = 135) | High (N = 56) | Low (N = 79) | P-value |
|---|---|---|---|---|
| Gender: Female | 73 (54.1%) | 28 (50.0%) | 45 (57.0%) | 0.532 |
| Gender: Male | 62 (45.9%) | 28 (50.0%) | 34 (43.0%) | |
| Age ≥60 years | 97 (71.9%) | 35 (62.5%) | 62 (78.5%) | 0.0658 |
| Age <60 years | 38 (28.1%) | 21 (37.5%) | 17 (21.5%) | |
| Stage I | 73 (54.1%) | 25 (44.6%) | 48 (60.8%) | 0.159 |
| Stage II | 31 (23.0%) | 13 (23.2%) | 18 (22.8%) | |
| Stage III | 23 (17.0%) | 13 (23.2%) | 10 (12.7%) | |
| Stage IV | 8 (5.9%) | 5 (8.9%) | 3 (3.8%) | |
| T1 | 48 (35.6%) | 16 (28.6%) | 32 (40.5%) | 0.506 |
| T2 | 68 (50.4%) | 31 (55.4%) | 37 (46.8%) | |
| T3 | 13 (9.6%) | 7 (12.5%) | 6 (7.6%) | |
| T4 | 5 (3.7%) | 2 (3.6%) | 3 (3.8%) | |
| TX | 1 (0.7%) | 0 (0%) | 1 (1.3%) | |
| M0 | 91 (67.4%) | 35 (62.5%) | 56 (70.9%) | 0.381 |
| M1 | 8 (5.9%) | 5 (8.9%) | 3 (3.8%) | |
| MX | 36 (26.7%) | 16 (28.6%) | 20 (25.3%) | |
| N0 | 86 (63.7%) | 31 (55.4%) | 55 (69.6%) | 0.094 |
| N1 | 27 (20.0%) | 13 (23.2%) | 14 (17.7%) | |
| N2 | 18 (13.3%) | 11 (19.6%) | 7 (8.9%) | |
| N3 | 1 (0.7%) | 1 (1.8%) | 0 (0%) | |
| NX | 3 (2.2%) | 0 (0%) | 3 (3.8%) | |

Table 4: Association of clinical characteristics with risk group in the GSE31210 dataset.

| Characteristic | Total (N = 226) | High (N = 106) | Low (N = 120) | P-value |
|---|---|---|---|---|
| Gender: Female | 121 (53.5%) | 53 (50.0%) | 68 (56.7%) | 0.385 |
| Gender: Male | 105 (46.5%) | 53 (50.0%) | 52 (43.3%) | |
| Age ≥60 years | 130 (57.5%) | 58 (54.7%) | 72 (60.0%) | 0.505 |
| Age <60 years | 96 (42.5%) | 48 (45.3%) | 48 (40.0%) | |
| Stage I | 168 (74.3%) | 65 (61.3%) | 103 (85.8%) | <0.001 |
| Stage II | 58 (25.7%) | 41 (38.7%) | 17 (14.2%) | |
| Ever-smoker | 111 (49.1%) | 58 (54.7%) | 53 (44.2%) | 0.147 |
| Never-smoker | 115 (50.9%) | 48 (45.3%) | 67 (55.8%) | |
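The KM and time-dependent ROC evaluation can be sketched as follows; survminer and timeROC are common choices for these analyses, although the paper does not name the specific packages used (assume a data frame `df` with `time` in years, `status`, `risk_score`, and `risk_group`):

```r
library(survival)
library(survminer)
library(timeROC)

# Kaplan-Meier curves by risk group with a log-rank P-value.
km <- survfit(Surv(time, status) ~ risk_group, data = df)
ggsurvplot(km, data = df, pval = TRUE)

# Time-dependent ROC at 1, 3, and 5 years; roc$AUC gives the reported AUCs.
roc <- timeROC(T = df$time, delta = df$status, marker = df$risk_score,
               cause = 1, times = c(1, 3, 5), weighting = "marginal")
roc$AUC
```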
### 3.6. Correlation Analysis of Risk Score with Clinical Characteristics of LUAD
We examined the distribution of patient risk scores according to different clinical characteristics. Interestingly, the distribution of risk scores was strongly related to stage: risk scores of patients in stage III were increased compared with those in stage I (P<0.05; Figure 6(a)). For T stage (Figure 6(b)), LUAD patients in T4 had the highest risk scores, which differed significantly from T1 and T2 but were comparable to T3; patients in T3 had slightly higher risk scores than those in T1 (P<0.01). For N stage (Figure 6(c)), patients in N2 had higher risk scores than those in N0 (P<0.01). Although the risk score in stage N3 was lower than in stage N2 (P<0.05), the N3 sample size was too small for this result to be considered valid. We then investigated the impact of clinical characteristics on OS using KM survival analysis. In the stratified analysis of stage (Figure 6(d)), patients with a lower stage were more likely to have a better prognosis, mirroring the distribution of risk score levels. In the stratified analysis of T stage (Figure 6(e)), T1 had better OS, whereas T3 and T4 exhibited a poor prognosis; the worst prognosis, in T4, was consistent with the earlier result that T4 patients had the highest risk scores. For N stage (Figure 6(f)), N3 contained only one LUAD sample and its impact on prognosis was therefore ignored; patients in N0 had the longest survival, and those in N2 the shortest. The distribution of risk scores and stratified prognosis according to other clinical characteristics, including age, sex, and M stage, are detailed in Supplementary Figure 2.

Figure 6: Wilcoxon analysis of risk score distributions across (a) stages, (b) T stages, and (c) N stages in the TCGA-LUAD cohort; K-M survival curves of patients with different (d) stages, (e) T stages, and (f) N stages. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001.

### 3.7. Subgroup Analysis of the Prognostic Signature
After establishing the correlation between the hypoxia and ferroptosis score-related gene signature and the aforementioned clinicopathological traits, we asked whether the model's prognostic efficacy holds within clinical subgroups. Patients were stratified into the indicated subgroups according to age, sex, pathological tumor stage, pathological T stage, pathological N stage, and pathological M stage. The signature was able to separate prognoses in all subgroups except the T3-T4 and M1 subsets, implying a clinically and statistically significant prognostic value (Figure 7).

Figure 7: K-M survival analysis of the 14-gene risk score in subgroups: (a) younger than 60 years and 60 years or older, (b) male and female, (c) stages I-II and stages III-IV, (d) T1-2 and T3-4, (e) N0 and N+, and (f) M0 and M1.
### 3.8. Independent Prognostic Role of the Risk Score
We investigated whether the risk score is an independent prognostic factor in LUAD patients using UCR and MCR. In the TCGA set, UCR analysis showed that the risk score, stage, T stage, and N stage were significantly related to LUAD prognosis (Figure 8(a)). These variables (P<0.05) were then entered into the MCR analysis, which identified the hypoxia and ferroptosis score-related gene signature (risk score) and stage as two independent prognostic factors in LUAD patients (Figure 8(b)).

Figure 8: (a) Forest plot of the UCR analysis in LUAD. (b) Forest plot of the MCR analysis in LUAD. (c) Prognostic nomogram predicting OS in LUAD. (d) Calibration plots of the nomogram for predicting OS in the TCGA-LUAD dataset.

The OS of LUAD patients was then predicted with a compound nomogram incorporating the risk score and stage, developed to provide a more accurate prediction tool for clinical practice (Figure 8(c)). The calibration plots showed that the prognostic nomogram predicted patient survival accurately, with only slight divergence from the actual outcomes (Figure 8(d)).
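The independence check can be sketched with the survival package (hypothetical data frame `df` and column names):

```r
library(survival)

# Univariate Cox regression (UCR) for each candidate factor.
vars <- c("riskScore", "stage", "t_stage", "n_stage", "age", "gender")
ucr <- lapply(vars, function(v)
  summary(coxph(as.formula(paste("Surv(time, status) ~", v)), data = df)))

# Multivariate Cox regression (MCR) on the UCR-significant variables.
mcr <- coxph(Surv(time, status) ~ riskScore + stage + t_stage + n_stage,
             data = df)
summary(mcr)   # HRs and 95% CIs; risk score and stage remained independent
```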
### 3.9. Differences in Hallmark Gene Sets between the Two Patient Groups
GSVA of the hallmark gene sets showed that signaling pathways converging on numerous biological processes differed between the two groups. Notably, hypoxia, TNFα signaling via NF-κB, mitotic spindle, and glycolysis were decreased in the low-risk group, while bile acid metabolism, pancreatic beta cells, and KRAS signaling were preferentially associated with the other risk group (Figure 9 and Supplementary Table 10).

Figure 9: Gene set variation analysis. Differences in hallmark gene set activity, scored by GSVA, between the two groups; t values were computed with a linear model, with |t| > 2 as the cutoff.

### 3.10. TME Infiltration Pattern of LUAD Based on the Risk Score
The ssGSEA algorithm was applied to the data to investigate how the risk score relates to TME components. According to the heatmaps and Wilcoxon tests performed on the TCGA-LUAD datasets, the infiltration of several TME components, such as eosinophils and immature dendritic cells, was increased in the low-risk group, whereas the infiltration of activated CD4 T cells and other cell types was higher in the high-risk group (Figure 10).

Figure 10: (a, b) Heatmaps of the distributions of immune cell subsets, fibroblasts, and endothelial cells assessed via the MCP-counter (a) and ssGSEA (b) algorithms in the TCGA-LUAD cohort. (c, d) Wilcoxon analysis of the differing TME subtype distributions between the two groups in the TCGA-LUAD cohort. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001.

### 3.11. Validation of Seven Selected Prognostic Genes by qRT-PCR
According to the expression profiles of the identified DEGs (Supplementary Table 5), TNS4, WFDC2, and ITGA2 were highly expressed, while MAPK4, SMAD9, KLK11, and LAMA3 were downregulated in LUAD samples from the TCGA dataset. As shown in Figure 11, the high expression of TNS4, WFDC2, and ITGA2 and the low expression of MAPK4, SMAD9, KLK11, and LAMA3 in LUAD tissues (n = 10) were confirmed relative to the expression levels in the ANTs (n = 10).

Figure 11: The high expression of TNS4 (a), WFDC2 (b), and ITGA2 (c) and the low expression of MAPK4 (d), SMAD9 (e), KLK11 (f), and LAMA3 (g) in LUAD tissues compared with the paired paracancerous tissues.
## 4. Discussion
Lung cancer is one of the most common malignancies globally. Nearly 80% of lung cancer patients have NSCLC, and nearly 50% have LUAD [42]. LUAD is a malignant tumor of the lungs with a poor prognosis [43], and although there have been breakthroughs in treatment, the OS rate of these patients remains low.

Ferroptosis is a particular kind of programmed cell death [17]. Ferroptosis-related research in lung cancer has mostly focused on identifying related biomarkers that could induce ferroptosis [16, 44–46]. Hypoxia is likewise related to high proliferation rates in tumor cells [47], and tumor hypoxia has a broad range of consequences across biological systems, including metabolic changes, angiogenesis, and metastasis [48–50]. Numerous hypoxia-associated genes are associated with lung adenocarcinoma [51, 52]; however, no high-throughput study has yet explored their possible prognostic value in LUAD.

Here, ferroptosis and hypoxia Z-scores were estimated for each sample as clinical features, based on the expression of the ferroptosis- and hypoxia-related genes identified in each sample. We obtained 23 modules; the turquoise module showed the strongest correlation with the ferroptosis score (cor = −0.69, P=3e−10) and the hypoxia score (cor = −0.63, P=8e−68), and the red module was also strongly correlated with both scoring phenotypes.
We then identified 152 genes common to the list of 8,974 hypoxia and ferroptosis score-related genes and the 1,969 LUAD-related DEGs, which were defined as DE-hypoxia and ferroptosis score-related genes. Functional annotations obtained from Metascape indicated that these genes were mainly enriched in "transcriptional misregulation in cancer," "endopeptidase inhibitor activity," and "positive regulation of cell projection organization." Recent research has shown that overexpression of oncogenic transcription factors can rewire a cell's core autoregulatory circuitry, and mutations in transcription factor genes have long been recognized to induce tumorigenesis [53]. It may therefore be possible to intervene in this pathway to prevent the development of LUAD.

Of the 152 DE-hypoxia and ferroptosis score-related genes, 17 were associated with prognosis in univariate Cox analysis, which identified six genes as protective markers and 11 genes as risk factors for patients with LUAD. LASSO Cox regression then selected 14 genes (MAPK4, TNS4, WFDC2, FSTL3, ITGA2, KLK11, PHLDB2, VGLL3, SNX30, KCNQ3, SMAD9, ANGPTL4, LAMA3, and STK32A) to construct the prognostic gene signature and to develop a prognostic model classifying LUAD patients into two risk groups; lower-risk patients tended to live longer. Additionally, we built a nomogram using MCR analysis and demonstrated its predictive ability using ROC curves, calibration plots, and decision curves.

MAPK4 overexpression promotes LUAD progression [54]. Tensin 4 (TNS4) is involved in MET-induced cell motility and is connected to the GPCR signaling pathway; one study found that increased TNS4 expression leads to poor treatment outcomes in gastric cancer patients [55]. WFDC2 is upregulated in lung cancer [56–58], and its clinical application as a serum tumor marker in the early diagnosis and efficacy monitoring of lung cancer has been recognized [59]. In addition, in a study of individuals with LUAD, Song et al. [34] reported that WFDC2 was substantially related to the TNM stage of LUAD and the prognosis of patients. Recent studies have reported substantial overexpression of FSTL3 in a subset of cancers [60–62], and in patients with NSCLC and thyroid carcinoma, FSTL3 expression is substantially linked to lymph node metastasis and poor prognosis [60, 61]. ITGA2 overexpression is essential for tumor development, metastasis, and motility; this molecule activates the STAT3 signaling pathway, thereby promoting tumor progression [63]. KLK11 protein levels are higher in NSCLC serum, although KLK11 mRNA levels are lower in cancerous lung tissues than in ANTs [64]; leakage of these secreted proteins into the systemic circulation due to disruption of lung structure during angiogenesis or development may explain the discrepancy between low mRNA levels and elevated serum protein levels in lung cancer [65]. PHLDB2 has been well studied in a variety of malignancies [66, 67]; its primary role is to control migration through interactions with CLASPs, Prickle1, and liprin-α1 [68, 69], and according to Ge et al., patients with lower PHLDB2 expression have a better prognosis [70]. VGLL3 is a unique Ets1-interacting partner that inhibits adipocyte differentiation and controls trigeminal nerve development [71].
VGLL3 also acts as a coactivator of mammalian TEAD/TEF transcription factors and is implicated in many kinds of cancers, including breast, colon, and lung cancers [72, 73]. Methylation, phosphorylation [74], and dephosphorylation of SMAD9 may function in the progression of lung cancer [75]. Tumor cell-derived human angiopoietin-like protein 4 (ANGPTL4) has been shown to disrupt vascular endothelial cell junctions, enhance pulmonary capillary permeability, and facilitate tumor cell protrusion through the vascular endothelium, processes involved in lung cancer [76]. Through the synergistic action of AP-1 binding sites [77], the epithelial enhancer mediates the production of laminin subunit alpha 3 (LAMA3), which is associated with tumor progression; Xu et al. [78] reported that inhibition of LINC00628 decreased LUAD cell proliferation and drug resistance by lowering the methylation of the LAMA3 promoter. STK32A is important in cellular homeostasis, transcription factor phosphorylation, and cell cycle regulation, and its overexpression leads to enhanced NSCLC cell progression, enhanced NF-κB p65 phosphorylation, and inhibition of apoptosis [79]. SNX30 encodes sorting nexin-30, a member of the sorting nexin family, a large class of cytoplasmic proteins with membrane-binding potential conferred by a phospholipid-binding domain [80]. KCNQ3 encodes a protein that regulates neuronal excitability, and GCSH encodes a mitochondrial protein of the glycine cleavage system [81]; however, the mechanisms of action of these two genes in cancer remain largely unstudied.

KM survival analyses demonstrated that the 14 prognosis-associated genes may contribute to the initiation and development of LUAD in certain individuals. Notably, the risk scores of the 14-gene prognostic profile were strongly correlated with OS in LUAD patients in the two TCGA cohorts and the GEO validation cohort. We also found that the prognostic gene profile was linked with the LUAD survival variables (T, N, M, stage, sex, and age). Furthermore, the nomogram of independent risk factors, which included the risk score model, had good predictive value and might assist clinicians in making optimal treatment choices to enhance the OS of patients with LUAD. These results suggest that hypoxia- and ferroptosis-related genes were indispensable in constructing prognostic models for LUAD and may have the potential to act as OS biomarkers.

Our findings suggest that the signaling pathways converging on various biological processes differ between the two groups: hypoxia, TNFα signaling via NF-κB, mitotic spindle, and glycolysis were significantly downregulated in the low-risk group. Additionally, the 14 prognosis-related genes in LUAD, including the hypoxia-related gene ANGPTL4, were differentially expressed in the tumor tissues. This finding reflects the dependence of LUAD on hypoxia and the heterogeneity of hypoxia responses between the low- and high-risk groups; such heterogeneity promotes phenotypic variety of cancer cells in the TME, which in turn promotes metastasis and therapeutic resistance. Li et al. [82] demonstrated that suppressing NLRP2 boosted cell proliferation through NF-κB signaling activation, resulting in an EMT phenotype in LUAD cells.
Following this assessment, KM survival analyses demonstrated that the 14 prognosis-associated genes may contribute to the initiation and development of LUAD in certain individuals. Notably, risk scores from the 14-gene prognostic profile were strongly correlated with OS in LUAD patients in the two cohorts split from TCGA and in one GEO validation cohort. We also found that the prognostic gene profile was associated with the clinicopathological variables of LUAD (T, N, M, stage, sex, and age). Furthermore, the nomogram of independent risk factors, which included the risk score model, had good predictive value and might assist clinicians in making optimal treatment choices to enhance the OS of patients with LUAD in the future. These results suggest that hypoxia- and ferroptosis-related genes were indispensable in the construction of prognostic models for LUAD development and that they may have the potential to act as OS biomarkers.

Our findings suggested that the signaling pathways converging on various biological processes differ between the two groups: hypoxia, TNFα signaling via NF-κB, mitotic spindle, and glycolysis were significantly downregulated in the low-risk group. Additionally, 14 prognosis-related genes in LUAD, including one hypoxia-related gene, ANGPTL4, were significantly expressed in the tumor tissues. This finding reflects the dependence of LUAD on hypoxia and the heterogeneity of hypoxia responses in the low- and high-risk groups. Hypoxia heterogeneity indicates its involvement in promoting phenotypic variety of cancer cells in the TME, which promotes metastasis and therapeutic resistance. Li et al. [82] demonstrated that suppressing NLRP2 boosted cell proliferation through NF-κB signaling activation, resulting in an EMT phenotype in LUAD cells. Therefore, the regulatory pathways involving NF-κB also function in the progression of LUAD. The evidence implies that LUAD pathogenesis is a complicated biological process involving multiple genes, and dysregulation of multiple genes may contribute to the progression of LUAD through a variety of distinct processes. The differences in GSVA signatures and prognostic genes between the two groups deserve more in-depth study. These discoveries may, in general, open new avenues for investigating additional molecular mechanisms of LUAD for academics and physicians.

Significant differences in immune-infiltrating cell types between the two groups were observed in this study. Interestingly, the enrichment fraction of activated CD4+ T cells and neutrophils was enhanced in the high-risk group, whereas that of eosinophils and immature dendritic cells was higher in the low-risk group. Neutrophils that infiltrate tumor tissue, known as tumor-associated neutrophils (TANs), also play a role in antitumor immunity; in lung cancer, TANs stimulate T cell responses rather than exert an immunosuppressive effect [83]. In LUAD, overexpression of bridging granule genes is associated with a significant enhancement in infiltration of activated CD4 and CD8 T cells [84]. We hypothesize that the inflammatory response induced by immune cells may accelerate tumor cell mutations, which in turn may affect patient prognosis. The specific mechanisms by which the tumor immune microenvironment affects prognosis remain to be explored.

Here, a prognostic model of LUAD with general applicability was successfully developed and validated based on hypoxia and ferroptosis. In addition, we performed experiments to validate the 14 molecules in the model; of these, seven were confirmed by qRT-PCR to be significantly different between tumor and paracancerous tissues. However, our study has some limitations. Because studies of hypoxia and ferroptosis in tumors are still scarce, the information provided by the MSigDB and FerrDB databases may be inaccurate, as the references were manually curated from previous studies. More studies are needed to validate the hypoxia- and ferroptosis-regulatory roles of these prognostic genes in LUAD [3]. Both the TCGA-LUAD cohorts and one GEO cohort were used to construct and validate the predictive signature; this hypoxia- and ferroptosis-based signature would be more reliable if examined in a prospective clinical trial cohort at our research center.

## 5. Conclusion

Hypoxia and ferroptosis are two major mechanisms associated with lung adenocarcinoma development. In this research, candidate genes associated with hypoxia and ferroptosis scores were identified; as a result, we found a 14-gene signature and developed a predictive nomogram that could accurately predict OS in individuals with LUAD. These results may be useful in facilitating medical decision-making and personalizing therapeutic interventions.

---

*Source: 1022580-2022-09-19.xml*
# Liquid Embolization of Peripheral Arteriovenous Malformations with Ethylene-Vinyl Alcohol Copolymer in Neonates and Infants

**Authors:** Jochen Pfeifer; Walter A. Wohlgemuth; Hashim Abdul-Khaliq

**Journal:** Cardiovascular Therapeutics (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1022729

---

## Abstract

In the postnatal period, extensive peripheral arteriovenous malformations (AVM) are associated with high morbidity, especially when localized in the liver. Their urgent treatment is always a challenging problem in neonates and infants. We analyzed four consecutive children aged three days to three years who underwent eight liquid embolization procedures with ethylene-vinyl alcohol copolymer. The AVM were situated on the thoracic wall, in the liver, and on the lower leg. In three cases, the malformations showed total regression. The tibial AVM degenerated widely. If impaired beforehand, cardiac or hepatic function normalized after the interventions. There were no embolization-associated complications such as nontarget embolization or tissue ischemia. We conclude that application of ethylene-vinyl alcohol copolymer seems to be a safe therapeutic option and can be used in neonates and infants with peripheral AVM, taking the agent’s characteristics into consideration. Nevertheless, there are still hardly any data concerning young children.

---

## Body

## 1. Introduction

Vascular anomalies in children are rare congenital malformations. According to the International Society for the Study of Vascular Anomalies (ISSVA) [1, 2], they are divided into two subgroups: vascular tumors and vascular malformations. The most common vascular tumors are benign hemangiomas. Vascular malformations are classified into venous, capillary, lymphatic, arteriovenous, and combined malformations, depending on their main vessel structure and hemodynamics. Arteriovenous malformations (AVM) account for about 8% of all vascular malformations [3]. This type is characterized by irregular feeding arteries shunting blood directly into a vein through a “nidus” consisting of arterial microfistulae [4, 5]. AVM belong to the high-flow vascular malformations. Peripheral AVM are located outside the central nervous system.

AVM tend to grow progressively. There are various clinical symptoms and manifestations of AVM depending on their dimension, localization, and shunt volume affecting the circulatory system. The previously published Schobinger score provides a clinical staging of AVM. Its four clinical stages are quiescence (stage I; cutaneous blush or warmth), expansion (stage II; bruit or thrill, increasing size, pulsation, and no pain), local destruction (stage III; pain, bleeding, infection, skin necrosis, or ulceration), and decompensation (stage IV; high-output cardiac failure) [4, 6, 7]. In contrast, coagulopathy is more often found in slow-flow malformations [8].

If associated with severe systemic symptoms, extensive AVM require urgent treatment. The therapy of AVM in young children is always a challenging problem because of the patients’ small size and the restricted adaptation of their circulatory system. The therapeutic strategy depends on the localization, the size, and the clinical manifestation. Small AVM may be resected surgically, whereas resection is not suitable for extensive lesions due to their higher perfusion by multiple feeding arteries and hence the higher risk of perioperative bleeding and recurrence.
Selective ligation or embolization of feeding arteries is not promising, for it leads to further angiogenesis without AVM involution. Conservative and medical treatment options have also been found to be inadequate [7, 9].

Percutaneous and transvascular embolization are considered first-line therapy in advanced-stage AVM according to the Schobinger score [5, 7, 10]. For transcatheter use, various mechanical occlusion devices (coils, vascular plugs) and liquid embolic agents are available. The latter are divided into sclerosing agents (ethanol), polymerizing agents (cyanoacrylate or ethylene-vinyl alcohol copolymer = EVOH), and particulate agents [11]. To avoid relapse, complete occlusion of the vascular nidus has to be the goal of the embolization. Of note, exclusive embolization of single feeding arteries has the same unsatisfactory effect as their surgical ligation. Therefore, implantation of coils or plugs is appropriate in addition to liquid embolization in selected cases [7]. EVOH was originally used in interventional neuroradiology. In addition, there are several published studies of successful embolization of peripheral AVM with EVOH, primarily in adults [5, 9, 12] and children older than 3 years [13]. The aim of our retrospective study was to present our experience in urgent AVM embolization using EVOH in neonates and infants in different localizations.

## 2. Material and Methods

### 2.1. Patients

We performed 8 liquid embolization procedures in 4 consecutive children, two of them male. The parents gave written informed consent the day before the intervention. Two AVM were located on the thoracic wall, one was a pretibial AVM, and one newborn had a prenatal diagnosis of a huge intrahepatic AVM. The patients’ characteristics are summarized in Table 1.

Table 1: Patients’ characteristics.

| Patient (no.) | Gender | Age at intervention(s) | Body weight (kg) | Localization of the AVM | Symptoms | Schobinger stage |
|---|---|---|---|---|---|---|
| 1 | f | 3 years | 12.6 | Chest | Heart failure | IV |
| 2 | m | 14, 16, and 24 months | 11.4, 12.0, and 13.5 | Chest | Acute AVM hemorrhage | III |
| 3 | f | 4 months and 8 months | 6.6 and 9.7 | Right tibia | Increasing size | II |
| 4 | m | 3 and 16 days | 3.4 and 3.5 | Liver | Heart failure, pulmonary hypertension | IV |

Patient 1 had undergone surgical correction of an atrioventricular septal defect at the age of 5 months. A thoracic AVM showed increasing size (finally 7×5×2.5 cm) with increasing shunt flow. During follow-up, the arteriovenous shunt was associated with progressively increasing cardiac output and mitral regurgitation, and cardiomegaly occurred. Neither anticongestive treatment with propranolol nor coil occlusion of feeding arteries deriving from the subclavian artery with 5 coils (MReye© Flipper© detachable coils, Cook© Medical, Bloomington, Indiana, USA) succeeded in significantly reducing AVM perfusion, since further feeding arteries were also recruited from the intercostal arteries. Liquid embolization was then indicated and successfully performed.

Patient 2 presented with a growing AVM (9×8×2.5 cm) of the thoracic wall. At the age of 6 months, he was admitted to the emergency department with severe superficial bleeding due to accidental skin erosions over the AVM. Emergency hemostasis was achieved by Nd:YAG laser treatment and compression dressings. After stabilization and healing of the superficial skin erosions, we decided to subject the child to transcatheter embolization using EVOH.

Patient 3 had a growing pretibial AVM (7×4×4.2 cm) of firm consistency on the right leg.
Embolization was indicated because of imminent rupture due to hyperperfusion of the AVM.

In patient 4, AVM and cardiomegaly were diagnosed prenatally. The major part of the AVM was located in liver segments 2 and 3, and giant veins drained into the inferior vena cava. After birth, the newborn was critically ill because of severe high-output heart failure and pulmonary hypertension. Highly urgent embolization therapy was indicated.

### 2.2. Transcatheter Embolization

All procedures were performed by percutaneous transarterial catheterization via the right femoral artery. After puncture, 4 French (Fr) introducers were inserted in each case, and 4 Fr guiding catheters were used for angiography and for guiding the microcatheters. In the case of the pretibial AVM, the sheath was introduced antegradely into the femoral artery. As antithrombotic therapy, heparin was administered at a dose of 100 IU per kilogram body weight.

EVOH (Onyx©, EV3, Irvine, California, USA) was used for liquid embolization. This agent is an ethylene-vinyl alcohol copolymer blended with radiopaque tantalum and dissolved in dimethyl sulfoxide (DMSO). In contact with aqueous fluids such as water or blood, EVOH precipitates by polymerization and hardens from the surface inward; when applied intravascularly, it therefore builds an occlusive cast. EVOH is available in 3 concentrations of increasing viscosity: Onyx©-18, -20, and -34, containing 6%, 6.5%, and 8% EVOH, respectively. The lower the concentration, the deeper it penetrates into the small peripheral capillary vessels.

The interventional embolization treatment with EVOH was performed using the previously published plug-and-push technique [7, 10, 14]: a DMSO-compatible microcatheter (usually with a detachable tip; Apollo Onyx Delivery Microcatheter, EV3, Irvine, California, USA) is selectively inserted into the nidus via a feeding artery. The catheter’s dead space has to be flushed with DMSO to avoid obstruction by intraluminal polymerization of the EVOH. Before injection, EVOH has to be shaken for 10 to 20 minutes on a vibrating mixer to obtain a homogeneous suspension. It is then injected very slowly and patiently with 1 ml syringes under fluoroscopy. An EVOH plug forms around the catheter tip and facilitates further antegrade injection. In addition, the plug prevents retrograde flow of EVOH, thereby directing additional flow toward the nidus and building a coherent cast. The intention of this intervention is to embolize and close as many microfistulae within the AVM as possible. When finished, the microcatheter can be removed by detaching its tip, which remains sealed within the plug; nondetachable catheters have to be withdrawn out of the EVOH plug. The maximal dose of EVOH is 0.5–1.0 ml per kilogram body weight in a single procedure (illustrated by the brief calculation sketch below). In cases of more extensive lesions, multiple procedures may have to be performed.
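The two weight-based rules above — heparin at 100 IU/kg and a per-session EVOH ceiling of 0.5–1.0 ml/kg — can be written as a short calculation. The sketch below is purely illustrative (our own, not part of the published protocol); the helper names are ours, and the example weights are taken from Table 1.

```python
def heparin_dose_iu(weight_kg: float) -> float:
    """Antithrombotic heparin bolus at 100 IU per kg body weight."""
    return 100.0 * weight_kg

def evoh_session_window_ml(weight_kg: float) -> tuple[float, float]:
    """Stated per-session EVOH dose window of 0.5-1.0 ml per kg body weight."""
    return 0.5 * weight_kg, 1.0 * weight_kg

# Example weights from Table 1 (first intervention of each patient).
for label, weight_kg in [("patient 1 (chest)", 12.6),
                         ("patient 3 (tibia)", 6.6),
                         ("patient 4 (liver)", 3.4)]:
    lo, hi = evoh_session_window_ml(weight_kg)
    print(f"{label}: heparin {heparin_dose_iu(weight_kg):.0f} IU, "
          f"EVOH {lo:.1f}-{hi:.1f} ml per session")
```

Cross-checked against Table 2, patient 4's 3.0 ml per session (1.5 ml Onyx-18 plus 1.5 ml Onyx-34 at 3.4 kg, about 0.9 ml/kg) falls inside the stated 0.5–1.0 ml/kg window; in practice, of course, clinical judgment rather than this arithmetic governs the actual dose.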
In the patients with thoracic AVM (patients 1 and 2), the procedures were conducted under analgosedation with propofol, midazolam, and ketamine. The embolization was performed via the feeding arteries originating from the subclavian and intercostal arteries (see Figure 1, which shows patient 2).

Figure 1: Imaging of the thoracic AVM in patient 2. (a) Angiographic imaging after injection of Solutrast 300® via a transarterial catheter (asterisk) into the feeding arteries deriving from the right subclavian artery. (b) Angiographic imaging after injection of radiopaque Onyx®. (c) Radiographic imaging (X-ray) of the thorax after the last embolization.

The patient with the pretibial AVM (patient 3) was intubated, ventilated, and under deep analgosedation. The feeding arteries were embolized via the anterior tibial artery. During the first embolization, there was temporary hypoperfusion caused by extension of the EVOH plug into the anterior tibial artery; immediate dilatation with a 2 mm balloon was necessary. In a further catheterization, 1 microcoil (diameter 4 mm, Concerto© Helix, Medtronic, Irvine, California, USA) was implanted into a residual AVM artery. A transient vascular spasm of this artery during this procedure was self-limiting within a few minutes. Arterial perfusion of the lower leg quickly normalized in both cases.

The newborn with the large, prenatally diagnosed hepatic AVM (patient 4) had to be intubated and ventilated immediately postnatally due to severe high-output cardiac failure and pulmonary hypertension. After interdisciplinary consultation, we decided to embolize the large hepatic AVM with EVOH. Two interventional procedures were necessary within the first 16 days of life. After angiography of the AVM, including selective visualization of the feeding arteries via the common hepatic artery and the celiac trunk, catheterization and embolization of the feeding arteries were performed. Finally, a right hepatic arterial branch and a left phrenic arterial branch were occluded using 10 detachable microcoils (Concerto© Helix, Medtronic, Irvine, California, USA); the coil diameters were 2 mm (3 coils), 4 mm (5 coils), and 5 mm (2 coils) (Figure 2 shows patient 4). Sufficient antegrade perfusion of the liver parenchyma was preserved, in contrast to reported cases with complete occlusion of the main hepatic artery.

Figure 2: Imaging of the hepatic AVM in patient 4. (a) Angiographic imaging after injection of Solutrast 300® into the feeding arteries via a transarterial catheter (asterisk). (b) Angiographic imaging after injection of radiopaque Onyx® and Concerto Helix© coils (arrows; two transvascular catheters are marked by asterisks). (c) Radiographic imaging (X-ray) of the thorax and upper abdomen after the last embolization, with persistent extensive cardiomegaly.

Embolization methods and results are summarized in Table 2.

Table 2: Embolization methods and results.

| Patient (no.) | Embolization sessions (n) | Onyx© concentration/dose | Additional coils | Complications | Follow-up (months) | Result |
|---|---|---|---|---|---|---|
| 1 | 1 | 18 / 3 ml | 5 Cook© coils | No | 56 | Complete AVM involution, normalized cardiac function |
| 2 | 3 | (1) 18 / 6 ml; (2) 18 / 2.5 ml; (3) 18 / 3.2 ml | None | No | 24 | Complete AVM involution |
| 3 | 2 | (1) 18 / 0.05 ml; (2) 18 / 0.1 ml | 1 Concerto Helix© coil | Transient arterial obstruction | 12 | Partial AVM involution |
| 4 | 2 | (1) 18 / 1.5 ml and 34 / 1.5 ml; (2) 18 / 1.5 ml and 34 / 1.5 ml | 10 Concerto Helix© coils | (1) SIRS; (2) atrial flutter | 11 | Complete AVM involution, normalized cardiac and hepatic function, normalized pulmonary pressure |

### 2.3. Follow-Up

For follow-up, the patients underwent clinical examination, ultrasound, color Doppler imaging, and echocardiography. Blood parameters were obtained in the case of the hepatic AVM.
## 3. Results and Discussion

### 3.1. Results

Patients 1 and 2 underwent postinterventional follow-up by clinical examination, ultrasound, and echocardiography at three months and annually thereafter. Due to local hypoperfusion, the AVM underwent fatty involution and diminished; in both cases, it regressed to a minor soft swelling on the chest wall. Color Doppler ultrasound showed no revascularization, and surgical resection was not necessary. Physical resilience was age-appropriate. Echocardiography showed normal cardiac function, and the mitral valve competence of patient 1 improved significantly.

In patient 3, the AVM partially degenerated, with reduction in size, paling, and detumescence within four months after the second embolization (see Figure 3). Leg perfusion and motility were normal, and AVM rupture or hemorrhage did not occur. A residual feeding artery was occluded by a microcoil (Concerto© Helix, as mentioned above, diameter 4 mm) in a third intervention.

Figure 3: Picture of the tibial AVM in patient 3. (a) Before EVOH embolization. (b) At follow-up.

In these three patients with superficial AVM, a slight tattoo effect was seen as minor black skin discoloration.

In patient 4, a systemic inflammatory response syndrome (SIRS) with hypotension, increased shock parameters, acidosis, and coagulopathy occurred after the first procedure. At the time of the procedure, the newborn was already in a very unstable hemodynamic condition due to the high-output cardiac failure and suprasystemic pulmonary hypertension.
Postinterventionally, intensive care therapy was needed, with administration of catecholamines, buffering, and transfusion of fresh frozen plasma and thrombocytes. Weaning from ventilation was feasible on day seven. Before the second procedure, the newborn was electively reintubated and ventilated, and glucocorticoids were administered to avoid a possible allergic or hyperinflammatory reaction. At the end of the intervention, electrical cardioversion was necessary because of atrial flutter. SIRS did not recur. The child remained stable and was extubated on day 3 after the intervention. The mean pulmonary pressure immediately decreased from 48 to 36 mmHg during the procedure and normalized within the next 3 weeks. Hepatic, renal, and cardiac parameters normalized as well (see Table 3; a small range-check sketch follows at the end of this section); within one year after the embolization, all liver parameters had normalized. Several follow-up studies by color Doppler revealed no revascularization of the hepatic AVM.

Table 3: Laboratory summary of patient 4 (hepatic AVM).

| Parameter (normal value) | Before 1st embolization (peak value) | After 1st embolization (peak value) | After 2nd embolization (peak value) | At follow-up |
|---|---|---|---|---|
| ALAT (1–25 U/l) | 41 | 524 | 28 | 18 |
| ASAT (10–50 U/l) | 96 | 1,603 | 41 | 36 |
| NH3 (<102 μg/dl) | 81 | 73 | 110 | 54 |
| nt-proBNP (<62.9 pg/ml) | 29,334 | 152,238 | 14,009 | 173 |
| Creatinine (0.17–0.42 mg/dl) | 0.92 | 0.99 | 0.25 | 0.23 |

Abbreviations: ALAT: alanine aminotransferase; ASAT: aspartate aminotransferase; NH3: ammonia; nt-proBNP: N-terminal pro-brain natriuretic peptide.

As a side effect in all patients, there was an unpleasant smell, typically caused by the sulfur-containing DMSO, which regressed within about 48 hours.
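To make the laboratory follow-up concrete, the short sketch below (our own illustration, not part of the study) encodes the reference ranges from Table 3 and flags which of patient 4's values fall outside them at each timepoint. The dictionary keys and code structure are ours; the numbers are transcribed from Table 3 as reconstructed above.

```python
# Reference ranges from Table 3: parameter -> (low, high), units as in the table.
RANGES = {
    "ALAT (U/l)": (1, 25),
    "ASAT (U/l)": (10, 50),
    "NH3 (ug/dl)": (0, 102),
    "nt-proBNP (pg/ml)": (0, 62.9),
    "creatinine (mg/dl)": (0.17, 0.42),
}
# Patient 4's course: before 1st, after 1st, after 2nd embolization, follow-up.
COURSE = {
    "ALAT (U/l)": [41, 524, 28, 18],
    "ASAT (U/l)": [96, 1603, 41, 36],
    "NH3 (ug/dl)": [81, 73, 110, 54],
    "nt-proBNP (pg/ml)": [29334, 152238, 14009, 173],
    "creatinine (mg/dl)": [0.92, 0.99, 0.25, 0.23],
}
TIMEPOINTS = ["before 1st", "after 1st", "after 2nd", "follow-up"]

for param, values in COURSE.items():
    low, high = RANGES[param]
    flagged = [t for t, v in zip(TIMEPOINTS, values) if not low <= v <= high]
    if flagged:
        print(f"{param}: out of range at {', '.join(flagged)}")
    else:
        print(f"{param}: within range at all timepoints")
```

Run on these values, the sketch reproduces the narrative: the markers peak around the first procedure and largely normalize by follow-up (nt-proBNP, though nearly a thousandfold below its peak, is the one value still above its reference limit).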
## 4. Discussion

Embolization is the first-line therapy for AVM, and in recent years liquid embolic agents have gained importance. Several published reports consider EVOH suitable for embolization therapy: apart from AVM, EVOH is applied in the treatment of different vascular entities such as gastrointestinal or bronchial hemorrhage [15], endoleaks [16, 17], and central nervous system vascular malformations. It can also be used as an adjuvant therapy after surgical AVM resection [18].

Because AVM belong to the group of high-flow malformations, they are associated with high morbidity. Hepatic AVM in particular carry a poor prognosis and high mortality because of severe organ manifestations such as high-output cardiac failure, pulmonary hypertension, and hepatic failure shortly after birth [19, 20]. Due to the critical cardiorespiratory condition immediately after birth, it is challenging to treat hepatic AVM by invasive interventional or surgical procedures. In other reported cases, coil embolization of the main common hepatic artery [19], extended surgical resection, or liver transplantation were considered therapeutic options for large hepatic AVM. Although there are some studies reporting good clinical results and a low risk of adverse effects with EVOH embolization in children [13], there are still few published reports on young children, especially infants and newborns, and very few reports on the treatment of neonatal hepatic AVM with EVOH [21, 22].

In comparison to the published cases of Alexander et al. [19] and Hazebroek et al. [23], we were able to preserve the arterial supply of the liver: we avoided total occlusion of the common hepatic artery by diffuse EVOH embolization of the feeding arteries and only selective coil embolization of hepatic arterial branches. This may explain the complete normalization of liver function during follow-up. Surgical resection of hepatic AVM bears a high risk of hemorrhage [20], which could be avoided as well.

There are remarkable advantages of Onyx© compared to other liquid agents [14]: (i) the radiopacity of EVOH, due to the added tantalum powder, allows good controllability during fluoroscopically guided injection; (ii) the three available EVOH concentrations offer variable penetration depth into the AVM nidus.
Thus, an extensive occlusion of the malformation is possible; (iii) by polymerizing and building a coherent cast, EVOH reduces the risk of nontarget embolization by detached particles.

Rarely, there may be severe adverse effects of EVOH treatment, such as nontarget embolization with resulting organ ischemia, or asystole [24]. Another possible disadvantage is a long procedural duration with correspondingly high radiation exposure, because EVOH has to be injected very slowly and in fractionated steps. The extremity arteries of young children in particular are small and therefore vulnerable; vascular spasms or injuries (dissection, occlusion) can result from extended interventional procedures. In cases of subcutaneous AVM, there can be a dark discoloration due to superficially injected EVOH. This tattoo effect is caused by the contained black tantalum, and the discoloration usually vanishes within a few months. DMSO (most commonly used for cryopreservation of stem cells) is the essential solvent of EVOH, preventing its premature precipitation. It causes an intensive garlic-like smell that volatilizes within 48 to 72 hours after injection. Cardiovascular adverse effects of intravenously administered DMSO can appear in 1–14% of cases [25]. Arterial application of DMSO is always painful, so analgesic treatment during the procedure is necessary.

We performed 8 transcatheter embolizations in four consecutive children aged 3 days to 3 years with considerable success; in three cases, mechanical occlusive devices (coils) were additionally implanted. Three children had subcutaneous AVM, and one had an intrahepatic AVM. In both cases of thoracic wall AVM, liquid embolization with EVOH resulted in complete involution without any severe complication. The tibial AVM was diminished, and restriction of motility, hypoperfusion of the leg, rupture, and critical bleeding were avoided. The typical garlic-like smell and slight tattoo effects were the only temporary side effects.

A significant complication was noted in the neonate with the hepatic AVM. The child was in a critical general condition immediately after birth; high-output cardiac failure and multiorgan dysfunction already existed preinterventionally. Whether the postinterventional reaction was, at least partially, related to the DMSO or the Onyx© is unclear. Nevertheless, the child recovered after the second procedure with intensive care management, which resulted in normalized hepatic and cardiac function at long-term follow-up. Extended hepatic surgery and liver transplantation could be avoided.

In summary, our case series confirms previously published data considering EVOH suitable for the treatment of AVM. It is possible to probe deep into the nidus with microcatheters so that it can be occluded widely and persistently. On the basis of our experience as well, exclusive occlusion of main AVM vessels is not a promising therapy. Beyond that, we showed the feasibility of this approach in infants as well as in neonates within the first days of life. Catheter size and small peripheral vessels frequently limit interventional procedures; using microcatheters and guiding wires adapted from neuroradiology, AVM embolizations are possible via transarterial catheters with small lumina (4 Fr), appropriate for small femoral vessels and thus for patients of this age group. In contrast to procedures still performed in cases of hepatic AVM, surgical ligation of main hepatic arteries or liver resection can be avoided by this minimally invasive technique.
It is thus possible to spare unaffected liver tissue and to achieve normalization of biochemical hepatic markers and of liver function. We also demonstrated the practicability of this approach in emergency interventions in critically ill children suffering from high-output cardiac failure; postinterventional cardiac recovery occurred. Furthermore, rupture of superficial AVM accompanied by acute hemorrhage could be prevented. The main limitation of our study is the small number of patients included, which can be attributed to the low incidence of AVM as well as to the rare indication for interventional therapy in the first three years of life.

## 5. Conclusion

Percutaneous embolization therapy with EVOH can be performed safely and effectively in the treatment of peripheral AVM in neonates and infants. Additional occlusion with mechanical devices such as detachable microcoils may be helpful. Selective vessel and nidus embolization in neonatal hepatic AVM can prevent acute heart failure; chronic hepatic and other organ dysfunction, or even liver transplantation, can be avoided. Further follow-up studies with a larger population are necessary to confirm our results.

---

*Source: 1022729-2022-07-18.xml*
--- ## Abstract In the postnatal period, extensive peripheral arteriovenous malformations (AVM) are associated with high morbidity, especially when localized in the liver. Their urgent treatment is always a challenging problem in neonates and infants. We analyzed four consecutive children aged three days to three years who underwent eight liquid embolization procedures with ethylene-vinyl alcohol copolymer. The AVM were situated on the thoracic wall, in the liver, and on the lower leg. In three cases, the malformations showed total regression. The tibial AVM degenerated widely. If impaired beforehand, cardiac or hepatic function normalized after the interventions. There were no embolization-associated complications such as nontarget embolization or tissue ischemia. We conclude that application of ethylene-vinyl alcohol copolymer seems to be a safe therapeutic option and can be used in neonates and infants with peripheral AVM in consideration of the agent’s characteristics. Nevertheless, there are still hardly any data concerning young children. --- ## Body ## 1. Introduction Vascular anomalies in children are rare congenital malformations. According to the International Society for the Study of Vascular Anomalies (ISSVA) [1, 2], they are divided into two subgroups: vascular tumors and vascular malformations. The most common type of vascular tumors are benign hemangiomas. Vascular malformations are classified into venous, capillary, lymphatic, arteriovenous, and combined malformations, depending on their main vessel structure and hemodynamics. Arteriovenous malformations (AVM) account for about 8% of all vascular malformations [3]. This type is characterized by irregular feeding arteries, shunting blood directly into a vein through a “nidus” consisting of arterial microfistulae [4, 5]. AVM are part of the high-flow vascular malformations. Peripheral AVM are located outside the central nervous system.AVM tend to grow progressively. There are various clinical symptoms and manifestations of AVM depending on their dimension, localization, and shunt volume affecting the circulatory system. The previously published Schobinger score provides a clinical staging of AVM. These four clinical stages are quiescence (stage I; cutaneous blush or warmth), expansion (stage II; bruit or thrill, increasing size, pulsation, and no pain), local destruction (stage III; pain, bleeding, infection, skin necrosis, or ulceration), and decompensation (stage IV; high-output cardiac failure) [4, 6, 7]. In contrast, coagulopathy is rather found in slow-flow malformations [8].If associated with severe systemic symptoms, extensive AVM require urgent treatment. The therapy of AVM in young children is always a challenging problem because of the patients’ small size and the restricted adaptation of their circulatory system. The therapeutic strategy depends on the localization, size, and the clinical manifestation. Small AVM may be resected surgically, whereas resection is not suitable for extensive lesions due to higher perfusion by multiple feeding arteries and hence the higher risk for perioperative bleeding and recrudescence. Selective ligation or embolization of feeding arteries is not promising, for it leads to further angiogenesis without AVM involution. Conservative and medical treatment options have also been found inappropriate [7, 9].Percutaneous and transvascular embolization are considered to be first line therapy in advanced state AVM regarding the Schobinger score [5, 7, 10]. 
For transcatheter use, various mechanical occlusion devices (coils, vascular plugs) and liquid embolic agents are available. The latter are divided into sclerosing agents (ethanol), polymerizing agents (cyanoacrylate or ethylene-vinyl alcohol copolymer = EVOH), and particulate agents [11]. To avoid relapse, complete occlusion of the vascular nidus has to be the goal of the embolization. Of note, exclusive embolization of single-feeding arteries has the same unsatisfactory effect as their surgical ligation. Therefore, implantation of coils or plugs is appropriate in addition to liquid embolization in selected cases [7].EVOH is originally used in interventional neuroradiology. Besides, there are several published studies of successful embolization of peripheral AVM with EVOH primarily in adults [5, 9, 12] and children older than 3 years [13].The aim of our retrospective study was to present our experience in urgent AVM embolization using EVOH in neonates and infants in different localizations. ## 2. Material and Methods ### 2.1. Patients We performed 8 liquid embolization procedures in 4 consecutive children, two of them male. The parents gave written informed consent the day before the intervention.Two AVM were located on the thoracic wall, one was a pretibial AVM; and there was one newborn with prenatal diagnosis of huge intrahepatic AVM. The patients’ characteristics are summarized in Table1.Table 1 Patients’ characteristics. Patient (no.)GenderAge at intervention(s)Body weight (kg)Localization of the AVMSymptomsSchobinger stage1f3 years12.6ChestHeart failureIV2m14, 16, and 24 months11.4, 12.0, and 13.5ChestAcute AVM hemorrhageIII4f4 months and 8 months6.6 and 9.7Right tibiaIncreasing sizeII3m3 and 16 days3.4 and 3.5LiverHeart failure, pulmonary hypertensionIVPatient 1 had undergone a surgical correction of atrioventricular septal defect at the age of 5 months. A thoracic AVM showed increasing size (finally7×5×2.5cm) with increasing flowing shunt. During follow-up, the arteriovenous shunt was associated with progressive cardiac output and increasing mitral regurgitation, and cardiomegaly occurred. Neither anticongestive treatment with propranolol nor coil occlusion of feeding arteries deriving from the subclavian artery with 5 coils (MReye©Flipper© detachable coils, Cook© medical, Bloomington, Indiana, USA) were successful to induce significant reduction of the AVM perfusion, since further feeding arteries were also established from the intercostal arteries. Liquid embolization was then indicated and successfully performed.Patient 2 presented with a growing AVM (9×8×2.5cm) of the thoracic wall. At the age of 6 months, he was admitted to the emergency department with a severe superficial bleeding due to accidental skin erosions covering the AVM. Emergency hemostasis was achieved by Neodym-Yag laser treatment and compression dressings. After stabilization and healing of the superficial skin erosions, we decided to subject the child to transcatheter embolization using EVOH.Patient 3 had a growing pretibial AVM (7×4×4.2cm) on the right leg with tight consistency. Embolization was indicated because of imminent rupture due to hyperperfusion of the AVM.In patient 4, AVM and cardiomegaly were diagnosed prenatally. The major part of the AVM was located in the liver segments 2 and 3. Giant veins drained into the lower vena cava. After birth, the newborn was critically ill because of severe high-output heart failure and pulmonary hypertension. Highly urgent embolization therapy was indicated. 
### 2.2. Transcatheter Embolization All procedures were performed by percutaneous transarterial catheterization via the right femoral artery. After puncture, 4 French (Fr) introducers were inserted in each case. 4 Fr guiding catheters were used for angiography and guiding of microcatheters. In the case of the pretibial AVM, the sheath was introduced antegrade into the femoral artery. As antithrombotic therapy, heparin was administered in a dose of 100 IU per kilogram bodyweight.EVOH (Onyx©, EV3, Irvine, California, USA) was used for liquid embolization. This agent is an ethylene-vinyl alcohol copolymer, which is blended with radiopaque tantalum and dissolved in dimethyl sulfoxide (DMSO). In contact with other aqueous fluids such as water or blood, EVOH precipitates by polymerization and hardens from the surface to the inside. It hence builds an occlusive cast, when intravascularly applied. EVOH is available in 3 concentrations with increasing viscosity: Onyx©-18, -20, and -34, containing 6%, 6.5%, and 8% EVOH, respectively. The lower the concentration, the deeper it penetrates into the small peripheral capillary vessels.The interventional embolization treatment with EVOH was performed applying the previously published plug-and-push-technique [7, 10, 14]: a DMSO compatible microcatheter (usually with a detachable tip, Apollo Onyx Delivery Microcatheter, EV3, Irvine, California, USA) is selectively inserted into the nidus via a feeding artery. The catheter’s dead space has to be flushed with DMSO to avoid obstruction by intraluminal polymerization of the EVOH. Before injection, EVOH has to be shaken 10 to 20 minutes by a vibraxer in order to get a homogenous suspension. Then, it is injected very slowly and patiently with 1 ml syringes under fluoroscopy. An EVOH plug generates around the catheter tip and facilitates further antegrade injection. In addition, the plug avoids retrograde flow of EVOH, therefore, providing additional flow toward the nidus and building a coherent cast. The intention of this intervention is to achieve the embolization and closure of as many microfistulae within the AVM as possible. When finished, the microcatheter can be removed by detaching its tip, which is sealed within the plug. Nondetachable catheters have to be withdrawn out of the EVOH plug. The maximal dose of EVOH is 0.5-1.0 ml per kilogram bodyweight in a single procedure. In cases of more extensive lesions, multiple procedures may have to be performed.In the patients with thoracic AVM (patients 1 and 2), the procedures were conducted under analgosedation with propofol, midazolam, and ketamine. The embolization was performed via the feeding arteries originating from the subclavian and intercostal arteries (see Figure1 that shows patient 2).Figure 1 Imaging of the thoracic AVM in patient 2. (a) Angiographical imaging after injection of Solutrast 300® via a transarterial catheter (asterisk) into the feeding arteries deriving from the right subclavian artery. (b) Angiographical imaging after injection of radiopaque Onyx®. (c) Radiographic imaging (X-ray) of the thorax after the last embolization. (a)(b)(c)The patient with the pretibial AVM (patient 3) was intubated, ventilated, and in deep analgosedation. The feeding arteries were embolized via the anterior tibial artery. During the first embolization, there was a temporary hypoperfusion caused by extending of the EVOH plug into the anterior tibial artery. Immediate balloon dilatation was necessary with a 2 mm balloon. 
In a further catheterization, 1 microcoil (diameter 4 mm, Concerto© Helix, Medtronic, Irvine, California, USA) was implanted into a residual AVM artery. There was a transient vascular spasm of this artery during this procedure which was self-limiting within a few minutes. Arterial perfusion of the lower leg quickly normalized in both cases.The newborn with the large, prenatally diagnosed hepatic AVM (patient 4) had to be intubated and ventilated immediately postnatally, due to severe high-output cardiac failure and pulmonary hypertension. After interdisciplinary consultation, we decided to embolize the large hepatic AVM with EVOH. Two interventional procedures were necessary within the first 16 days of life. After angiography of the AVM including selective visualization of the feeding arteries via the arteria hepatica communis and the celiac trunc, catheterization and embolization of the feeding arteries were performed. Finally, a right hepatic arterial branch and a left phrenic arterial branch were occluded using 10 detachable microcoils (Concerto© Helix, Medtronic, Irvine, California, USA). The diameters of the coils were 2 mm (3 coils), 4 mm (5 coils), and 5 mm (2 coils), respectively (Figure2 shows patient 4). A sufficient antegrade perfusion of the liver parenchyma was provided, in contrast to reported cases with complete occlusion of the main hepatic artery.Figure 2 Imaging of the hepatic AVM in patient 4. (a) Angiographical imaging after injection of Solutrast 300® into the feeding arteries via transarterial catheter (asterisk). (b) Angiographical imaging after injection of radiopaque Onyx® and Concerto Helix©-coils (arrows; two transvascular catheters are marked by asterisks). (c) Radiographic imaging (X-ray) of the thorax and upper abdomen after the last embolization, still existing extensive cardiomegaly. (a)(b)(c)Embolization methods and results are summarized in Table2.Table 2 Embolization method and results. Patient (no.)No. of embolization sessions (n)Onyx© concentration/dose (ml)Additional coils (no. (n) and product)ComplicationsFollow-up (months)Result1118/3 ml5 Cook©-coilsNo56Complete AVM involution, normalized cardiac function23(1) 18/6 ml(2) 18/2.5 ml(3)18/3.2 mlNone(1) No(2) No(3) No24Complete AVM involution32(1) 18/0.05 ml(2) 18/0.1 ml1 Concerto Helix©-coilTransient arterial obstruction12Partial AVM involution42(1) 18/1.5 ml and 34/1.5 ml(2) 18/1.5 ml and 34/1.5 ml10 Concerto Helix©-coils(1) SIRS(2) Atrial flutter11Complete AVM involution, normalized cardiac and hepatic function, normalized pulmonary pressure ### 2.3. Follow-Up For follow-up, the patients underwent clinical examination, ultrasound, color Doppler imaging, and echocardiography. Blood parameters were obtained in the case of hepatic AVM. ## 2.1. Patients We performed 8 liquid embolization procedures in 4 consecutive children, two of them male. The parents gave written informed consent the day before the intervention.Two AVM were located on the thoracic wall, one was a pretibial AVM; and there was one newborn with prenatal diagnosis of huge intrahepatic AVM. The patients’ characteristics are summarized in Table1.Table 1 Patients’ characteristics. 
Patient (no.)GenderAge at intervention(s)Body weight (kg)Localization of the AVMSymptomsSchobinger stage1f3 years12.6ChestHeart failureIV2m14, 16, and 24 months11.4, 12.0, and 13.5ChestAcute AVM hemorrhageIII4f4 months and 8 months6.6 and 9.7Right tibiaIncreasing sizeII3m3 and 16 days3.4 and 3.5LiverHeart failure, pulmonary hypertensionIVPatient 1 had undergone a surgical correction of atrioventricular septal defect at the age of 5 months. A thoracic AVM showed increasing size (finally7×5×2.5cm) with increasing flowing shunt. During follow-up, the arteriovenous shunt was associated with progressive cardiac output and increasing mitral regurgitation, and cardiomegaly occurred. Neither anticongestive treatment with propranolol nor coil occlusion of feeding arteries deriving from the subclavian artery with 5 coils (MReye©Flipper© detachable coils, Cook© medical, Bloomington, Indiana, USA) were successful to induce significant reduction of the AVM perfusion, since further feeding arteries were also established from the intercostal arteries. Liquid embolization was then indicated and successfully performed.Patient 2 presented with a growing AVM (9×8×2.5cm) of the thoracic wall. At the age of 6 months, he was admitted to the emergency department with a severe superficial bleeding due to accidental skin erosions covering the AVM. Emergency hemostasis was achieved by Neodym-Yag laser treatment and compression dressings. After stabilization and healing of the superficial skin erosions, we decided to subject the child to transcatheter embolization using EVOH.Patient 3 had a growing pretibial AVM (7×4×4.2cm) on the right leg with tight consistency. Embolization was indicated because of imminent rupture due to hyperperfusion of the AVM.In patient 4, AVM and cardiomegaly were diagnosed prenatally. The major part of the AVM was located in the liver segments 2 and 3. Giant veins drained into the lower vena cava. After birth, the newborn was critically ill because of severe high-output heart failure and pulmonary hypertension. Highly urgent embolization therapy was indicated. ## 2.2. Transcatheter Embolization All procedures were performed by percutaneous transarterial catheterization via the right femoral artery. After puncture, 4 French (Fr) introducers were inserted in each case. 4 Fr guiding catheters were used for angiography and guiding of microcatheters. In the case of the pretibial AVM, the sheath was introduced antegrade into the femoral artery. As antithrombotic therapy, heparin was administered in a dose of 100 IU per kilogram bodyweight.EVOH (Onyx©, EV3, Irvine, California, USA) was used for liquid embolization. This agent is an ethylene-vinyl alcohol copolymer, which is blended with radiopaque tantalum and dissolved in dimethyl sulfoxide (DMSO). In contact with other aqueous fluids such as water or blood, EVOH precipitates by polymerization and hardens from the surface to the inside. It hence builds an occlusive cast, when intravascularly applied. EVOH is available in 3 concentrations with increasing viscosity: Onyx©-18, -20, and -34, containing 6%, 6.5%, and 8% EVOH, respectively. 
The lower the concentration, the deeper it penetrates into the small peripheral capillary vessels.The interventional embolization treatment with EVOH was performed applying the previously published plug-and-push-technique [7, 10, 14]: a DMSO compatible microcatheter (usually with a detachable tip, Apollo Onyx Delivery Microcatheter, EV3, Irvine, California, USA) is selectively inserted into the nidus via a feeding artery. The catheter’s dead space has to be flushed with DMSO to avoid obstruction by intraluminal polymerization of the EVOH. Before injection, EVOH has to be shaken 10 to 20 minutes by a vibraxer in order to get a homogenous suspension. Then, it is injected very slowly and patiently with 1 ml syringes under fluoroscopy. An EVOH plug generates around the catheter tip and facilitates further antegrade injection. In addition, the plug avoids retrograde flow of EVOH, therefore, providing additional flow toward the nidus and building a coherent cast. The intention of this intervention is to achieve the embolization and closure of as many microfistulae within the AVM as possible. When finished, the microcatheter can be removed by detaching its tip, which is sealed within the plug. Nondetachable catheters have to be withdrawn out of the EVOH plug. The maximal dose of EVOH is 0.5-1.0 ml per kilogram bodyweight in a single procedure. In cases of more extensive lesions, multiple procedures may have to be performed.In the patients with thoracic AVM (patients 1 and 2), the procedures were conducted under analgosedation with propofol, midazolam, and ketamine. The embolization was performed via the feeding arteries originating from the subclavian and intercostal arteries (see Figure1 that shows patient 2).Figure 1 Imaging of the thoracic AVM in patient 2. (a) Angiographical imaging after injection of Solutrast 300® via a transarterial catheter (asterisk) into the feeding arteries deriving from the right subclavian artery. (b) Angiographical imaging after injection of radiopaque Onyx®. (c) Radiographic imaging (X-ray) of the thorax after the last embolization. (a)(b)(c)The patient with the pretibial AVM (patient 3) was intubated, ventilated, and in deep analgosedation. The feeding arteries were embolized via the anterior tibial artery. During the first embolization, there was a temporary hypoperfusion caused by extending of the EVOH plug into the anterior tibial artery. Immediate balloon dilatation was necessary with a 2 mm balloon. In a further catheterization, 1 microcoil (diameter 4 mm, Concerto© Helix, Medtronic, Irvine, California, USA) was implanted into a residual AVM artery. There was a transient vascular spasm of this artery during this procedure which was self-limiting within a few minutes. Arterial perfusion of the lower leg quickly normalized in both cases.The newborn with the large, prenatally diagnosed hepatic AVM (patient 4) had to be intubated and ventilated immediately postnatally, due to severe high-output cardiac failure and pulmonary hypertension. After interdisciplinary consultation, we decided to embolize the large hepatic AVM with EVOH. Two interventional procedures were necessary within the first 16 days of life. After angiography of the AVM including selective visualization of the feeding arteries via the arteria hepatica communis and the celiac trunc, catheterization and embolization of the feeding arteries were performed. 
Finally, a right hepatic arterial branch and a left phrenic arterial branch were occluded using 10 detachable microcoils (Concerto© Helix, Medtronic, Irvine, California, USA). The diameters of the coils were 2 mm (3 coils), 4 mm (5 coils), and 5 mm (2 coils) (Figure 2, patient 4). Sufficient antegrade perfusion of the liver parenchyma was preserved, in contrast to reported cases with complete occlusion of the main hepatic artery.

Figure 2
Imaging of the hepatic AVM in patient 4. (a) Angiographic imaging after injection of Solutrast 300® into the feeding arteries via transarterial catheter (asterisk). (b) Angiographic imaging after injection of radiopaque Onyx® and Concerto Helix©-coils (arrows; two transvascular catheters are marked by asterisks). (c) Radiographic imaging (X-ray) of the thorax and upper abdomen after the last embolization, with still existing extensive cardiomegaly.

Embolization methods and results are summarized in Table 2.

Table 2
Embolization method and results.

| Patient (no.) | Embolization sessions (n) | Onyx© concentration/dose (ml) | Additional coils | Complications | Follow-up (months) | Result |
|---|---|---|---|---|---|---|
| 1 | 1 | 18/3 ml | 5 Cook©-coils | No | 56 | Complete AVM involution, normalized cardiac function |
| 2 | 3 | (1) 18/6 ml; (2) 18/2.5 ml; (3) 18/3.2 ml | None | (1) No; (2) No; (3) No | 24 | Complete AVM involution |
| 3 | 2 | (1) 18/0.05 ml; (2) 18/0.1 ml | 1 Concerto Helix©-coil | Transient arterial obstruction | 12 | Partial AVM involution |
| 4 | 2 | (1) 18/1.5 ml and 34/1.5 ml; (2) 18/1.5 ml and 34/1.5 ml | 10 Concerto Helix©-coils | (1) SIRS; (2) Atrial flutter | 11 | Complete AVM involution, normalized cardiac and hepatic function, normalized pulmonary pressure |

## 2.3. Follow-Up

For follow-up, the patients underwent clinical examination, ultrasound, color Doppler imaging, and echocardiography. Blood parameters were obtained in the case of the hepatic AVM.

## 3. Results and Discussion

### 3.1. Results

Patients 1 and 2 underwent postinterventional follow-up by clinical examination, ultrasound, and echocardiography after three months and annually thereafter. Due to local hypoperfusion, the AVM transformed by fatty involution and diminished. In both cases, it regressed to a minor soft swelling on the chest wall. Color Doppler ultrasound showed no revascularization. Surgical resection was not necessary. Physical resilience was age-appropriate. Echocardiography showed normal cardiac function. The mitral valve competence of patient 1 improved significantly.

In patient 3, the AVM partially degenerated, with reduction of size, paling, and detumescence within four months after the second embolization (see Figure 3). Leg perfusion and motility were normal. AVM rupture or hemorrhage did not occur. A residual feeding artery was occluded by a microcoil (Concerto© Helix, as mentioned above, diameter 4 mm) in a third intervention.

Figure 3
Picture of the tibial AVM in patient 3. (a) Before EVOH embolization. (b) At follow-up.

In these three patients with superficial AVM, a slight tattoo effect was seen in terms of minor black skin discoloration.

In patient 4, a systemic inflammatory response syndrome (SIRS) with hypotension, increased shock parameters, acidosis, and coagulopathy occurred after the first procedure. At the time of the procedure, the newborn was already in a very unstable hemodynamic condition due to the high-output cardiac failure and suprasystemic pulmonary hypertension.
Postinterventionally, intensive care therapy was needed, with administration of catecholamines, buffering, and transfusion of fresh frozen plasma and thrombocytes. Weaning from ventilation was feasible on day seven. Before the second procedure, the newborn was electively reintubated and ventilated, and glucocorticoids were administered to avoid a possible allergic or hyperinflammatory reaction. At the end of the intervention, electrical cardioversion was necessary because of atrial flutter. SIRS did not recur. The child remained stable and was extubated on day 3 after the intervention. The mean pulmonary pressure immediately decreased from 48 to 36 mmHg during the procedure and normalized within the next 3 weeks. Hepatic, renal, and cardiac parameters normalized as well (see Table 3). Within one year after the embolization, all liver parameters had normalized. Several follow-up studies by color Doppler revealed no revascularization of the hepatic AVM.

Table 3
Laboratory summary of patient 4 (hepatic AVM).

| Parameter (normal value) | Before 1st embolization (peak value) | After 1st embolization (peak value) | After 2nd embolization (peak value) | At follow-up |
|---|---|---|---|---|
| ALAT (1-25 U/l) | 41 | 524 | 28 | 18 |
| ASAT (10-50 U/l) | 96 | 1,603 | 41 | 36 |
| NH3 (<102 μg/dl) | 81 | 73 | 110 | 54 |
| nt-proBNP (<62.9 pg/ml) | 29,334 | 152,238 | 14,009 | 173 |
| Creatinine (0.17-0.42 mg/dl) | 0.92 | 0.99 | 0.25 | 0.23 |

Abbreviations: ALAT: alanine aminotransferase; ASAT: aspartate aminotransferase; NH3: ammonia; nt-proBNP: N-terminal pro-brain natriuretic peptide.

As a side effect in all patients, there was an uncomfortable smell, typically caused by the sulfur-containing DMSO, which regressed within about 48 hours.
## 4. Discussion

Embolization is the first-line therapy of AVM. In recent years, liquid embolic agents have gained in importance. Several published reports support EVOH as suitable for embolization therapy: apart from AVM, EVOH is applied in the treatment of different vascular entities such as gastrointestinal or bronchial hemorrhage [15], endoleaks [16, 17], and central nervous vascular malformations. It can also be used as an adjuvant therapy after surgical AVM resection [18].

Because AVM belong to the group of high-flow malformations, they are associated with high morbidity. Hepatic AVM in particular are associated with a poor prognosis and high mortality, because of severe organ manifestations such as high-output cardiac failure, pulmonary hypertension, and hepatic failure shortly after birth [19, 20]. Due to the critical cardiorespiratory condition immediately after birth, it is challenging to treat hepatic AVM by invasive interventional or surgical procedures. In other reported cases, coil embolization of the main common liver artery [19], extended surgical resection, or liver transplantation are considered as therapeutic options for large hepatic AVM. Although there are some studies with good clinical results and low risk for adverse effects with EVOH embolization in children [13], there are still few published reports regarding young children, especially infants and newborns. Also, there are very few reports on treatment of neonatal hepatic AVM with EVOH [21, 22].

In comparison to the published cases of Alexander et al. [19] and Hazebroek et al. [23], we were able to preserve the arterial supply of the liver. We prevented the common hepatic artery from total occlusion by diffuse EVOH embolization of the feeding arteries and only selective coil embolization of branches of the hepatic arteries. This may explain the complete normalization of the liver function during follow-up. Surgical resection of hepatic AVM bears a high risk for hemorrhage [20], which could be avoided as well.

There are remarkable advantages of Onyx© compared to other liquid agents [14]:

(i) Radiopacity of EVOH due to added tantalum powder allows good controllability during fluoroscopically guided injection.
(ii) Three different available EVOH concentrations offer a variable penetration depth into the AVM nidus. Thus, an extensive occlusion of the malformation is possible.
(iii) By polymerizing and building a coherent cast, EVOH reduces the risk of nontarget embolization by detached particles.

There may rarely be severe adverse effects of EVOH treatment, such as nontarget embolization with resulting organ ischemia or asystole [24]. Another possible disadvantage is a long procedural duration associated with high radiation exposure, because the agent has to be injected very slowly and in fractionated procedures. Especially the extremity arteries of young children are small and therefore vulnerable; vascular spasms or injuries (dissection, occlusion) can result from extended interventional procedures. In cases of subcutaneous AVM, there can be a dark discoloration due to superficially injected EVOH. This tattoo effect is caused by the contained black tantalum. The discoloration usually vanishes within some months.

DMSO (most commonly used for cryopreservation of stem cells) is the essential solvent of EVOH to avoid its premature precipitation. It causes an intensive garlic-like smell that volatilizes within 48 to 72 hours after injection. Cardiovascular adverse effects of intravenously administered DMSO can appear in 1-14% of cases [25]. Arterial application of DMSO is always painful, so analgesic treatment during the procedure is necessary.

We performed 8 transcatheter embolizations in four consecutive children aged 3 days to 3 years with considerable success. In three cases, mechanical occlusive devices (coils) were additionally implanted. Three children had subcutaneous AVM; one had an intrahepatic AVM. In both cases of thoracic wall AVM, liquid embolization with EVOH resulted in complete involution without any severe complication. The tibial AVM could be diminished; restriction of motility or hypoperfusion of the leg, as well as rupture and critical bleeding, could be avoided. The typical garlic-like smell and slight tattoo effects were the only temporary side effects.

A significant complication was noted in the neonate with the hepatic AVM. The child was in a critical general condition immediately after birth; high-output cardiac failure and multiorgan dysfunction already existed preinterventionally. Whether the postintervention reaction was, at least partially, related to the DMSO or Onyx© is unclear. Nevertheless, the child recovered after the second procedure with intensive care management, which resulted in normalized hepatic and cardiac function in long-term follow-up. Extended hepatic surgery and liver transplantation could be avoided.

In summary, our case series confirms previously published data considering EVOH as suitable for the treatment of AVM. It is possible to probe deep into the nidus with microcatheters so that it can be occluded widely and persistently. On the basis of our experience, exclusive occlusion of the main AVM vessels alone is not a promising therapy. Beyond that, we showed the feasibility of EVOH embolization in infants as well as in neonates within the first days of life. Catheter size and small peripheral vessels frequently limit interventional procedures. Using microcatheters and guiding wires adapted from neuroradiology, AVM embolizations are possible via transarterial catheters with small lumina (4 Fr), appropriate for small femoral vessels and thus for patients of this age group. In contrast to procedures still performed in cases of hepatic AVM, surgical ligation of the main hepatic arteries or liver resection can be avoided by this minimally invasive technique.
It is thus possible to spare unaffected liver tissue and to achieve normalization of biochemical hepatic markers and of liver function. We also demonstrated the practicability of this approach in emergency interventions in critically ill children suffering from high-output cardiac failure; postinterventional cardiac recovery occurred. Furthermore, rupture of superficial AVM with acute hemorrhage could be prevented.

The main limitation of our study is the limited number of patients included, which can be attributed to the low incidence of AVM as well as to the rare indication for interventional therapy in the first three years of life.

## 5. Conclusion

Percutaneous embolization therapy with EVOH can be performed safely and effectively in the treatment of peripheral AVM in neonates and infants. Additional occlusion with mechanical devices such as detachable microcoils may be helpful. Selective vessel and nidus embolization in neonatal hepatic AVM can prevent acute heart failure; chronic hepatic and other organ dysfunction or even liver transplantation can be avoided. Further follow-up studies with a larger population are necessary to confirm our results.

---
*Source: 1022729-2022-07-18.xml*
# Advertising Image Design Skills of E-Commerce Products in the Context of the Internet of Things

**Authors:** Yamin Wei
**Journal:** Mobile Information Systems (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1022825

---

## Abstract

E-commerce companies often use image advertising as a marketing approach to introduce potential customers to the goods or services that the business offers. People's tastes are becoming increasingly diverse, and it is difficult for standard e-commerce advertisements that aim their message at everyone to achieve the intended results. The most significant obstacle in e-commerce, and the problem addressed here, is how to deliver an image advertisement to the ideal customer in the optimal setting. In this work, we therefore developed a novel commercial fuzzy image advertising recommendation system for e-commerce products from the standpoint of the Internet of Things (IoT). The location and browsing history of customers who shop online may be collected by IoT devices. A multiadaptive k-nearest neighbour technique is used to predict the customers' interests. The proposed system then delivers customized image adverts to customers based on their interests and the locations of their devices. The proposed model's effectiveness was assessed using variables such as recommendation efficiency, Ad satisfaction rate, execution time, and click-through rate (CTR). According to the findings, the developed integrated Internet of Things advertising recommendation system is effective for targeted image advertising and enhances client satisfaction.

---

## Body

## 1. Introduction

E-commerce, a new business strategy, has been widely recognized by individuals. With the increasing development and expansion of e-commerce technology, unique business strategies based on the Internet have been formed, and they have steadily become a dominant form of modern economic activity. The transaction volume of the online shopping sector in China was anticipated to surpass 61 × 10^5 million RMB in 2017, according to figures released by Erie Consulting. The transaction volume of the business-to-consumer (B2C) market in e-commerce was predicted to be 36 × 10^5 million RMB in 2017, contributing about sixty percent of the overall transaction volume of online shopping in China (Wu et al. [1]). The development of e-marketing forced many organizations to redesign their old channel networks. In this information age, e-commerce represents a convergence of conventional business models with network and information technologies, presenting both benefits and problems.

Image advertisements are a boon for e-commerce firms because they are directly aimed at producing more income and drawing in many potential customers. Banner images on an e-commerce homepage (also known as Advertisements/Creatives) are the most effective in grabbing a user's attention. Customers' attention is piqued, and sales are increased, due to the visuals and layout used in ads (Loveland et al. [2]). A company's success or failure can be determined by how the general public views it, and customers' loyalty and confidence in a company may be increased by a favourable corporate image. Aesthetic graphics in e-commerce advertisements may increase the items' click-through rate (CTR) by a significant margin.
It is becoming common for innovative Ad platforms to allow marketers to provide content for creating creative ads. Figure 1 presents the advantages of image advertising in e-commerce.

Figure 1
Role of advertising in e-commerce.

The most pertinent concern for e-commerce is how to find and connect with ideal customers. Every day, consumers encounter several advertisements on their phones. Over time, consumers become more adept at avoiding and reducing their exposure to ads and messaging that do not speak to them or that they perceive to be unreliable. Marketers may stimulate the interest of informed customers by using personalized or tailored advertising. Increasing the relevance of advertisements and delivering them to specific audiences is one way that marketers are helping companies stand out in a digital world (Shah et al. [3]).

IoT, artificial intelligence, and big data have significantly impacted e-commerce. Consumers' online purchasing experience has been dramatically enhanced by various information sources, making it possible to use business intelligence (Fu et al. [4]). In the IoT, sensors gather information about the physical environment such as location, time, and behaviour. As new technologies like IoT, cloud storage, and big data combine, breakthroughs emerge in conventional e-commerce management models. In e-business, it is critical to correctly predict user behaviour and preferences based on available data (Hong et al. [5] and Tsai et al. [6]). Because of recent advances in sensor technology, we now have a greater understanding of how people purchase things online than ever before. The sensor data may then feed into a big data platform, which can be used for e-marketing, website design, and even e-advertising. Targeting campaigns must pay close attention to audience interest, demographics, purchasing habits, and other potential categories. Advertisers and publishers may use these data to target their ads' most appropriate audience. In this paper, we integrated IoT with a novel commercial fuzzy image advertisement recommendation system for e-commerce products.

### 1.1. Contributions of This Research

(i) Using IoT sensors, e-commerce consumers' browsing history and location are acquired.
(ii) The multiadaptive k-nearest neighbour technique is used to anticipate consumer interests.
(iii) We present a new commercial fuzzy image advertising recommendation system for e-commerce items in light of the Internet of Things (IoT).

This paper is divided into five sections: section 2 lists related works and the problem statement, section 3 presents the proposed work, section 4 shows the performance analysis, and section 5 concludes the research.

## 2. Literature Survey

E-marketing tactics in the distribution, acquisition, and promotion stages may all benefit from the IoT-assisted e-marketing and distribution framework (IoT-EDF) presented by Joghee [7].
IoT-EDF is used for customer retention activities and concentrates on the most reliable data. Using Bluetooth Low Energy (BLE), Nikodem and Szeliski [8] sent advertisements through the communication channels.

According to a study by Zhu et al. [9], 5G IoT technology may help improve the quality and safety of online agricultural goods. Focusing on the supply chain of agricultural goods, they utilized 5G IoT technologies to develop a circulation information management system for farm commodities that achieves real-time location, information exchange, and security. Vempati et al. [10] developed a method for autonomously creating large-scale Ad creatives in a short period: automatic annotation of required items and tags was achieved using deep learning detectors, and genetic algorithms were used to create an ideal banner layout for the provided picture content.

Art design through digital media is a new creative idea, and the influence of digital media is becoming more prevalent in advertising. Gao and Chen [11] examined the use of "digital media art" in big data based on the growth of contemporary advertising. For this study, the researchers gathered information on three aspects, namely, product price, the number of previous evaluations, and the product photo (brand logo, promotional information, street scenes, and model display). They used a decision tree to examine consumer buying behaviours, which allowed them to further evaluate the effect of product picture features on sales volume through a hierarchical regression model.

Using DL and distributed expression technology, Zhou [12] investigated the use of e-commerce product advertising suggestions in Ad campaigns. Advertisement click-through rates may be predicted using a DL model built on a similarity network based on the topic distribution of the advertising; as a last step, they offer a new recommendation method based on a distributed representation with recurrent neural networks. Customer relationship management (CRM), business intelligence (BI), and product creation are just a few ways the IoT may assist marketers. IoT communication channels may be used to support targeted marketing for product owners, as well as CRM and support, according to the study by Taylor et al. [13].

According to Cui et al. [14], a marketing model for e-commerce products was developed based on the Q-learning algorithm to enhance product marketing strategies; their precision marketing approach is practical and may be used in practice. The goal of Lavanya et al. [15] was to find people who are more reachable on social media platforms (Facebook) in order to build individual advertising and promote products through efficient networking on social media. Rosenkrans and Myers [16] assessed the efficacy of applying predictive analytics to improve mobile location-based ads by comparing the CTR of micro-geo-fenced web and app ads with macro-geo-fenced web and app ads. Utilizing predictive analytics and big data, they investigated ways to better target mobile customers with contextually relevant communications at the appropriate time and place.

An investigation by Lo and Campos [17] examined how companies are integrating IoT solutions into relationship marketing approaches, asking whether this combination can enhance business performance and what challenges arise from disruptive technological change. The context-aware advertising recommendation system developed by De Maio et al.
[18] uses the analysis theory of the triadic formal concept to deduce users' interests and deliver appealing adverts based on their tweets. It was found that a system developed by Deng et al. [19] could automatically adapt advertising material to match specific customers' preferences.

Odontogenic keratocysts are seen in several disorders, according to Mody and Bhoosreddy [20]: several odontogenic keratocysts were discovered on the teeth of a 12-year-old girl, and the study did not find any other abnormalities that might suggest a disease. According to Garg and Harita [21], fine-grained data were utilized to find individual departures from the norm. Digital twins in engineering were employed to explore these growing data-driven healthcare approaches from a theoretical and ethical perspective; digital methods were utilized to link physical things and to represent their state continuously. Moral differences may be found by analyzing data structures and their interpretations, and digital twins are examined in terms of their ethical and sociological ramifications. The importance of data in healthcare has increased, and this technology has the potential to be a social equalizer by supplying excellent equalizing improvement strategies. According to Ahmed et al. [22], allergic rhinitis is becoming a worldwide epidemic. Chinese and Western medicines are used in Taiwan to treat patients, and in traditional Chinese medicine, allergic rhinitis was the most common cause of respiratory illness; the study compares traditional Chinese medicine with western medical treatment for allergic rhinitis in Taiwan. As mentioned by Shahbaz and Afzal [23], high-dose-rate (HDR) brachytherapy eliminates radiation exposure to staff, enables outpatient treatment, and reduces treatment time. A single-stepping source may improve dose distribution by altering the delay at each dwell point; HDR brachytherapy treatments must be performed accurately because the shorter processing intervals leave little room for error checks. Li and Zihan [24] provided treatment and technologies for residential sewage to improve the rural environment. Organic and physicochemical pesticides were found in soil samples from vegetable fields in Nigeria's Zamfara State by Salihu and Zayyanu Iyya [25]; testing procedures and results were evaluated using GC-MS and QuEChERS.

### 2.1. Problem Statement

The e-commerce market is now seeing increasing competition, and marketing refinement and individuation of e-commerce enterprises are urgently required. An ever-expanding volume of data and an ever-expanding user base make the IoT environment very challenging. E-commerce marketing tactics in the IoT will become more crucial as the user base grows, and they will need to be adjusted according to user perceptions, industry, and environmental changes. Hence, there is a need for effective IoT-based personalized advertising strategies.

## 3. Proposed Work
From the viewpoint of the IoT, we present in this study a unique customized advertisement recommendation approach for e-commerce products based on customer interest and location. IoT sensors are used to track the location of e-commerce clients. The multiadaptive k-nearest neighbour (MAKNN) approach is used to anticipate client needs by analyzing the browsing history. The suggested system (IoT-CFIAR) is then used to provide customized image ads based on the interests and location of the user. The framework of the proposed research is presented in Figure 2.

Figure 2
Framework of the suggested research.

The logs of a person's previous activity, such as searches, browsing, reviews, or purchases, may be used to infer implicit user traits. One of China's most popular e-commerce websites was used to gather clickstream data. The website utilized in our research, like Amazon.com, offers a wide range of products, including home goods, electronics, clothing, and cosmetics. In China's online purchasing sector, the users of the chosen website account for about 8% of total customers, and almost three million people visit the website every day (Su and Chen [26]). Table 1 provides descriptive statistics of the e-commerce website browsing data. During a visit to a website, a clickstream is a record of the click navigation path of the user; the user's actions are recorded in the browser on the client side. The URLs in the clickstream may be used to retrieve the HTML files of each user's requests, making the clickstream a potentially rich data source: a new URL is produced each time a user clicks or browses on the site, and the clickstream data also contain the user ID and the time stamp. A GPS sensor in an IoT environment is then used to identify the position of e-commerce customers.

Table 1
Descriptive statistics of e-commerce website browsing data.

| Factors | Level |
|---|---|
| Number of logs | 3,000,000 |
| Average visiting logs | 7 |
| Number of sessions | 188,619 |
| Average browsing time | 8 minutes |

### 3.1. Data Preprocessing Using Natural Language Normalization

The browsing history and location acquired from e-commerce websites are preprocessed using natural language normalization, a technique that combines natural language processing and normalization. The method removes blank rows, punctuation marks, and symbols from the user data; information that does not contribute to the meaning of the data is then removed from the data set, and the remaining information is converted into an understandable and usable format. The method uses the mean and standard deviation to standardize the data points, which enables data managers to estimate the probability that a value will be found in the standard data distribution. The value \(f\) of a search category is normalized according to the following equation:

(1) \(F' = \dfrac{f - \bar{D}}{\sigma_D}\),

where \(F'\) is the normalized value of \(f\) in category \(D\), and \(\bar{D}\) and \(\sigma_D\) refer to the average and standard deviation of factor \(D\), respectively. By choosing the most crucial variables and removing redundant and unnecessary features, feature selection enhances the Internet of Things pipeline; it raises the prediction potential for e-commerce products on the Internet of Things. However, in the data set from Taobao.com, user groups are simply segmented based on the most fundamental characteristics of the users, such as gender and age. User groups may be more clearly classified in terms of different interest patterns, such as "fashion woman," "3C admirers," and "housewife favourites," by referring to the mining findings from our research.
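As a small worked example of equation (1), the sketch below standardizes the raw browsing counts of one search-category factor. It is a minimal sketch assuming NumPy arrays of per-user counts; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def normalize_category(counts):
    """Z-score normalization of equation (1): F' = (f - mean(D)) / std(D).

    `counts` holds the raw values f of one search-category factor D across
    users; the name `normalize_category` is ours, not the paper's.
    """
    counts = np.asarray(counts, dtype=float)
    return (counts - counts.mean()) / counts.std()

# Example: per-user browsing counts for one hypothetical category.
print(normalize_category([3, 7, 2, 12, 5]))
```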
### 3.2. Web Usage Mining Using Multiadaptive K-Nearest Neighbour Approach

Users' surfing behaviours are categorized depending on their route while on the website, and the topics of interest to e-commerce users are predicted using web usage mining. In this work, we employed the MAKNN technique for predicting user interest in e-commerce products. The working principle of MAKNN is explained as follows.

Assume the training clickstream data to be \(x = \{a_1, a_2, \ldots, a_n\}\). A decision boundary \(g(x) = 0\) has been derived by training MAKNN on the acquired browsing history data. A distance function measuring the similarity between two web usage patterns must be constructed to find the closest neighbour of a web usage pattern. The Euclidean distance function has been utilized as the similarity metric for computational ease when no previous information was available. The weighted Euclidean distance is determined using the following equation:

(2) \(f(x_m, x_n) = \sqrt{\sum_{q=1}^{p} y_q \left(b_q(x_m) - b_q(x_n)\right)^2}\),

where \(x\) is the input vector, \(p\) denotes the vector's dimensionality, \(b_q\) indicates the \(q\)th attribute of the data (\(q = 1, \ldots, p\)), and \(y_q\) is the \(q\)th attribute's weight; the smaller the distance between two data patterns, the more similar the respective data points are. Let \(x_s\) be a test web usage pattern located near the decision border. Equation (3) specifies the closest point \(x_f\) to \(x_s\) on the decision border:

(3) \(x_f = \arg\min_{x_m} \|x_s - x_m\|\).

The relevance \(R_m\) of feature \(m\) of a web usage pattern at \(x_s\) is estimated by the following equation:

(4) \(R_m(x_s) = \left| g_m \cdot n_f \right|, \quad n_f = \dfrac{\nabla g(x_f)}{\|\nabla g(x_f)\|}\),

where \(g_m\) denotes the unit vector of input feature \(m\), for \(m = 1, \ldots, p\), and \(n_f\) is the unit normal to the decision boundary at \(x_f\).

The weights of all features of a web usage pattern are computed by the following equation:

(5) \(y_m(x_s) = \dfrac{\exp\left(A\, R_m(x_s)\right)}{\sum_{N=1}^{p} \exp\left(A\, R_N(x_s)\right)}\),

where \(y_m(x_s)\) is the feature weight at \(x_s\) and \(A\) is a tuning parameter. To maintain the approach's stability, this scheme employs an exponential weighting mechanism. The feature weights may then be used to calculate the weighted distance. The \(k\) closest neighbours of a test web usage pattern are obtained by arranging the Euclidean distances in ascending order. Equation (6) describes the sorted vector containing the test web usage pattern's closest patterns:

(6) \(K = \mathrm{sort}\left(f(x_s, x_1), \ldots, f(x_s, x_n)\right)\),

where \(f\) is the Euclidean distance and \(K\) is the sorted vector of similar patterns. To assign a class label (a specific category or topic), the \(k\) closest neighbours vote according to the following equation:

(7) \(z(f_m) = \arg\max_{l} \sum_{x_m \in kNN} y(x_m, C_l)\),

where \(f_m\) is the test example, \(x_m\) is one of the \(k\) nearest neighbours from the training set, and \(y(x_m, C_l)\) defines the probability that \(x_m\) belongs to category \(C_l\). As a result of MAKNN, the clickstream data were grouped into specific categories. The topics of interest of users were determined by weighting the resultant types of web usage patterns.
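To make equations (2)-(7) concrete, the following is a minimal sketch of the weighted kNN vote under simplifying assumptions: the feature relevances \(R_m\) of equation (4) are supplied externally rather than derived from a trained decision boundary, and all function names, example data, and the value of the tuning parameter \(A\) are illustrative rather than taken from the paper.

```python
import numpy as np
from collections import Counter

def maknn_predict(X_train, y_train, x_test, k=5, A=2.0, relevance=None):
    """Simplified sketch of the MAKNN vote, eqs. (2)-(7).

    relevance: per-feature relevance scores R_m at the test point (eq. (4)),
    supplied externally here. A is the exponential-weighting parameter of
    eq. (5).
    """
    p = X_train.shape[1]
    if relevance is None:
        relevance = np.ones(p)              # uniform weights: plain kNN
    w = np.exp(A * np.asarray(relevance, dtype=float))
    w /= w.sum()                            # feature weights y_q, eq. (5)
    # weighted Euclidean distance to every training pattern, eq. (2)
    d = np.sqrt((((X_train - x_test) ** 2) * w).sum(axis=1))
    nn = np.argsort(d)[:k]                  # k nearest patterns, eq. (6)
    # unweighted majority vote over neighbour labels, eq. (7)
    return Counter(y_train[nn]).most_common(1)[0][0]

# Toy example: two interest categories in a 2-feature usage space.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array(["fashion", "fashion", "electronics", "electronics"])
print(maknn_predict(X, y, np.array([0.15, 0.85]), k=3))
```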
### 3.3. Commercial Fuzzy Image Advertisement Recommendation System

CFIAR is utilized in this paper to recommend advertisements according to an individual's interest and location. This technology may target individual customers in new and creative ways. Figure 3 depicts the framework of the personalized Ad recommendation system.

Figure 3
Personalized image advertisement model.

The e-commerce user's topics of interest and location are provided as input to the CFIAR model. The match index for the similarity between user interest (UI) and advertisements (Ad) in the Ad database is determined using the following equation:

(8) \(\mathrm{Match\_Index}(UI \rightarrow Ad) = SS(UI \rightarrow Ad) \cdot W_i\),

where \(SS\) is the similarity score and \(W_i\) is the weight assigned to the similarity between UI and Ad. The match index for the similarity between user location (UL) and advertisements in the database is determined using the following equation:

(9) \(\mathrm{Match\_Index}(UL \rightarrow Ad) = SS(UL \rightarrow Ad) \cdot W_j\).

The fuzzification module in CFIAR transforms the crisp input values into fuzzy (linguistic) values using the triangular membership function. The fuzzy set is described using the following equation:

(10) \(V = \{(m, \mu_V(m)) \mid m \in M\}\),

where \(\mu_V(m)\) is the membership function of data point \(m\) and \(M\) refers to the sample cluster. The triangular membership function is defined by the following equation:

(11) \(\mu_V(m) = \begin{cases} 0, & m \le v, \\ \dfrac{m - v}{t - v}, & v \le m \le t, \\ \dfrac{l - m}{l - t}, & t \le m \le l, \\ 0, & l \le m. \end{cases}\)

Three parameters \([v, t, l]\) define the triangular membership function, where \(v\) is the lower border and \(l\) the upper border (both with membership degree 0), and \(t\) is the centre, where the membership degree is 1.

The fuzzy rules providing the personalized advertisement recommendation are constructed according to Table 2. The Ad recommendation levels are classified using these rules, and the fuzzy results are converted into crisp outputs using defuzzification.

Table 2
Fuzzy rules for CFIAR.

| Fuzzy rule | IF | THEN |
|---|---|---|
| Rule 1 | Match_Index(UI⟶Ad) = low and Match_Index(UL⟶Ad) = low | Ad_recommendation_level = low |
| Rule 2 | Match_Index(UI⟶Ad) = low and Match_Index(UL⟶Ad) = average | Ad_recommendation_level = low |
| Rule 3 | Match_Index(UI⟶Ad) = low and Match_Index(UL⟶Ad) = high | Ad_recommendation_level = low |
| Rule 4 | Match_Index(UI⟶Ad) = average and Match_Index(UL⟶Ad) = low | Ad_recommendation_level = low |
| Rule 5 | Match_Index(UI⟶Ad) = average and Match_Index(UL⟶Ad) = average | Ad_recommendation_level = average |
| Rule 6 | Match_Index(UI⟶Ad) = average and Match_Index(UL⟶Ad) = high | Ad_recommendation_level = average |
| Rule 7 | Match_Index(UI⟶Ad) = high and Match_Index(UL⟶Ad) = low | Ad_recommendation_level = low |
| Rule 8 | Match_Index(UI⟶Ad) = high and Match_Index(UL⟶Ad) = average | Ad_recommendation_level = average |
| Rule 9 | Match_Index(UI⟶Ad) = high and Match_Index(UL⟶Ad) = high | Ad_recommendation_level = high |
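The sketch below illustrates the fuzzification of equation (11) and the rule base of Table 2. The linguistic breakpoints for "low", "average", and "high" and the min/max inference operators are our illustrative assumptions, since the paper does not specify them.

```python
def tri_membership(m, v, t, l):
    """Triangular membership of eq. (11): lower border v, centre t
    (membership 1), upper border l."""
    if m <= v or m >= l:
        return 0.0
    if m <= t:
        return (m - v) / (t - v)
    return (l - m) / (l - t)

def fuzzify(match_index):
    """Map a crisp match index in [0, 1] to linguistic levels.
    Breakpoints are illustrative assumptions, not from the paper."""
    return {
        "low": tri_membership(match_index, -0.5, 0.0, 0.5),
        "average": tri_membership(match_index, 0.0, 0.5, 1.0),
        "high": tri_membership(match_index, 0.5, 1.0, 1.5),
    }

def recommendation_level(ui_match, ul_match):
    """Apply the rule base of Table 2: min for AND, max to aggregate."""
    ui, ul = fuzzify(ui_match), fuzzify(ul_match)
    rules = {  # (UI level, UL level) -> output level, as in Table 2
        ("low", "low"): "low", ("low", "average"): "low",
        ("low", "high"): "low", ("average", "low"): "low",
        ("average", "average"): "average", ("average", "high"): "average",
        ("high", "low"): "low", ("high", "average"): "average",
        ("high", "high"): "high",
    }
    out = {"low": 0.0, "average": 0.0, "high": 0.0}
    for (a, b), res in rules.items():
        out[res] = max(out[res], min(ui[a], ul[b]))
    return out

# Example: strong interest match, moderately strong location match.
print(recommendation_level(0.9, 0.7))
```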
### 3.4. IoT Advertiser

The IoT advertiser receives the image advertisements with the highest recommendation levels. The IoT advertiser is an entity of e-commerce marketing that uses IoT to advertise the recommended products/services to targeted customers. The IoT advertiser then sends the recommended image ads to the IoT publisher. Rather than a single IoT device, an IoT publisher is a collection of IoT devices that work together to provide the user with various features and deliver adverts. The IoT advertising coordinator coordinates the delivery of image advertisements to targeted consumers.

Utilizing the IoT capabilities mentioned above, the ultimate goal of IoT is to provide new applications and services. The high connectivity and intelligence of IoT, together with the potential for continuous scaling, enable the construction of a vast pool of applications based on user-produced IoT data, in contrast to the oversimplified method of using conventional legacy sensors paired with decision entities. One of the most promising of these applications is extending traditional Internet advertising. We present our notion of an IoT advertising architecture to enable this IoT advertisement vision. Although influenced by the Internet's advertising architecture, IoT advertising has its own quirks and needs a specific infrastructure to succeed. Our IoT advertising model consists of three layers, each made up of various entities: the bottom layer, the IoT Physical Layer, houses the actual IoT devices; the middle layer, the IoT Advertising Middleware, contains the IoT Advertising Coordinator, which enables existing IoT devices to communicate with the IoT Publisher; and the top layer is the IoT Advertising Ecosystem. Figure 4 illustrates the IoT advertising chain.

Figure 4
IoT advertising framework.

This envisions an organization looking to utilize IoT to promote its goods and services, similar to how e-commerce was used in the previous use case. It is anticipated that it would engage with other players in the IoT advertising ecosystem much as online advertisers do. Due to the wide range of devices involved, IoT advertisers must develop their campaigns for various target audiences using newer Ad formats that are not always visual (such as auditory messaging), as opposed to conventional banner ads shown in web browsers or mobile applications. Additionally, targeting criteria can extend beyond an e-commerce user's behaviour; in fact, the contextual setting will be a critical factor in the Ad matching process.
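As a rough illustration of the advertiser-coordinator-publisher chain just described, the sketch below models the hand-off of a recommended image ad. All class names and the forwarding rule are hypothetical, since the paper defines the architecture only at the conceptual level.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImageAd:
    ad_id: str
    topic: str
    recommendation_level: str  # output of the CFIAR rule base

@dataclass
class IoTPublisher:
    """A group of IoT devices that jointly deliver adverts to one user."""
    device_ids: List[str]

    def display(self, ad: ImageAd) -> None:
        for dev in self.device_ids:
            print(f"[{dev}] showing ad {ad.ad_id} ({ad.topic})")

@dataclass
class IoTAdvertisingCoordinator:
    """Middleware that routes ads from the advertiser to publishers."""
    publishers: List[IoTPublisher] = field(default_factory=list)

    def deliver(self, ad: ImageAd) -> None:
        # The advertiser forwards only the top-ranked ads (see Section 3.4).
        if ad.recommendation_level == "high":
            for pub in self.publishers:
                pub.display(ad)

coordinator = IoTAdvertisingCoordinator([IoTPublisher(["phone-01", "watch-01"])])
coordinator.deliver(ImageAd("ad-42", "electronics", "high"))
```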
## 4. Results and Discussion

This section focuses on the evaluation of the proposed image advertisement recommendation system. The efficacy of the proposed IoT-integrated CFIAR model was assessed using recommendation efficiency, execution time, Ad satisfaction rate, and CTR. The performance of IoT-CFIAR was compared to existing Ad recommendation systems, namely, context-aware advertising recommendation (CAAR), the innovative generation system of personalized advertising copy (SGS-PAC), and IoT-EDF.

### 4.1. Recommendation Efficiency

Recommendation efficiency is defined as how accurately the model recommends image advertisements to targeted e-commerce users depending on their interests and location. Figure 5 shows the comparative assessment of the various advertisement recommendation approaches based on efficiency. The Ad recommendation efficiency of IoT-CFIAR was greater than that of the existing methods IoT-EDF, SGS-PAC, and CAAR, indicating that IoT-CFIAR appropriately distributes image advertisements for e-commerce products by learning user needs and location. IoT-EDF has an efficiency of 68 percent, SGS-PAC 78 percent, and CAAR 97 percent, while the proposed technique reaches 99 percent.

Figure 5
Comparative evaluation of different recommendation models based on efficiency.

### 4.2. Execution Time

The execution time of the proposed system is defined as the time taken to complete the image advertisement recommendation process, measured in seconds. Figure 6 shows the comparative assessment of the various approaches based on execution time. The execution time of IoT-CFIAR for Ad recommendation was lower than that of IoT-EDF, SGS-PAC, and CAAR, indicating that IoT-CFIAR takes less time to provide personalized image advertisements for e-commerce products to target users. Hence, the IoT-CFIAR system is a time-efficient model.
The measured execution times were 38 s for IoT-EDF, 45 s for SGS-PAC, and 50 s for CAAR, while the proposed method required 29 s.

Figure 6
Comparative evaluation of different recommendation models based on execution time.

### 4.3. Ad Satisfaction Rate

The Ad satisfaction rate measures an e-commerce customer's level of satisfaction with the company's ads; customer satisfaction is a metric for analyzing the success of the recommendation model. Figure 7 shows the comparative assessment of the various approaches based on the Ad satisfaction rate. The Ad satisfaction rate of IoT-CFIAR was higher than that of the existing techniques IoT-EDF, SGS-PAC, and CAAR. This is because IoT-CFIAR provides only image ads of e-commerce products that match the user's interest and current location. IoT-EDF has an Ad satisfaction rate of 80%, SGS-PAC 77%, and CAAR 95%, while the proposed approach reaches 98 percent.

Figure 7
Comparative evaluation of different recommendation models based on the Ad satisfaction rate.

### 4.4. Click-Through Rate

The CTR measures the number of clicks an Ad receives relative to the number of times it is displayed. Figure 8 shows the comparative assessment of the various approaches based on CTR. The CTR for the image Ads presented through IoT-CFIAR was estimated to be 4.5%, higher than that of the existing techniques: IoT-EDF showed 2.1%, SGS-PAC 3.4%, and CAAR 2.1%. This shows that the provision of personalized image advertisements increased the number of clicks obtained for the posted advertisements.

Figure 8
Comparative evaluation of different recommendation models based on CTR.

### 4.5. Discussion

Mobile commerce, context awareness, and the IoT have helped push the frontiers of e-commerce and move it into the big data age. Modern web apps often include personalization because it enhances the overall user experience by catering to the implicit preferences of the app's users. Global Positioning System (GPS) technologies are used by advertisers to determine users' real-time whereabouts and send location-specific ads to people's mobile devices. As a result, ads served through this strategy tend to be highly targeted and tailored to the unique requirements of each receiver, and this strategy helps achieve a higher CTR because of the timely nature of the Ad offerings. Consequently, the proposed strategy is more likely to be considered an attractive and convincing element of effective mobile advertising.

In this paper, we employed GPS for detecting the user's location and MAKNN for predicting the user's interests, and we then applied IoT-CFIAR as a customized image advertisement recommendation system. IoT technology in mobile commerce allows users to receive integrated information depending on time, location, and context through location-based services, and so provides a more effective purchasing experience (Joghee [7]). The performance of IoT-CFIAR was compared to existing methods such as CAAR, IoT-EDF, and SGS-PAC. Contextual user interest elicitation and the categorization and construction of context-aware recommendation algorithms are some of CAAR's drawbacks (De Maio et al. [18]); this drawback has been overcome in this paper using efficient prediction of user interest through MAKNN.
The proposed method achieves the highest efficiency and satisfaction rate compared to the other existing advertisement recommendation tools. The high satisfaction with the proposed technique is due to the provision of location-specific and user-interest-specific image advertisements, which reduces the distribution of unnecessary advertisements to consumers. As a result of IoT-CFIAR, companies may more easily design successful market distributions that support and fulfil the different needs of clients all over the globe.
## 5. Conclusion

Recent studies indicate that precise and targeted advertising is becoming a key development trend in the advertising business, and academics are increasingly focused on building advertising recommendation systems to accommodate this trend. This study presents an application strategy for recommending advertisements for products sold via e-commerce using IoT-CFIAR, based on interest data and location information. IoT-CFIAR is a powerful channel for contacting and connecting with individual consumers in unique ways. It enables marketers to reach out to customers whenever and wherever they are ready to acquire a product or service. This strategy attracts the attention of the client and encourages them to visit the e-commerce website so that they may purchase the product that is being offered.
Users' right to privacy and security over the data that Internet systems gather about them in order to deliver better-customized services is the subject of an increasingly heated debate. In the future, we will need to place a primary emphasis on implementing security techniques to ensure the safety of user data and of the images used in advertisements [27]. --- *Source: 1022825-2022-08-08.xml*
# On Some Types of Multigranulation Covering Based on Binary Relations **Authors:** Ashraf Nawar; E. A. Elsakhawy **Journal:** Journal of Function Spaces (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1022955 --- ## Abstract Recently, the notions of right and left covering rough sets were constructed from right and left neighborhoods to propose four types of multigranulation covering rough set (MGCRS) models. These models were built using equivalence relations as the granulations. In this paper, we introduce four types of multigranulation covering rough set models under arbitrary relations, using the q-minimal and q-maximal descriptors of objects in a given universe. We also study the properties of these new models and explore the relationships between them. We then put forward an algorithm to illustrate the method of reduction based on the presented models. Finally, we give an illustrative example to show their efficiency and importance. --- ## Body ## 1. Introduction The notion of rough set theory was introduced by Pawlak in 1982 [1, 2] to deal with uncertain information and knowledge. It is a tool concerned with the approximation of sets described by a single binary relation. From the viewpoint of granular computing, suggested by Zadeh [3], a general concept described by a set is characterized via upper and lower approximations under a single granulation (always an equivalence relation) on the universe. This tool has been widely used in many subjects, including machine learning, data mining, decision support, and analysis. In the past 20 years, many authors have proposed extensions of the rough set model [4–19]. In some cases, it is important to use multiple equivalence relations on the universe to describe a target concept precisely. Recently, more attention has been given to multigranulation rough set (MGRS) models and also to multigranulation covering rough set (MGCRS) models, in which a target concept is approximated by employing the maximal or minimal descriptors of objects in the given universe. In [20, 21], Qian et al. developed a multigranulation rough set (MGRS) model using equivalence relations. Several scholars have worked on MGRS, for example, the MGRS model built from multiple tolerance relations in incomplete information systems, MGRS via fuzzy approximation spaces, the hierarchical structures of MGRS, the topological and lattice-theoretic properties of MGRS, and an efficient rough feature selection algorithm with MGRS [22–30]. Moreover, Liu and Miao and Liu and Wang [31, 32] introduced multigranulation covering rough sets (MGCRS) and multigranulation covering fuzzy rough sets (MGCFRS). Lin et al. studied two types of neighborhood-based MGRS [33] and three new types of MGCRS [34]. Also, three types of MGRS via tolerance, ordered, and generalized relations were investigated, and the multigranulation decision-theoretic rough set was developed [35–38]. In addition, Liu et al. [39] proposed four new types of MGCRS using minimal and maximal descriptions and discussed their relevant characteristics. For more details about MGRS, see, for instance, [40–44]. The notions of left and right covering rough sets proposed by Abd El-Monsef et al. [45] provide an important tool for extending the models of Liu et al. [39]. The objective of this paper is to develop new MGCRS models based on the notions of left and right coverings, together with the concepts of q-minimal and q-maximal descriptions. Also, we discuss the properties of these models.
The relationships between these models are studied. Then, we present the reduction method for our proposed work and establish a numerical example to show its performance. The paper consists of six sections and is organized as follows: Section 1 gives a brief history of the subject. Section 2 includes the preliminary concepts. Section 3 is the main core of the paper and contains the new models. In Section 4, the properties of and differences between the proposed models are introduced. Section 5 explores new criteria for making a reduction, with a test example. We end with a conclusion in the last section.

## 2. Basic Terminologies and Results

This section provides a short survey of some notions used throughout the article.

Definition 1 [26]. Let $\mathbb{Q}$ be a universal set and $\emptyset \neq E = \{\tilde{E}_1, \tilde{E}_2, \dots, \tilde{E}_m\}$ be a family of subsets of $\mathbb{Q}$. We call $E$ a covering of $\mathbb{Q}$ if $\bigcup_{i=1}^{m} \tilde{E}_i = \mathbb{Q}$. The pair $(\mathbb{Q}, E)$ is then called a covering approximation space (briefly, CAS).

Definition 2 [46]. Let $R$ be a binary relation on a universe $\mathbb{Q}$. For every $w \in \mathbb{Q}$, define the after set $wR$ and the fore set $Rw$ as follows:

$$wR = \{v \in \mathbb{Q} : wRv\}, \qquad Rw = \{v \in \mathbb{Q} : vRw\}. \tag{1}$$

Definition 3 [45]. Let $R$ be a binary relation on a universe $\mathbb{Q}$. Define the right covering $C_r$ (resp., the left covering $C_l$) as follows:

$$C_r = \{wR : w \in \mathbb{Q}\}, \ \text{where } \mathbb{Q} = \bigcup_{w \in \mathbb{Q}} wR; \qquad C_l = \{Rw : w \in \mathbb{Q}\}, \ \text{where } \mathbb{Q} = \bigcup_{w \in \mathbb{Q}} Rw. \tag{2}$$

Definition 4 [45]. Let $R$ be a binary relation on a universe $\mathbb{Q}$ and let $E_q$ be a $q$-cover of $\mathbb{Q}$, where $q \in \{r, l\}$. Then $(\mathbb{Q}, R, E_q)$ is said to be an $E_q$ covering approximation space (briefly, $E_q$-CAS).

Definition 5 [45]. Let $(\mathbb{Q}, R, E_q)$ be an $E_q$-CAS. For every $w \in \mathbb{Q}$, define the right neighborhood $\mathbb{N}_r(w)$, the left neighborhood $\mathbb{N}_l(w)$, the intersection neighborhood $\mathbb{N}_i(w)$, and the union neighborhood $\mathbb{N}_u(w)$, respectively, as follows:

$$\mathbb{N}_r(w) = \bigcap \{C \in C_r : w \in C\}, \qquad \mathbb{N}_l(w) = \bigcap \{C \in C_l : w \in C\},$$
$$\mathbb{N}_i(w) = \mathbb{N}_r(w) \cap \mathbb{N}_l(w), \qquad \mathbb{N}_u(w) = \mathbb{N}_r(w) \cup \mathbb{N}_l(w). \tag{3}$$

Definition 6 [45]. Let $(\mathbb{Q}, R, E_q)$ be an $E_q$-CAS, $p \in \{r, l, i, u\}$, and $Z \subseteq \mathbb{Q}$. Define the $p$-lower approximation, $p$-upper approximation, $p$-boundary, $p$-positive region, $p$-negative region, and $p$-accuracy of $Z$, respectively, as follows:

$$L_p(Z) = \{w \in \mathbb{Q} : \mathbb{N}_p(w) \subseteq Z\}, \qquad U_p(Z) = \{w \in \mathbb{Q} : \mathbb{N}_p(w) \cap Z \neq \emptyset\},$$
$$B_p(Z) = U_p(Z) - L_p(Z), \qquad \oplus_p(Z) = L_p(Z), \qquad \ominus_p(Z) = \mathbb{Q} - U_p(Z),$$
$$A_p(Z) = \frac{|L_p(Z)|}{|U_p(Z)|}, \quad \text{where } |U_p(Z)| \neq 0. \tag{4}$$

Pawlak's rough set properties [1, 2], for a lower approximation operator $L$ and an upper approximation operator $U$, are given as follows:

(L1) $L(Z) \subseteq Z$; (H1) $Z \subseteq U(Z)$.
(L2) $L(\mathbb{Q}) = \mathbb{Q}$; (H2) $U(\emptyset) = \emptyset$.
(L3) $L(\emptyset) = \emptyset$; (H3) $U(\mathbb{Q}) = \mathbb{Q}$.
(L4) If $Z_1 \subseteq Z_2$, then $L(Z_1) \subseteq L(Z_2)$; (H4) if $Z_1 \subseteq Z_2$, then $U(Z_1) \subseteq U(Z_2)$.
(L5) $L(Z_1 \cap Z_2) = L(Z_1) \cap L(Z_2)$; (H5) $U(Z_1 \cup Z_2) = U(Z_1) \cup U(Z_2)$.
(L6) $L(Z_1 \cup Z_2) \supseteq L(Z_1) \cup L(Z_2)$; (H6) $U(Z_1 \cap Z_2) \subseteq U(Z_1) \cap U(Z_2)$.
(L7) $L(Z^c) = (U(Z))^c$; (H7) $U(Z^c) = (L(Z))^c$.
(L8) $L(L(Z)) = L(Z)$; (H8) $U(U(Z)) = U(Z)$.
(L9) $L((L(Z))^c) = (L(Z))^c$; (H9) $U((U(Z))^c) = (U(Z))^c$.

Definition 7 [47]. Let $(\mathbb{Q}, E)$ be a CAS. For any $w \in \mathbb{Q}$, define the minimal description $H_E(w)$ and the maximal description $D_E(w)$ of $w$, respectively, as follows:

$$H_E(w) = \{C \in E : w \in C \wedge (\forall S \in E)(w \in S \wedge S \subseteq C \Rightarrow S = C)\},$$
$$D_E(w) = \{C \in E : w \in C \wedge (\forall S \in E)(w \in S \wedge S \supseteq C \Rightarrow S = C)\}. \tag{5}$$

Definition 8 [39]. Let $(\mathbb{Q}, \{E_1, \dots, E_n\})$ be a multigranulation covering approximation space (MGCAS) and $Z \subseteq \mathbb{Q}$. Writing $\cap H_{E_d}(w)$ and $\cup H_{E_d}(w)$ for the intersection and the union of all members of $H_{E_d}(w)$ (and similarly for $D_{E_d}(w)$), define four types of lower and upper approximations as follows:

$$L^1_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cap H_{E_1}(w) \subseteq Z \ \text{or} \ \cap H_{E_2}(w) \subseteq Z \ \text{or} \ \cdots \ \text{or} \ \cap H_{E_n}(w) \subseteq Z\},$$
$$U^1_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cap H_{E_1}(w) \cap Z \neq \emptyset \ \text{and} \ \cdots \ \text{and} \ \cap H_{E_n}(w) \cap Z \neq \emptyset\},$$
$$L^2_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cup H_{E_1}(w) \subseteq Z \ \text{or} \ \cdots \ \text{or} \ \cup H_{E_n}(w) \subseteq Z\},$$
$$U^2_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cup H_{E_1}(w) \cap Z \neq \emptyset \ \text{and} \ \cdots \ \text{and} \ \cup H_{E_n}(w) \cap Z \neq \emptyset\},$$
$$L^3_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cap D_{E_1}(w) \subseteq Z \ \text{or} \ \cdots \ \text{or} \ \cap D_{E_n}(w) \subseteq Z\},$$
$$U^3_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cap D_{E_1}(w) \cap Z \neq \emptyset \ \text{and} \ \cdots \ \text{and} \ \cap D_{E_n}(w) \cap Z \neq \emptyset\},$$
$$L^4_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cup D_{E_1}(w) \subseteq Z \ \text{or} \ \cdots \ \text{or} \ \cup D_{E_n}(w) \subseteq Z\},$$
$$U^4_{\sum_{d=1}^{n} E_d}(Z) = \{w \in \mathbb{Q} : \cup D_{E_1}(w) \cap Z \neq \emptyset \ \text{and} \ \cdots \ \text{and} \ \cup D_{E_n}(w) \cap Z \neq \emptyset\}. \tag{6}$$

If $L^1_{\sum_{d=1}^{n} E_d}(Z)$ (resp., $L^2$, $L^3$, $L^4$) $\neq U^1_{\sum_{d=1}^{n} E_d}(Z)$ (resp., $U^2$, $U^3$, $U^4$), then $Z$ is called a multigranulation covering rough set of the first kind (briefly, type 1-MGCRS) (resp., type 2-MGCRS, type 3-MGCRS, and type 4-MGCRS); otherwise, it is definable.
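The coverings and descriptions above are easy to compute mechanically. The following Python sketch (our illustration, not part of the paper) builds the right and left coverings of Definition 3 and the minimal and maximal descriptions of Definition 7 for a small relation. The relation chosen here is the one used in Example 1 below (with $k_i$ encoded as the integer $i$), so the printed output can be checked against that example.

```python
Q = {1, 2, 3, 4}  # universe {k1, k2, k3, k4}, encoded as integers
# Binary relation R as a set of pairs (w, v), meaning w R v.
R = {(1, 4), (2, 2), (2, 3), (3, 2), (4, 1), (4, 3)}

def after_set(w):
    """wR = {v : w R v} (Definition 2)."""
    return frozenset(v for (u, v) in R if u == w)

def fore_set(w):
    """Rw = {v : v R w} (Definition 2)."""
    return frozenset(u for (u, v) in R if v == w)

# Right and left coverings (Definition 3); each is a genuine covering
# of Q provided the union of its members equals Q, which holds here.
C_r = {after_set(w) for w in Q}
C_l = {fore_set(w) for w in Q}

def H(w, cover):
    """Minimal description: members containing w with no proper subset
    in the cover that also contains w (Definition 7)."""
    around = [C for C in cover if w in C]
    return [set(C) for C in around if not any(S < C for S in around)]

def D(w, cover):
    """Maximal description: members containing w with no proper superset
    in the cover that also contains w (Definition 7)."""
    around = [C for C in cover if w in C]
    return [set(C) for C in around if not any(S > C for S in around)]

for w in sorted(Q):
    print(f"k{w}: H_Er = {H(w, C_r)}, D_El = {D(w, C_l)}")
```

For instance, the sketch reports $H_{E_r}(k_3) = \{\{k_1, k_3\}, \{k_2, k_3\}\}$, matching Example 1.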
Definition 8 [39]. Let (ℚ, E) be a MGCAS with E = {E1, E2, ⋯, En} a family of coverings of ℚ, and let Z ⊆ ℚ. For any w ∈ ℚ, define four types of lower and upper approximations, respectively, as follows: (6) 1L_{∑Ed}(Z) = {w ∈ ℚ : ⋂H_{E1}(w) ⊆ Z or ⋂H_{E2}(w) ⊆ Z or ⋯ or ⋂H_{En}(w) ⊆ Z}, 1U_{∑Ed}(Z) = {w ∈ ℚ : ⋂H_{E1}(w) ∩ Z ≠ ∅ and ⋂H_{E2}(w) ∩ Z ≠ ∅ and ⋯ and ⋂H_{En}(w) ∩ Z ≠ ∅}, 2L_{∑Ed}(Z) = {w ∈ ℚ : ⋃H_{E1}(w) ⊆ Z or ⋯ or ⋃H_{En}(w) ⊆ Z}, 2U_{∑Ed}(Z) = {w ∈ ℚ : ⋃H_{E1}(w) ∩ Z ≠ ∅ and ⋯ and ⋃H_{En}(w) ∩ Z ≠ ∅}, 3L_{∑Ed}(Z) = {w ∈ ℚ : ⋂D_{E1}(w) ⊆ Z or ⋯ or ⋂D_{En}(w) ⊆ Z}, 3U_{∑Ed}(Z) = {w ∈ ℚ : ⋂D_{E1}(w) ∩ Z ≠ ∅ and ⋯ and ⋂D_{En}(w) ∩ Z ≠ ∅}, 4L_{∑Ed}(Z) = {w ∈ ℚ : ⋃D_{E1}(w) ⊆ Z or ⋯ or ⋃D_{En}(w) ⊆ Z}, 4U_{∑Ed}(Z) = {w ∈ ℚ : ⋃D_{E1}(w) ∩ Z ≠ ∅ and ⋯ and ⋃D_{En}(w) ∩ Z ≠ ∅}. If kL_{∑Ed}(Z) ≠ kU_{∑Ed}(Z) for k = 1 (resp., k = 2, 3, 4), then Z is called the first (resp., second, third, fourth) kind of multigranulation covering rough set (briefly, type k-MGCRS); else it is definable.Definition 9 [48]. Let (ℚ, E) be a covering information system. For any Z ⊆ ℚ and w ∈ ℚ, define the first type of optimistic multigranulation covering lower approximation (briefly, 1-OMGCLA) 1L^O_{∑Ed}(Z) and the first type of optimistic multigranulation covering upper approximation (briefly, 1-OMGCUA) 1U^O_{∑Ed}(Z) as follows:(7) 1L^O_{∑Ed}(Z) = {w ∈ ℚ : wE1 ⊆ Z ∨ wE2 ⊆ Z ∨ ⋯ ∨ wEn ⊆ Z}, 1U^O_{∑Ed}(Z) = {w ∈ ℚ : wE1 ∩ Z ≠ ∅ ∧ wE2 ∩ Z ≠ ∅ ∧ ⋯ ∧ wEn ∩ Z ≠ ∅}.Definition 10 [48]. Let (ℚ, E) be a covering information system. For any Z ⊆ ℚ and w ∈ ℚ, define the first type of pessimistic multigranulation covering lower approximation (briefly, 1-PMGCLA) 1L^P_{∑Ed}(Z) and the first type of pessimistic multigranulation covering upper approximation (briefly, 1-PMGCUA) 1U^P_{∑Ed}(Z) as follows:(8) 1L^P_{∑Ed}(Z) = {w ∈ ℚ : wE1 ⊆ Z ∧ wE2 ⊆ Z ∧ ⋯ ∧ wEn ⊆ Z}, 1U^P_{∑Ed}(Z) = {w ∈ ℚ : wE1 ∩ Z ≠ ∅ ∨ wE2 ∩ Z ≠ ∅ ∨ ⋯ ∨ wEn ∩ Z ≠ ∅}.Next, we give the following definitions using the notion of an Eq-CAS.Definition 11. Let (ℚ, R, Eq) be an Eq-CAS. For any w ∈ ℚ, define the q-minimal and q-maximal descriptions of w, respectively, as follows: (9) H_{Eq}(w) = {C ∈ Eq : w ∈ C ∧ (∀S ∈ Eq)(w ∈ S ∧ S ⊆ C ⇒ S = C)}, D_{Eq}(w) = {C ∈ Eq : w ∈ C ∧ (∀S ∈ Eq)(w ∈ S ∧ S ⊇ C ⇒ S = C)}.We give the following example to illustrate the above definition.Example 1. Let (ℚ, R, Eq) be an Eq-CAS with ℚ = {k1, k2, k3, k4} and R = {(k1, k4), (k2, k2), (k2, k3), (k3, k2), (k4, k1), (k4, k3)}. Then, we have the following results:(10) H_{Er}(k1) = {{k1, k3}}, H_{Er}(k2) = {{k2}}, H_{Er}(k3) = {{k1, k3}, {k2, k3}}, H_{Er}(k4) = {{k4}}, H_{El}(k1) = {{k1}}, H_{El}(k2) = {{k2, k3}, {k2, k4}}, H_{El}(k3) = {{k2, k3}}, H_{El}(k4) = {{k4}}, D_{Er}(k1) = {{k1, k3}}, D_{Er}(k2) = {{k2, k3}}, D_{Er}(k3) = {{k1, k3}, {k2, k3}}, D_{Er}(k4) = {{k4}}, D_{El}(k1) = {{k1}}, D_{El}(k2) = {{k2, k3}, {k2, k4}}, D_{El}(k3) = {{k2, k3}}, D_{El}(k4) = {{k2, k4}}.Definition 12. Let (ℚ, R, Eq) be an Eq-CAS and Z ⊆ ℚ. For any w ∈ ℚ, define the lower and upper approximations, respectively, as follows: (11) L_{Eq}(Z) = {w ∈ ℚ : ⋂H_{Eq}(w) ⊆ Z}, U_{Eq}(Z) = {w ∈ ℚ : ⋂D_{Eq}(w) ∩ Z ≠ ∅}.To explain the above definition, we give the following example.Example 2. Consider Example 1 with Z = {k1, k2, k4}; then we have the following results.(12) L_{Er}(Z) = {k2, k4}, U_{Er}(Z) = {k1, k2, k4}, L_{El}(Z) = {k1, k2, k4}, U_{El}(Z) = ℚ. ## 3. Multi-Eq-Covering Approximation Space Assume that ℚ is a universal set, R is a family of binary relations on ℚ, and Eq is the q-cover of ℚ induced by R, where q ∈ {l, r}. Then (ℚ, R, Eq) is called a multi-Eq-covering approximation space (briefly, MEqCAS).Definition 13. Assume that (ℚ, R, Eq) is a MEqCAS with R = {R1, R2, ⋯, Rn}, and let Z ⊆ ℚ and w ∈ ℚ. Then, we have four novel kinds of lower and upper approximations, written as follows: Style 1. The 1-MCLA 1L_{∑Eq}(Z) and the 1-MCUA 1U_{∑Eq}(Z) are given as follows:(13) 1L_{∑Eq}(Z) = {w ∈ ℚ : ⋂H_{Eq(R1)}(w) ⊆ Z or ⋯ or ⋂H_{Eq(Rn)}(w) ⊆ Z}, 1U_{∑Eq}(Z) = {w ∈ ℚ : ⋂H_{Eq(R1)}(w) ∩ Z ≠ ∅ and ⋯ and ⋂H_{Eq(Rn)}(w) ∩ Z ≠ ∅}. If 1L_{∑Eq}(Z) ≠ 1U_{∑Eq}(Z), then Z is said to be the first kind of q-covering multigranulation rough set (briefly, 1-qMGCRS); else it is definable. Style 2. The 2-MCLA 2L_{∑Eq}(Z) and the 2-MCUA 2U_{∑Eq}(Z) are given as follows:(14) 2L_{∑Eq}(Z) = {w ∈ ℚ : ⋃H_{Eq(R1)}(w) ⊆ Z or ⋯ or ⋃H_{Eq(Rn)}(w) ⊆ Z}, 2U_{∑Eq}(Z) = {w ∈ ℚ : ⋃H_{Eq(R1)}(w) ∩ Z ≠ ∅ and ⋯ and ⋃H_{Eq(Rn)}(w) ∩ Z ≠ ∅}. If 2L_{∑Eq}(Z) ≠ 2U_{∑Eq}(Z), then Z is said to be the second kind of q-covering multigranulation rough set (briefly, 2-qMGCRS); else it is definable. Style 3. The 3-MCLA 3L_{∑Eq}(Z) and the 3-MCUA 3U_{∑Eq}(Z) are given as follows:(15) 3L_{∑Eq}(Z) = {w ∈ ℚ : ⋂D_{Eq(R1)}(w) ⊆ Z or ⋯ or ⋂D_{Eq(Rn)}(w) ⊆ Z}, 3U_{∑Eq}(Z) = {w ∈ ℚ : ⋂D_{Eq(R1)}(w) ∩ Z ≠ ∅ and ⋯ and ⋂D_{Eq(Rn)}(w) ∩ Z ≠ ∅}. If 3L_{∑Eq}(Z) ≠ 3U_{∑Eq}(Z), then Z is said to be the third kind of q-covering multigranulation rough set (briefly, 3-qMGCRS); else it is definable. Style 4. The 4-MCLA 4L_{∑Eq}(Z) and the 4-MCUA 4U_{∑Eq}(Z) are given as follows:(16) 4L_{∑Eq}(Z) = {w ∈ ℚ : ⋃D_{Eq(R1)}(w) ⊆ Z or ⋯ or ⋃D_{Eq(Rn)}(w) ⊆ Z}, 4U_{∑Eq}(Z) = {w ∈ ℚ : ⋃D_{Eq(R1)}(w) ∩ Z ≠ ∅ and ⋯ and ⋃D_{Eq(Rn)}(w) ∩ Z ≠ ∅}.
If4L∑d=1ndEqZ≠4U∑d=1ndEqZ, then Z is said to be the fourth kind of q-covering multigranulation rough set (briefly, 4-qMGCRS), else it is definable.Example 3. Considerℚ,R,Eq is a MEqCAS, ℚ=k1,k2,k3,k4 and R=R1,R2, where R1=k1,k4,k2,k2,k2,k3,k3,k2,k4,k1,k4,k3 and R2=k1,k1,k1,k2,k2,k3,k2,k4,k3,k1,k4,k1. Take Z=k1,k3; then, we have the presented outcomes: (1r)1L∑d=12dErZ=k1,k3,1U∑d=12dErZ=k1,k31l1L∑d=12dElZ=k1,1U∑d=12dElZ=k1,k3. (2r)2L∑d=12dErZ=k1,2U∑d=12dErZ=k1,k32l2L∑d=12dElZ=k1,2U∑d=12dElZ=k1,k3. (3r)3L∑d=12dErZ=k1,k3,3U∑d=12dErZ=k1,k2,k33l3L∑d=12dElZ=k1,3U∑d=12dElZ=k1,k3. (4r)4L∑d=12dErZ=k1,4U∑d=12dErZ=k1,k2,k34l4L∑d=12dElZ=k1,4U∑d=12dElZ=k1,k3.Theorem 14. Suppose thatℚ,R,Eq is a MEqCAS. For any Z⊆ℚ, we get the following properties:(1) 1L∑d=1ndEqZc=1U∑d=1ndEqZc,1U∑d=1ndEqZc=1L∑d=1ndEqZc(2) 2L∑d=1ndEqZc=2U∑d=1ndEqZc,2U∑d=1ndEqZc=2L∑d=1ndEqZc(3) 3L∑d=1ndEqZc=3U∑d=1ndEqZc,3U∑d=1ndEqZc=3L∑d=1ndEqZc(4) 4L∑d=1ndEqZc=4U∑d=1ndEqZc,4U∑d=1ndEqZc=4L∑d=1ndEqZcProof. Here, we want to set (1) only.(1) (17)1L∑d=1ndEqZc=w∈ℚ:∩HEqR1w⊆Zcor∩HEqR2w⊆Zcor⋯or∩HEqRnw⊆Zc=w∈ℚ:∩HEqR1w∩Z=∅or∩HEqR2w∩Z=∅or⋯or∩HEqRnw∩Z=∅=w∈ℚ:∩HEqR1w∩Z≠∅and∩HEqR2w∩Z≠∅and⋯and∩HEqRnw∩∅c=1U∑d=1ndEqZc. Also, it is easy to see1U∑d=1ndEqZc=1L∑d=1ndEqZc.Proposition 15. Suppose thatℚ,R,Eq is a MEqCAS. For any Z⊆ℚ, we get the following properties:(1) 1L∑d=1ndEq1L∑d=1ndEqZ=1L∑d=1ndEqZ,1U∑d=1ndEq1U∑d=1ndEqZ=1U∑d=1ndEqZ(2) 2L∑d=1ndEq2L∑d=1ndEqZ=2L∑d=1ndEqZ,2U∑d=1ndEq2U∑d=1ndEqZ=2U∑d=1ndEqZ(3) 3L∑d=1ndEq3L∑d=1ndEqZ=3L∑d=1ndEqZ,3U∑d=1ndEq3U∑d=1ndEqZ=3U∑d=1ndEqZProof. Here, we want to set (1) only.(1) It is obvious that1L∑d=1ndEq1L∑d=1ndEqZ⊆1L∑d=1ndEqZ. On the other hand, we have 1L∑d=1ndEqZ=1L1EqZ∪1L2EqZ∪⋯∪1LnEqZ. Thus, we get that(18)1L∑d=1ndEq1L∑d=1ndEqZ=1L1Eq1L∑d=1ndEqZ∪1L2Eq1L∑d=1ndEqZ∪⋯∪1LnEq1L∑d=1ndEqZ=1L1Eq1L1EqZ∪1L2EqZ∪⋯∪1LnEqZZ∪1L2Eq1L1EqZ∪1L2EqZ∪⋯∪1LnEqZZ∪⋯∪1LnEq1L1EqZ∪1L2EqZ∪⋯∪1LnEqZZ⊇1L1Eq1L1EqZ∪1L2Eq1L2EqZ∪11LnEq1LnEqZ=1L1EqZ∪1L2EqZ∪⋯∪1LnEqZ=1L∑d=1ndEqZ Also, it is clear that1U∑d=1ndEq1U∑d=1ndEqZ⊆1U∑d=1ndEqZ. Consequently, we have 1U∑d=1ndEqZ=1U1EqZ∩1U2EqZ∩⋯∩1UnEqZ. So, we have(19)1U∑d=1ndEq1U∑d=1ndEqZ=1U1Eq1U∑d=1ndEqZ∩1U2Eq1U∑d=1ndEqZ∩⋯∩1UnEq1U∑d=1ndEqZ=1U1Eq1U1EqZ∩1U2EqZ∩⋯∩1UnEqZ∩1U2Eq1U1EqZ∩1U2EqZ∩⋯∩1UnEqZ∩⋯∩1UnEq1U1EqZ∩1U2EqZ∩⋯∩1UnEqZ⊇1U1Eq1U1EqZ∩1U2Eq1U2EqZ∩⋯∩1UnEq1UnEqZ=1U1EqZ∩1U2EqZ∩⋯∩1UnEqZ=1U∑d=1ndEqZ. Hence,1L∑d=1ndEq1L∑d=1ndEqZ=1L∑d=1ndEqZ and 1U∑d=1ndEq1U∑d=1ndEqZ=1U∑d=1ndEqZ.The above Proposition15 is not true for 4-qMGCRS as in the following example.Example 4. Consider thatℚ,R,Eq is a MEqCAS, ℚ=k1,k2,k3,k4,k5 and R=R1,R2, where R1=k1,k1,k1,k3,k1,k5,k2,k2,k3,k3,k4,k2,k4,k4,k4,k5,k5,k5 and R2=k1,k1,k1,k3,k2,k2,k2,k5,k3,k3,k4,k3,k4,k4,k5,k2,k5,k5. TakeZ1=k1,k3 and Z2=k4,k5; then, we have the presented outcomes. (1r) 4L∑d=12dErZ1=k1and4L∑d=12dEr4U∑d=12dErZ1=∅. Then, 4L∑d=12dEr4L∑d=12dErZ1≠4L∑d=12dErZ1. (2r) 4U∑d=12dErZ2=k2,k3,k4,k5 and 4U∑d=12dEr4U∑d=12dErZ2=ℚ. Then, 4U∑d=12dEr4U∑d=12dErZ2≠4U∑d=12dErZ2. (1l) 4L∑d=12dElZ1=k3 and 4L∑d=12dEl4L∑d=12dElZ1=∅. Then, 4L∑d=12dEl4L∑d=12dE1Z1≠4L∑d=12dElZ1. (2l) 4U∑d=12dElZ2=k1,k2,k4,k5 and 4U∑d=12dEl4U∑d=12dElZ2=ℚ. Then, 4U∑d=12dEl4U∑d=12dElZ2≠4U∑d=12dElZ2.Next, we will establish new properties in Proposition16. These characteristics are done for 1-qMGCRS, 2-qMGCRS, 3-qMGCRS, and 4-qMGCRS, though we demonstrate it in the case of 1-qMGCRS and others are similar.Proposition 16. Suppose thatℚ,R,Eq is a MEqCAS. 
For any Z1,Z2⊆ℚ, we get the following properties:(1) IfZ1⊆Z2, then 1L∑d=1ndEqZ1⊆1L∑d=1ndEqZ2(2) IfZ1⊆Z2, then 1U∑d=1ndEqZ1⊆1U∑d=1ndEqZ2(3) 1L∑d=1ndEqZ1∩Z2⊆1L∑d=1ndEqZ1∩1L∑d=1ndEqZ2(4) 1L∑d=1ndEqZ1∪Z2⊇1L∑d=1ndEqZ1∪1L∑d=1ndEqZ2(5) 1U∑d=1ndEqZ1∪Z2⊇1U∑d=1ndEqZ1∪1U∑d=1ndEqZ2(6) 1U∑d=1ndEqZ1∩Z2⊆1U∑d=1ndEqZ1∩1U∑d=1ndEqZ2Proof. Now, we just need to show (1) and (2).(1) From Definition13 and since Z1⊆Z2, then, we obtain the following:(20)1L∑d=1ndEqZ1=w∈ℚ:∩HEqR1w⊆Z1or∩HEqR2w⊆Z1or⋯or∩HEqRnw⊆Z1⊆w∈ℚ:∩HEqR1w⊆Z2or∩HEqR2w⊆Z2or⋯or∩HEqRnw⊆Z2=1L∑d=1ndEqZ2(2) From Definition13 and since Z1⊆Z2, then, we have the following:(21)1U∑d=1ndEqZ1=w∈ℚ:∩HEqR1w∩Z1≠∅and∩HEqR2w∩Z1≠∅and⋯and∩HEqRnw∩Z1≠∅⊆w∈ℚ:∩HEqR1w∩Z2≠∅and∩HEqR2w∩Z2≠∅and⋯and∩HEqRnw∩Z2≠∅=1U∑d=1ndEqZ2Example 5. Consider Example4. Then, we have the following: (1r) Take Z1=k2,k3,k4 and Z2=k2,k4,k5, then we have 1L∑d=12dErZ1=k2,k3,k4,1L∑d=12dErZ2=k2,k4,k5 and 1L∑d=12dErZ1∩Z2=k2. Thus, 1L∑d=12dErZ1∩Z2≠1L∑d=12dErZ1∩1L∑d=12dErZ2 (1l) Take Z1=k2,k4 and Z2=k2,k5, then we have 1L∑d=12dElZ1=k2,k4,1L∑d=12dElZ2=k2,k5 and 1L∑d=12dElZ1∩Z2=∅. Thus, 1L∑d=12dElZ1∩Z2≠1L∑d=12dElZ1∩1L∑d=12dElZ2 (2r) Take Z1=k1 and Z2=k3, then we have 1L∑d=12dErZ1=∅,1L∑d=12dErZ2=k3 and 1L∑d=12dErZ1∪Z2=k1,k3. Thus, 1L∑d=12dErZ1∪Z2≠1L∑d=12dErZ1∪1L∑d=12dErZ2 (2l) Take Z1=k2 and Z2=k4, then we have 1L∑d=12dElZ1=∅,1L∑d=12dElZ2=k4 and 1L∑d=12dElZ1∪Z2=k2,k4. Thus, 1L∑d=12dElZ1∪Z2≠1L∑d=12dElZ1∪1L∑d=12dElZ2 3r Take Z1=k2 and Z2=k3, then we have 1U∑d=12dErZ1=k2,1U∑d=12dErZ2=k1,k3 and 1U∑d=12dErZ1∪Z2=k1,k2,k3,k4. Thus, 1U∑d=12dErZ1∪Z2≠1U∑d=12dErZ1∪1U∑d=12dErZ2 3l Take Z1=k1 and Z2=k2, then we have 1U∑d=12dElZ1=k1,k3,1U∑d=12dElZ2=k2 and 1U∑d=12dElZ1∪Z2=k1,k2,k3,k5. Thus, 1U∑d=12dElZ1∪Z2≠1U∑d=12dElZ1∪1U∑d=12dElZ2 4r Take Z1=k2,k3 and Z2=k2,k4, then we have 1U∑d=12dErZ1=k1,k2,k3,k4,1U∑d=12dErZ2=k2,k4 and 1U∑d=12dErZ1∩Z2=k2. Thus, 1U∑d=12dErZ1∩Z2≠1U∑d=12dErZ1∩1U∑d=12dErZ2 4l Take Z1=k2,k4 and Z2=k2,k5, then we have 1U∑d=12dElZ1=k2,k4,k5,1U∑d=12dElZ2=k2,k5 and 1U∑d=12dElZ1∩Z2=k2. Thus, 1U∑d=12dElZ1∩Z2≠1U∑d=12dElZ1∩1U∑d=12dEl ## 4. Relationships among Different Proposed Models Next, we present the relationships between the proposed MEqCAS models.By using Definition13, we obtain the following properties.Proposition 17. Letℚ,R,Eq be a MEqCAS and Z⊆ℚ. Then, we have the following results: (1) 4L∑d=1ndEqZ⊆2L∑d=1ndEqZ⊆1L∑d=1ndEqZ⊆Z(2) 4L∑d=1ndEqZ⊆3U∑d=1ndEqZ⊆1L∑d=1ndEqZ⊆Z(3) Z⊆1U∑d=1ndEqZ⊆2U∑d=1ndEqZ⊆4U∑d=1ndEqZ(4) Z⊆1U∑d=1ndEqZ⊆3U∑d=1ndEqZ⊆4U∑d=1ndEqZRemark 18. Letℚ,Re,Eq be a MEqCAS and Z⊆ℚ. Then, we have the following results: (1) 2L∑d=1ndEqZ⊈3L∑d=1ndEqZand3L∑d=1ndEqZ⊈2L∑d=1ndEqZ(2) 2U∑d=1ndEqZ⊈3U∑d=1ndEqZand3U∑d=1ndEqZ⊈2U∑d=1ndEqZ This means that 2-qMGCRS and 3-qMGCRS are independent.Proposition 19. Letℚ,Re,Eq be a MEqCAS and Z⊆ℚ. Then, we have the following results: (1) 1L∑d=1ndEqZ=2L∑d=1ndEqZ∪3L∑d=1ndEqZ(2) 1U∑d=1ndEqZ=2U∑d=1ndEqZ∩3U∑d=1ndEqZTo illustrate the above characteristic, we give the following example.Example 6. Consider Example4 and let Z=k1,k2. 
Then, we have the following outcomes:(1) For q = r, we have(22) 1L_{∑Er}(Z) = 2L_{∑Er}(Z) = {k2}, 3L_{∑Er}(Z) = 4L_{∑Er}(Z) = ∅, 1U_{∑Er}(Z) = 2U_{∑Er}(Z) = 3U_{∑Er}(Z) = {k1, k2}, 4U_{∑Er}(Z) = {k1, k2, k3, k5}.(2) For q = l, we have(23) 1L_{∑El}(Z) = 2L_{∑El}(Z) = 3L_{∑El}(Z) = {k1}, 4L_{∑El}(Z) = ∅, 1U_{∑El}(Z) = 2U_{∑El}(Z) = 3U_{∑El}(Z) = {k1, k2, k3, k5}, 4U_{∑El}(Z) = ℚ. So, one can verify the following:(1) 4L_{∑Er}(Z) ⊆ 2L_{∑Er}(Z) ⊆ 1L_{∑Er}(Z) ⊆ Z;(2) 4L_{∑Er}(Z) ⊆ 3L_{∑Er}(Z) ⊆ 1L_{∑Er}(Z) ⊆ Z;(3) Z ⊆ 1U_{∑Er}(Z) ⊆ 2U_{∑Er}(Z) ⊆ 4U_{∑Er}(Z);(4) Z ⊆ 1U_{∑Er}(Z) ⊆ 3U_{∑Er}(Z) ⊆ 4U_{∑Er}(Z);(5) 4L_{∑El}(Z) ⊆ 2L_{∑El}(Z) ⊆ 1L_{∑El}(Z) ⊆ Z;(6) 4L_{∑El}(Z) ⊆ 3L_{∑El}(Z) ⊆ 1L_{∑El}(Z) ⊆ Z;(7) Z ⊆ 1U_{∑El}(Z) ⊆ 2U_{∑El}(Z) ⊆ 4U_{∑El}(Z);(8) Z ⊆ 1U_{∑El}(Z) ⊆ 3U_{∑El}(Z) ⊆ 4U_{∑El}(Z).Tables 1 and 2 show which of the Pawlak properties hold for the lower and upper approximations given in Definition 13.

Table 1: Properties of the lower approximations.

| Property | 1L_{∑Er}(Z) | 2L_{∑Er}(Z) | 3L_{∑Er}(Z) | 4L_{∑Er}(Z) | 1L_{∑El}(Z) | 2L_{∑El}(Z) | 3L_{∑El}(Z) | 4L_{∑El}(Z) |
|---|---|---|---|---|---|---|---|---|
| L1 | √ | √ | √ | √ | √ | √ | √ | √ |
| L2 | √ | √ | √ | √ | √ | √ | √ | √ |
| L3 | √ | √ | √ | √ | √ | √ | √ | √ |
| L4 | × | × | × | × | × | × | × | × |
| L5 | √ | √ | √ | √ | √ | √ | √ | √ |
| L6 | √ | √ | √ | √ | √ | √ | √ | √ |
| L7 | √ | √ | √ | √ | √ | √ | √ | √ |
| L8 | √ | √ | √ | × | √ | √ | √ | × |
| L9 | × | × | × | × | × | × | × | × |

Table 2: Properties of the upper approximations.

| Property | 1U_{∑Er}(Z) | 2U_{∑Er}(Z) | 3U_{∑Er}(Z) | 4U_{∑Er}(Z) | 1U_{∑El}(Z) | 2U_{∑El}(Z) | 3U_{∑El}(Z) | 4U_{∑El}(Z) |
|---|---|---|---|---|---|---|---|---|
| H1 | √ | √ | √ | √ | √ | √ | √ | √ |
| H2 | √ | √ | √ | √ | √ | √ | √ | √ |
| H3 | √ | √ | √ | √ | √ | √ | √ | √ |
| H4 | × | × | × | × | × | × | × | × |
| H5 | √ | √ | √ | √ | √ | √ | √ | √ |
| H6 | √ | √ | √ | √ | √ | √ | √ | √ |
| H7 | √ | √ | √ | √ | √ | √ | √ | √ |
| H8 | √ | √ | √ | × | √ | √ | √ | × |
| H9 | × | × | × | × | × | × | × | × |

## 5. Relative Reduction of a MEqCAS This section is aimed at discussing the relative reduction of pessimistic multigranulation q-covering rough sets (briefly, PMEqCRS). First, we give the following two definitions.Definition 20. Let (ℚ, R, Eq) be a MEqCAS with R = {R1, R2, ⋯, Rn}. For any Z ⊆ ℚ and w ∈ ℚ, define the pessimistic multigranulation q-covering lower approximation (briefly, PMEqCLA) L^P_{∑Eq}(Z) and the pessimistic multigranulation q-covering upper approximation (briefly, PMEqCUA) U^P_{∑Eq}(Z) as follows:(24) L^P_{∑Eq}(Z) = {w ∈ ℚ : wEq(R1) ⊆ Z and wEq(R2) ⊆ Z and ⋯ and wEq(Rn) ⊆ Z}, U^P_{∑Eq}(Z) = {w ∈ ℚ : wEq(R1) ∩ Z ≠ ∅ or wEq(R2) ∩ Z ≠ ∅ or ⋯ or wEq(Rn) ∩ Z ≠ ∅}.Definition 21. Let (ℚ, R, Eq) be a MEqCAS with R = {R1, R2, ⋯, Rn}, and suppose that D = {D1, D2, ⋯, Dt} is a decision partition of ℚ. Then,(25) PL_{Eq(Rk)}(D) = (L^P_{∑Eq}(D1), L^P_{∑Eq}(D2), ⋯, L^P_{∑Eq}(Dt)), PU_{Eq(Rk)}(D) = (U^P_{∑Eq}(D1), U^P_{∑Eq}(D2), ⋯, U^P_{∑Eq}(Dt)).(i) If Bq(Rk) ⊆ Eq(Rk) and PL_{Bq(Rk)}(D) = PL_{Eq(Rk)}(D), but PL_{B̂q(Rk)}(D) ≠ PL_{Eq(Rk)}(D) for every proper subfamily B̂q(Rk) ⊊ Bq(Rk), then Bq(Rk) is a D-reduction of the PMEqCLA;(ii) if Bq(Rk) ⊆ Eq(Rk) and PU_{Bq(Rk)}(D) = PU_{Eq(Rk)}(D), but PU_{B̂q(Rk)}(D) ≠ PU_{Eq(Rk)}(D) for every proper subfamily B̂q(Rk) ⊊ Bq(Rk), then Bq(Rk) is a D-reduction of the PMEqCUA.We can describe the method of reduction as the following Algorithm 1.Algorithm 1: Reduction of the PMEqCLA. Input: a MEqCAS (ℚ, R, Eq) with an information system. Output: a reduction of the PMEqCLA. Step 1: calculate PL_{Eq(Rk)}(D). Step 2: remove a covering Eq(Rk), put Bq(Rk) = Eq(Ri) − Eq(Rk), and check that PL_{Bq(Ri−k)}(D) = PL_{Eq(Ri)}(D). Step 3: remove a covering from Bq(Rk) again to get B̂q(Ri−k); if PL_{B̂q(Ri−k)}(D) ≠ PL_{Eq(Ri)}(D), return Bq(Rk); else, go to Step 2. Step 4: repeat Steps 2 and 3 for each covering in Eq(Ri) to obtain all the relative reducts of the covering family.Example 7. Presume that ℚ = {k1, k2, ⋯, k6} is a set of six houses, Z = {equally shared area, color, price, surroundings} is a set of attributes, and D = {purchase opinions} is a set of decisions. The values of equally shared area could be {large, ordinary, small}; the values of color could be {excellent, good, bad}; the values of price could be {high, middle, low}; the values of surroundings could be {quiet, noisy, very noisy}; and the decision values of purchase opinions could be {support, oppose}, chosen randomly by experts. The evaluation results are shown in Table 3.
Table 3: House assessment problem.

| | Equally shared area | Color | Price | Surroundings | Purchase opinions |
|---|---|---|---|---|---|
| k1 | {Large} | {Good} | {High} | {Very noisy} | Oppose |
| k2 | {Small, large} | {Excellent} | {Middle, low} | {Quiet, noisy} | Support |
| k3 | {Small, large} | {Excellent, good} | {Middle, low} | {Noisy} | Support |
| k4 | {Small, ordinary} | {Bad} | {High, middle} | {Noisy, very noisy} | Oppose |
| k5 | {Small, ordinary} | {Bad} | {High, middle} | {Very noisy} | Oppose |
| k6 | {Ordinary, large} | {Excellent, good} | {High, low} | {Quiet, noisy} | Support |

As for the attribute set Z, the binary relation for each attribute k ∈ Z is obtained as follows:(26) Rk = {(v, w) : Fk(v) ⊆ Fk(w)}.It is easy to see that Rk is reflexive and transitive but not symmetric.If D is the decision set, then the corresponding (nonequivalence) relation is defined as follows:(27) RD = {(v, w) : FD(v) ⊆ FD(w)}.Then, we can construct the following two covers:(i) right covering (r-cover for short):(28) Cr = {wRK : w ∈ ℚ}, K ∈ Z ∪ {D}, provided that ℚ = ⋃_{w∈ℚ} wRK;(ii) left covering (l-cover for short):(29) Cl = {RKw : w ∈ ℚ}, K ∈ Z ∪ {D}, provided that ℚ = ⋃_{w∈ℚ} RKw.So, we have the following results:(30) Cr(R1) = {{k1, k2, k3}, {k2, k3}, {k4, k5}, {k6}}, Cr(R2) = {{k1, k3, k6}, {k2, k3, k6}, {k3, k6}, {k4, k5}}, Cr(R3) = {{k1, k4, k5, k6}, {k2, k3}, {k4, k5}, {k6}}, Cr(R4) = {{k1, k4, k5}, {k2, k6}, {k2, k3, k4, k6}, {k4}}, Cr(RD) = {{k1, k4, k5}, {k2, k3, k6}}, Cl(R1) = {{k1}, {k1, k2, k3}, {k4, k5}, {k6}}, Cl(R2) = {{k1}, {k2}, {k1, k2, k3, k6}, {k4, k5}}, Cl(R3) = {{k1}, {k2, k3}, {k1, k4, k5}, {k1, k6}}, Cl(R4) = {{k1, k5}, {k2, k3, k6}, {k3}, {k1, k3, k4, k5}}, Cl(RD) = {{k1, k4, k5}, {k2, k3, k6}}.Thus, we can establish Tables 4 and 5 for the neighborhoods of the objects as follows.

Table 4: Right neighborhoods kCr(Rk) for the house assessment problem.

| | k1 | k2 | k3 | k4 | k5 | k6 |
|---|---|---|---|---|---|---|
| kCr(R1) | {k1, k2, k3} | {k2, k3} | {k2, k3} | {k4, k5} | {k4, k5} | {k6} |
| kCr(R2) | {k1, k3, k6} | {k2, k3, k6} | {k3, k6} | {k4, k5} | {k4, k5} | {k3, k6} |
| kCr(R3) | {k1, k4, k5, k6} | {k2, k3} | {k2, k3} | {k4, k5} | {k4, k5} | {k6} |
| kCr(R4) | {k1, k4, k5} | {k2, k6} | {k2, k3, k6} | {k4} | {k1, k4, k5} | {k2, k6} |

Table 5: Left neighborhoods kCl(Rk) for the house assessment problem.

| | k1 | k2 | k3 | k4 | k5 | k6 |
|---|---|---|---|---|---|---|
| kCl(R1) | {k1} | {k1, k2, k3} | {k1, k2, k3} | {k4, k5} | {k4, k5} | {k6} |
| kCl(R2) | {k1} | {k2} | {k1, k2, k3, k6} | {k4, k5} | {k4, k5} | {k1, k2, k3, k6} |
| kCl(R3) | {k1} | {k2, k3} | {k2, k3} | {k1, k4, k5} | {k1, k4, k5} | {k1, k6} |
| kCl(R4) | {k1, k5} | {k2, k3, k6} | {k3} | {k1, k3, k4, k5} | {k1, k5} | {k2, k3, k6} |

Now, we can apply Algorithm 1 as follows.Step 1. PL_{Er(Rk)}(D) = (L^P_{∑Er}(D1), L^P_{∑Er}(D2)) = ({k4, k5}, {k2, k3, k6}).Step 2. PL_{Br(R1)}(D) = ({k4, k5}, {k2, k3, k6}) and PL_{Br(R2)}(D) = ({k4, k5}, {k2, k3, k6}). Therefore, Br(Rk) = {Cr(R3), Cr(R4)} is a reduction of the PMErCRS.Also, we can get the following outcomes for the left covering:(31) PL_{El(Rk)}(D) = (L^P_{∑El}(D1), L^P_{∑El}(D2)) = ({k1, k5}, ∅), PL_{Bl(R1)}(D) = ({k1, k5}, {k2}), PL_{Bl(R2)}(D) = ({k1, k5}, ∅), PL_{Bl(R3)}(D) = ({k1, k5}, {k6}), PL_{Bl(R4)}(D) = ({k1, k4, k5}, ∅).Therefore, Bl(Rk) = {Cl(R1), Cl(R3), Cl(R4)} is a reduction of the PMElCRS. ## 6. Conclusion In this article, we present the notion of a multi-Eq-covering approximation space (MEqCAS) by using the concepts of q-minimal and q-maximal descriptions. Based on these notions, we establish four new types of multigranulation covering rough sets and study their properties. Further, we put forward a new methodology for reduction based on the presented work, and we demonstrate the reduction method with the help of an illustrative example, which shows its effectiveness and reliability. The main difference between our proposed work and the previous work in [39] is that the authors of [39] introduced four types of MGCRSs using the minimal and maximal descriptions based on equivalence relations, whereas here we used the notions of right (resp., left) covering rough sets to investigate four kinds of multigranulation right (resp., left) covering rough sets using the right (resp., left) minimal and maximal descriptions induced by arbitrary binary relations. In further research, we hope to use this approach in fuzzy rough covering-based fuzzy neighborhoods [49], fuzzy soft covering-based rough sets [50], and soft fuzzy covering-based rough sets [51]. --- *Source: 1022955-2021-11-08.xml*
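To make the reduction procedure of Algorithm 1 and Example 7 concrete, the following is a minimal sketch (not from the paper) of the pessimistic lower approximation of Definition 20 and the removal loop of Algorithm 1. It assumes that each granulation is supplied as a precomputed map from objects to their q-neighborhoods, as in Tables 4 and 5; the function names and the greedy removal order are illustrative choices, and different removal orders may return different reducts of the same covering family.

```python
# A minimal sketch (not from the paper) of Definition 20 and Algorithm 1, assuming
# each granulation is a dict mapping every object w to its q-neighborhood wEq(Rk).
# Names and the greedy removal order are illustrative assumptions.

def pessimistic_lower(Q, grans, Z):
    """L^P(Z): keep w only if its neighborhood under EVERY granulation lies in Z."""
    return frozenset(w for w in Q if all(N[w] <= Z for N in grans))

def pl_family(Q, grans, decision):
    """PL(D): the tuple of pessimistic lower approximations of the decision classes."""
    return tuple(pessimistic_lower(Q, grans, D) for D in decision)

def reduce_granulations(Q, grans, decision):
    """Greedily drop granulations whose removal leaves PL(D) unchanged."""
    target = pl_family(Q, grans, decision)
    kept = list(grans)
    for g in list(kept):
        trial = [h for h in kept if h is not g]
        if trial and pl_family(Q, trial, decision) == target:
            kept = trial                 # g is redundant for the D-reduction
    return kept
```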
# A Construction of Multisender Authentication Codes with Sequential Model from Symplectic Geometry over Finite Fields **Authors:** Shangdi Chen; Chunli Yang **Journal:** Journal of Applied Mathematics (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102301 --- ## Abstract Multisender authentication codes allow a group of senders to construct an authenticated message for a receiver such that the receiver can verify the authenticity of the received message. In this paper, we construct multisender authentication codes with sequential model from symplectic geometry over finite fields; the parameters and the maximum probabilities of deception are also calculated. --- ## Body ## 1. Introduction Information security consists of confidentiality and authentication. Confidentiality aims to prevent an adversary from recovering confidential information. The purpose of authentication is to ensure that the sender is genuine and to verify the integrity of the information. Digital signatures and authentication codes are two important means of authenticating information, and both provide good service in networks. In practice, a digital signature is computationally secure under the assumptions that the computing power of the adversary is limited and that some underlying mathematical problem is intractable. Authentication codes, in contrast, are unconditionally secure and relatively simple. In the 1940s, C. E. Shannon first put forward the concept of a perfectly secret system using information theory. In the 1980s, G. J. Simmons applied information-theoretic methods to the problem of authentication; authentication codes then became the foundation for constructing unconditionally secure authentication systems. In 1974, Gilbert et al. constructed the first authentication code [1], which is a landmark in the development of authentication theory. During the same period, Simmons independently studied authentication theory and established authentication models with three and four participants [2]. The famous mathematician Wan Zhexian constructed an authentication code without arbitration from subspaces of classical geometries [3]. For the case where the transmitter and the receiver are not honest, Ma et al. constructed a series of authentication codes with arbitration [4–9]. Xing et al. constructed authentication codes using algebraic curves and nonlinear functions, respectively [10, 11]. Safavi-Naini and Wang gave some results on multireceiver authentication codes [12]. Chen et al. made important contributions to multisender authentication codes from polynomials and matrices [13–19].With the rapid development of information science, traditional one-to-one authentication codes have been unable to meet the requirements of network communication, which makes the study of multiuser authentication codes particularly important. Multiuser authentication codes are a generalization of traditional two-user authentication codes. They can be divided into two cases: authentication codes with one sender and many receivers, and authentication codes with many senders and one receiver. We call the former multireceiver authentication codes and the latter multisender authentication codes. Safavi-Naini and Wang gave some results on multireceiver authentication codes using subspaces of classical geometries, while multisender authentication codes have so far been constructed only from polynomials and matrices.
We present the first construction of multisender authentication codes using subspaces of classical geometries, specifically symplectic geometry.The main contribution of our paper is the construction of a multisender authentication code from symplectic geometry; furthermore, we calculate the corresponding parameters and the maximum probabilities of deception.The paper is organized as follows. Section 2 gives the models of multisender authentication codes. In Section 3, we provide the calculation formulas for the probability of success of attacks by malicious groups of senders. In Section 4, we give some definitions and properties of the geometry of symplectic groups over finite fields. In Section 5, a construction of multisender authentication codes with sequential model from symplectic geometry over finite fields is given, and the parameters and the maximum probabilities of deception are calculated. We give a comparison with another construction of multisender authentication codes [19] in Section 6. ## 2. Models of Multisender Authentication Codes We review the concepts of authentication codes, which can be extracted from [20].Definition 1 (see [20]). A systematic Cartesian authentication code C is a 4-tuple (S, E, T; f), where S is the set of source states, E is the set of keys, T is the set of authenticators, and f : S × E → T is the authentication mapping. The message space M = S × T is the set of all possible messages.
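To fix ideas before the multisender models, here is a toy instance of Definition 1. It is not the paper's symplectic construction: the affine authentication mapping, the modulus q = 7, and all names are illustrative assumptions. The final lines brute-force the best impersonation chance of an outsider for this toy code.

```python
# A toy systematic Cartesian authentication code (S, E, T; f) in the sense of
# Definition 1 (illustrative only; not the paper's symplectic construction).
from itertools import product
from fractions import Fraction

q = 7                                        # small prime, chosen for illustration
S = range(q)                                 # source states
E = list(product(range(1, q), range(q)))     # keys e = (a, b) with a != 0
T = range(q)                                 # authenticators

def f(s, e):                                 # authentication mapping f : S x E -> T
    a, b = e
    return (a * s + b) % q

# A message m = (s, t) is accepted under key e iff t == f(s, e).
e = (3, 5)
m = (4, f(4, e))
print(m, m[1] == f(m[0], e))                 # (4, 3) True

# Best chance of an outsider who guesses a message (s, t) without knowing the key:
best = max(Fraction(sum(1 for e2 in E if f(s2, e2) == t2), len(E))
           for s2 in S for t2 in T)
print(best)                                  # 1/7, i.e., 1/q for this toy code
```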
In actual computer network communications, multisender authentication codes include sequential models and simultaneous models. In a sequential model, each sender in turn uses his own encoding rule to encode a source state, and the last sender sends the encoded message to the receiver; the receiver then receives the message and verifies whether it is legal. In a simultaneous model, all senders use their own encoding rules to encode a source state simultaneously; a synthesizer then forms an authenticated message and sends it to the receiver, who verifies whether the message is legal.In the following, we give the working principles of the two models and the protocols that the participants should follow.Definition 2 (see [17]). In the sequential model, there are three kinds of participants: a group of senders U = {U1, U2, …, Un}; a Key Distribution Center (KDC), which distributes keys to the senders and the receiver; and a receiver, who receives the authenticated message and verifies whether it is authentic. The code works as follows: each sender and the receiver have their own Cartesian authentication codes, used, respectively, to generate part of the message and to verify the authenticity of the received message. The senders' authentication codes are called branch authentication codes, and the receiver's authentication code is called the channel authentication code. Let (Si, Ei, Ti; fi), i = 1, 2, …, n, be the ith sender's Cartesian authentication code with T_{i-1} ⊂ Si for 2 ≤ i ≤ n, let (S, E, T; f) be the receiver's Cartesian authentication code with S = S1 and T = Tn, and let πi : E → Ei be a subkey generation algorithm. For authenticating a message, the senders and the receiver should comply with the following protocol:(1) The KDC randomly selects an e ∈ E, secretly sends it to the receiver R, and sends ei = πi(e) to the ith sender Ui, i = 1, 2, …, n;(2) if the senders would like to send a source state s to the receiver R, U1 calculates t1 = f1(s, e1) and sends t1 to U2 through an open channel; U2 receives t1, calculates t2 = f2(t1, e2), and sends t2 to U3 through an open channel. In general, Ui receives t_{i-1}, calculates ti = fi(t_{i-1}, ei), and sends ti to U_{i+1} through an open channel, 1 < i < n. Finally, Un receives t_{n-1}, calculates tn = fn(t_{n-1}, en), and sends m = (s, tn) through an open channel to the receiver R;(3) when the receiver receives the message m = (s, tn), he checks the authenticity by verifying whether tn = f(s, e). If the equality holds, the message is regarded as authentic and is accepted; otherwise, the message is rejected.Definition 3 (see [17]). In the simultaneous model of a multisender authentication code, there are four kinds of participants: a group of senders U = {U1, U2, …, Un}; a Key Distribution Center (KDC), which distributes keys to the senders and the receiver; a synthesizer C, which only runs the trusted synthesis algorithm; and a receiver, who receives the authenticated message and verifies whether it is authentic. The code works as follows: each sender and the receiver have their own Cartesian authentication codes, used, respectively, to generate part of the message and to verify the received message. The senders' authentication codes are called branch authentication codes, and the receiver's authentication code is called the channel authentication code. Let (Si, Ei, Ti; fi), i = 1, 2, …, n, be the senders' Cartesian authentication codes, let (S, E, T; f) be the receiver's Cartesian authentication code, let g : T1 × T2 × ⋯ × Tn → T be the synthesis algorithm, and let πi : E → Ei be a subkey generation algorithm. For authenticating a message, the senders and the receiver should comply with the following protocol:(1) The KDC randomly selects an encoding rule e ∈ E, secretly sends it to the receiver R, and sends ei = πi(e) to the ith sender Ui, i = 1, 2, …, n;(2) if the senders would like to send a source state s to the receiver R, Ui computes ti = fi(s, ei), i = 1, 2, …, n, and sends mi = (s, ti) (i = 1, 2, …, n) to the synthesizer C through an open channel;(3) the synthesizer C receives the messages mi = (s, ti), i = 1, 2, …, n, calculates t = g(t1, t2, …, tn) using the synthesis algorithm g, and then sends the message m = (s, t) to the receiver R;(4) when the receiver receives the message m = (s, t), he checks the authenticity by verifying whether t = f(s, e). If the equality holds, the message is regarded as authentic and is accepted; otherwise, the message is rejected. ## 3. Probabilities of Deceptions We assume that the arbitrator (KDC) and the synthesizer (C) are credible; although they know the senders' and the receiver's encoding rules, they do not participate in any communication activities. When the transmitter and the receiver have a dispute, the arbitrator settles it. At the same time, we assume that the system follows Kerckhoffs' principle: all information about the whole system is public except the actual keys in use.
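Before analyzing the deception probabilities, a toy end-to-end run of the sequential model of Definition 2 may help. This sketch is not the paper's construction: the additive branch maps, the modulus, and all names are illustrative assumptions, and additive subkeys like these would offer no real security.

```python
# A toy run (not the paper's construction) of the sequential model in Definition 2:
# each sender chains t_i = f_i(t_{i-1}, e_i), and the receiver checks t_n = f(s, e).

q, n = 101, 3                         # toy modulus and number of senders

e = [17, 42, 99]                      # KDC's key; sender i receives subkey e_i = e[i]
f_branch = [lambda t, ei=ei: (t + ei) % q for ei in e]   # toy branch codes f_i

def receiver_f(s, e):                 # channel code: the receiver holds the whole key e
    t = s
    for ei in e:
        t = (t + ei) % q
    return t

s = 5                                 # source state
t = s
for i in range(n):                    # U_1, ..., U_n encode in order
    t = f_branch[i](t)
m = (s, t)                            # U_n sends m = (s, t_n) to the receiver R

print(m, m[1] == receiver_f(m[0], e)) # (5, 62) True -> accepted as authentic
```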
Assume that the source state space S and the receiver's decoding rule space E_R are uniformly distributed; then the probability distributions of the message space M and the tag space T are determined by those of S and E_R. In a multisender authentication system, we assume that all senders together cooperate to form a valid message; that is, the senders as a whole and the receiver are reliable. However, some malicious senders may collude to cheat the receiver; that is, a part of the senders may be untrustworthy, and they can carry out impersonation attacks and substitution attacks.Assume that U1, U2, …, Un are the senders, R is the receiver, Ei is the set of encoding rules of Ui, 1 ≤ i ≤ n, and E_R is the set of decoding rules of the receiver R. Let L = {i1, i2, …, il} ⊂ {1, 2, …, n}, l < n, U_L = {U_{i1}, U_{i2}, …, U_{il}}, and E_L = {E_{i1}, E_{i2}, …, E_{il}}.Impersonation Attack. U_L, after receiving their secret keys, sends a message m to the receiver. U_L is successful if the receiver accepts it as a legitimate message. Denote by P_I[L] the maximum probability of success of the impersonation attack. It can be expressed as (1) P_I[L] = max_{e_L ∈ E_L} max_{m ∈ M} P(m is accepted by R | e_L).Substitution Attack. U_L, after observing a legitimate message m, substitutes it with another message m′. U_L is successful if m′ is accepted by the receiver as authentic. Denote by P_S[L] the maximum probability of success of the substitution attack. It can be expressed as (2) P_S[L] = max_{e_L ∈ E_L} max_{m ∈ M} max_{m′ ∈ M, m′ ≠ m} P(m′ is accepted by R | m, e_L). ## 4. Symplectic Geometry In this section, we give some definitions and properties of the geometry of symplectic groups over finite fields, which can be extracted from [20].Let 𝔽q be a finite field with q elements, let ν be a positive integer, and define the 2ν × 2ν alternate matrix (3) K = ( 0, I^(ν); −I^(ν), 0 ). The symplectic group of degree 2ν over 𝔽q, denoted by Sp_{2ν}(𝔽q), is defined to be the set of matrices (4) Sp_{2ν}(𝔽q) = {T : T K T^t = K}, with matrix multiplication as its group operation. Let 𝔽q^(2ν) be the 2ν-dimensional row vector space over 𝔽q. Then Sp_{2ν}(𝔽q) has an action on 𝔽q^(2ν) defined as follows: (5) 𝔽q^(2ν) × Sp_{2ν}(𝔽q) → 𝔽q^(2ν), ((x1, x2, …, x_{2ν}), T) → (x1, x2, …, x_{2ν})T. The vector space 𝔽q^(2ν) together with this action of Sp_{2ν}(𝔽q) is called the symplectic space over 𝔽q.Let P be an m-dimensional subspace of 𝔽q^(2ν). We use the same letter P to denote a matrix representation of P; that is, P is an m × 2ν matrix of rank m whose rows form a basis of P. The matrix P K P^t is alternate. Assume that it is of rank 2s; then P is called a subspace of type (m, s). It is known that subspaces of type (m, s) exist in 𝔽q^(2ν) if and only if (6) 2s ≤ m ≤ ν + s. It is also known that subspaces of the same type form an orbit under Sp_{2ν}(𝔽q). Denote by N(m, s; 2ν) the number of subspaces of type (m, s) in 𝔽q^(2ν).Denote by P^⊥ the set of vectors which are orthogonal to every vector of P; that is, (7) P^⊥ = {y ∈ 𝔽q^(2ν) : y K x^t = 0 for all x ∈ P}. Obviously, P^⊥ is a (2ν − m)-dimensional subspace of 𝔽q^(2ν).Readers can refer to [15] for notation and terminology, not explained here, on the symplectic geometry of classical groups over finite fields. ## 5. Construction Let 𝔽q be a finite field with q elements and assume that 1 < n < r < ν.
U = 〈 e 1 , e 2 , … , e n 〉; then U ⊥ = 〈 e 1 , … , e ν , e ν + n + 1 , … , e 2 ν 〉. Let W i = 〈 e 1 , … , e i - 1 , e i + 1 , … , e n 〉; then W i ⊥ = 〈 e 1 , … , e ν , e ν + i , e ν + n + 1 , … , e 2 ν 〉. The set of source states S = { s ∣ s is a subspace of type ( 2 r - n , r - n ) and U ⊂ s ⊂ U ⊥ }; the set of ith sender’s encoding rules E i = { e i ∣ e i is a subspace of type ( n + 1,1 ), U ⊂ e i and e i ⊥ W i }, 1 ≤ i ≤ n; the set of receiver’s decoding rules E R = { e R ∣ e R is a subspace of type ( 2 n , n ) and U ⊂ e R }; the set of tags T i = { t i ∣ t i is a subspace of type ( 2 r - n + i , r - n + i ) and U ⊂ t i }, 1 ≤ i ≤ n.Define the encoding maps:(8) f 1 : S × E 1 ⟶ T 1 , f 1 ( s , e 1 ) = s + e 1 , f i : T i - 1 × E i ⟶ T i , f i ( t i - 1 , e i ) = t i - 1 + e i , 2 ≤ i ≤ n .Define the decoding map:(9) f : S × E R ⟶ T n , f ( s , e R ) = s + e R .This code works as follows.(1) Key Distribution. First, the KDC does a list L of senders; assume that L = { 1,2 , … , n }. Then, the KDC randomly chooses a subspace e R ∈ E R and privately sends e R to the receiver R. Last, the KDC randomly chooses a subspace e i ∈ E i and e i ⊂ e R, then privately sends e i to the ith sender, 1 ≤ i ≤ n. (2) Broadcast. For a source state s ∈ S, the sender U 1 calculates t 1 = s + e 1 and sends ( s , t 1 ) to U 2. The sender U 2 calculates t 2 = t 1 + e 2 and sends ( s , t 2 ) to U 3. Finally, the sender U n calculates t n = t n - 1 + e n and sends m = ( s , t n ) to the receiver R. (3) Verification. Since the receiver R holds the decoding rule e R, R accepts m as authentic if t n = s + e R. Otherwise, it is rejected by R.Lemma 4. LetC = ( S , E R , T n ; f ), C 1 = ( S , E 1 , T 1 ; f 1 ), C i = ( T i - 1 , E i , T i ; f i ) ( 2 ≤ i ≤ n ); then C, C 1, C i are all Cartesian authentication codes.Proof. First, we show thatC is a Cartesian authentication code. (1)   Fors ∈ S , e R ∈ E R. Let (10) s = ( U Q ) n 2 ( r - n ) , e R = ( U V ) n n . From the definition of s and e R, we can assume that (11) ( U Q ) K ( U Q ) t = ( 0 ( n ) 0 0 0 0 I ( r - n ) 0 - I ( r - n ) 0 ) , ( U V ) K ( U V ) t = ( 0 I ( n ) - I ( n ) 0 ) . Obviously, we have v ∉ s for any v ∈ V and v ≠ 0. Therefore, (12) t n = s + e R = ( U V Q ) , ( U V Q ) K ( U V Q ) t = ( 0 I ( n ) 0 0 - I ( n ) 0 * * 0 * 0 I ( r - n ) 0 * - I ( r - n ) 0 ) . From above, t n is a subspace of type ( 2 r , r ) and U ⊂ t n; that is, t n ∈ T n. (2) Fort n ∈ T n, t n is a subspace of type ( 2 r , r ) containing U. So there is subspace V ⊂ t n, satisfying (13) ( U V ) K ( U V ) t = ( 0 I ( n ) - I ( n ) 0 ) . Then, we can assume that t n = ( U V Q ), satisfying (14) ( U V Q ) K ( U V Q ) t = ( 0 I ( n ) 0 0 - I ( n ) 0 0 0 0 0 0 I ( r - n ) 0 0 - I ( r - n ) 0 ) . Let s = ( U Q ); then s is a subspace of type ( 2 r - n , r - n ) and U ⊂ s ⊂ U ⊥; that is, s ∈ S is a source state. For any v ∈ V and v ≠ 0, we have v ∉ s and V ∩ U ⊥ = { 0 }. Therefore, t n ∩ U ⊥ = ( U Q ) = s. Let e R = ( U V ); then e R is a transmitter’s encoding rule satisfying t n = s + e R. Ifs ′ is another source state contained in t n, then U ⊂ s ′ ⊂ U ⊥. Therefore, s ′ ⊂ t n ∩ U ⊥ = s, while dim ⁡ s ′ = dim ⁡ s, so s ′ = s. That is, s is the uniquely source state contained in t n. Similarly, we can show thatC 1 and C i ( 2 ≤ i ≤ n ) are also Cartesian authentication code.From Lemma4, we know that such construction of multisender authentication codes is reasonable. Next we compute the parameters of this code.Lemma 5. 
Lemma 5. The number of source states is $|S| = N(2(r-n), r-n; 2(\nu-n))$.

Proof. For any $s \in S$, since $U \subset s \subset U^{\perp}$, $s$ has the form
$$s=\begin{pmatrix}I^{(n)} & 0 & 0 & 0\\ 0 & P_2 & 0 & P_4\end{pmatrix} \tag{15}$$
with row sizes $n$ and $2(r-n)$ and column sizes $n$, $\nu-n$, $n$, $\nu-n$, where $(P_2, P_4)$ is a subspace of type $(2(r-n), r-n)$ in the symplectic space $\mathbb{F}_q^{(2(\nu-n))}$. Therefore $|S| = N(2(r-n), r-n; 2(\nu-n))$. ∎

Lemma 6. The number of the $i$th sender's encoding rules is $|E_i| = q^{2(\nu-n)}$.

Proof. Any $e_i \in E_i$ is a subspace of type $(n+1, 1)$ containing $U$ and orthogonal to $W_i$, so we may take the rows of a matrix representing $e_i$ to be $e_1, \dots, e_n, u$, where $u = (x_1\ x_2\ \cdots\ x_{2\nu})$. Clearly $x_1 = \cdots = x_n = x_{\nu+1} = \cdots = x_{\nu+i-1} = x_{\nu+i+1} = \cdots = x_{\nu+n} = 0$ and $x_{\nu+i} = 1$, while $x_{n+1}, \dots, x_{\nu}, x_{\nu+n+1}, \dots, x_{2\nu}$ are arbitrary. Therefore $|E_i| = q^{2(\nu-n)}$. ∎

Lemma 7. The number of the receiver's decoding rules is $|E_R| = q^{2n(\nu-n)}$.

Proof. Any $e_R \in E_R$ is a subspace of type $(2n, n)$ containing $U$, so $e_R$ has the form
$$e_R=\begin{pmatrix}I^{(n)} & 0 & 0 & 0\\ 0 & Q_2 & I^{(n)} & Q_4\end{pmatrix} \tag{16}$$
with column sizes $n$, $\nu-n$, $n$, $\nu-n$, where $Q_2$ and $Q_4$ are arbitrary matrices. Therefore $|E_R| = q^{2n(\nu-n)}$. ∎

Lemma 8. (1) The number of decoding rules $e_R$ contained in a given $t_n$ is $q^{2n(r-n)}$. (2) The number of tags is $|T_n| = q^{2n(\nu-r)}\, N(2(r-n), r-n; 2(\nu-n))$.

Proof. (1) Any $t_n \in T_n$ is a subspace of type $(2r, r)$ with $U \subset t_n$, so we may assume that $t_n$ has the form
$$t_n=\begin{pmatrix}I^{(n)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(r-n)} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & I^{(n)} & 0 & 0\\ 0 & 0 & 0 & 0 & I^{(r-n)} & 0\end{pmatrix} \tag{17}$$
with column sizes $n$, $r-n$, $\nu-r$, $n$, $r-n$, $\nu-r$. If $e_R \subset t_n$, then we may assume that
$$e_R=\begin{pmatrix}I^{(n)} & 0 & 0 & 0 & 0 & 0\\ 0 & R_2 & 0 & I^{(n)} & R_5 & 0\end{pmatrix} \tag{18}$$
with the same column sizes, where $R_2$ and $R_5$ are arbitrary matrices. Therefore the number of $e_R$ contained in $t_n$ is $q^{2n(r-n)}$.

(2) Each tag contains exactly one source state, and the number of decoding rules $e_R$ contained in a given $t_n$ is $q^{2n(r-n)}$. Therefore $|T_n| = |S|\,|E_R| / q^{2n(r-n)} = q^{2n(\nu-r)}\, N(2(r-n), r-n; 2(\nu-n))$. ∎

Theorem 9. The parameters of the constructed multisender authentication code are
$$|S|=N(2(r-n),r-n;2(\nu-n)),\quad |E_i|=q^{2(\nu-n)},\quad |E_R|=q^{2n(\nu-n)},\quad |T_n|=q^{2n(\nu-r)}\,N(2(r-n),r-n;2(\nu-n)). \tag{19}$$

Without loss of generality, we may assume that $U_L = \{U_1, U_2, \dots, U_l\}$ and $E_L = E_1 \times \cdots \times E_l$, where $l < n$.

Lemma 10. For any $e_L = (e_1, e_2, \dots, e_l) \in E_L$, the number of $e_R$ containing $e_L$ is $q^{2(n-l)(\nu-n)}$.

Proof. For any $e_L = (e_1, e_2, \dots, e_l) \in E_L$ we may assume that
$$e_L=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0\\ 0 & 0 & P_3 & I^{(l)} & 0 & P_6\end{pmatrix} \tag{20}$$
with column sizes $l$, $n-l$, $\nu-n$, $l$, $n-l$, $\nu-n$. If $e_L \subset e_R$, then $e_R$ has the form
$$e_R=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0\\ 0 & 0 & P_3 & I^{(l)} & 0 & P_6\\ 0 & 0 & P_3' & 0 & I^{(n-l)} & P_6'\end{pmatrix}, \tag{21}$$
where $P_3'$ and $P_6'$ are arbitrary matrices. Therefore the number of $e_R$ containing $e_L$ is $q^{2(n-l)(\nu-n)}$. ∎
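As a quick numerical sanity check on Theorem 9 and Lemma 8 (ours, illustrative), the helper below evaluates the $q$-power factors for concrete parameters; the subspace-count factor $N(2(r-n), r-n; 2(\nu-n))$, whose closed form is not reproduced in this paper, is kept symbolic.

```python
def code_parameters(q: int, n: int, r: int, nu: int) -> dict:
    """q-power factors from Theorem 9 / Lemma 8; |S| and |T_n| also carry the
    symbolic subspace count N = N(2(r-n), r-n; 2(nu-n)), omitted here."""
    assert 1 < n < r < nu
    return {
        "|E_i|": q ** (2 * (nu - n)),            # Lemma 6
        "|E_R|": q ** (2 * n * (nu - n)),        # Lemma 7
        "e_R per tag": q ** (2 * n * (r - n)),   # Lemma 8(1)
        "|T_n| / N": q ** (2 * n * (nu - r)),    # Lemma 8(2)
    }

p = code_parameters(q=3, n=4, r=7, nu=8)
# Lemma 8(2), |T_n| = |S||E_R| / q^{2n(r-n)}, reduces to this q-power identity:
assert p["|T_n| / N"] * p["e_R per tag"] == p["|E_R|"]
```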
Lemma 11. For any $t_n \in T_n$ and $e_L = (e_1, e_2, \dots, e_l) \in E_L$, the number of $e_R$ contained in $t_n$ and containing $e_L$ is $q^{2(n-l)(r-n)}$.

Proof. Any $t_n \in T_n$ is a subspace of type $(2r, r)$ with $U \subset t_n$, so we may assume that $t_n$ has the form
$$t_n=\begin{pmatrix}I^{(n)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(r-n)} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & I^{(n)} & 0 & 0\\ 0 & 0 & 0 & 0 & I^{(r-n)} & 0\end{pmatrix} \tag{22}$$
with column sizes $n$, $r-n$, $\nu-r$, $n$, $r-n$, $\nu-r$. If $e_L \subset t_n$, we may assume that $e_L$ has the form
$$e_L=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & R_3 & 0 & I^{(l)} & 0 & R_7 & 0\end{pmatrix} \tag{23}$$
with column sizes $l$, $n-l$, $r-n$, $\nu-r$, $l$, $n-l$, $r-n$, $\nu-r$. If $e_R \subset t_n$ and $e_L \subset e_R$, then
$$e_R=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & R_3 & 0 & I^{(l)} & 0 & R_7 & 0\\ 0 & 0 & R_3' & 0 & 0 & I^{(n-l)} & R_7' & 0\end{pmatrix}, \tag{24}$$
where $R_3'$ and $R_7'$ are arbitrary matrices. Therefore the number of $e_R$ contained in $t_n$ and containing $e_L$ is $q^{2(n-l)(r-n)}$. ∎

Lemma 12. Assume that $t_n \in T_n$ and $t_n' \in T_n$ are two distinct tags that both decode under the receiver's decoding rule $e_R$, with source states $s_1$ and $s_2$ contained in $t_n$ and $t_n'$, respectively. Let $s_0 = s_1 \cap s_2$ and $\dim s_0 = k$; then $n \le k \le 2r-n-1$, and the number of $e_R$ contained in $t_n \cap t_n'$ and containing $e_L$ is $q^{(n-l)(k-n)}$.

Proof. Since $t_n = s_1 + e_R$, $t_n' = s_2 + e_R$, and $t_n \neq t_n'$, we have $s_1 \neq s_2$; and since $U \subset s$ for every $s \in S$, it follows that $n \le k \le 2r-n-1$. Let $s_i'$ be a complementary subspace of $s_0$ in $s_i$, so that $s_i = s_0 + s_i'$ ($i = 1, 2$). Because $t_n = s_1 + e_R = s_0 + s_1' + e_R$, $t_n' = s_2 + e_R = s_0 + s_2' + e_R$, and $s_1 = t_n \cap U^{\perp}$, $s_2 = t_n' \cap U^{\perp}$, we get $s_0 = (t_n \cap U^{\perp}) \cap (t_n' \cap U^{\perp}) = t_n \cap t_n' \cap U^{\perp} = s_1 \cap t_n' = s_2 \cap t_n$, and $t_n \cap t_n' = (s_1 + e_R) \cap t_n' = (s_0 + s_1' + e_R) \cap t_n' = ((s_0 + e_R) + s_1') \cap t_n'$. Since $s_0 + e_R \subseteq t_n'$, we have $t_n \cap t_n' = (s_0 + e_R) + (s_1' \cap t_n')$; and since $s_1' \cap t_n' \subseteq s_1 \cap t_n' = s_0$, we conclude that $t_n \cap t_n' = s_0 + e_R$.

From the definitions of $t_n$ and $t_n'$ we may assume that
$$t_n=\begin{pmatrix}I^{(n)} & 0 & 0 & 0\\ 0 & P_{22} & 0 & 0\\ 0 & 0 & I^{(n)} & 0\\ 0 & 0 & 0 & P_{44}\end{pmatrix},\qquad t_n'=\begin{pmatrix}I^{(n)} & 0 & 0 & 0\\ 0 & P_{22}' & 0 & 0\\ 0 & 0 & I^{(n)} & 0\\ 0 & 0 & 0 & P_{44}'\end{pmatrix}, \tag{25}$$
with row sizes $n$, $r-n$, $n$, $r-n$ and column sizes $n$, $\nu-n$, $n$, $\nu-n$. Write
$$t_n\cap t_n'=\begin{pmatrix}I^{(n)} & 0 & 0 & 0\\ 0 & P_2 & 0 & 0\\ 0 & 0 & I^{(n)} & 0\\ 0 & 0 & 0 & P_4\end{pmatrix}. \tag{26}$$
From the above, $t_n \cap t_n' = s_0 + e_R$; since $s_0 \subset U^{\perp}$ and $e_R \cap U^{\perp} = U$, we have $s_0 \cap e_R = U$, so $\dim(t_n \cap t_n') = \dim s_0 + \dim e_R - \dim U = k + 2n - n = k + n$; therefore
$$\dim\begin{pmatrix}0 & P_2 & 0 & 0\\ 0 & 0 & 0 & P_4\end{pmatrix}=k-n. \tag{27}$$
For any $e_L \subset t_n \cap t_n'$ we may assume that
$$e_L=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0\\ 0 & 0 & R_3 & I^{(l)} & 0 & R_6\end{pmatrix} \tag{28}$$
with column sizes $l$, $n-l$, $\nu-n$, $l$, $n-l$, $\nu-n$. If $e_R \subset t_n \cap t_n'$ and $e_L \subset e_R$, then $e_R$ has the form
$$e_R=\begin{pmatrix}I^{(l)} & 0 & 0 & 0 & 0 & 0\\ 0 & I^{(n-l)} & 0 & 0 & 0 & 0\\ 0 & 0 & R_3 & I^{(l)} & 0 & R_6\\ 0 & 0 & R_3' & 0 & I^{(n-l)} & R_6'\end{pmatrix}. \tag{29}$$
Every row of $(0\ R_3'\ 0\ R_6')$ must be a linear combination of the rows of the matrix in (27). Therefore the number of $e_R$ contained in $t_n \cap t_n'$ and containing $e_L$ is $q^{(n-l)(k-n)}$. ∎
Theorem 13. In the constructed multisender authentication code, the maximum probabilities of success of the impersonation attack and the substitution attack by $U_L$ on the receiver $R$ are
$$P_I(L)=\frac{1}{q^{2(n-l)(\nu-r)}},\qquad P_S(L)=\frac{1}{q^{\,n-l}}. \tag{30}$$

Proof. (1) *Impersonation attack.* After receiving their secret keys, $U_L$ sends a message $m$ to $R$, and succeeds if the receiver accepts it as authentic. By Lemmas 10 and 11,
$$P_I(L)=\max_{e_L\in E_L}\,\max_{m\in M}\frac{|\{e_R\in E_R : e_L\subset e_R,\ e_R\subset t\}|}{|\{e_R\in E_R : e_L\subset e_R\}|}=\frac{q^{2(n-l)(r-n)}}{q^{2(n-l)(\nu-n)}}=\frac{1}{q^{2(n-l)(\nu-r)}}. \tag{31}$$

(2) *Substitution attack.* After observing a message $m$ transmitted by the senders, $U_L$ replaces it with another message $m'$, and succeeds if $m'$ is accepted by $R$ as authentic. By Lemmas 11 and 12,
$$P_S(L)=\max_{e_L\in E_L}\,\max_{m\in M}\,\max_{m'\neq m\in M}\frac{|\{e_R\in E_R : e_L\subset e_R,\ e_R\subset t,\ e_R\subset t'\}|}{|\{e_R\in E_R : e_L\subset e_R,\ e_R\subset t\}|}=\max_{n\le k\le 2r-n-1}\frac{q^{(n-l)(k-n)}}{q^{2(n-l)(r-n)}}=\frac{1}{q^{\,n-l}}, \tag{32}$$
the maximum being attained at $k = 2r-n-1$. ∎

## 6. The Advantage of the Constructed Authentication Code

The security of an authentication code is measured by its maximum probabilities of deception: the smaller the probability of a successful attack, the more secure the code. We now compare the security of our construction with the known one in [19]. The code constructed in [19] is also a multisender authentication code from symplectic geometry over finite fields, but it uses the simultaneous model. If we choose the parameters $n$, $n'$, $r$, and $\nu$ with $1 < n < n' < r < \nu$, $n > r/2$, and $n' - n > \nu - r$, then Table 1 shows that the maximum probabilities of deception of our construction are smaller than those of the construction in [19]; hence, compared with the construction in [19], our construction is more efficient.

Table 1: Comparison with the construction in [19] (assuming $n > r/2$ and $n' - n > \nu - r$).

|  | Construction in [19] | Relation | Ours |
| --- | --- | --- | --- |
| Number of senders | $n$ | $=$ | $n$ |
| Number of attackers | $l,\ 1\le l<n$ | $=$ | $l,\ 1\le l<n$ |
| $\lvert S\rvert$ | $N(2(r-n),r-n;2(\nu-n))$ | $=$ | $N(2(r-n),r-n;2(\nu-n))$ |
| $\lvert E_i\rvert$ | $q^{2(\nu-n)}$ | $=$ | $q^{2(\nu-n)}$ |
| $\lvert E_R\rvert$ | $q^{2n'(\nu-n')}$ | $>$ | $q^{2n(\nu-n)}$ |
| $\lvert T\rvert$ | $N(2(r-n),r-n;2(\nu-n))\,q^{2n'(\nu-r-n'+n)}$ | $<$ | $N(2(r-n),r-n;2(\nu-n))\,q^{2n(\nu-r)}$ |
| $P_I(L)$ | $1/q^{2(n'-l)(\nu+n-n'-r)-(n'-n)(n-l)}$ | $>$ | $1/q^{2(n-l)(\nu-r)}$ |
| $P_S(L)$ | $1/q^{(n'-l)(2n-2n'+1)+(n'-n)(n-l)}$ | $>$ | $1/q^{\,n-l}$ |
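To get a feel for Theorem 13, the snippet below (ours, illustrative) evaluates the two deception probabilities for sample parameters satisfying $1 < n < r < \nu$; note that since $\nu > r$, the substitution attack always succeeds with probability at least that of the impersonation attack.

```python
from fractions import Fraction

def deception_probs(q: int, n: int, r: int, nu: int, l: int):
    """P_I(L) and P_S(L) from Theorem 13 for a coalition of l of the n senders."""
    assert 1 < n < r < nu and 1 <= l < n
    p_impersonation = Fraction(1, q ** (2 * (n - l) * (nu - r)))
    p_substitution = Fraction(1, q ** (n - l))
    return p_impersonation, p_substitution

# Example with q = 3, n = 4, r = 7, nu = 8 and a single colluding sender (l = 1):
p_i, p_s = deception_probs(3, 4, 7, 8, 1)
print(p_i, p_s)  # 1/729 and 1/27; substitution is the easier attack here.
```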
# Thioredoxin-1 Protects Bone Marrow-Derived Mesenchymal Stromal Cells from Hyperoxia-Induced Injury In Vitro

**Authors:** Lei Zhang; Jin Wang; Yan Chen; Lingkong Zeng; Qiong Li; Yalan Liu; Lin Wang
**Journal:** Oxidative Medicine and Cellular Longevity (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1023025

---

## Abstract

Background. The poor survival rate of mesenchymal stromal cells (MSCs) transplanted into recipient lungs greatly limits their therapeutic efficacy for diseases like bronchopulmonary dysplasia (BPD). The aim of this study is to evaluate the effect of thioredoxin-1 (Trx-1) overexpression on improving the potential of bone marrow-derived mesenchymal stromal cells (BMSCs) to resist hyperoxia-induced cell injury. Methods. 80% O2 was used to imitate the microenvironment surrounding transplanted cells in hyperoxia-induced lung injury in vitro. BMSC proliferation and apoptotic rates and the levels of reactive oxygen species (ROS) were measured. The effects of Trx-1 overexpression on the levels of antioxidants and growth factors were investigated. We also investigated the activation of apoptosis signal-regulating kinase 1 (ASK1) and p38 mitogen-activated protein kinase (MAPK). Results. Trx-1 overexpression significantly reduced hyperoxia-induced BMSC apoptosis and increased cell proliferation. We demonstrated that Trx-1 overexpression upregulated the levels of superoxide dismutase and glutathione peroxidase and downregulated the production of ROS. Furthermore, we showed that Trx-1 protected BMSCs against hyperoxic injury by decreasing ASK1/p38 MAPK activation. Conclusion. These results demonstrate that Trx-1 overexpression improved the ability of BMSCs to counteract hyperoxia-induced injury, thus increasing their potential to treat hyperoxia-induced lung diseases such as BPD.

---

## Body

## 1. Introduction

Bronchopulmonary dysplasia (BPD) is a chronic lung disease that typically occurs in very low-birth-weight premature infants following supplemental oxygen therapy and mechanical ventilation. An increase in the survival rate of these extremely premature infants has been associated with an increased incidence of BPD. BPD is a multifactorial disease, and hyperoxia, or oxygen toxicity, is known to play a key role in its pathogenesis [1]. Oxygen toxicity is believed to be mediated by the production and accumulation of reactive oxygen species (ROS), such as superoxide (O2−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), to levels exceeding the capacity of the antioxidant defense mechanisms [2]. It is well known that ROS are required in a myriad of physiological reactions, cell fate decisions, and signal transduction pathways. However, an overwhelming accumulation of ROS will trigger severe oxidative stress through enzyme oxidation, protease inhibition, DNA synthesis inhibition, and lipid peroxidation, which commits cells to necrosis or apoptosis [3].

Currently, no effective treatments beyond supportive therapies are available for BPD. Stem cell-based treatment via tissue engineering is an increasing focus of research [4]. As bone marrow-derived mesenchymal stromal cells (BMSCs) come from an autologous source and are easy to isolate and amplify [4], they are the ideal seed cells for tissue engineering across broad tissue types such as the liver [5], bone [6], lung [7], heart [8], and kidney [9].
Recently, studies have shown that lung repair by BMSC therapy could be a promising and novel therapeutic modality for attenuating BPD severity [10, 11]. These studies have demonstrated that BMSCs enhance lung repair by direct regeneration or through secreting paracrine factors. However, several studies confirmed the low survival and poor engraftment rates of MSCs in recipient lungs, which greatly limits their therapeutic efficacy, as survival of the transplanted cells in the pathological environment is critical for their beneficial effects [12, 13]. Hence, one major focus in the field is to explore the mechanisms of BMSC injury in pathological environments and to develop strategies to enhance BMSC survival and engraftment rates.

Thioredoxin (Trx), a ubiquitous small protein (12 kDa) containing a redox-active dithiol/disulfide at a highly conserved active site, was originally identified as a hydrogen donor for ribonucleotide reductase in *Escherichia coli* [14]. There are two main thioredoxins: thioredoxin-1 (Trx-1), a cytosolic form, and thioredoxin-2 (Trx-2), a mitochondrial form. Trx, along with Trx reductase (TrxR) and nicotinamide adenine dinucleotide phosphate (NADPH), has been shown to catalyze protein disulfide reduction and is thought to be a strong ROS scavenger [15]. Trx-1 participates in redox reactions through reversible oxidation of its dithiol active center to a disulfide, which catalyzes the dithiol-disulfide exchange reactions involved in many thiol-dependent processes [16]. In this way, Trx-1 acts on oxidized, and therefore inactive, proteins by reducing them and restoring their functionality. Recent studies have shown that Trx-1 not only regulates the cellular redox balance by scavenging intracellular ROS, such as hydrogen peroxide (H2O2), but also has other biological activities, including regulation of cell growth, transcription factors, gene expression, and apoptosis, as well as immune regulatory effects [17–19]. Our previous studies suggest that Trx protects alveolar epithelial cells from hyperoxia-induced injury by reducing ROS generation, elevating antioxidant activities, and regulating the MAPK and PI3K-Akt pathways [20].

Based on previous studies from others and our own work, we hypothesize that BMSCs suffer severe injury under hyperoxic conditions and that increased Trx-1 expression in BMSCs may serve to counteract the negative effects of hyperoxia-induced cell injury. To better understand the mechanism of Trx-1, we also looked into the signaling pathways it mediates in hyperoxia-induced cell injury. Our data may provide a new perspective in the development of BMSC therapeutic strategies.

## 2. Materials and Methods

### 2.1. BMSC Culture

All studies were performed under the approval of the Ethics Committee of the Animal Facility of Huazhong University of Science and Technology. BMSCs were isolated from the bone marrow of 6- to 7-week-old male Sprague-Dawley rats (provided by Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China) according to a previously described method with some modifications [11, 21, 22]. Briefly, bone marrow cells were flushed from rat tibias and femurs, suspended by pipetting, and filtered through a nylon mesh (70 μm). The collected mononuclear cells were washed three times with Dulbecco's phosphate-buffered saline (DPBS).
The cells were suspended in culture medium (DMEM containing 10% FBS, 0.02% sodium bicarbonate, 2 mM L-glutamine, 15 mM HEPES buffer, 100 units/mL penicillin, and 100 μg/mL streptomycin) and incubated at 37°C in an atmosphere of 95% humidified air and 5% CO2 for 24 h. The medium was then exchanged for fresh culture medium to deplete the nonadherent cells. When the adherent cells had grown to approximately 75% confluence, they were trypsinized and reseeded at a density of 10⁵ cells/cm².

### 2.2. Phenotypic Analysis of BMSCs

Flow cytometric analysis was performed to characterize the phenotype of BMSCs. Cells were suspended in 100 μL of DPBS supplemented with 2% FBS. Phycoerythrin- (PE-) coupled antibodies against CD29 (eBiosciences, cat. no. 12-0291, San Diego, CA, USA), CD34 (Santa Cruz Biotechnology, cat. no. sc-74499), CD44 (Santa Cruz Biotechnology, cat. no. sc-7297), CD45 (Santa Cruz Biotechnology, cat. no. sc-1178), and CD90 (Santa Cruz Biotechnology, cat. no. sc-53456) were added separately, followed by incubation at 4°C for 30 minutes. For the detection of the cell surface antigens CD105 and CD73, cells were incubated with primary antibodies against CD105 (Abcam, cat. no. ab156756) and CD73 (Abcam, cat. no. ab175396) for 1 hour at 4°C, washed, and then incubated for 1 hour at 4°C with Alexa Fluor 647-conjugated secondary antibodies (Invitrogen, cat. nos. A-21235 and A-21244). Irrelevant isotype-identical antibodies served as negative controls. After washing, more than 10,000 cells were acquired using a FACSCalibur (Becton Dickinson) flow cytometer and analyzed with FlowJo software (FlowJo LLC, Ashland, Oregon, USA).

### 2.3. BMSC Differentiation

To confirm that our cultured cells have multipotent potential, we tested BMSC P3 cultures for their ability to undergo differentiation into osteocytes and adipocytes as previously described [23, 24]. Briefly, osteogenic differentiation was induced by incubating BMSCs with an osteogenic medium (RASMX-90021; Cyagen Biosciences, Guangzhou, China), and adipocyte differentiation was induced by maintaining BMSCs in an adipocyte differentiation medium (RASMX-90031; Cyagen Biosciences). After 21 days of differentiation, cells were fixed and stained with alizarin red and oil red O, respectively.

### 2.4. Transfection

To achieve efficient introduction and subsequent stable expression of rat Trx-1 in BMSCs, a lentiviral vector was employed. Briefly, third-passage BMSCs were transfected with lentiviral vectors carrying Trx-1 and green fluorescent protein (GFP) (pCDH-CMV-Trx-1-EF1α-copGFP) or a lentiviral vector carrying only GFP (pCDH-CMV-MCS-EF1α-copGFP) using the Lipofectamine 2000 transfection reagent according to the manufacturer's instructions (Invitrogen, Carlsbad, CA, USA). The transfected BMSCs were termed BMSCs-p (lentiviral vector carrying only GFP) and BMSCs-Trx-1 (lentiviral vector carrying Trx-1 and GFP). The recombinant plasmids were constructed and verified by Wuhan Transduction Bio Co. Ltd. (Wuhan, China). Stably transfected cells were then selected by incubation in fresh FBS-supplemented DMEM culture medium containing 500 μg/mL G418. BMSCs not subjected to transfection served as control cells. The expression of Trx-1 was detected by reverse transcriptase polymerase chain reaction (RT-PCR) analysis and Western blot analysis.

### 2.5. Hyperoxia and Normoxia Treatment

Cells were seeded into 6-, 24-, or 96-well cell culture plates overnight.
The next day, cells were placed in a hyperoxic (80% O2, 5% CO2) or normoxic (21% O2, 5% CO2) environment as previously described [20, 25, 26]. The concentration of O2 was monitored in real time with a digital oxygen monitor (Hengaode, Beijing, China). Cells were harvested at 0, 12, 24, and 48 hours.

### 2.6. Cell Proliferation Assay

To determine the influence of Trx-1 overexpression on BMSC proliferation, cell proliferation assays were performed using a Cell Counting Kit-8 (CCK-8; Dojindo, Japan) according to the manufacturer's protocol. Cells were seeded into a 96-well plate in triplicate at 5000 cells/well and cultured overnight. Cells were then exposed to the hyperoxic or normoxic conditions described above. After the exposures, the number of cells per well was estimated from the 450 nm absorbance of reduced WST-8 (2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium, monosodium salt) at the indicated time points [27, 28]. In addition, a blank control well containing only the culture medium was included.

### 2.7. Cell Apoptosis Assay

Apoptosis was measured by flow cytometry after annexin V-PE/7-AAD staining (BD Pharmingen, USA) according to the manufacturer's instructions. Briefly, the treated cells were harvested with Accutase solution (Gibco/Life Technologies, cat. no. A11105-01), washed twice with cold PBS, and suspended in 1× binding buffer. The cells were then labeled with annexin V-PE and 7-AAD for 15 minutes at room temperature in the dark. Apoptosis-positive control cells were placed in a 50°C water bath for 5 minutes. Finally, the cells were subjected to flow cytometry analysis using a FACSCalibur flow cytometer (BD Biosciences, CA) within 30 minutes.

### 2.8. Measurement of Intracellular ROS Accumulation

ROS production was measured with CellROX® deep red reagent from Molecular Probes (Eugene, OR, USA). The CellROX deep red reagent is a fluorogenic probe designed to reliably measure ROS in living cells. The cell-permeable CellROX deep red dye is nonfluorescent while in a reduced state and becomes fluorescent upon oxidation by reactive oxygen species, with absorption/emission maxima at ~644/665 nm [29]. After treatment, cells were incubated at 37°C for 30 minutes in complete DMEM with 5 mM CellROX deep red reagent. The medium was then removed, and the cells were washed three times with PBS. Cells were collected and suspended in PBS. Fluorescence was immediately measured using FACS analysis, and values were reported as mean fluorescence intensity.

### 2.9. Hydrogen Peroxide Assay

The level of intracellular H2O2 was measured using a hydrogen peroxide assay kit (Beyotime Institute of Biotechnology, China) as described previously [30, 31]. In this assay system, ferrous ions (Fe2+) are oxidized to ferric ions (Fe3+) by H2O2; the Fe3+ ions and the indicator dye xylenol orange then form a purple complex, which is measurable with a microplate reader at a wavelength of 560 nm. According to the manufacturer's protocol, cells were lysed using the lysis buffer supplied in the kit at a ratio of 100 μL per 10⁶ cells. After centrifugation at 12,000g for 5 minutes, the supernatants were collected. 50 μL of each supernatant sample was added to 100 μL of test solution, and the mixture was incubated for 20 minutes at room temperature. Finally, the absorbance at 560 nm was measured using a microplate reader (Elx800; BioTek). The level of H2O2 in cells was determined using a standard curve prepared by plotting the average blank-corrected 560 nm measurement for each standard.
### 2.10. Caspase 3 Activity Assay

Caspase 3 activity was measured using a Caspase 3 Activity Assay kit (Beyotime Biotechnology, Nanjing, China) following the manufacturer's instructions [32]. After being subjected to the treatments described above, cells were detached from the plates, washed with PBS, and centrifuged at 1200 rpm for 5 minutes at 4°C for cell collection and lysis. Caspase 3 activity was detected using the specific chromogenic substrate Ac-DEVD-pNA; the absorbance at 405 nm was measured using a microplate reader (Elx800; BioTek).

### 2.11. RNA Isolation and Real-Time PCR

RNA samples were prepared using the RNAiso Plus kit (Takara Bio Inc., Kusatsu, Shiga, Japan) according to the manufacturer's instructions. Total RNA (1 μg) was reverse transcribed into cDNA using the iScript™ cDNA synthesis kit (Takara Bio Inc.) according to the manufacturer's instructions. Real-time PCR was performed using iQ SYBR Green Supermix (Bio-Rad Laboratories, Hercules, CA, USA). Amplification, detection, and data analysis were performed with the iCycler real-time detection system (Bio-Rad). GAPDH was used as the endogenous control. Specific primer sets for Trx-1 and GAPDH were obtained from Invitrogen. The relative expression level of Trx-1 was determined using the 2^−ΔΔCt method (a short worked sketch follows). The primer sequences used for PCR were as follows: Trx-1, forward 5′-TTCTTTCATTCCCTCTGTG-3′ and reverse 5′-TCCGTAATAGTGGCTTCG-3′; GAPDH, forward 5′-GTTCTTCAATACGTCAGACATTCG-3′ and reverse 5′-CATTATCTTTGCTGTCACAAGAGC-3′.
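The 2^−ΔΔCt quantification mentioned above reduces to simple arithmetic on Ct values. The sketch below is our illustration only, with hypothetical Ct numbers rather than data from this study; it normalizes the target (Trx-1) to the reference gene (GAPDH) and then to the control sample, following the standard Livak formulation.

```python
def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-(ddCt) (Livak) method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: Trx-1 vs. GAPDH in transfected vs. control BMSCs.
fold = ddct_fold_change(ct_target_treated=20.1, ct_ref_treated=16.0,
                        ct_target_control=24.3, ct_ref_control=16.2)
print(f"Trx-1 relative expression: {fold:.1f}-fold")  # ddCt = -4.0, i.e. 16-fold
```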
### 2.12. Western Blot Analysis

Cell protein levels of Trx-1, apoptosis signal-regulating kinase 1 (ASK1), phosphorylated ASK1 (p-ASK1), p38, and phosphorylated p38 (p-p38) were analyzed by Western blotting, using β-actin as an internal reference. Briefly, total proteins were extracted using a protein extraction kit (KGP2100; KeyGEN Biotech, Nanjing, China), quantified by BCA protein assay (Guge Bio, Wuhan, China), electrophoresed on SDS-PAGE gels, and electrotransferred to PVDF membranes by wet transfer (Bio-Rad). Membranes were blocked for 1 hour with 5% skim milk and incubated overnight at 4°C with the primary antibodies. The anti-ASK1 antibody came from Abcam (cat. no. ab131506), the anti-p-ASK1 antibody came from Sigma (cat. no. SAB4504337), and all other antibodies came from Cell Signaling Technology Inc., Danvers, MA, USA. Membranes were washed in TBS/0.1% Tween-20 to remove excess primary antibodies and then incubated for 1 hour with the secondary antibodies (Cell Signaling Technology Inc.). After three washes in TBS/0.1% Tween-20, the protein bands were visualized using an enhanced chemiluminescence kit according to the manufacturer's instructions (ECL; Pierce Biotechnology Inc., Rockford, IL, USA). Densitometry was measured using ImageJ analysis software.

### 2.13. Antioxidant Enzyme Activity Measurements

The activities of total superoxide dismutase (T-SOD), catalase (CAT), and glutathione peroxidase (GSH-Px) were estimated using test kits according to the manufacturers' instructions. The T-SOD assay kit was purchased from Nanjing Jiancheng Biotechnology Co. Ltd. (Nanjing, Jiangsu, China) [33]; the GSH-Px and CAT activity assay kits were purchased from Beyotime Institute of Biotechnology (Shanghai, China) [34, 35]. Briefly, cells were washed with PBS and lysed using cell lysis buffer. Cell lysates were then centrifuged at 10,000g for 5 minutes at 4°C, and the supernatants were collected to determine enzyme activities. These assays were performed on the Elx800 microplate reader at 550 nm for T-SOD, 520 nm for CAT, and 340 nm for GSH-Px, respectively. The values were normalized and expressed as units per mg of protein, based on protein concentrations determined using the BCA protein assay (Guge Bio).

### 2.14. Enzyme-Linked Immunosorbent Assay (ELISA)

After treatment, culture supernatants were collected and spun at 300g for 10 minutes to remove cellular debris. The levels of keratinocyte growth factor (KGF), hepatocyte growth factor (HGF), and epidermal growth factor (EGF) were determined using ELISA kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. Each sample was analyzed in triplicate.

### 2.15. Statistical Methods

All data were reported as means ± standard deviations (mean ± SD) and analyzed using SPSS 18.0 (SPSS Inc., Chicago, IL, USA). Data were analyzed statistically using ANOVA or Student's t-test. Significance was accepted at P < 0.05.
## 3. Results

### 3.1. Characterization of BMSCs

The BMSC cultures were observed using an inverted light microscope. BMSCs are plastic-adherent cells that showed a flattened, spindle-shaped morphology. After about 10 days, the primary cultured cells developed into clusters and could be used for subculture. After two to three passages, BMSCs demonstrated a homogeneous fibroblast-like, spindle-shaped morphology. The morphological features of the BMSCs are shown in Figure 1(a). To verify the multipotent capacity of the cultured cells, we cultured the cells in adipogenic or osteogenic differentiation induction media for 21 days. Differentiation toward these cell lineages was demonstrated by oil red O and alizarin red staining, respectively (Figures 1(b) and 1(c)). As illustrated in Figure 1(d), the BMSC population was positive for CD29, CD44, CD73, CD105, and CD90, which are important cell surface markers of MSCs, but negative for CD45 and CD34, two specific cell surface markers of hematopoietic cells [11, 36, 37].

Figure 1: Characterization of rat bone marrow-derived mesenchymal stromal cells (BMSCs). (a) The plastic-adherent cells demonstrated a homogeneous fibroblast-like, spindle-shaped morphology. Original magnification, ×100. (b) Adipogenic differentiation of BMSCs stained with oil red O. Original magnification, ×200. (c) Osteogenic differentiation of BMSCs stained with alizarin red. Original magnification, ×400. (d) FACS analysis demonstrated expression of markers attributed to BMSCs. The cells were devoid of hematopoietic markers, as indicated by the lack of CD45 and CD34. The MSC-specific markers CD29, CD44, CD73, CD105, and CD90 were strongly expressed on the cells.

### 3.2. Stable Overexpression of Trx-1 in BMSCs

For stable overexpression of Trx-1 in BMSCs, the cells were transfected with a lentiviral vector encoding Trx-1. After transfection and drug selection, the expression of GFP-tagged Trx-1 was confirmed by fluorescence microscopy (Figure 2(a)). Compared to control cells, BMSCs-Trx-1 exhibited an 8-fold increase in Trx-1 mRNA expression and a 4-fold increase in protein content (Figures 2(b) and 2(c)). To examine whether BMSCs exhibit phenotypic changes after Trx-1 transfection, the expression patterns of cell surface markers were compared between intact BMSCs and BMSCs-Trx-1. We found no marked differences in the expression patterns of cell surface markers between the two cell types, indicating that, regardless of transfection, these cells were genetically stable (Supplement Figure 1).

Figure 2: Stable overexpression of Trx-1 in BMSCs. (a) Intense green fluorescence was observed by fluorescence microscopy (×100). (b) The mRNA levels of Trx-1 in BMSCs, BMSCs-Trx-1, and BMSCs-p. (c) Detection of Trx-1 protein expression by Western blot analysis. ∗∗P<0.01 compared to control. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.
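The fold changes quoted above (e.g., the roughly 8-fold increase in Trx-1 mRNA) follow from the 2−ΔΔCt method described in Section 2.11, with GAPDH as the endogenous control and intact BMSCs as the calibrator. A minimal sketch of that calculation, using invented Ct values rather than data from the paper:

```python
# Hedged sketch of 2^-ΔΔCt relative quantification (Section 2.11).
# GAPDH is the endogenous control; intact BMSCs serve as the calibrator.

def fold_change(ct_target, ct_gapdh, ct_target_calib, ct_gapdh_calib):
    """Return 2^-ΔΔCt for one sample relative to the calibrator."""
    delta_ct_sample = ct_target - ct_gapdh              # ΔCt, sample
    delta_ct_calib = ct_target_calib - ct_gapdh_calib   # ΔCt, calibrator
    return 2.0 ** -(delta_ct_sample - delta_ct_calib)   # 2^-ΔΔCt

# Hypothetical triplicate-mean Ct values for Trx-1 and GAPDH:
print(fold_change(ct_target=19.0, ct_gapdh=16.0,               # BMSCs-Trx-1
                  ct_target_calib=22.0, ct_gapdh_calib=16.0))  # intact BMSCs
# -> 8.0, i.e., an ~8-fold increase in Trx-1 mRNA
```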
### 3.3. Effects of Hyperoxia and Trx-1 Overexpression on Cell Proliferation

The effects of hyperoxia and Trx-1 overexpression on the proliferation of BMSCs were assessed using a CCK-8 assay kit. As shown in Figure 3, hyperoxia treatment inhibited BMSC proliferation in a time-dependent manner. Compared to cells cultured in normoxia, the growth rate of the hyperoxia-treated cells was significantly inhibited starting at 24 hours. After 48 hours of hyperoxia exposure, BMSCs-p proliferation was inhibited by more than 40%, whereas BMSCs-Trx-1 proliferation was inhibited by only 23%, suggesting that Trx-1 overexpression significantly increased cell proliferation under hyperoxic conditions.

Figure 3: Overexpression of Trx-1 promoted proliferation of BMSCs under hyperoxic conditions. Cells with or without Trx-1 overexpression were exposed to hyperoxia for the indicated times, and cell proliferation was estimated using a CCK-8 kit. Hyperoxia treatment inhibited BMSC proliferation; however, overexpression of Trx-1 increased the cell growth rate under hyperoxic conditions compared to BMSCs-p. The growth curve was generated by reading the absorbance at 450 nm, with each value computed as a percentage of the 0-hour value. Results are expressed as mean ± SD of three independent experiments, each performed in triplicate. ∗P<0.05 or 0.01 compared to the normoxia control; #P<0.05 or 0.01 compared to BMSCs-p under hyperoxic conditions. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.
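The percent-of-0-hour normalization described in the Figure 3 caption, and the percent-inhibition values quoted in the text, can be sketched as follows; all OD450 readings here are hypothetical.

```python
# Hedged sketch of the CCK-8 analysis behind Figure 3: blank-corrected OD450
# values are expressed as a percentage of the 0-hour reading, and inhibition
# is the growth deficit relative to normoxia. All ODs below are invented.

def percent_of_baseline(od_values, od_blank):
    corrected = [od - od_blank for od in od_values]
    return [100.0 * od / corrected[0] for od in corrected]

normoxia = percent_of_baseline([0.45, 0.80, 1.30, 1.90], od_blank=0.05)   # 0/12/24/48 h
hyperoxia = percent_of_baseline([0.45, 0.70, 0.95, 1.15], od_blank=0.05)

inhibition_48h = 100.0 * (1.0 - hyperoxia[-1] / normoxia[-1])
print(f"inhibition at 48 h: {inhibition_48h:.0f}%")  # ~41% with these numbers
```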
### 3.4. Assessment of Cell Apoptosis

To investigate the effects of hyperoxia and Trx-1 overexpression on the induction of apoptosis in BMSCs, we labeled cells with annexin V-PE, a marker of early apoptosis, and with 7-AAD, a marker of necrosis, and analyzed them by flow cytometry. As shown in Figures 4(a) and 4(b), hyperoxia induced apoptosis in a time-dependent manner regardless of Trx-1 overexpression. The percentage of apoptotic cells, identified by annexin V+ staining, increased in hyperoxia-treated BMSCs-p (about 20% at 24 hours and 35% at 48 hours). Trx-1 overexpression inhibited this hyperoxia-induced apoptosis, as shown by the decreased percentage of annexin V+ cells (about 13% at 24 hours and 20% at 48 hours).

Figure 4: Effect of Trx-1 on cell apoptosis in BMSCs. Cells were exposed to hyperoxia for 0, 12, 24, and 48 hours and stained with annexin V-PE/7-AAD before flow cytometry analysis. (a) Dot plots of flow cytometry analysis. Intensity of 7-AAD staining (y-axis) was plotted versus annexin V intensity (x-axis). Numbers indicate the percentage in each region. (b) The graph shows the percentage of apoptosis as defined by annexin V+. The results are representative of 3 independent experiments. (c) Caspase 3 activity, measured with the caspase 3 activity kit. Bar graphs represent the relative caspase 3 activity calculated for each group. The results are representative of 3 independent experiments. ∗P<0.05, ∗∗P<0.01 compared with the BMSCs-p group or BMSCs. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.

### 3.5. Trx-1 Inhibits Caspase 3 Activity

Caspase 3 is one of the key mediators of apoptosis; therefore, to further evaluate the antiapoptotic effects of Trx-1, we monitored caspase 3 activity using the Caspase 3 Activity Assay Kit. Caspase 3 activity increased when cells were treated with hyperoxia (Figure 4(c)). Compared to BMSCs and BMSCs-p, overexpression of Trx-1 in BMSCs-Trx-1 reduced caspase 3 activity under hyperoxic conditions, with the largest difference seen at 48 hours (about 50% inhibition compared to BMSCs-p).

### 3.6. Trx-1 Reduced Intracellular Total ROS and Hydrogen Peroxide Formation under Hyperoxic Conditions

To further explore the mechanisms by which Trx-1 reduces hyperoxia-induced BMSC injury, intracellular ROS levels were measured by flow cytometry analysis of cells stained with CellROX deep red reagent. As shown in Figure 5(a), exposure of BMSCs to hyperoxia markedly increased the generation of ROS in a time-dependent manner (a 2-fold increase at 48 hours). Compared with the BMSCs-p group, Trx-1 overexpression markedly decreased hyperoxia-induced ROS formation in the BMSCs-Trx-1 group (a decrease of 20–30% versus the BMSCs-p control).

Figure 5: Effects of Trx-1 on intracellular ROS levels in BMSCs. (a) Intracellular ROS production was measured with CellROX deep red reagent, which detects total ROS rather than a particular species. The relative fluorescence intensity was expressed as a percentage of control cells (BMSCs-p at 0 hours). (b) The level of intracellular H2O2 was measured using a hydrogen peroxide assay kit. Experiments were repeated three times. ∗P<0.05, ∗∗P<0.01 versus the corresponding group. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.
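As a minimal sketch of the Figure 5(a) normalization, mean fluorescence intensities (MFI) from FACS are expressed as a percentage of the BMSCs-p 0-hour control; the MFI values below are hypothetical.

```python
# Hedged sketch of the ROS readout in Figure 5(a): CellROX MFI as a
# percentage of the BMSCs-p 0-hour control. Values are invented.

control_mfi = 1200.0  # BMSCs-p at 0 h (assumed value)
samples = {"BMSCs-p, 48 h": 2400.0, "BMSCs-Trx-1, 48 h": 1750.0}

for name, mfi in samples.items():
    print(f"{name}: {100.0 * mfi / control_mfi:.0f}% of control")
# With these numbers BMSCs-p shows a 2-fold increase (200%), while
# BMSCs-Trx-1 remains ~27% below BMSCs-p, within the reported 20-30% range.
```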
Subsequently, the level of intracellular H2O2 was determined, as it is an important ROS. H2O2 production increased with longer hyperoxia exposure (normoxia: 0.23 μM versus hyperoxia: 3.5 μM at 48 hours) (Figure 5(b)). Trx-1 overexpression inhibited hyperoxia-induced H2O2 generation in BMSCs, with the strongest inhibition occurring at 12 hours (greater than 35% inhibition).

### 3.7. Effects of Trx-1 on Antioxidant Enzyme Activities in BMSCs

The activities of three major endogenous antioxidant enzymes (SOD, CAT, and GSH-Px) were then analyzed in the three BMSC lines (BMSCs, BMSCs-p, and BMSCs-Trx-1). After treatment with hyperoxia, significant increases in SOD and GSH-Px activities were detected in all three groups of BMSCs. As shown in Figure 6(a), Trx-1 overexpression further increased SOD activity compared to BMSCs with normal Trx-1 expression. GSH-Px activity increased by a similar degree in all three BMSC lines after 12 hours of hyperoxia exposure (Figure 6(b)) and then began to decrease gradually after 24 hours. However, compared to BMSCs and BMSCs-p, BMSCs-Trx-1 maintained higher GSH-Px activity after 24 and 48 hours of hyperoxia exposure. Trx-1 was not found to have any effect on CAT activity (Figure 6(c)).

Figure 6: Effects of Trx-1 overexpression on antioxidant enzyme activities in BMSCs under hyperoxic conditions. (a) Superoxide dismutase (SOD) activities were measured using the SOD assay kit. (b) Glutathione peroxidase (GSH-Px) activities were measured using the glutathione peroxidase assay kit. (c) Catalase (CAT) activities were measured using the CAT assay kit. Data are representative of duplicate samples from five experiments. ∗∗P<0.01. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.

### 3.8. Trx-1 Has No Effect on Cytokine Secretion from BMSCs

Recently, an increasing number of studies have shown that the protective effects of BMSC transplantation may be predominantly mediated by paracrine, rather than regenerative, mechanisms [7]. To determine whether Trx-1 exerts its cytoprotective effects by regulating cytokine secretion from BMSCs, the levels of EGF, KGF, and HGF in the cell culture medium were assayed by ELISA. Trx-1 overexpression only slightly increased the levels of secreted EGF, KGF, and HGF, and these differences were not statistically significant across the three groups (Supplement Figure 2).

### 3.9. Effects of Trx-1 on the ASK1/P38 MAPK Pathway

To investigate the influence of hyperoxia on Trx-1 expression, we compared the protein levels of Trx-1 after different hyperoxia exposures. As shown in Figures 7(a) and 7(b), Trx-1 expression was significantly increased (by about 50%) after 12 hours of hyperoxia exposure but returned to almost normal levels after 24 hours of hyperoxia treatment in BMSCs-p cells. In BMSCs-Trx-1 cells, Trx-1 expression remained unchanged throughout 0 to 48 hours of hyperoxia treatment.

Figure 7: Western blot results. Trx-1, phospho-ASK1, total ASK1, phospho-p38, and total p38 expression levels were detected by Western blotting. (a) Representative Western blot bands. (b) Trx-1 densitometric analysis. (c) p-ASK1/ASK1 densitometric analysis. (d) p-p38/p38 densitometric analysis. Data are representative of three independent experiments. ∗P<0.05; ∗∗P<0.01 versus the corresponding group. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.
Hyperoxia-induced activation of ASK1 was confirmed by a significant increase in phospho-ASK1 levels detected by Western blotting, and this increase was significantly inhibited by Trx-1 overexpression (Figures 7(a) and 7(c)). We next examined whether p38, a potential downstream signal of ASK1, was involved in the pathogenesis of hyperoxic cell injury. As shown in Figures 7(a) and 7(d), activation of p38 via phosphorylation was upregulated under hyperoxic conditions, as measured by phospho-p38, with levels peaking at 24 hours (about 4-fold upregulation compared to 0 hours). Trx-1 overexpression in BMSCs significantly suppressed the phosphorylation of p38.
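The densitometric ratios in Figures 7(c) and 7(d) are phospho/total band-intensity ratios (quantified in ImageJ, with β-actin as the loading control) expressed relative to the 0-hour value. A minimal sketch with invented band intensities:

```python
# Hedged sketch of the Figure 7(c)/7(d) densitometry: phospho/total band
# ratios relative to 0 hours. Band intensities below are invented.

def relative_activation(p_band, total_band, p_band_0h, total_band_0h):
    """(phospho/total) at time t divided by (phospho/total) at 0 hours."""
    return (p_band / total_band) / (p_band_0h / total_band_0h)

# Hypothetical ImageJ integrated intensities for p-p38 and total p38
# in BMSCs-p at 0 h and after 24 h of hyperoxia:
fold_24h = relative_activation(p_band=820, total_band=1000,
                               p_band_0h=205, total_band_0h=1000)
print(f"p-p38/p38 at 24 h: {fold_24h:.1f}-fold vs. 0 h")  # -> 4.0, as reported
```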
## 4. Discussion

Bone marrow-derived mesenchymal stem cells (BMSCs) are easily isolated and amplified, are immunologically tolerant, and have multilineage potential, which makes them an ideal candidate for intense investigation as a cell-based therapeutic strategy for many kinds of diseases, including BPD [38]. The primary BMSCs isolated in this study have the properties of mesenchymal stromal cells according to the criteria of the International Society for Cellular Therapy (ISCT) [39]: they are spindle shaped, plastic adherent, CD29+, CD44+, CD73+, CD105+, CD90+, CD45−, and CD34−, and they are capable of multipotent differentiation (Figure 1).

In rodent models of BPD, MSC administration by intravenous injection or intratracheal instillation stimulated lung tissue repair and decreased vascular remodeling, pulmonary hypertension, and right ventricular hypertrophy [40]. Furthermore, in experimental models of BPD, intratracheal administration of MSC-conditioned medium produced short-term regenerative effects similar to those of MSC administration [41].
MSCs exert protective effects in BPD not only by engraftment and differentiation into specific lung cell types but also by secreting several anti-inflammatory cytokines and growth factors that affect cell proliferation, differentiation, and survival [7].

Despite notable advances in the field of MSC-based therapy, the reported functional improvements are generally modest, partly because of the low cellular survival rate [7, 11]. Studies have indicated that pathophysiological environmental conditions, including oxidative stress and inflammation, can lead to poor viability and apoptosis of MSCs [42]. MSCs typically require a low oxygen tension environment of about 2%–8% O2 [43]. In view of these observations, together with the fact that oxygen toxicity plays a critical role in the lung injury process leading to BPD [44], hyperoxia is likely among the first factors to threaten MSC survival in BPD. In the present study, we demonstrated that hyperoxia inhibited BMSC proliferation by 26.8% at 24 hours and 42% at 48 hours (Figure 3). Consistent with this result, hyperoxia treatment induced BMSC apoptosis in 20% of cells at 24 hours and 35% of cells at 48 hours (Figure 4). These results suggest that hyperoxia-induced injury plays a key role in BMSC death. Therefore, strategies to improve BMSC tolerance to hyperoxic conditions might improve the survival of transplanted cells and consequently increase their beneficial therapeutic effects on hyperoxia-induced injury.

Recently, diverse approaches involving the genetic modification of MSCs have been undertaken to increase their survival [45]. The thioredoxin system has been demonstrated to play a key role in modulating redox signaling pathways and can be induced by a wide variety of stress conditions, such as oxidative stress, ultraviolet irradiation, γ-rays, hypoxia, lipopolysaccharide, and viral infections [46–48]. In the present study, we demonstrated that hyperoxia could also induce Trx-1 expression in BMSCs, but only within a short time frame (12 hours) (Figures 7(a) and 7(b)). Our previous studies have shown that exogenous addition of Trx can prevent hyperoxia-induced alveolar type II epithelial cell apoptosis [20]. Furthermore, cell injury in A549 cells, a lung epithelial adenocarcinoma cell line, has been shown to be significantly aggravated by Trx-specific siRNA under hyperoxic conditions [49]. In other cells, Trx-1 redox signaling was reported to regulate H1299 cell survival in response to hyperoxia [50], and hyperoxic impairment of Trx-1 has a negative impact on peroxiredoxin-1 and HSP90 oxidative responses. These studies have led to the idea that Trx-1 can promote MSC survival under various conditions. Suresh et al. overexpressed Trx-1 to increase the survival of engrafted MSCs in the treatment of cardiac failure [8]. Their results showed that, following myocardial infarction, treatment with MSCs transfected with Trx-1 overexpression vectors increased the cells' capacity for survival, proliferation, and differentiation, which improved heart function and decreased fibrosis compared with untransfected MSCs. Based on a similar premise, the present study aimed to determine whether Trx-1 overexpression can attenuate hyperoxia-induced BMSC injury, using BMSCs we successfully engineered to overexpress the Trx-1 gene.
Additionally, we confirmed that Trx-1 overexpression did not affect the genetic stability of BMSCs (Supplement Figure 1).

To examine the effect of Trx-1 on BMSC survival under hyperoxic conditions, cell proliferation rates and apoptosis were estimated in rat BMSCs with or without Trx-1 overexpression. As shown in Figures 3 and 4, BMSCs-Trx-1 showed increased cell proliferation rates and decreased apoptosis under hyperoxic conditions compared to the BMSCs-p control, suggesting that Trx-1 overexpression renders cells more resistant to hyperoxic stress. Caspases, a family of cysteine proteases, are expressed in almost all cell types as inactive proenzymes, and caspase activation is thought to be a key step in the genesis of apoptosis. Caspases are either initiators or executioners, and caspase 3 is known to play a key role in the execution of apoptosis [51]. To test whether caspase 3 was involved in hyperoxia-induced apoptosis, we probed for caspase 3 activity. Caspase 3 activity increased more than 2-fold after 24 hours and almost 4-fold after 48 hours of hyperoxia treatment compared to untreated control cells (0 hours). These results indicate that hyperoxia-induced BMSC apoptosis is, at least in part, caspase 3 dependent. We found that this hyperoxia-induced activation of caspase 3 was strongly inhibited by Trx-1 overexpression. These results, together with the annexin V staining assay showing a decrease in apoptosis, suggest that Trx-1 inhibited hyperoxia-induced BMSC apoptosis mainly through a caspase 3-dependent pathway.

The effects of hyperoxia on cellular function and survival have been widely held to be secondary to the generation of ROS, and ROS have been shown to act as upstream signaling molecules that initiate cell death under hyperoxic conditions [52]. In our study, hyperoxia exposure resulted in an increase of intracellular ROS; however, Trx-1 overexpression could partly reverse these effects of hyperoxia (Figure 5).

H2O2 is a crucial ROS that is involved in cell signaling but can alter the intracellular redox environment when produced in excess, leading to many pathophysiological conditions [53]. During exposure to hyperoxia, ROS production is seen through the increased release of H2O2 by lung mitochondria and microsomes [54]. Accumulating evidence suggests that hyperoxia promotes intracellular H2O2 accumulation, with H2O2 playing a key role in oxidative stress-induced injury from ROS [55]. It has been confirmed that the Trx system, which is composed of an NADPH-dependent thioredoxin reductase (TrxR) and Trx, provides electrons to thiol-dependent peroxidases (peroxiredoxins (Prx)) to directly remove H2O2 [53]. In the present study, we observed increased H2O2 generation in hyperoxia-exposed BMSCs, while Trx-1 overexpression decreased H2O2 generation under hyperoxic conditions. Additionally, compared to total ROS, H2O2 was more strongly induced by hyperoxia, which suggests that H2O2 is the main component of the intracellular ROS generated under hyperoxic conditions. However, more evidence is needed to confirm this hypothesis.

As mentioned earlier, ROS are not only cytotoxic products of the external and internal environment but also important mediators of redox signaling. Trx therefore acts as an antioxidant to maintain the balance of the thiol-related redox status and thus plays a pivotal role in the regulation of redox signaling and of cell survival and death [48].
Trx-1 is known to regulate several transcription factors such as NF-κB, p53, and Ref-1, as well as some apoptotic factors like ASK1 [56–58]. ASK1 is a member of the mitogen-activated protein kinase kinase kinase (MAPKKK) group; it can be activated by various stresses, such as oxidative stress, and can then activate caspase 3 and promote apoptosis [59]. As such, ASK1 is necessary for ROS-induced cell death and inflammation [60]. Fukumoto et al. reported that deletion of ASK1 protects against hyperoxia-induced acute lung injury [61]. As shown in Figure 7, we confirmed that the activity of ASK1 was upregulated by hyperoxia in a time-dependent manner. It has been shown that Trx is a negative regulator of ASK1 [56]. As illustrated in Figure 8, in resting cells ASK1 forms an inactive complex with reduced Trx-1, but oxidation of Trx-1 leads to its dissociation from ASK1, switching ASK1 to an active kinase [48]. It has also been reported that overexpression of Trx in endothelial cells induces ASK1 ubiquitination and degradation [16]. To determine whether overexpression of Trx-1 protects BMSCs from hyperoxia-induced injury via inhibition of the ASK1 signaling pathway, we examined the activation status of ASK1 and of its downstream proapoptotic factor, the p38 MAP kinase. We did not observe obvious changes in total ASK1 levels, but the results demonstrated that Trx-1 overexpression inhibited hyperoxia-induced ASK1 activation. The activation of p38 has also been shown to be associated with hyperoxia-induced cell damage [62]. Previously, we demonstrated that Trx can protect alveolar epithelial cells from hyperoxia-induced damage by decreasing p38 activation [20], and ASK1 has been reported to be required for the sustained activation of JNK/p38 MAP kinases leading to apoptosis [63]. In the present study, we showed that Trx-1 overexpression in BMSCs inhibited hyperoxia-induced p38 activation. Taken together, these results indicate that inhibition of the ASK1/P38 pathway was involved in the mechanism of Trx-1-mediated protection of BMSCs from hyperoxia-induced injury (Figure 8). Recently, several studies have suggested that the protective effects of stem cell transplantation might be predominantly mediated by a paracrine mechanism [7, 38] and that growth factors such as VEGF, HGF, and KGF are critical in mediating the protective effects of MSCs against hyperoxic lung injury [64]. With regard to these growth factors, we found no difference between our three BMSC lines under hyperoxic conditions in vitro (Supplement Figure 2). Based on this, Trx-1 appears to protect BMSCs from hyperoxia-induced injury independently of paracrine growth factors. However, whether Trx-1 overexpression affects the therapeutic effect of BMSCs via paracrine growth factors in vivo requires further study.

Figure 8: A schematic model of the regulation of the ASK1/P38 signaling pathway by Trx-1. (a) The Trx-1 system contains NADPH, TrxR-1, and Trx-1. Oxidized Trx-1 (the inactive form) is transformed into the active, reduced form by receiving electrons from the NADPH coenzyme in the presence of TrxR-1. Prxs reduce H2O2 to H2O using electrons from active Trx-1, and active Trx-1 also regulates redox signals by reducing many other target proteins with disulfide bonds. Under normoxic conditions, ASK1 constantly forms an inactive complex with reduced Trx-1. (b) Exposure of BMSCs to hyperoxia leads to elevated ROS and H2O2 production, resulting in oxidative stress.
Under this oxidative stress, Trx-1 is oxidized and dissociates from ASK1, leading to the subsequent activation of ASK1. Activated ASK1 in turn activates the p38 pathway and induces various cellular responses, including apoptosis and inhibition of differentiation. Trx-1 overexpression promoted BMSC survival under hyperoxic conditions through elevation of antioxidant activities, reduction of ROS and H2O2 generation, and subsequent inhibition of the ASK1/P38 signaling pathway.

Other mechanisms, such as changes in the expression of antioxidant enzymes, may also be involved in the response of Trx-1-overexpressing BMSCs to hyperoxia. Several studies have reported greater MSC survival of oxidative stress injury via increased activities of antioxidant enzymes [65, 66]. Three of the primary antioxidant enzymes in oxygen-metabolizing mammalian cells believed to be necessary for cell survival are SOD, CAT, and GSH-Px. SOD is a metalloenzyme that catalyzes the dismutation of superoxide anion into O2 and H2O2; H2O2 is subsequently reduced to H2O by GSH-Px in the cytosol or by CAT in peroxisomes or the cytosol [67]. In our study, we demonstrated that Trx-1 overexpression enhanced the activities of the antioxidant enzymes SOD and GSH-Px, resulting in the maintenance of relatively low intracellular levels of ROS and H2O2, as shown in Figures 5 and 6. CAT, a common enzyme found in nearly all living organisms exposed to oxygen, is a very important enzyme in the biological defense system, and Zhang et al. demonstrated that CAT transduction was able to increase MSC viability and promote ischemia-induced angiogenesis [68]. However, we did not observe any effect on CAT activity in this study. The mechanism by which Trx-1 selectively affects different antioxidant enzymes requires further study.

In conclusion, our results indicate that hyperoxia exposure induced BMSC apoptosis, which may contribute to the low survival rate of transplanted BMSCs, and that Trx-1 overexpression significantly improves the survival of BMSCs. A summary of our results is shown in Figure 8.

--- *Source: 1023025-2018-01-21.xml*
1023025-2018-01-21_1023025-2018-01-21.md
68,905
Thioredoxin-1 Protects Bone Marrow-Derived Mesenchymal Stromal Cells from Hyperoxia-Induced Injury In Vitro
Lei Zhang; Jin Wang; Yan Chen; Lingkong Zeng; Qiong Li; Yalan Liu; Lin Wang
Oxidative Medicine and Cellular Longevity (2018)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1023025
1023025-2018-01-21.xml
--- ## Abstract Background. The poor survival rate of mesenchymal stromal cells (MSC) transplanted into recipient lungs greatly limits their therapeutic efficacy for diseases like bronchopulmonary dysplasia (BPD). The aim of this study is to evaluate the effect of thioredoxin-1 (Trx-1) overexpression on improving the potential for bone marrow-derived mesenchymal stromal cells (BMSCs) to confer resistance against hyperoxia-induced cell injury. Methods. 80% O2 was used to imitate the microenvironment surrounding-transplanted cells in the hyperoxia-induced lung injury in vitro. BMSC proliferation and apoptotic rates and the levels of reactive oxygen species (ROS) were measured. The effects of Trx-1 overexpression on the level of antioxidants and growth factors were investigated. We also investigated the activation of apoptosis-regulating kinase-1 (ASK1) and p38 mitogen-activated protein kinases (MAPK). Result. Trx-1 overexpression significantly reduced hyperoxia-induced BMSC apoptosis and increased cell proliferation. We demonstrated that Trx-1 overexpression upregulated the levels of superoxide dismutase and glutathione peroxidase as well as downregulated the production of ROS. Furthermore, we illustrated that Trx-1 protected BMSCs against hyperoxic injury via decreasing the ASK1/P38 MAPK activation rate. Conclusion. These results demonstrate that Trx-1 overexpression improved the ability of BMSCs to counteract hyperoxia-induced injury, thus increasing their potential to treat hyperoxia-induced lung diseases such as BPD. --- ## Body ## 1. Introduction Bronchopulmonary dysplasia (BPD) is a chronic lung disease that typically occurs in very low-birth-weight premature infants following supplemental oxygen therapy and mechanical ventilation. An increase in the survival rate of these extremely premature infants has been associated with an increased incidence of BPD. BPD is a multifactorial disease, and hyperoxia, or oxygen toxicity, is known to play a key role in its pathogenesis [1]. Oxygen toxicity is believed to be mediated by the production and accumulation of reactive oxygen species (ROS), such as superoxide (O2−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), to levels exceeding the capacity of the antioxidant defense mechanisms [2]. It is well known that ROS is required in a myriad of physiological reactions, cell fate decisions, and signal transduction pathways. However, overwhelming accumulation of ROS will trigger severe oxidative stress through enzyme oxidation, protease inhibition, DNA synthesis inhibition, and lipid peroxidation, which commits cells to necrosis or apoptosis [3].Currently, no effective treatments beyond supportive therapies are available for BPD. Stem cell-based treatment via tissue engineering is currently an increasing focus of research [4]. As bone marrow-derived mesenchymal stromal cells (BMSCs) come from an autologous source and are easy to isolate and amplify [4], they are the ideal seed cells for tissue engineering across broad tissue types such as the liver [5], bone [6], lung [7], heart [8], and kidney [9]. Recently, studies have shown that lung repair by BMSC therapy could be a promising and novel therapeutic modality for attenuating BPD severity [10, 11]. These studies have demonstrated that BMSCs enhance lung repair by direct regeneration or through secreting paracrine factors. 
However, several studies confirmed the low survival and poor engraftment rates of MSCs in recipient lungs, which greatly limits their therapeutic efficacy, as survival of the transplanted cells in the pathological environment is critical for their beneficial effects [12, 13]. Hence, one major focus in the field is to explore the mechanism involving the BMSC injury in pathological environment and to develop strategies to enhance BMSC survival and engraftment rates.Thioredoxin (Trx), a ubiquitous small protein (12 kDa) containing a redox-active dithiol/disulfide at a highly conserved active site, was originally identified as a hydrogen donor for ribonucleotide reductase inEscherichia coli [14]. There are two main thioredoxins: thioredoxin-1 (Trx-1), a cytosolic form, and thioredoxin-2 (Trx-2), a mitochondrial form. Trx, along with Trx reductase (TrxR) and nicotinamide adenine dinucleotide phosphate (NADPH), has been shown to catalyze protein disulfide reduction and is thought to be a strong ROS scavenger [15]. Trx-1 participates in redox reactions through reversible oxidation of its dithiol active center to disulfide which catalyzes dithiol-disulfide exchange reactions involved in many thiol-dependent processes [16]. By this way, Trx-1 acts on oxidized, therefore inactive, proteins by reducing them and restoring their functionality. Recent studies have shown that Trx-1 not only regulates the cellular redox balance by scavenging intracellular ROS ingredients, such as hydrogen peroxide (H2O2), but also has other biological activities, including regulation of cell growth, transcription factors, gene expression, apoptosis, and immune regulatory effects [17–19]. Our previous studies suggest that Trx protects alveolar epithelial cells from hyperoxia-induced injury by reducing ROS generation, elevating antioxidant activities, and regulating the MAPK and PI3K-Akt pathways [20].Based on previous studies from others and our own work, we hypothesize that BMSCs suffer severe injury under hyperoxic conditions and that increased Trx-1 expression in BMSCs may serve to counteract the negative effects of hyperoxia-induced cell injury. To better understand the mechanism of Trx-1, we also looked into the signaling pathways mediated by it in hypoxia-induced cell injury. Our data may provide a new perspective in the development of BMSC therapeutic strategies. ## 2. Materials and Methods ### 2.1. BMSC Culture All studies were performed under the approval of the Ethics Committee of the Animal Facility of Huazhong University of Science and Technology. BMSCs were isolated from the bone marrow of 6- to 7-week-old male Sprague-Dawley rats (provided by Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China) according to the previously described method with some modifications [11, 21, 22]. Briefly, bone marrow cells were flushed from rat tibias and femurs, suspended by pipetting, and filtered via nylon mesh (70 μm). The collected mononuclear cells were washed three times with Dulbecco’s phosphate-buffered saline (DPBS). The cells were suspended in a culture medium (DMEM medium containing 10% FBS, 0.02% sodium bicarbonate, 2 mM L-glutamine, 15 mM HEPES buffer, 100 units/mL penicillin, and 100 μg/mL streptomycin) and incubated at 37°C in an atmosphere of 95% humidified air and 5% CO2 for 24 h. The medium was exchanged with a fresh culture medium in an attempt to deplete the nonadherent cells. 
When adherent cells were grown to approximately 75% confluency, they were trypsinized and reseeded at a density of 105 cells/cm2. ### 2.2. Phenotypic Analysis of BMSCs Flow cytometric analysis was performed to characterize the phenotype of BMSCs. Cells were suspended in 100μL DPBS supplemented with 2% FBS. Phycoerythrin- (PE-) coupled antibodies against CD29 (eBiosciences, cat. no. 12-0291, San Diego, CA, USA), CD34 (Santa Cruz Biotechnology, cat. no. sc-74499), CD44 (Santa Cruz Biotechnology, cat. no. sc-7297), CD45 (Santa Cruz Biotechnology, cat. no. sc-1178), and CD90 (Santa Cruz Biotechnology, cat. no. sc-53456) were added separately, followed by incubation at 4°C for 30 minutes. For the detection of cell surface antigens CD105 and CD73, cells were incubated with the first antibodies against CD105 (Abcam, cat. no. ab156756) and CD73 (Abcam, cat. no.ab175396) for 1 hour at 4°C, washed, and then incubated for 1 hour at 4°C with Alexa Fluor 647-conjugated second antibodies (Invitrogen, cat. nos. A-21235 and A-21244). Irrelevant isotype-identical antibodies served as negative control. After washing, more than 10,000 cells were acquired using a FACS Calibur (Becton Dickins) flow cytometer and analyzed with FlowJo software (FlowJo LLC, Ashland, Oregon, USA). ### 2.3. BMSC Differentiation To confirm that our cultured cells have multipotent potential, we tested BMSC P3 cultures for their ability to undergo differentiation into osteocytes and adipocytes as previously described [23, 24]. Briefly, osteogenic differentiation was induced by incubating BMSCs with an osteogenic medium (RASMX-90021; Cyagen Biosciences, Guangzhou, China), and adipocyte differentiation was induced by maintaining BMSCs in an adipocyte differentiation medium (RASMX-90031; Cyagen Biosciences). After 21 days of differentiation, cells were fixed and stained with alizarin red and oil red O separately. ### 2.4. Transfection In order to achieve high efficiency of introduction and subsequent stable expression of rat Trx-1 in BMSCs, a lentiviral vector was employed. Briefly, the third passage of BMSCs was transfected with lentiviral vectors carrying Trx-1 and green fluorescent protein (GFP) (pCDH-CMV-Trx-1-EF1α-copGFP) or a lentiviral vector carrying only GFP (pCDH-CMV-MCS-EF1α-copGFP) using the Lipofectamine 2000 transfection reagent according to the manufacturer’s instructions (Invitrogen, Carlsbad, CA, USA). The transfected BMSCs were termed BMSCs-p (lentiviral vector only carried GFP) and BMSCs-Trx-1 (lentiviral vectors carried Trx-1 and GFP). The recombinant plasmids were constructed and identified by Wuhan Transduction Bio Co. Ltd. (Wuhan, China). Stably transfected cells were then selected by incubation in the fresh FBS-supplemented DMEM culture medium containing 500 μg/mL G418. BMSCs not subjected to transfection served as control cells. The expression of Trx-1 was detected by reverse transcriptase polymerase chain reaction (RT-PCR) analysis and Western blot analysis. ### 2.5. Hyperoxia and Normoxia Treatment Cells were seeded into 6-, 24-, or 96-well cell culture plates overnight. The next day, cells were placed in hyperoxia (80% O2, 5% CO2) or normoxic (21% O2, 5% CO2) environment as previously described [20, 25, 26]. The concentration of O2 was monitored in real time with a digital oxygen monitor (Hengaode, Beijing, China). Cells were harvested at 0, 12, 24, and 48 hours. ### 2.6. 
Cell Proliferation Assay In order to determine the influence of Trx-1 overexpression on BMSC proliferation, cell proliferation assays were performed using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to manufacturer’s protocol. Cells were seeded into a 96-well plate in triplicate at 5000 cells/well and cultured overnight. Cells were then exposed to hyperoxic or normoxic conditions described above. After the exposures, the number of cells per well was measured by the 450 nm absorbance of reduced WST-8 (2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2, 4-sulfophenyl)-2H-tetrazolium, monosodium salt) at the indicated time points [27, 28]. In addition, a blank control well was set containing only the culture medium. ### 2.7. Cell Apoptosis Assay Apoptosis was measured by flow cytometry after annexin V-PE/7-AAD staining (BD Pharmingen, USA) according to manufacturer’s instructions. Briefly, the treated cells were harvested with Accutase solution (Gibco/Life Technologies, cat. no. A11105-01), washed twice with cold PBS, and suspended in 1x binding Buffer. Then, the cells were labeled with annexin V-PE and 7-AAD for 15 minutes at room temperature in the dark. Apoptosis-positive control cells were placed in 50°C water bath for 5 minutes. Finally, the cells were subjected to flow cytometry analysis using a FACS Caliber flow cytometer (BD Biosciences, CA) within 30 minutes. ### 2.8. Measurement of Intracellular ROS Accumulation ROS production was measured with CellROX® deep red reagent from Molecular Probes (Eugene, OR, USA). The CellROX deep red reagent is a fluorogenic probe designed to reliably measure ROS in living cells. The cell-permeable CellROX deep red dye is nonfluorescent while in a reduced state and becomes fluorescent upon oxidation by reactive oxygen species with absorption/emission maxima at ~644/665 nm [29]. After treatment, cells were incubated at 37°C for 30 minutes in complete DMEM with 5 mM CellROX deep red reagent. Then, the medium was removed and the cells were washed 3 times with PBS. Cells were collected and suspended in PBS. Fluorescence was immediately measured using FACS analysis, and values were reported as mean fluorescence intensity. ### 2.9. Hydrogen Peroxide Assay The level of intracellular H2O2 was measured using Hydrogen peroxide assay kit (Beyotime Institute of Biotechnology, China) as described previously [30, 31]. In this assay system, ferrous ions (Fe2+) are oxidized to ferric ions (Fe3+) by H2O2. Then, the Fe3+ and the indicator dye xylenol orange form a purple complex, which is measurable with a microplate reader at a wavelength of 560 nm.According to the manufacturer’s protocol, cells were lysed using the lysis buffer solution supplied in the kit at a ratio of 100μL per 106 cells. After centrifugation at 12,000g for 5 minutes, the supernatants were collected. 50 μL of each supernatant sample was put into 100 μL of test solution, and the mixture was incubated for 20 minutes at room temperature. Finally, the absorbance at 560 nm was measured using a microplate reader (Elx 800; BioTek). The level of H2O2 in cells was determined using a standard curve prepared by plotting the average blank-corrected 560 nm measurement for each standard. ### 2.10. Caspase 3 Activity Assay Caspase 3 activity was measured using Caspase 3 Activity Assay kit (Beyotime Biotechnology, Nanjing, China) following the manufacturer’s instructions [32]. 
### 2.10. Caspase 3 Activity Assay Caspase 3 activity was measured using a Caspase 3 Activity Assay kit (Beyotime Biotechnology, Nanjing, China) following the manufacturer's instructions [32]. After the treatments described above, cells were detached from the plates, washed with PBS, and centrifuged at 1200 rpm for 5 minutes at 4°C for cell collection and lysis. Caspase 3 activity was detected using the specific chromogenic substrate Ac-DEVD-pNA; the absorbance at 405 nm was measured using a microplate reader (Elx 800; BioTek). ### 2.11. RNA Isolation and Real-Time PCR RNA samples were prepared using the RNAiso plus kit (Takara Bio Inc., Kusatsu, Shiga, Japan) according to the manufacturer's instructions. Total RNA (1 μg) was reverse transcribed into cDNA using the iScript™ cDNA synthesis kit (Takara Bio Inc.) according to the manufacturer's instructions. Real-time PCR was performed using iQ SYBR Green Supermix (Bio-Rad Laboratories, Hercules, CA, USA). Amplification, detection, and data analysis were performed with the iCycler real-time detection system (Bio-Rad). GAPDH was used as the endogenous control. Specific primer sets for Trx-1 and GAPDH were obtained from Invitrogen. The relative expression level of Trx-1 was determined using the 2^(−ΔΔCt) method. The primer sequences used for PCR were as follows: Trx-1, forward 5′-TTCTTTCATTCCCTCTGTG-3′ and reverse 5′-TCCGTAATAGTGGCTTCG-3′; GAPDH, forward 5′-GTTCTTCAATACGTCAGACATTCG-3′ and reverse 5′-CATTATCTTTGCTGTCACAAGAGC-3′.
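To make the 2^(−ΔΔCt) arithmetic explicit, here is a minimal sketch of the calculation; the Ct values are invented solely for illustration.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene in a treated sample versus the
    control sample, normalized to a reference gene (GAPDH here)."""
    delta_ct_sample = ct_target - ct_ref             # normalize sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values, e.g., Trx-1 in BMSCs-Trx-1 versus intact BMSCs:
fold = relative_expression(ct_target=21.0, ct_ref=17.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=17.0)
print(f"Relative Trx-1 expression: {fold:.1f}-fold")  # prints 8.0-fold
```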
### 2.12. Western Blot Analysis Cell protein levels of Trx-1, apoptosis signal-regulating kinase 1 (ASK1), phosphorylated ASK1 (p-ASK1), p38, and phosphorylated p38 (p-p38) were analyzed by Western blotting, using β-actin as an internal reference. Briefly, total proteins were extracted using a protein extraction kit (KGP2100; KeyGEN Biotech, Nanjing, China), quantified by BCA protein assay (Guge Bio, Wuhan, China), electrophoresed on SDS-PAGE gels, and electrotransferred to PVDF membranes by wet transfer (Bio-Rad). Membranes were blocked for 1 hour with 5% skim milk and incubated overnight at 4°C with the primary antibodies. The anti-ASK1 antibody was from Abcam (cat. no. ab131506), the anti-p-ASK1 antibody was from Sigma (cat. no. SAB4504337), and all other antibodies were from Cell Signaling Technology Inc. (Danvers, MA, USA). Membranes were washed in TBS/0.1% Tween-20 to remove excess primary antibodies and then incubated for 1 hour with the secondary antibodies (Cell Signaling Technology Inc.). After three washes in TBS/0.1% Tween-20, the protein bands were visualized using an enhanced chemiluminescence kit according to the manufacturer's instructions (ECL; Pierce Biotechnology Inc., Rockford, IL, USA). Band densitometry was measured using ImageJ analysis software. ### 2.13. Antioxidant Enzyme Activity Measurements The activities of total superoxide dismutase (T-SOD), catalase (CAT), and glutathione peroxidase (GSH-Px) were measured with assay kits according to the manufacturers' instructions. The T-SOD assay kit was purchased from Nanjing Jiancheng Biotechnology Co. Ltd. (Nanjing, Jiangsu, China) [33]. The GSH-Px assay kit and CAT activity assay kit were purchased from Beyotime Institute of Biotechnology (Shanghai, China) [34, 35]. Briefly, the cells were washed with PBS and lysed using cell lysis buffer. Cell lysates were then centrifuged at 10,000g for 5 minutes at 4°C, and the supernatants were collected to determine enzyme activities. These assays were performed on the Elx800 microplate reader at 550 nm for T-SOD, 520 nm for CAT, and 340 nm for GSH-Px, respectively. The values were normalized and expressed as units per mg protein, based on protein concentrations determined by BCA protein assay (Guge Bio). ### 2.14. Enzyme-Linked Immunosorbent Assay (ELISA) After treatment, culture supernatants were collected and spun at 300g for 10 minutes to remove cellular debris. The levels of keratinocyte growth factor (KGF), hepatocyte growth factor (HGF), and epidermal growth factor (EGF) were determined using ELISA kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. Each sample was analyzed in triplicate. ### 2.15. Statistical Methods All data were reported as mean ± standard deviation (SD) and analyzed using SPSS 18.0 (SPSS Inc., Chicago, IL, USA). Data were compared using ANOVA or Student's t-test. Significance was accepted at P<0.05.
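As a concrete sketch of the comparisons named above, the SciPy calls below perform a two-group Student's t-test and a three-group one-way ANOVA at the P<0.05 threshold; the triplicate values are hypothetical.

```python
from scipy import stats

# Hypothetical triplicate measurements for the three BMSC groups.
bmscs      = [0.82, 0.78, 0.85]
bmscs_p    = [0.80, 0.76, 0.83]
bmscs_trx1 = [1.10, 1.05, 1.12]

# Two-group comparison: Student's t-test.
t_stat, p_ttest = stats.ttest_ind(bmscs_p, bmscs_trx1)
print(f"t-test BMSCs-p vs. BMSCs-Trx-1: P = {p_ttest:.4f}")

# Three-group comparison: one-way ANOVA.
f_stat, p_anova = stats.f_oneway(bmscs, bmscs_p, bmscs_trx1)
print(f"One-way ANOVA across all groups: P = {p_anova:.4f}")

alpha = 0.05  # significance threshold used in this study
print("significant" if p_anova < alpha else "not significant")
```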
## 3. Results ### 3.1. Characterization of BMSCs The BMSC cultures were observed using an inverted light microscope. BMSCs are plastic-adherent cells that showed a flattened, spindle-shaped morphology. After about 10 days, the primary cultured cells had developed into clusters and could be used for subculture. After two to three passages, BMSCs demonstrated a homogeneous fibroblast-like, spindle-shaped morphology. The morphological features of the BMSCs are shown in Figure 1(a). To verify the multipotent capacity of the cultured cells, we cultured them in adipogenic or osteogenic differentiation induction media for 21 days. Differentiation toward these cell lineages was demonstrated by oil red O and alizarin red staining, respectively (Figures 1(b) and 1(c)).
As illustrated in Figure 1(d), the BMSC population was positive for CD29, CD44, CD73, CD105, and CD90, which are important cell surface markers of MSCs, but negative for CD45 and CD34, two specific cell surface markers of hematopoietic cells [11, 36, 37]. Figure 1 Characterization of rat bone marrow-derived mesenchymal stromal cells (BMSCs). (a) The plastic-adherent cells demonstrated a homogeneous fibroblast-like, spindle-shaped morphology. Original magnification, ×100. (b) Adipogenic differentiation of BMSCs stained with oil red O. Original magnification, ×200. (c) Osteogenic differentiation of BMSCs stained with alizarin red. Original magnification, ×400. (d) FACS analysis demonstrated expression of markers attributed to BMSCs. The cultures were devoid of hematopoietic cells, as indicated by the lack of CD45 and CD34. The MSC markers CD29, CD44, CD73, CD105, and CD90 were strongly expressed. ### 3.2. Stable Overexpression of Trx-1 in BMSCs For stable overexpression of Trx-1 in BMSCs, the cells were transfected with a plasmid encoding Trx-1. After transfection and drug selection, the expression of GFP-tagged Trx-1 was confirmed by fluorescence microscopy (Figure 2(a)). Compared to control cells, BMSCs-Trx-1 exhibited an 8-fold increase in Trx-1 mRNA expression and a 4-fold increase in protein content (Figures 2(b) and 2(c)). To examine whether BMSCs exhibit phenotypic changes after Trx-1 transfection, the expression patterns of cell surface markers were compared between intact BMSCs and BMSCs-Trx-1. There were no marked differences in the expression patterns of cell surface markers between the two populations, indicating that these cells remained phenotypically stable regardless of transfection (Supplement Figure 1). Figure 2 Stable overexpression of Trx-1 in BMSCs. (a) Intense green fluorescence was observed by fluorescence microscopy (×100). (b) The mRNA levels of Trx-1 in BMSCs, BMSCs-Trx-1, and BMSCs-p. (c) Detection of Trx-1 protein expression by Western blot analysis. ∗∗P<0.01 compared to control. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs. ### 3.3. Effects of Hyperoxia and Trx-1 Overexpression on Cell Proliferation The effects of hyperoxia and Trx-1 overexpression on the proliferation of BMSCs were assessed with the CCK-8 assay. As shown in Figure 3, hyperoxia treatment inhibited BMSC proliferation in a time-dependent manner. Compared to cells cultured in normoxia, the growth rate of the hyperoxia-treated cells was significantly inhibited starting at 24 hours. After 48 hours of hyperoxia exposure, BMSCs-p proliferation was inhibited by more than 40%, whereas BMSCs-Trx-1 proliferation was inhibited by only 23%, suggesting that Trx-1 overexpression significantly increased cell proliferation under hyperoxic conditions. Figure 3 Overexpression of Trx-1 promoted proliferation of BMSCs under hyperoxic conditions. Cells with or without Trx-1 overexpression were exposed to hyperoxia for the indicated times. Cell proliferation was estimated using a CCK-8 kit. Hyperoxia treatment inhibited BMSC proliferation; however, overexpression of Trx-1 increased the cell growth rate under hyperoxic conditions compared to BMSCs-p. Growth curves were generated from the absorbance at 450 nm, expressed as a percentage of the 0-hour value. The results are expressed as mean ± SD of three independent experiments, each performed in triplicate. ∗P<0.05 or 0.01 compared to the normoxia control; #P<0.05 or 0.01 compared to BMSCs-p under hyperoxic conditions. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs.
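The growth-curve normalization described in the Figure 3 legend (blank-corrected A450 expressed as a percentage of the 0-hour reading) can be sketched as follows; all readings are hypothetical.

```python
import numpy as np

# Hypothetical mean A450 readings (triplicate wells) at each timepoint,
# plus a medium-only blank well.
timepoints_h = np.array([0, 12, 24, 48])
a450_raw = np.array([0.52, 0.78, 0.95, 1.10])
a450_blank = 0.10

corrected = a450_raw - a450_blank                     # subtract the blank well
percent_of_baseline = 100 * corrected / corrected[0]  # % of the 0 h value

for t, pct in zip(timepoints_h, percent_of_baseline):
    print(f"{t:>2} h: {pct:6.1f}% of 0 h")
```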
### 3.4. Assessment of Cell Apoptosis To investigate the effects of hyperoxia and Trx-1 overexpression on the induction of apoptosis in BMSCs, we labeled cells with annexin V-PE, a marker of early apoptosis, and with 7-AAD, a marker of necrosis, and analyzed them by flow cytometry. As shown in Figures 4(a) and 4(b), hyperoxia induced apoptosis in a time-dependent manner regardless of Trx-1 overexpression. The percentage of apoptotic cells, as indicated by annexin V+ staining, was increased in hyperoxia-treated BMSCs-p (about 20% at 24 hours and 35% at 48 hours). Trx-1 overexpression inhibited this hyperoxia-induced apoptosis, as indicated by the decreased percentage of annexin V+ cells (about 13% at 24 hours and 20% at 48 hours). Figure 4 Effect of Trx-1 on cell apoptosis in BMSCs. Cells were exposed to hyperoxia for 0, 12, 24, and 48 hours and stained with annexin V-PE/7-AAD before flow cytometry analysis. (a) Dot plots of flow cytometry analysis. The intensity of 7-AAD staining (y-axis) was plotted versus annexin V intensity (x-axis); numbers indicate the percentage in each region. (b) The graph shows the percentage of apoptosis as defined by annexin V+. The results are representative of 3 independent experiments. (c) Caspase 3 activity, measured by the caspase 3 activity kit. Bar graphs represent the relative caspase 3 activity calculated for each group. The results are representative of 3 independent experiments. ∗P<0.05, ∗∗P<0.01 compared with the BMSCs-p group or BMSCs. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs. ### 3.5. Trx-1 Inhibits Caspase 3 Activity Caspase 3 is a key mediator of apoptosis, so to further evaluate the antiapoptotic effects of Trx-1, we monitored caspase 3 activity using the Caspase 3 Activity Assay Kit. Caspase 3 activity increased when cells were treated with hyperoxia (Figure 4(c)). Compared to BMSCs and BMSCs-p, overexpression of Trx-1 in BMSCs-Trx-1 reduced caspase 3 activity under hyperoxic conditions, with the largest difference seen at 48 hours (about 50% inhibition relative to BMSCs-p). ### 3.6. Trx-1 Reduced Intracellular Total ROS and Hydrogen Peroxide Formation under Hyperoxic Conditions To further explore the mechanisms by which Trx-1 reduces hyperoxia-induced BMSC injury, intracellular ROS levels were measured by flow cytometry analysis of cells stained with CellROX deep red reagent. As shown in Figure 5(a), exposure of BMSCs to hyperoxia markedly increased the generation of ROS in a time-dependent manner (a 2-fold increase at 48 hours). Compared with the BMSCs-p group, Trx-1 overexpression markedly decreased hyperoxia-induced ROS formation in the BMSCs-Trx-1 group (a 20%–30% decrease versus the BMSCs-p control). Figure 5 Effects of Trx-1 on intracellular ROS levels in BMSCs. (a) Intracellular ROS production was measured with CellROX deep red reagent, which detects total ROS rather than a particular species. The relative fluorescence intensity is expressed as a percentage of control cells (BMSCs-p at 0 hours). (b) The level of intracellular H2O2 was measured using the hydrogen peroxide assay kit.
Experiments were repeated three times. ∗P<0.05, ∗∗P<0.01 versus the corresponding group. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs. Subsequently, the level of intracellular H2O2 was determined, as it is an important ROS. H2O2 production increased with longer hyperoxia exposure (normoxia: 0.23 μM and hyperoxia: 3.5 μM at 48 hours) (Figure 5(b)). Trx-1 overexpression inhibited hyperoxia-induced H2O2 generation in BMSCs; the strongest inhibition occurred at 12 hours (exceeding 35%). ### 3.7. Effects of Trx-1 on Antioxidant Enzyme Activities in BMSCs The activities of three major endogenous antioxidant enzymes (SOD, CAT, and GSH-Px) were then analyzed in the three BMSC groups (BMSCs, BMSCs-p, and BMSCs-Trx-1). After treatment with hyperoxia, significant increases in SOD and GSH-Px activities were detected in all three groups. As shown in Figure 6(a), Trx-1 overexpression further increased SOD activity compared to BMSCs with normal Trx-1 expression. In all three groups, GSH-Px activity increased to a similar degree after 12 hours of hyperoxia exposure (Figure 6(b)). After 24 hours of hyperoxia exposure, GSH-Px activity began to decrease gradually; however, compared to BMSCs and BMSCs-p, BMSCs-Trx-1 maintained higher GSH-Px activity after 24 and 48 hours of hyperoxia exposure. Trx-1 was not found to have any effect on CAT activity (Figure 6(c)). Figure 6 Effects of Trx-1 overexpression on antioxidant enzyme activities in BMSCs under hyperoxic conditions. (a) Superoxide dismutase (SOD) activities were measured using the SOD assay kit. (b) Glutathione peroxidase (GSH-Px) activities were measured using the glutathione peroxidase assay kit. (c) Catalase (CAT) activities were measured using the CAT assay kit. Data are representative of duplicate samples from five experiments. ∗∗P<0.01. BMSCs: intact BMSCs; BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs. ### 3.8. Trx-1 Has No Effect on Cytokine Secretion from BMSCs Recently, an increasing number of studies have shown that the protective effects of BMSC transplantation may be predominantly mediated by paracrine, rather than regenerative, mechanisms [7]. To determine whether Trx-1 exerts its cytoprotective effects by regulating cytokine secretion from BMSCs, the levels of EGF, KGF, and HGF in the cell culture medium were assayed by ELISA. Trx-1 overexpression only slightly increased the levels of secreted EGF, KGF, and HGF, and these differences were not statistically significant across the three groups (Supplement Figure 2). ### 3.9. Effects of Trx-1 on the ASK1/p38 MAPK Pathway To investigate the influence of hyperoxia on Trx-1 expression, we compared the protein levels of Trx-1 after different hyperoxia exposures. As shown in Figures 7(a) and 7(b), after 12 hours of hyperoxia exposure, Trx-1 expression was significantly increased (by about 50%); however, Trx-1 expression returned to almost normal levels after 24 hours of hyperoxia treatment in BMSCs-p cells. Trx-1 levels in BMSCs-Trx-1 cells remained unchanged throughout 0 to 48 hours of hyperoxia treatment. Figure 7 Western blot results. Trx-1, phospho-ASK1, total ASK1, phospho-p38, and total p38 expression was detected by Western blotting. (a) Representative Western blot bands. (b) Trx-1 densitometric analysis. (c) p-ASK1/ASK1 densitometric analysis.
(d) p-p38/p38 densitometric analysis. Data are representative of three independent experiments. BMSCs-p: empty lentivirus-engineered BMSCs; BMSCs-Trx-1: Trx-1-engineered BMSCs. ∗P<0.05; ∗∗P<0.01 versus the corresponding group. Hyperoxia-induced activation of ASK1 was confirmed by a significant increase in phospho-ASK1 levels detected by Western blotting, and this increase was significantly inhibited by Trx-1 overexpression (Figures 7(a) and 7(c)). We next examined whether p38, a potential downstream signal of ASK1, was involved in the pathogenesis of hyperoxic cell injury. As shown in Figures 7(a) and 7(d), activation of p38 via phosphorylation was upregulated under hyperoxic conditions, as measured by phospho-p38, with levels peaking at 24 hours (about 4-fold upregulation compared to 0 hours). Trx-1 overexpression in BMSCs significantly suppressed the phosphorylation of p38.
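For readers reproducing the densitometric analysis in Figure 7, here is a minimal sketch of the phospho/total ratio computation, normalized to the 0-hour lane; the band intensities are hypothetical ImageJ readouts, not data from this study.

```python
import numpy as np

# Hypothetical ImageJ band intensities for p-ASK1 and total ASK1 at
# 0, 12, 24, and 48 hours of hyperoxia (one lane per timepoint).
p_ask1 = np.array([1200.0, 2100.0, 3000.0, 3400.0])
ask1   = np.array([5100.0, 5000.0, 5200.0, 5050.0])

ratio = p_ask1 / ask1          # phospho/total within each lane
fold_vs_0h = ratio / ratio[0]  # express relative to the 0 h lane

for t, fold in zip([0, 12, 24, 48], fold_vs_0h):
    print(f"{t:>2} h: p-ASK1/ASK1 = {fold:.2f}-fold vs. 0 h")
```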
## 4. Discussion Bone marrow-derived mesenchymal stem cells (BMSCs) are easily isolated and expanded, are immunologically tolerant, and have multilineage potential, which makes them ideal candidates for investigation as a cell-based therapeutic strategy for many kinds of diseases, including BPD [38]. The primary BMSCs isolated in this study met the International Society for Cellular Therapy (ISCT) criteria for mesenchymal stromal cells [39]: they were spindle shaped and plastic adherent; expressed CD29, CD44, CD73, CD105, and CD90 but not CD45 or CD34; and showed multipotent differentiation (Figure 1). In rodent models of BPD, MSC administration by intravenous injection or intratracheal instillation stimulated lung tissue repair and decreased vascular remodeling, pulmonary hypertension, and right ventricular hypertrophy [40]. Furthermore, in experimental models of BPD, intratracheal administration of MSC-conditioned medium produced short-term regenerative effects similar to those of MSC administration [41].
MSCs exert protective effects in BPD not only by engraftment and differentiation into specific lung cell types but also by secreting several anti-inflammatory cytokines and growth factors that affect cell proliferation, differentiation, and survival [7]. Despite notable advances in the field of MSC-based therapy, the reported functional improvements are generally modest, partly because of the low cellular survival rate [7, 11]. Studies have indicated that pathophysiological environmental conditions, including oxidative stress and inflammation, can lead to poor viability and apoptosis of MSCs [42]. MSCs normally require a low-oxygen-tension environment of about 2%–8% O2 [43]. In view of these observations, together with the fact that oxygen toxicity plays a critical role in the lung injury process leading to BPD [44], hyperoxia may be a primary threat to MSC survival in BPD. In the present study, we demonstrated that hyperoxia inhibited BMSC proliferation by 26.8% at 24 hours and 42% at 48 hours (Figure 3). Consistent with this result, hyperoxia treatment induced apoptosis in 20% of BMSCs at 24 hours and 35% at 48 hours (Figure 4). These results suggest that hyperoxia-induced injury plays a key role in BMSC death. Therefore, strategies to improve BMSC tolerance of hyperoxic conditions might improve the survival of transplanted cells and consequently increase their beneficial therapeutic effects on hyperoxia-induced injury. Recently, diverse approaches involving the genetic modification of MSCs have been undertaken to increase their survival [45]. The thioredoxin system has been demonstrated to play a key role in modulating redox signaling pathways and can be induced by a wide variety of stress conditions, such as oxidative stress, ultraviolet irradiation, γ-rays, hypoxia, lipopolysaccharide, and viral infections [46–48]. In the present study, we demonstrated that hyperoxia could also induce Trx-1 expression in BMSCs, but only within a short time frame (12 hours) (Figures 7(a) and 7(b)). Our previous studies have shown that exogenous addition of Trx can prevent hyperoxia-induced apoptosis of alveolar type II epithelial cells [20]. Furthermore, cell injury in A549 cells, a lung epithelial adenocarcinoma cell line, has been shown to be significantly aggravated by Trx-specific siRNA under hyperoxic conditions [49]. In other cell types, Trx-1 redox signaling was reported to regulate H1299 cell survival in response to hyperoxia [50]. Hyperoxic impairment of Trx-1 has a negative impact on peroxiredoxin-1 and HSP90 oxidative responses. These studies have led to the idea that Trx-1 can promote MSC survival under various conditions. Suresh et al. overexpressed Trx-1 to increase the survival of engrafted MSCs in the treatment of cardiac failure [8]. Their results showed that, following myocardial infarction, MSCs transfected with Trx-1 overexpression vectors had an increased capacity for survival, proliferation, and differentiation, which improved heart function and decreased fibrosis compared to untransfected MSCs. Based on a similar premise, our present study aimed to determine whether Trx-1 overexpression can attenuate hyperoxia-induced BMSC injury, using BMSCs we engineered to overexpress the Trx-1 gene.
Additionally, we confirmed that Trx-1 overexpression did not alter the BMSCs' surface marker phenotype (Supplement Figure 1). To examine the effect of Trx-1 on BMSC survival under hyperoxic conditions, cell proliferation rates and apoptosis were estimated in rat BMSCs with or without Trx-1 overexpression. As shown in Figures 3 and 4, BMSCs-Trx-1 showed increased cell proliferation rates and decreased apoptosis under hyperoxic conditions compared to the BMSCs-p control, suggesting that Trx-1 overexpression renders cells more resistant to hyperoxic stress. Caspases, a family of cysteine proteases, are expressed in almost all cell types as inactive proenzymes. Caspase activation is thought to be a key step in the genesis of apoptosis. Caspases are either initiators or executioners, and caspase 3 is known to play a key role in the execution of apoptosis [51]. To test whether caspase 3 was involved in hyperoxia-induced apoptosis, we probed for caspase 3 activity. Caspase 3 activity was increased more than 2-fold after 24 hours and almost 4-fold after 48 hours of hyperoxia treatment compared to untreated control cells (0 hours). These results indicate that hyperoxia-induced BMSC apoptosis is, at least in part, caspase 3 dependent. We found that this hyperoxia-induced activation of caspase 3 was strongly inhibited by Trx-1 overexpression. These results, together with the decrease in apoptosis shown by annexin V staining, suggest that Trx-1 inhibited hyperoxia-induced BMSC apoptosis mainly through a caspase 3-dependent pathway. The effects of hyperoxia on cellular function and survival have been widely held to be secondary to the generation of ROS. It has been demonstrated that ROS act as upstream signaling molecules that initiate cell death under hyperoxic conditions [52]. In our study, hyperoxia exposure resulted in an increase of intracellular ROS; however, Trx-1 overexpression could partly reverse these effects (Figure 5). H2O2 is a crucial ROS that is involved in cell signaling but can alter the intracellular redox environment when produced in excess, leading to many pathophysiological conditions [53]. During exposure to hyperoxia, ROS production is seen in the increased release of H2O2 by lung mitochondria and microsomes [54]. Accumulating evidence suggests that hyperoxia promotes intracellular H2O2 accumulation, with H2O2 playing a key role in oxidative stress-induced injury from ROS [55]. It has been confirmed that the Trx system, which is composed of an NADPH-dependent thioredoxin reductase (TrxR) and Trx, provides electrons to thiol-dependent peroxidases (peroxiredoxins (Prx)) to directly remove H2O2 [53]. In the present study, we observed increased H2O2 generation in hyperoxia-exposed BMSCs, while Trx-1 overexpression decreased H2O2 generation under hyperoxic conditions. Additionally, compared to total ROS, H2O2 was more strongly induced by hyperoxia, suggesting that H2O2 is the main component of the intracellular ROS generated under hyperoxic conditions; however, more evidence is needed to confirm this hypothesis. As mentioned earlier, ROS are not only cytotoxic products of the external and internal environment but also important mediators of redox signaling. Trx therefore acts as an antioxidant to maintain the balance of the thiol-related redox status and thus plays a pivotal role in the regulation of redox signaling and of cell survival and death [48].
Trx-1 is known to regulate several transcription factors, such as NF-κB, p53, and Ref-1, as well as apoptotic factors such as ASK1 [56–58]. ASK1 is a member of the mitogen-activated protein kinase kinase kinase (MAPKKK) family; it is activated by various stresses, including oxidative stress, and can in turn activate caspase 3 and promote apoptosis [59]. As such, ASK1 is necessary for ROS-induced cell death and inflammation [60]. Fukumoto et al. reported that deletion of ASK1 protects against hyperoxia-induced acute lung injury [61]. As shown in Figure 7, we confirmed that the activity of ASK1 was upregulated by hyperoxia in a time-dependent manner. It has been shown that Trx is a negative regulator of ASK1 [56]. As Figure 8 illustrates, in resting cells ASK1 forms an inactive complex with reduced Trx-1, but oxidation of Trx-1 leads to its dissociation from ASK1, switching ASK1 to an active kinase [48]. It has also been reported that overexpression of Trx in endothelial cells induces ASK1 ubiquitination and degradation [16]. To determine whether overexpression of Trx-1 protects BMSCs from hyperoxia-induced injury via inhibition of ASK1 signaling, we assessed the activation status of ASK1 and its downstream proapoptotic effector, p38 MAP kinase. We did not observe obvious changes in total ASK1 levels, but the results demonstrated that Trx-1 overexpression inhibited hyperoxia-induced ASK1 activation. The activation of p38 has also been shown to be associated with hyperoxia-induced cell damage [62]. Previously, we demonstrated that Trx can protect alveolar epithelial cells from hyperoxia-induced damage by decreasing p38 activation [20]. It has been reported that ASK1 is required for the sustained activation of the JNK/p38 MAP kinases leading to apoptosis [63]. In the present study, we showed that Trx-1 overexpression in BMSCs inhibited hyperoxia-induced p38 activation. Taken together, these results indicate that inhibition of the ASK1/p38 pathway was involved in the mechanism by which Trx-1 protects BMSCs from hyperoxia-induced injury (Figure 8). Recently, several studies have suggested that the protective effects of stem cell transplantation might be predominantly mediated by a paracrine mechanism [7, 38] and that growth factors such as VEGF, HGF, and KGF are critical in mediating the protective effects of MSCs against hyperoxic lung injury [64]. With regard to these growth factors, we found no differences among our three BMSC lines under hyperoxic conditions in vitro (Supplement Figure 2). Based on this, Trx-1 appears to protect BMSCs from hyperoxia-induced injury independently of paracrine growth factors. However, whether Trx-1 overexpression affects the therapeutic effect of BMSCs via paracrine growth factors in vivo will require further study. Figure 8 A schematic model of the regulation of the ASK1/p38 signaling pathway by Trx-1. (a) The Trx-1 system comprises NADPH, TrxR-1, and Trx-1. Oxidized Trx-1 (the inactive form) is converted to the active, reduced form by receiving electrons from NADPH in the presence of TrxR-1. Prxs reduce H2O2 to H2O using electrons from active Trx-1. Active Trx-1 also regulates redox signals by reducing many other target proteins with disulfide bonds. Under normoxic conditions, ASK1 constantly forms an inactive complex with reduced Trx-1. (b) Exposure of BMSCs to hyperoxia leads to elevated ROS and H2O2 production, causing oxidative stress.
Under oxidative stress, Trx-1 is oxidized and dissociates from ASK1, leading to activation of ASK1. Activated ASK1 in turn activates the p38 pathway and induces various cellular responses, including apoptosis and inhibition of differentiation. Trx-1 overexpression promoted BMSC survival under hyperoxic conditions through elevation of antioxidant activities, reduction of ROS and H2O2 generation, and subsequent inhibition of the ASK1/p38 signaling pathway. Other mechanisms, such as changes in antioxidant enzyme activities, may also be involved in the response of Trx-1-overexpressing BMSCs to hyperoxia. Several studies have reported greater MSC survival of oxidative stress injury via increased activities of antioxidant enzymes [65, 66]. Three of the primary antioxidant enzymes in oxygen-metabolizing mammalian cells believed to be necessary for cell survival are SOD, CAT, and GSH-Px. SOD is a metalloenzyme that catalyzes the dismutation of superoxide anion into O2 and H2O2 (2O2•− + 2H+ → H2O2 + O2). H2O2 is subsequently reduced to H2O by GSH-Px in the cytosol or by CAT in peroxisomes or the cytosol [67]. In our study, we demonstrated that Trx-1 overexpression enhanced the activities of the antioxidant enzymes SOD and GSH-Px, resulting in the maintenance of relatively low intracellular levels of ROS and H2O2, as shown in Figures 5 and 6. CAT, found in nearly all living organisms exposed to oxygen, is a very important enzyme in the biological defense system. Zhang et al. demonstrated that CAT transduction was able to increase MSC viability and promote ischemia-induced angiogenesis [68]. However, we did not observe any effect on CAT activity in this study. The mechanism by which Trx-1 selectively affects different antioxidant enzymes requires further study. In conclusion, our results indicate that hyperoxia exposure induces BMSC apoptosis, which may contribute to the low survival rate of transplanted BMSCs, and that Trx-1 overexpression significantly improves the survival of BMSCs. A summary of our results is shown in Figure 8. --- *Source: 1023025-2018-01-21.xml*
2018
# Minimally Invasive Subcortical Parafascicular Transsulcal Access for Clot Evacuation (Mi SPACE) for Intracerebral Hemorrhage **Authors:** Benjamin Ritsma; Amin Kassam; Dariush Dowlatshahi; Thanh Nguyen; Grant Stotts **Journal:** Case Reports in Neurological Medicine (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102307 --- ## Abstract Background. Spontaneous intracerebral hemorrhage (ICH) is common and causes significant mortality and morbidity. To date, optimal medical and surgical intervention remains uncertain. A lack of definitive benefit for operative management may be attributable to an adverse surgical effect: collateral tissue injury. This is particularly relevant for ICH in dominant, eloquent cortex. Minimally invasive surgery (MIS) offers the potential advantage of reduced collateral damage. MIS utilizing a parafascicular approach has demonstrated such benefit for intracranial tumor resection. Methods. We present a case of dominant hemisphere spontaneous ICH evacuated via the minimally invasive subcortical parafascicular transsulcal access clot evacuation (Mi SPACE) model. We use this report to introduce Mi SPACE and to examine the application of this novel MIS paradigm. Case Presentation. The featured patient presented with a left temporal ICH and severe global aphasia. The hematoma was evacuated via the Mi SPACE approach. Postoperative reassessments showed significant improvement. At two months, bedside language testing was normal. MRI tractography confirmed limited collateral injury. Conclusions. This case illustrates successful application of the Mi SPACE model to ICH in dominant, eloquent cortex and subcortical regions. MRI tractography illustrates collateral tissue preservation. Safety and feasibility studies are required to further assess this promising new therapeutic paradigm. --- ## Body ## 1. Introduction Spontaneous intracerebral hemorrhage (ICH) affects approximately 2 million people worldwide each year [1]. The overall ICH mortality rate exceeds 40%, and only 25% of survivors achieve full independence at 6 months [2]. Despite significant efforts, optimal medical and surgical ICH care remains uncertain [3, 4]. A lack of definitive benefit for operative management may be attributable to an adverse surgical effect: collateral tissue injury [5]. This is particularly relevant for ICH in dominant, eloquent cortex and deep regions. Minimally invasive surgery (MIS) offers potential advantages over conventional craniotomy, and several groups have examined such techniques [3, 4]. Results have been conflicting, and improved functional outcomes have not been consistently demonstrated [3, 4]. Minimally invasive subcortical parafascicular transsulcal access for clot evacuation (Mi SPACE) is a novel paradigm designed to address key challenges in intracranial MIS. Earlier iterations of this approach have demonstrated safety and efficacy in treating diverse subcortical lesions [6]. We present the initial case of ICH evacuated via the Mi SPACE model. We use this report to introduce Mi SPACE and to examine its application to ICH in dominant, eloquent cortical and subcortical regions. ## 2. Case A 64-year-old right-hand dominant male presented to the emergency department with a one-day history of progressive left-sided headache and aphasia. He was normotensive, awake, and alert but unable to express orientation. He was nonfluent and unable to follow one-step verbal commands, repeat, name, read, or write a sentence.
CT head imaging revealed a large left temporal hematoma. CT angiography showed no identifiable vascular etiology. The patient was admitted for monitoring and blood pressure regulation.

Past medical history was significant for an occult bleeding diathesis, the patient having presented over the past decade with two other ICH episodes. These events were conservatively managed and resulted in mild residual left-sided motor and sensory deficits.

On postadmission day three the patient had several episodes of emesis and his headache worsened. A repeat CT head showed hematoma expansion with mass effect and impending uncal herniation (see Figure 1 in Supplementary Materials available online at http://dx.doi.org/10.1155/2014/102307). Given the risk of further deterioration, prompt surgical intervention was deemed necessary. As the hematoma centred upon highly eloquent cortex with subcortical extension, the Mi SPACE approach was favoured. Starting with neuronavigation, Mi SPACE (Figures 1(a)–1(c)) was employed as outlined below. Following intervention, target systolic blood pressure (SBP) was less than 140 mm Hg.

Figure 1: Intraoperative images depicting (a) the Mi SPACE computer-assisted navigation system and the radial access transsulcal corridor system, BrainPath (BP) (NICO corp., Indianapolis, Indiana), (b) BrainPath (BP), and (c) the small craniotomy site and dime-sized dural opening created along the predefined sulcus for an entry point. A narrow dural opening allows for a tight seal against the edges of the BP.

On the same day, postoperative assessment established significant improvement. The patient expressed orientation, followed one-step verbal commands, and used full sentences. Testing on the NIH Stroke Scale (NIHSS) was as follows: repetition 4/6, naming 1/6, and reading intact. CT imaging showed drainage of the hematoma, with improved mass effect (Figure 2 in Supplementary Materials).

On the following evening there was an episode of hypertension, peak SBP 173 mm Hg, requiring administration of intravenous (IV) antihypertensive agents. The patient subsequently developed recurrent emesis and worsening aphasia, approaching admission status. Repeat CT imaging revealed reaccumulation of the hematoma with added mass effect (Figure 3 in Supplementary Materials). As such, he returned for a second evacuation using Mi SPACE. Given this patient’s occult bleeding diathesis and history of recurrent ICH, perioperative management included the antifibrinolytic agent tranexamic acid (Cyklokapron), a 1 g IV loading dose followed by 300 mg IV every eight hours for forty-eight hours, in conjunction with tight blood pressure control (SBP less than 130 mm Hg for forty-eight hours, then SBP less than 140 mm Hg).

Again, postsurgical examination demonstrated significant improvement, and on same-day CT imaging the hemorrhage was no longer visualized (Figure 4 in Supplementary Materials). On postoperative day one he was awake and alert and expressed full orientation. He followed one- and two-step commands and communicated in full sentences. Naming and repetition were intact on the NIHSS. At a two-month follow-up, NIHSS language testing was normal, and he had returned to his baseline level of function. At nine months, MRI with tractography demonstrated a small surgical tract extending from the left temporal cortex to the atrium of the lateral ventricle (Figure 5 in Supplementary Materials).
Fractional anisotropy (FA) maps revealed minimal areas of decreased FA in the left inferior longitudinal fasciculus and left temporal subcortical white matter (Figure 2).

Figure 2: Color-coded fractional anisotropy (FA) maps obtained from diffusion-tensor imaging (DTI) show collateral tissue preservation, with only small areas of decreased FA in the left inferior longitudinal fasciculus (long arrow).

## 3. Discussion

Spontaneous ICH is relatively common, and it causes significant mortality and morbidity [1, 2]. Despite decades of quality research, optimal management strategies remain uncertain [3]. Neurosurgical intervention remains controversial and quite variable in practice, particularly for supratentorial hematomas [3, 4].

Though inconsistent, there are data supporting potential benefits of surgical intervention in ICH [4]. As such, identifying patient subgroups more likely to show improved postsurgical outcome is the subject of ongoing investigation [7]. In a survey of British neurosurgeons, ICH dominance and depth affected clinical decision making over and above all other features [8]. Traditionally, a nonoperative approach has been favoured for dominant hemisphere and deep hematomas [3, 9]. Hesitation for surgery in this context is attributable to the fact that operative removal by standard craniotomy almost always requires creating access by transecting uninjured brain [3]. Indeed, adverse surgical effect has been postulated as a potential explanation for the inability to prove benefit with hematoma evacuation [5].

In an attempt to limit tissue damage and reduce surgical morbidity, MIS techniques have been analyzed [3, 4]. Several groups have reported benefits, and neurosurgeons have expressed optimism for such implementation [4, 9]. Nevertheless, improved outcomes have not been consistently demonstrated [3, 4]. A key challenge has been the ability to safely access the hematoma, particularly if in the subcortical space, through a MIS corridor that allows for adequate visualization and bimanual technique to remove early fibrotic clots, without the use of thrombolytics. Moreover, previous endoscopic designs have achieved a less complete removal of hematoma volume compared with standard craniotomy [10].

Mi SPACE represents an integration of five core technologies ((1) Mapping, (2) Navigation, (3) Access, (4) Optics, and (5) Resection) into a single platform to address such challenges. Preoperative mapping via MRI tractography allows the calculation of a surgical trajectory, along the long axis of the most eloquent fibres, to minimize shear forces and fascicle injury. Intraoperative neuronavigation facilitates transsulcal insertion of a 13.5 mm port (BrainPath, NICO corp., Indianapolis, Indiana), also specifically designed to minimize strain forces, along the preplanned trajectory. This creates a parafascicular corridor that enables (i) use of a novel telescopic optics system, Video Telescopic Assisted Microscopy (VTOM) (Storz corp., Culver City, CA), to optimize visualization, (ii) a bimanual dissection technique, and (iii) use of specifically designed nonthermal automated MIS instrumentation (Myriad, NICO corp., Indianapolis, Indiana) without the need for thrombolytics.

This case illustrates the successful initial application of the Mi SPACE approach to ICH in dominant, eloquent cortical and subcortical regions, in a patient with an occult bleeding diathesis. There was clear survival benefit as well as a dramatic early and lasting functional response.
MRI tractography demonstrated limited impact on the eloquent fibre tracts, including the arcuate fasciculus, despite the initial hemorrhage, ICH recurrence, and two Mi SPACE evacuations.

Intraoperatively, clot over the arcuate fasciculus was removed, and we postulate that removal of this irritant effect was largely responsible for the improved outcome. Although this case’s hematoma had a lobar focus, with superficial elements, there was also considerable subcortical extension. These deeper components were also targeted in the evacuation. Earlier iterations of the Mi SPACE model have shown safety and efficacy in managing various subcortical lesions [6]. Thus, in addition to demonstrating potential value in treating dominant, eloquent cortical bleeds, this case may serve as a proof of concept for application to ICH with a subcortical focus.

It has been suggested that a greater volume of hematoma removal is associated with better outcome [10]. The potential for Mi SPACE to offer improved visualization and bimanual technique may facilitate a more complete evacuation than previous MIS designs. In turn, this may further reduce ICH mechanical and toxic effects on adjacent tissue, including white matter tracts, to a degree sufficient to yield improved clinical outcomes.

Mi SPACE allowed for adequate intraoperative hemostasis in both interventions, despite the patient’s occult bleeding diathesis, and there was excellent postoperative functional recovery. Furthermore, after clot reaccumulation, presumed to be from a combination of postoperative hypertension and the diathesis, repeat surgery within twenty-four hours of the initial intervention yielded even better clinical and radiological outcomes. Typically, this is the period in which postoperative edema from conventional surgery becomes of greatest concern. It should also be noted that, in both instances of intervention, the patient underwent early surgery, within hours of clinical deterioration, which may offer benefit in some ICH subgroups [3].

Perioperative management following the second surgery included antifibrinolytic treatment of the bleeding diathesis and more aggressive blood pressure control, after which there was no recurrent bleeding. Additional experience with Mi SPACE could further assess these aspects of ICH medical management.

## 4. Conclusion

This case illustrates the successful initial application of the Mi SPACE model to ICH, including those in dominant, eloquent cortex and subcortical regions. There was clear survival benefit and a dramatic functional response. MRI tractography demonstrates collateral fascicle preservation. Safety and feasibility studies are required to further assess this promising new surgical paradigm in ICH care.

---

*Source: 102307-2014-08-06.xml*
# Corrigendum to “Association between Virulence Factors and Extended Spectrum Beta-Lactamase Producing Klebsiella pneumoniae Compared to Nonproducing Isolates”

**Authors:** Mustafa Muhammad Gharrah; Areej Mostafa El-Mahdy; Rasha Fathy Barwa

**Journal:** Interdisciplinary Perspectives on Infectious Diseases (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1023076

---

## Body

---

*Source: 1023076-2018-01-08.xml*
# Composite g-C3N4/NiCo2O4 with Excellent Electrochemical Impedance as an Electrode for Supercapacitors

**Authors:** Danfeng Cui; Zheng Fan; Yanyun Fan; Hongmei Chen; Penglu Li; Xiaoya Duan; Shubin Yan; Hongyan Xu; Chenyang Xue

**Journal:** Journal of Nanomaterials (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1023109

---

## Abstract

For the development of supercapacitors, electrode materials that combine simple synthesis with high specific capacitance are a key factor. Herein, we synthesized g-C3N4 and NiCo2O4 by a thermal polymerization method and a hydrothermal method, respectively, and then synthesized NiCo2O4/g-C3N4 nanomaterials by mixing, grinding, and calcining g-C3N4 and NiCo2O4. The NiCo2O4/g-C3N4 nanomaterials were characterized by X-ray diffraction and X-ray photoelectron spectroscopy. Their microscopic morphology, lattice structure, and element distribution were characterized by scanning electron microscopy (SEM), transmission electron microscopy, high-resolution transmission electron microscopy, and elemental mapping. The electrochemical performance and cycle stability of NiCo2O4/g-C3N4 were tested in a 6 M KOH aqueous electrolyte under a three-electrode system. Owing to the physical mixing structure of the g-C3N4 and NiCo2O4 nanomaterials, the electrochemical energy storage performance of NiCo2O4/g-C3N4 supercapacitor electrodes is better than that of NiCo2O4 electrodes. At a current density of 1 A/g, the capacitances of NiCo2O4 and NiCo2O4/g-C3N4 are 98.86 and 1,127.71 F/g, respectively. At a current density of 10 A/g, the NiCo2O4/g-C3N4 supercapacitor electrode maintains 70.5% of its capacitance after 3,000 cycles. The NiCo2O4/g-C3N4 electrode thus has excellent electrochemical performance, which may be due to the physical mixing of NiCo2O4 and g-C3N4, and it has broad application prospects. This research is of importance for the development of materials for high-performance energy storage devices, catalysis, sensors, and other applications.

---

## Body

## 1. Introduction

A supercapacitor is an energy storage device distinct from both batteries and conventional capacitors [1, 2]. It has the advantages of fast charging [3], long service life [4], high electricity conversion efficiency [5], high power density [6], a high safety factor, and environmental friendliness. Supercapacitors are used in many fields, such as wind power generation systems, heavy-duty machinery, and hybrid vehicles. According to their working principles, supercapacitors are classified as double-layer capacitors or pseudo-capacitors. Pseudo-capacitors [2] have been studied for their longer discharge times and greater stored energy. The electrode is the most crucial determinant of a supercapacitor's storage capacity. In previous studies, pseudo-capacitive electrodes were mainly composed of oxides of elements such as Co [7–9], Fe [10, 11], Ru [12–14], Mn [15, 16], Ni [17, 18], W [19, 20], and Zn [21, 22]. These metal oxides have relatively high theoretical capacitance values [23, 24], and their synthesis is simple. In recent years, multimetal oxides [25–27] have attracted growing attention because of their excellent theoretical capacitance, high power density, and outstanding cycling characteristics.
Among them, NiCo2O4 has a theoretical specific capacity of 890 mAh/g [28] and good electrical conductivity owing to the presence of two metal elements. These advantages make NiCo2O4 a promising electrode material for supercapacitors [29]. Li et al. [30] synthesized NiCo2O4 with different crystal structures via a facile route and tested it electrochemically. By controlling the ratio of the CO(NH2)2 and NH4F components, the crystal growth structure of NiCo2O4 was controlled; the best-performing NiCo2O4 reached a mass-specific capacitance of 1,710.9 F/g. However, a noncomposite bimetal oxide used as a supercapacitor electrode usually has a large electrochemical impedance. Compounding bimetallic oxides with suitable materials usually helps the electrode achieve a low electrochemical impedance.

Graphitic carbon nitride (g-C3N4) [31–33] is a widely used carbon-based material. g-C3N4 is a flexible layered material that is chemically stable, nontoxic, nonpolluting, and low cost [34–36]. Owing to pyrrolic-nitrogen hole defects in the crystal lattice and the reduced distance between the edge covalent nitrogen atoms, the material exhibits a high rate capability. Moreover, the porous heptazine units and sp2-hybridized nitrogen also provide coordination sites [37]. When bimetallic transition-metal oxides are dispersed on the g-C3N4 network (gaining additional redox sites), the conductivity, electrochemical performance, hydrophilicity, and surface polarity of the composite are enhanced. This allows supercapacitor electrodes to attain excellent cycling and high-rate performance [38, 39]. Rabani et al. [37] studied the preparation and electrochemical performance of Co3O4@g-C3N4: the capacitance of the Co3O4@g-C3N4 supercapacitor reached 457.2 F/g, and it maintained 92% of its capacitance after 5,000 cycles. Thiagarajan et al. [40] synthesized NiMoO4/g-C3N4 by a hydrothermal method and tested its electrochemical energy storage performance; the NiMoO4/g-C3N4 supercapacitor electrode reached 510 F/g and maintained 91.8% of its capacity after 2,000 cycles.

In this study, g-C3N4 is synthesized by thermal polymerization, and NiCo2O4 by a hydrothermal method followed by thermal oxidation. The g-C3N4/NiCo2O4 nanomaterial is synthesized by fully grinding and calcining g-C3N4 with NiCo2O4. TEM shows that physical mixing structures of g-C3N4 and NiCo2O4 are formed in the g-C3N4/NiCo2O4 nanomaterial. Owing to this physical mixing, the g-C3N4/NiCo2O4 nanomaterial has a higher mass-specific capacitance than NiCo2O4 when used as a supercapacitor electrode. Electrochemical impedance spectroscopy (EIS) shows that the g-C3N4/NiCo2O4 nanomaterial has a very low electrochemical impedance in the low-frequency response, indicating good electrochemical energy storage performance.

## 2. Experimental

### 2.1. Preparation of the NiCo2O4/g-C3N4 Nanomaterial

First, 20 g of urea was placed in a corundum crucible, which was then placed in a tube furnace. The heating rate of the tube furnace was set to 10°C/min, and the furnace temperature was maintained at 550°C for 180 min. After the furnace cooled naturally to room temperature, the sample was removed and fully ground to obtain g-C3N4 powder. NiCo2O4 was then prepared by a hydrothermal method combined with annealing.
To prepare the solution for the NiCo2O4 nanoparticles, reagents in the precise molar ratio NiCl2·6H2O : CoCl2·6H2O : CO(NH2)2 : NH4F = 1 : 2 : 6 : 15 were dissolved in 30 mL of deionized (DI) water. After magnetic stirring for 15 min, the evenly mixed aqueous solution was transferred into the polytetrafluoroethylene liner of a high-pressure autoclave, heated at 150°C for 8 hr, and then cooled and centrifuged to obtain the NiCo2O4 precursor. The precursor was annealed in a tube furnace at 400°C for 2 hr to obtain NiCo2O4 crystals. Finally, 2% g-C3N4 and NiCo2O4 were mixed and fully pulverized, then calcined in a tube furnace at 550°C for 180 min to obtain the 2% g-C3N4/NiCo2O4 powder.

### 2.2. Preparation of the Electrodes

First, nickel foam was cleaned ultrasonically in DI water and dried at 60°C for 6 hr. After that, the triturated NiCo2O4/g-C3N4 powder was mixed with acetylene black and polyvinylidene fluoride at a mass ratio of 0.8 : 0.15 : 0.05. A few drops of alcohol and 5% polytetrafluoroethylene were added and stirred. The resulting slurry was coated onto the nickel foam, and the electrodes were dried for 24 hr before electrochemical testing.

### 2.3. Characterization of the NiCo2O4/g-C3N4 Nanomaterial

The micromorphology of the nanomaterial surface was investigated by SEM (FEI Quanta FEG 250). The lattice structure of the NiCo2O4/g-C3N4 nanomaterial was investigated by transmission electron microscopy (Tecnai G2 F20). The phase composition of the NiCo2O4/g-C3N4 nanomaterial was confirmed by X-ray diffraction (XRD) (Rigaku Ultimate IV). The chemical composition was analyzed by X-ray photoelectron spectroscopy (XPS) (Thermo ESCALAB 250Xi).

### 2.4. Electrode Performance Test Method

The electrode performance tests were carried out in a three-electrode system comprising the working electrode (nickel foam loaded with the NiCo2O4/g-C3N4 nanomaterial), a counter electrode (platinum), and a reference electrode (saturated calomel electrode). Cyclic voltammetry (CV), galvanostatic charging–discharging (GCD), EIS, and cycling tests were performed in a 6 M KOH aqueous solution on an electrochemical workstation (Metrohm Multi Autolab M204).
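As a quick aid to reproducing the solution in Section 2.1, the minimal Python sketch below converts the stated molar ratio into weighable reagent masses. It is a sketch under stated assumptions: the 2 mmol Ni batch size is a hypothetical choice, and the molar masses are nominal handbook values supplied here rather than taken from the paper.

```python
# Masses for the hydrothermal solution of Section 2.1, computed from the
# molar ratio NiCl2·6H2O : CoCl2·6H2O : CO(NH2)2 : NH4F = 1 : 2 : 6 : 15.
# The batch size (2 mmol Ni) and molar masses are illustrative assumptions.

MOLAR_MASS_G_MOL = {      # nominal handbook values, g/mol
    "NiCl2·6H2O": 237.69,
    "CoCl2·6H2O": 237.93,
    "CO(NH2)2":   60.06,  # urea
    "NH4F":       37.04,
}
MOLAR_RATIO = {"NiCl2·6H2O": 1, "CoCl2·6H2O": 2, "CO(NH2)2": 6, "NH4F": 15}

def reagent_masses(ni_mmol):
    """Mass (g) of each reagent for a batch containing ni_mmol mmol of Ni."""
    return {name: ni_mmol * 1e-3 * ratio * MOLAR_MASS_G_MOL[name]
            for name, ratio in MOLAR_RATIO.items()}

for name, grams in reagent_masses(2.0).items():  # dissolved in 30 mL DI water
    print(f"{name:>12}: {grams:.3f} g")
```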
## 3. Results and Discussion

Figure 1 shows the XRD patterns of the g-C3N4, NiCo2O4, and NiCo2O4/g-C3N4 composite nanomaterials. The blue line (g-C3N4) in Figure 1 has an obvious (002) peak. The green line is the XRD pattern of the NiCo2O4 nanoparticles; all of its peaks can be indexed to the cubic reflections of NiCo2O4, PDF #20-0781, namely (111), (220), (311), (222), (400), (511), and (440), and there are no other obvious peaks [30, 41]. The green line therefore indicates that we prepared NiCo2O4 nanoparticles of high purity. The red line in Figure 1 is the XRD pattern of the NiCo2O4/g-C3N4 composite nanomaterial; its (220), (311), (222), (511), and (440) peaks correspond to the NiCo2O4 in the composite. The spectrum shows that the (111) lattice peak of nickel cobalt oxide is shifted, and the appearance of the (002) peak [40] indicates the successful compositing of NiCo2O4 nanoparticles with g-C3N4; the NiCo2O4/g-C3N4 nanomaterials were thus successfully prepared.

Figure 1: XRD patterns of g-C3N4, NiCo2O4, and g-C3N4/NiCo2O4.
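As a cross-check on the indexing above, the short sketch below computes the d-spacings and 2θ positions expected for the listed cubic reflections via Bragg's law. Both the lattice constant (a ≈ 8.11 Å, the value commonly associated with PDF #20-0781) and the Cu Kα wavelength are assumptions, since neither is stated in the text.

```python
import math

A = 8.11              # Å, assumed cubic lattice constant of NiCo2O4
WAVELENGTH = 1.5406   # Å, Cu K-alpha1, assuming a Cu-anode diffractometer

def d_spacing(h, k, l, a=A):
    """Interplanar spacing of a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2)."""
    return a / math.sqrt(h * h + k * k + l * l)

def two_theta(d, wl=WAVELENGTH):
    """Bragg's law with n = 1: 2*theta (degrees) for spacing d."""
    return 2.0 * math.degrees(math.asin(wl / (2.0 * d)))

for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1), (2, 2, 2),
            (4, 0, 0), (5, 1, 1), (4, 4, 0)]:
    d = d_spacing(*hkl)
    print(f"{hkl}: d = {d:.3f} Å, 2θ ≈ {two_theta(d):.2f}°")
```

Under these assumptions the (311) reflection falls near 2θ ≈ 36.7°, and its spacing (≈2.44 Å) is the kind of fringe distance one would expect to see marked in the HRTEM image discussed below.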
Figure 2 shows the SEM images of g-C3N4, NiCo2O4, and NiCo2O4/g-C3N4. Figures 2(a) and 2(b) are SEM images of g-C3N4: Figure 2(a) shows 200–300 nm pores between the g-C3N4 nanoparticles, and Figure 2(b) shows that the micron-scale morphology of g-C3N4 is a porous floc. Such a structure significantly increases the effective boundary area between the nanomaterial and the electrolyte; the porous floc thus improves the contact interface for electrochemical energy storage and hence the storage performance. Figures 2(c) and 2(d) are SEM images of the NiCo2O4 nanoparticles. Figure 2(c) shows that the NiCo2O4 nanoparticles are 50–100 nm in size; this matches the size of the nanopores of g-C3N4, which favours compositing of the two nanomaterials. Figures 2(e) and 2(f) are SEM images of the NiCo2O4/g-C3N4 composite. Figure 2(e) shows NiCo2O4 nanoparticles partially embedded in the nanopores of g-C3N4. The NiCo2O4 nanoparticles and g-C3N4 are thus effectively combined, improving the electrochemical performance of the NiCo2O4/g-C3N4 composite.

Figure 2: SEM images of (a, b) g-C3N4; (c, d) NiCo2O4; (e, f) g-C3N4/NiCo2O4 samples.

Figures 3(a)–3(d) are TEM images of g-C3N4, NiCo2O4, and the NiCo2O4/g-C3N4 composite, respectively. Figure 3(d) shows that NiCo2O4 and g-C3N4 are in close contact, forming the NiCo2O4/g-C3N4 composite. As shown in Figures 3(a) and 3(b), NiCo2O4 has smaller particles and structures than g-C3N4. The g-C3N4/NiCo2O4 nanomaterials are synthesized by grinding and calcining g-C3N4 with NiCo2O4. As shown in Figure 3(c), the larger sheets of g-C3N4 form a physical mixing structure with the smaller NiCo2O4 particles. Figure 3(d) is a higher-magnification TEM image of the g-C3N4/NiCo2O4 nanomaterial, clearly showing the physical mixing structure formed by NiCo2O4 nanoparticles on the larger g-C3N4 structure; comparison with the NiCo2O4 particle size in Figure 3(b) confirms that the black particles in Figure 3(d) are NiCo2O4 particles after compositing with g-C3N4. Figure 3(e) is the high-resolution transmission electron microscopy (HRTEM) image of NiCo2O4/g-C3N4, in which the lattice spacings of the NiCo2O4 (220) and (311) planes are marked; these match the XRD analysis. Figure 3(f) shows the elemental mapping of the NiCo2O4/g-C3N4 composite: C, N, O, Co, and Ni are uniformly distributed, confirming the successful compositing of the NiCo2O4/g-C3N4 material.

Figure 3: TEM images of (a) g-C3N4, (b) NiCo2O4, and (c, d) g-C3N4/NiCo2O4 samples; (e) HRTEM image of g-C3N4/NiCo2O4; (f) elemental mapping of g-C3N4/NiCo2O4.

Figure 4 shows the XPS spectra of the g-C3N4/NiCo2O4 composite nanomaterial. Figure 4(a) clearly shows that the composite contains the elements C, N, O, Co, and Ni. In Figure 4(b), the peak deconvolution of Ni 2p can be clearly observed: owing to the spin orbits of Ni3+ and Ni2+, the two main peaks lie at 873.6 and 856.4 eV, respectively, and the two weaker features are satellite peaks, also associated with Ni3+ and Ni2+. The O 1s spectrum in Figure 4(d) shows three oxygen peaks, at 531.12, 532.57, and 529.66 eV, corresponding to the oxides formed by OH− and O2− with Ni and Co. The C 1s spectrum in Figure 4(e) has three carbon peaks, at ∼284.8, 286.03, and 288.16 eV, related to carbon–carbon and carbon–nitrogen bonds [35]. The N 1s spectrum in Figure 4(f) shows four nitrogen peaks, at 399.27, 400.19, 401.32, and 403.31 eV [39].

Figure 4: (a) g-C3N4/NiCo2O4 XPS survey spectrum and high-resolution XPS spectra of (b) Ni 2p, (c) Co 2p, (d) O 1s, (e) C 1s, and (f) N 1s.
First, we tested the NiCo2O4 and g-C3N4/NiCo2O4 supercapacitor electrodes by CV. Figures 5(a) and 5(b) are the CV curves of the NiCo2O4 and g-C3N4/NiCo2O4 electrodes, measured at different scan rates (5–100 mV/s) over a voltage window of 0–0.45 V. Figure 5(a) clearly shows the redox peaks of the NiCo2O4 nanomaterial, indicating that the NiCo2O4 electrode has pseudo-capacitive characteristics. As the scan rate increases, the redox peaks of the NiCo2O4 electrode shift to higher or lower voltages, which is caused by the internal resistance of the electrode and the tortuous diffusion path of electrolyte ions in the electrode material. Figure 5(b) clearly shows that, compared with the NiCo2O4 electrode, the redox peaks of the g-C3N4/NiCo2O4 electrode reach higher values, indicating a higher working voltage window for electrochemical energy storage and better suitability for high-voltage energy storage applications. Figure 5(b) also clearly shows that the g-C3N4/NiCo2O4 electrode has a larger integrated CV area than the NiCo2O4 electrode, meaning a higher electrochemical energy storage capacity, and its CV curves are more symmetrical, indicating a more complete reversible reaction. The reactions represented by the redox peaks in Figures 5(a) and 5(b) should be

(1) NiCo2O4 + OH− + H2O ⟷ NiOOH + 2CoOOH + e−

(2) CoOOH + OH− ⟷ CoO2 + H2O + e−

Figure 5: CV curves of (a) NiCo2O4 and (b) g-C3N4/NiCo2O4; GCD curves of (c) NiCo2O4 and (d) g-C3N4/NiCo2O4; (e) specific capacitance trends; (f) cycling performance of g-C3N4/NiCo2O4 in the three-electrode system.

Figures 5(c) and 5(d) are the GCD curves of the NiCo2O4 and g-C3N4/NiCo2O4 electrodes. The GCD test characterizes the electrochemical energy storage capacity of an electrode by charging and discharging it at constant current. The area-specific and mass-specific capacitances can be calculated from

(3) Cs = (I × Δt)/(ΔV × s) and Cg = (I × Δt)/(ΔV × m),

where Cs is the area-specific capacitance, I is the current during constant-current discharge, Δt is the discharge time, ΔV is the potential difference during discharge, s is the electrode area, Cg is the mass-specific capacitance, and m is the mass of active material loaded on the electrode. The loadings of NiCo2O4 and g-C3N4/NiCo2O4 on the electrodes used in the energy storage tests were 14.9 and 8.9 mg, respectively. Figures 5(c) and 5(d) clearly show that, at the same charge–discharge current, the discharge time of the g-C3N4/NiCo2O4 electrode is much longer than that of the NiCo2O4 electrode. Calculated from Formula (3), at charge–discharge currents of 1–8 A/g, the mass-specific capacitances of the NiCo2O4 electrode are 98.86, 82.86, 69.43, 50, and 32 F/g, and its area-specific capacitances are 1.4829, 1.2429, 1.04145, 0.75, and 0.48 F/cm², respectively. At charge–discharge currents of 1–10 A/g, the mass-specific capacitances of the g-C3N4/NiCo2O4 electrode are 1,127.71, 1,031.43, 947.14, 811.43, 637.71, and 517.14 F/g, and its area-specific capacitances are 16.92, 15.47, 14.21, 12.17, 9.57, and 7.7571 F/cm², respectively.
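As a worked illustration of Formula (3), the sketch below computes the mass-specific capacitance from a galvanostatic discharge. Only the 8.9 mg loading and the 0.45 V window come from the text; the discharge time is an assumed input, back-calculated here so that the result lands near the reported 1,127.71 F/g.

```python
def specific_capacitance(current_a, dt_s, dv_v, mass_g=None, area_cm2=None):
    """Formula (3): Cg = I*Δt/(ΔV*m) in F/g and Cs = I*Δt/(ΔV*s) in F/cm²."""
    farads = current_a * dt_s / dv_v                 # total capacitance, F
    cg = farads / mass_g if mass_g else None
    cs = farads / area_cm2 if area_cm2 else None
    return cg, cs

mass_g = 8.9e-3                # g-C3N4/NiCo2O4 loading from the text, in g
current_a = 1.0 * mass_g       # 1 A/g expressed as an absolute current (A)
cg, _ = specific_capacitance(current_a, dt_s=507.0, dv_v=0.45, mass_g=mass_g)
print(f"Cg ≈ {cg:.1f} F/g")    # ≈ 1126.7 F/g with the assumed Δt of 507 s
```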
Compared with the NiCo2O4 electrode, the mass-specific capacitance of the g-C3N4/NiCo2O4 electrode is significantly improved owing to the synergistic effect within the g-C3N4/NiCo2O4 composite. Figure 5(e) compares the mass-specific capacitances of the NiCo2O4 and g-C3N4/NiCo2O4 electrodes at different charge–discharge currents; at higher operating currents, the g-C3N4/NiCo2O4 electrode shows better rate capability. Figure 5(f) shows the retention of the mass-specific capacitance of the g-C3N4/NiCo2O4 electrode at a current of 10 A/g over 3,000 cycles: after 3,000 cycles, the electrode retained 70.5% of its precycling mass-specific capacitance, an acceptable capacitance retention. The decrease in capacitance of the composite may be caused by dissolution of the nickel cobaltate in the alkaline electrolyte combined with the minor structural instability of the physical mixing [42].

The energy density and power density are calculated from

(4) E = (C × V²)/2 and P = E/t,

where E, C, V, P, and t are the energy density, capacitance, potential, power density, and discharge time, respectively. Calculated from these formulas, the highest energy density of the composite material is 69.07 Wh/kg, and the power density is 603.54 W/kg.
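The sketch below makes the unit handling in Equation (4) explicit (F/g × V² gives J/g, and 1 J/g = 1000/3600 Wh/kg). The inputs are illustrative: the voltage window and discharge time behind the reported 69.07 Wh/kg and 603.54 W/kg are not stated in the text, so no attempt is made to reproduce those exact figures.

```python
def energy_density_wh_kg(c_f_per_g, v_volts):
    """Equation (4): E = C*V^2/2, converted from J/g to Wh/kg."""
    return 0.5 * c_f_per_g * v_volts ** 2 * (1000.0 / 3600.0)

def power_density_w_kg(e_wh_kg, t_s):
    """Equation (4): P = E/t, with t converted from seconds to hours."""
    return e_wh_kg / (t_s / 3600.0)

e = energy_density_wh_kg(1127.71, 0.45)   # composite capacitance at 1 A/g
p = power_density_w_kg(e, t_s=507.0)      # assumed discharge time
print(f"E ≈ {e:.1f} Wh/kg, P ≈ {p:.0f} W/kg")  # ≈ 31.7 Wh/kg, ≈ 225 W/kg
```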
EIS is a nondestructive measurement and an effective method for determining the dynamic behavior of electrochemical energy storage devices [43, 44]. A weak-amplitude sinusoidal signal is applied to the supercapacitor electrode in the three-electrode system, and the frequency-dependent ratio of the excitation voltage to the response current gives the impedance spectrum of the electrochemical system. The electrochemical impedance curves of the NiCo2O4 and g-C3N4/NiCo2O4 electrodes were measured over the frequency range 100 kHz–0.1 Hz. Figures 6(a) and 6(b) are the AC impedance spectra of NiCo2O4 and g-C3N4/NiCo2O4. The intersection of the impedance curve with the x-axis gives the solution resistance (Rs) at the electrolyte–electrode interface. The partly circular arc in the high-frequency region is mainly governed by charge transfer in the electrode material [45, 46], while the slope in the low-frequency region reflects the diffusion behaviour, mainly governed by mass transfer in the electrode material. The Rs of the NiCo2O4 electrode is 0.46 Ω, whereas that of the g-C3N4/NiCo2O4 electrode is 0.374 Ω, demonstrating that the g-C3N4/NiCo2O4 composite has better conductivity than NiCo2O4 alone. The phase angle of Q (the constant phase element, CPE) is independent of frequency. Generally, when the CPE exponent n is between 1 and 0.5, there is a dispersion effect at the electrode surface; when n = 0.5, the CPE can replace the Warburg element of a finite diffusion layer, and it can also model the high-frequency part of a Warburg element of infinite thickness. The linear slope of the g-C3N4/NiCo2O4 electrode in the low-frequency region is higher than that of the NiCo2O4 electrode, showing that the mobility of electrolyte ions on the surface of the g-C3N4/NiCo2O4 electrode is higher than on the NiCo2O4 surface. Figure 3(d) clearly shows that g-C3N4 forms a physical mixing structure with NiCo2O4; from the above discussion, this structure improves electron mobility in the g-C3N4/NiCo2O4 composite, and the impedance spectra accordingly show a lower electrochemical impedance.

Figure 6: Nyquist diagrams of (a) the NiCo2O4 and (b) the g-C3N4/NiCo2O4 electrode, with the fitted equivalent circuit and an enlarged view inset.

Table 1 compares the mass-specific capacitance values of metal oxide/carbon nitride composite nanomaterials. The g-C3N4/NiCo2O4 prepared in this study exhibits a high mass-specific capacitance.

Table 1: Specific capacitance comparison with CNs and metal oxides.

| No. | Electrode | Electrolyte | Current density (A/g) | Capacitance (F/g) | Ref. |
| --- | --- | --- | --- | --- | --- |
| 1 | g-C3N4/TNS | 4 M KOH | 0.25 | 332 | [31] |
| 2 | Functionalized g-C3N4/CNF/TNS | 4 M KOH | 0.25 | 817 | [31] |
| 3 | MnO2@pg-C3N4 | – | 1 | 348.4 | [32] |
| 4 | Ce-SnO2@g-C3N4 | 2 M KOH | 1 | 274 | [33] |
| 5 | Co3O4-rGO-gC3N4 | 0.5 M H2SO4 | 1 | 675 | [35] |
| 6 | Co3O4@g-C3N4 | 3 M KOH | 1 | 457.2 | [37] |
| 7 | NiMoO4/g-C3N4 | 6 M KOH | 1 | 510 | [40] |
| 8 | g-C3N4/NiCo2O4 | 6 M KOH | 1 | 1,127.71 | This work |

## 4. Conclusions

In this paper, we synthesized g-C3N4 and NiCo2O4 by a thermal polymerization method and a hydrothermal method, respectively, and then synthesized NiCo2O4/g-C3N4 nanomaterials by fully mixing, grinding, and calcining g-C3N4 and NiCo2O4. Owing to the effective combination of the g-C3N4 and NiCo2O4 nanomaterials, the electrochemical energy storage performance of NiCo2O4/g-C3N4 supercapacitor electrodes is better than that of NiCo2O4 electrodes. At a current of 1 A/g, the mass-specific capacitances of NiCo2O4 and NiCo2O4/g-C3N4 are 98.86 and 1,127.71 F/g, respectively. At a current of 10 A/g, the NiCo2O4/g-C3N4 electrode maintains 70.5% of its capacitance after 3,000 cycles. Moreover, the NiCo2O4/g-C3N4 electrode shows a much lower electrochemical impedance than the bare NiCo2O4 electrode. The excellent electrochemical performance of the NiCo2O4/g-C3N4 electrode may be due to the physical mixing of NiCo2O4 and g-C3N4, and the material has broad application prospects. This research is of importance for the development of materials for high-performance energy storage devices, catalysis, sensors, and other applications.

---

*Source: 1023109-2022-12-05.xml*
1023109-2022-12-05_1023109-2022-12-05.md
27,010
Composite g-C3N4/NiCo2O4 with Excellent Electrochemical Impedance as an Electrode for Supercapacitors
Danfeng Cui; Zheng Fan; Yanyun Fan; Hongmei Chen; Penglu Li; Xiaoya Duan; Shubin Yan; Hongyan Xu; Chenyang Xue
Journal of Nanomaterials (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1023109
1023109-2022-12-05.xml
--- ## Abstract For the development of supercapacitors, electrode materials with the advantages of simple synthesis and high specific capacitance are one of the very important factors. Herein, we synthesized g-C3N4 and NiCo2O4 by thermal polymerization method and hydrothermal method, respectively, and finally synthesized NiCo2O4/g-C3N4 nanomaterials by mixing, grinding, and calcining g-C3N4 and NiCo2O4. NiCo2O4/g-C3N4 nanomaterials are characterized by X-ray diffraction and X-ray photoelectron spectroscopy. The microscopic morphology, lattice structure, and element distribution of NiCo2O4/g-C3N4 nanomaterials were characterized by scanning electron microscopy (SEM), transmission electron microscopy, high resoultion transmission electron microscopy, and mapping methods. The electrochemical performance and cycle stability of NiCo2O4/g-C3N4 were tested in a 6 M KOH aqueous solution as electrolyte under a three-electrode system. Due to the physical mixing structure of g-C3N4 and NiCo2O4 nanomaterials, the electrochemical energy storage performance of NiCo2O4/g-C3N4 supercapacitor electrodes is better than that of NiCo2O4 supercapacitor electrodes. At a current density of 1 A/g, the capacitances of NiCo2O4 and NiCo2O4/g-C3N4 are 98.86 and 1,127.71 F/g, respectively. At a current density of 10 A/g, the capacitance of NiCo2O4/g-C3N4 supercapacitor electrode maintains 70.5% after 3,000 cycles. NiCo2O4/g-C3N4 electrode has excellent electrochemical performance, which may be due to the formation of physical mixing between NiCo2O4 and g-C3N4, which has broad application prospects. This research is of great importance for the development of materials in high-performance energy storage devices, catalysis, sensors, and other applications. --- ## Body ## 1. Introduction Supercapacitor is an energy storage device, which is different from battery and capacitor [1, 2]. It has the advantages of fast charging speed [3], long service life [4], high electricity conversion efficiency [5], high power density [6], high safety factor, and green friendliness. Supercapacitors are used in many fields such as wind power generation systems, heavy-duty machinery, and hybrid vehicles. According to different principles, supercapacitors are classified as double-layer capacitors and pseudo-capacitors. Pseudo-capacitors [2], as a kind of supercapacitors, have been studied due to their advantages of higher discharge time and larger stored power. The supercapacitor electrode is the most crucial part of the storage capacity of the supercapacitor. In previous studies, pseudo-capacitance electrodes are mainly composed of oxides of elements such as Co [7–9], Fe [10, 11], Ru [12–14], Mn [15, 16], Ni [17, 18], W [19, 20], and Zn [21, 22]. These metal oxides have relatively high theoretical capacitance values [23, 24], and the synthesis method is simple. In recent years, multimetal oxides [25–27] have gradually attracted attention because of their excellent theoretical capacitance, high-power density, and outstanding cycling characteristics. Among them, the theoretical capacitance value of NiCo2O4 is 890 mAh/g [28], and it has good electrical conductivity due to the presence of bimetallic elements. These advantages provide support for NiCo2O4 to become a promising electrode material for supercapacitors [29]. Li et al. [30] facilely synthesized and electrochemically tested NiCo2O4 with different crystal structures. By controlling the ratio of CO(NH2)2 and NH4F composition, the crystal growth structure of NiCo2O4 was controlled. 
Among them, the NiCo2O4 with the best mass-specific capacitance is 1,710.9 F/g. However, a noncomposite bimetal oxide as a supercapacitor electrode usually has a large electrochemical impedance. By using suitable materials to compound with bimetallic oxides, it will usually help the electrode to have good electrochemical impedance.Graphite carbon nitride (g-C3N4) [31–33] is a widely used carbon-based material. g-C3N4 is a flexible layered structural material with good chemical stability, nontoxic, nonpolluting, and low cost [34–36]. Due to the presence of pyrrole nitrogen hole defects in the crystal lattice and the reduced distance between the edge covalent nitrogen atoms, the material exhibits a higher rate capability. Moreover, the porous heptazine and sp2 hybrid nitrogen also provide coordination sites [37]. With the synthesis of bimetallic transition metal element oxides dispersed on the g-C3N4 grid (with improved redox sites), its conductivity, electrochemical performance, hydrophilicity, and surface polarity have been enhanced. This allows supercapacitor electrodes to obtain excellent cycle performance and high-rate performance [38, 39]. Rabani et al. [37] studied the compound-prepared method and electrochemical performance of Co3O4@g-C3N4. The experimental results show that the capacitance of the Co3O4@g-C3N4 supercapacitor reaches 457.2 F/g, and it maintains 92% of the capacitance after 5,000 cycles. Thiagarajan et al. [40] synthesized NiMoO4/g-C3N4 by hydrothermal method and tested its electrochemical energy storage performance. The supercapacitor electrode NiMoO4/g-C3N4 reached 510 F/g, and maintained 91.8% capacity after 2,000 cycles.In this study, g-C3N4 is synthesized by thermal polymerization, NiCo2O4 is synthesized by hydrothermal method and thermal oxidation method. g-C3N4/NiCo2O4 nanomaterial is synthesized by fully grinding and calcining g-C3N4 and NiCo2O4. It can be observed that physical mixing structures of g-C3N4 and NiCo2O4 are formed in the g-C3N4/NiCo2O4 nanomaterial using TEM. Due to the existence of the physical mixing, the g-C3N4/NiCo2O4 nanomaterial has a higher mass-specific capacitance compared with NiCo2O4 when used as a supercapacitor electrode. By studying the electrochemical impedance spectroscopy (EIS) of the materials, the g-C3N4/NiCo2O4 nanomaterial has very low electrochemical impedance in the low-frequency response, showing good electrochemical energy storage performance. ## 2. Experimental ### 2.1. Preparation of the NiCo2O4/g-C3N4 Nanomaterial At first, 20 g of urea was placed in a corundum crucible and placed in a tube furnace. Then set the heating rate of the tube furnace to 10°C/min. The tube furnace temperature was maintained at 550°C for 180 min. After waiting for the complete natural cooling to room temperature, the sample was taken out and fully ground to obtain g-C3N4 powder. Then, the NiCo2O4 was prepared by combining with hydrothermal method and annealing. To get the solution for the preparation of NiCo2O4 nanoparticles, the reagent with the precise molar ratio of NiCl2·6H2O : CoCl2·6H2O : CO(NH2)2 : NH4F = 1 : 2 : 6 : 15 was dissolved in 30 mL of deionized water (DI water). After magnetic stirring for 15 min, the evenly mixed aqueous solution was moved into the polytetrafluoroethylene lining of the high-pressure reactor, heated at 150°C for 8 hr, and then cooled and centrifuged to get the NiCo2O4 precursor. And NiCo2O4 precursors were annealed in a tubular furnace at 400°C for 2 hr to get NiCo2O4 crystals. 
Finally, 2% g-C3N4 and NiCo2O4 were mixed and fully pulverized, then calcined in a tubular furnace at 550°C for 180 min to obtain the 2% g-C3N4/NiCo2O4 powder. ### 2.2. Preparation of the Electrodes First, nickel foam was cleaned ultrasonically in DI water and dried at 60°C for 6 hr. The ground NiCo2O4/g-C3N4 was then mixed with acetylene black and polyvinylidene fluoride at a mass ratio of 0.8 : 0.15 : 0.05. A few drops of alcohol and 5% polytetrafluoroethylene were added and stirred. The nickel foam was coated with the resulting slurry, and the electrodes were dried for 24 hr before electrochemical testing. ### 2.3. Characterization of the NiCo2O4/g-C3N4 Nanomaterial The micromorphology of the nanomaterial surface was investigated by SEM (FEI Quanta FEG 250). The lattice structure of the NiCo2O4/g-C3N4 nanomaterial was investigated by a transmission electron microscope (Tecnai G2 F20). The phase composition of the NiCo2O4/g-C3N4 nanomaterial was confirmed by X-ray diffraction (XRD) (Rigaku Ultimate IV). The chemical composition was analyzed by X-ray photoelectron spectroscopy (XPS) (Thermo ESCALAB 250Xi). ### 2.4. Electrode Performance Test Method The electrode performance tests were carried out in a three-electrode system consisting of a working electrode (nickel foam loaded with the NiCo2O4/g-C3N4 nanomaterial), a counter electrode (platinum electrode), and a reference electrode (saturated calomel electrode). Cyclic voltammetry (CV), galvanostatic charge–discharge (GCD), EIS, and cycling stability were tested in a 6 M KOH aqueous solution using an electrochemical workstation (Metrohm Multi Autolab M204).
## 3. Results and Discussion Figure 1 shows the XRD patterns of the g-C3N4, NiCo2O4, and NiCo2O4/g-C3N4 composite nanomaterials. The blue line for g-C3N4 in Figure 1 has an obvious (002) peak. The green line represents the XRD pattern of the NiCo2O4 nanoparticles. All of its peaks can be indexed to the cubic phase of NiCo2O4 (PDF#20-0781), namely (111), (220), (311), (222), (400), (511), and (440), and there are no other obvious peaks [30, 41]. The green pattern therefore shows that we have prepared NiCo2O4 nanoparticles of high purity. The red line in Figure 1 represents the XRD pattern of the NiCo2O4/g-C3N4 composite nanomaterial, in which the (220), (311), (222), (511), and (440) peaks correspond to NiCo2O4. The (111) lattice peak of nickel cobalt oxide is shifted, and the appearance of the (002) peak [40] indicates that the NiCo2O4 nanoparticles and g-C3N4 were successfully combined, confirming that the NiCo2O4/g-C3N4 nanomaterial was prepared. Figure 1 XRD graphs of g-C3N4, NiCo2O4, and g-C3N4/NiCo2O4. Figure 2 shows the SEM images of g-C3N4, NiCo2O4, and NiCo2O4/g-C3N4. Figures 2(a) and 2(b) are SEM images of g-C3N4. Figure 2(a) shows 200–300 nm pores between the g-C3N4 nanoparticles, and Figure 2(b) shows that the microscale morphology of the g-C3N4 is a porous floc. Such a structure significantly increases the effective boundary area between the nanomaterial and the electrolyte; the porous floc enlarges the energy-storage contact interface and thereby improves the electrochemical energy storage performance. Figures 2(c) and 2(d) are SEM images of the NiCo2O4 nanoparticles. Figure 2(c) shows that the NiCo2O4 nanoparticles are 50–100 nm in size. The particle size of NiCo2O4 matches the size of the nanopores of g-C3N4, which facilitates the combination of the two nanomaterials. Figures 2(e) and 2(f) are SEM images of the NiCo2O4/g-C3N4 composite material. In Figure 2(e), NiCo2O4 nanoparticles are partially embedded in the nanopores of g-C3N4; the NiCo2O4 nanoparticles and g-C3N4 are thus effectively combined, which improves the electrochemical performance of the NiCo2O4/g-C3N4 composite nanomaterial. Figure 2 SEM pictures of (a, b) g-C3N4; (c, d) NiCo2O4; (e, f) g-C3N4/NiCo2O4 samples. Figures 3(a)–3(d) are TEM images of g-C3N4, NiCo2O4, and the NiCo2O4/g-C3N4 composite nanomaterial. Figure 3(d) shows that NiCo2O4 and g-C3N4 are in close contact, forming the NiCo2O4/g-C3N4 composite nanomaterial. As shown in Figures 3(a) and 3(b), NiCo2O4 has smaller particles and structures than g-C3N4.
The g-C3N4/NiCo2O4 nanomaterial is synthesized by grinding and calcining g-C3N4 and NiCo2O4. As shown in Figure 3(c), the larger areas of g-C3N4 form a physical mixing structure with the smaller NiCo2O4 particles. Figure 3(d) is a higher-magnification TEM image of the g-C3N4/NiCo2O4 nanomaterial, which clearly shows the physical mixing structure formed by NiCo2O4 nanoparticles on the larger g-C3N4 structure. Comparison with the size of the NiCo2O4 particles in Figure 3(b) confirms that the black particles in Figure 3(d) are NiCo2O4 particles after compositing with g-C3N4. Figure 3(e) is the high-resolution transmission electron microscopy (HRTEM) image of NiCo2O4/g-C3N4. The lattice spacings of the NiCo2O4 (220) and (311) planes are marked in the figure, which matches the XRD analysis. Figure 3(f) shows the elemental mapping of the NiCo2O4/g-C3N4 composite nanomaterial. In Figure 3(f), C, N, O, Co, and Ni are clearly seen to be uniformly distributed in the NiCo2O4/g-C3N4 composite nanomaterial, which confirms the successful preparation of the composite. Figure 3 TEM pictures of (a) g-C3N4; (b) NiCo2O4; (c, d) g-C3N4/NiCo2O4 samples; (e) HRTEM picture of g-C3N4/NiCo2O4; (f) elements mapping of g-C3N4/NiCo2O4. Figure 4 shows the XPS spectra of the g-C3N4/NiCo2O4 composite nanomaterial. Figure 4(a) clearly shows that the composite nanomaterial contains the elements C, N, O, Co, and Ni. In Figure 4(b), the deconvolution of the Ni 2p peak can be clearly observed. Owing to the spin–orbit splitting of Ni3+ and Ni2+, the two main peaks appear at 873.6 and 856.4 eV, respectively, while the two weaker peaks are satellite peaks associated with Ni3+ and Ni2+. The O 1s spectrum in Figure 4(d) shows three oxygen peaks, at 531.12, 532.57, and 529.66 eV, which correspond to the oxides formed by OH− and O2− with Ni and Co. The C 1s spectrum in Figure 4(e) has three carbon peaks, at ∼284.8, 286.03, and 288.16 eV, which are related to carbon–carbon and carbon–nitrogen bonds [35]. The N 1s spectrum in Figure 4(f) shows four nitrogen peaks, at 399.27, 400.19, 401.32, and 403.31 eV [39]. Figure 4 (a) g-C3N4/NiCo2O4 XPS spectrum and high-resolution XPS spectra of (b) Ni 2p; (c) Co 2p; (d) O 1s; (e) C 1s; (f) N 1s. First, we tested the NiCo2O4 and g-C3N4/NiCo2O4 supercapacitor electrodes by CV. Figures 5(a) and 5(b) show the CV curves of the NiCo2O4 and g-C3N4/NiCo2O4 supercapacitor electrodes, measured at scan rates of 5–100 mV/s over a voltage window of 0–0.45 V. Figure 5(a) clearly shows the redox peaks of the NiCo2O4 nanomaterial, indicating that the NiCo2O4 supercapacitor electrode has pseudo-capacitance characteristics. As the scan rate increases, the redox peaks of the NiCo2O4 supercapacitor electrode shift to higher or lower voltages, which is caused by the internal resistance of the electrode and the tortuous diffusion path of OH− ions in the electrode material. Figure 5(b) clearly shows that, compared with the NiCo2O4 supercapacitor electrode, the redox peaks of the g-C3N4/NiCo2O4 supercapacitor electrode have higher peak currents.
This suggests that the g-C3N4/NiCo2O4 supercapacitor electrode has a wider working voltage window for electrochemical energy storage and is more suitable for high-voltage energy storage applications. Figure 5(b) also clearly shows that the g-C3N4/NiCo2O4 supercapacitor electrode has a larger integrated CV area than the NiCo2O4 supercapacitor electrode, which means a higher electrochemical energy storage capacity. Moreover, the CV curves in Figure 5(b) are more symmetrical, indicating that the g-C3N4/NiCo2O4 supercapacitor electrode undergoes a more complete reversible reaction. The reactions represented by the redox peaks in Figures 5(a) and 5(b) should be

(1) NiCo2O4 + OH− + H2O ⟷ NiOOH + 2CoOOH + e−

(2) CoOOH + OH− ⟷ CoO2 + H2O + e−

Figure 5 CV curves of (a) NiCo2O4; (b) g-C3N4/NiCo2O4; GCD curves of (c) NiCo2O4; (d) g-C3N4/NiCo2O4; (e) specific capacitance change trend; (f) cycle properties of g-C3N4/NiCo2O4 under the three-electrode system. Figures 5(c) and 5(d) are the GCD curves of the NiCo2O4 and g-C3N4/NiCo2O4 supercapacitor electrodes. The GCD test characterizes the electrochemical energy storage capacity of an electrode by charging and discharging it at constant current. The mass-specific and area-specific capacitances can be calculated by the following two formulae:

(3) Cs = (I × Δt)/(ΔV × s) and Cg = (I × Δt)/(ΔV × m).

In Formula (3), Cs is the area-specific capacitance, I is the constant discharge current, Δt is the discharge time, ΔV is the potential difference during discharge, s is the electrode area of the supercapacitor, Cg is the mass-specific capacitance, and m is the mass of active material loaded on the electrode. The loadings of the NiCo2O4 and g-C3N4/NiCo2O4 composite on the electrodes used in the electrochemical energy storage tests were 14.9 and 8.9 mg, respectively. Figures 5(c) and 5(d) clearly show that, at the same charge–discharge current, the discharge time of the g-C3N4/NiCo2O4 supercapacitor electrode is much longer than that of the NiCo2O4 supercapacitor electrode. Calculated by Formula (3), at charge–discharge current densities of 1–8 A/g, the mass-specific capacitances of the NiCo2O4 supercapacitor electrode are 98.86, 82.86, 69.43, 50, and 32 F/g, and its area-specific capacitances are 1.4829, 1.2429, 1.04145, 0.75, and 0.48 F/cm2, respectively. At current densities of 1–10 A/g, the mass-specific capacitances of the g-C3N4/NiCo2O4 supercapacitor electrode are 1,127.71, 1,031.43, 947.14, 811.43, 637.71, and 517.14 F/g, and its area-specific capacitances are 16.92, 15.47, 14.21, 12.17, 9.57, and 7.7571 F/cm2, respectively. Compared with the NiCo2O4 supercapacitor electrode, the mass-specific capacitance of the g-C3N4/NiCo2O4 supercapacitor electrode is significantly improved owing to the synergistic effect within the g-C3N4/NiCo2O4 composite nanomaterial. Figure 5(e) compares the mass-specific capacitances of the NiCo2O4 and g-C3N4/NiCo2O4 supercapacitor electrodes at different charge–discharge currents. Figure 5(e) shows that at higher operating currents the g-C3N4/NiCo2O4 supercapacitor electrode has better rate characteristics.
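As a worked illustration of Formula (3), the short Python sketch below recomputes the 1 A/g capacitances quoted above from a single discharge step. The 8.9 mg loading and the 0.45 V window come from the text; the discharge time and the electrode area are back-calculated so that the reported Cg and Cs are reproduced, so they are illustrative assumptions rather than measured values.

```python
# Minimal sketch (not the authors' code): specific capacitance from one
# galvanostatic discharge step via Formula (3).

def specific_capacitances(current_a, dt_s, dv_v, mass_g, area_cm2):
    """Return (Cg in F/g, Cs in F/cm^2) for a constant-current discharge."""
    cg = current_a * dt_s / (dv_v * mass_g)    # mass-specific capacitance
    cs = current_a * dt_s / (dv_v * area_cm2)  # area-specific capacitance
    return cg, cs

mass_g = 8.9e-3               # g-C3N4/NiCo2O4 loading stated in the text, g
current_a = 1.0 * mass_g      # 1 A/g charge-discharge current
dt_s = 1127.71 * 0.45         # hypothetical discharge time implied by Cg, s
cg, cs = specific_capacitances(current_a, dt_s, dv_v=0.45,
                               mass_g=mass_g, area_cm2=0.593)  # area inferred
print(f"Cg = {cg:.2f} F/g, Cs = {cs:.2f} F/cm^2")
# -> Cg = 1127.71 F/g, Cs = 16.92 F/cm^2, reproducing the reported values
```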
Figure 5(f) shows the retention of the mass-specific capacitance of the g-C3N4/NiCo2O4 supercapacitor electrode over 3,000 cycles at a current of 10 A/g. After 3,000 cycles, the g-C3N4/NiCo2O4 electrode retained 70.5% of its initial mass-specific capacitance, an acceptable capacitance retention. The decrease in the capacitance of the composite may be caused by the dissolving effect of the alkaline electrolyte on the nickel cobaltate, combined with the minor structural instability of the physical mixing [42].

(4) E = (C × V2)/2 and P = E/t.

Equation (4) gives the formulae for the energy density and power density, respectively, where E, C, V, P, and t are the energy density, capacitance, potential, power density, and discharge time. Calculated by these formulae, the highest energy density of the composite material is 69.07 Wh/kg, with a power density of 603.54 W/kg. EIS is a nondestructive measurement and an effective method for determining the dynamic behavior of electrochemical energy storage devices [43, 44]. We apply a weak-amplitude sinusoidal signal to the supercapacitor electrode in the three-electrode system and record the ratio of the excitation voltage to the response current, which yields the impedance spectrum of the electrochemical system. The electrochemical impedance curves of the NiCo2O4 and g-C3N4/NiCo2O4 electrodes were measured over the frequency range 100,000–0.1 Hz. Figures 6(a) and 6(b) are the AC impedance spectra of NiCo2O4 and g-C3N4/NiCo2O4. The intersection of the impedance curve with the x-axis gives the solution resistance (Rs) at the interface between the electrolyte and the electrode. The semicircular part of the impedance curve in the high-frequency region is mainly governed by charge transfer in the electrode material [45, 46]. The slope in the low-frequency region reflects the diffusion coefficient of the material and is mainly governed by mass transfer in the electrode material. Rs is 0.46 Ω for NiCo2O4 and 0.374 Ω for g-C3N4/NiCo2O4, which proves that the g-C3N4/NiCo2O4 composite nanomaterial has better conductivity than NiCo2O4 alone. The phase angle of the element Q (CPE) is independent of frequency, which is why it is called a constant phase element. Generally, when the CPE exponent n lies between 0.5 and 1, there is a dispersion effect at the electrode surface. When n = 0.5, the CPE can be used to replace the Warburg element of a finite diffusion layer, and the CPE can also simulate the high-frequency part of a Warburg element of infinite thickness. The linear slope of the g-C3N4/NiCo2O4 electrode in the low-frequency region is higher than that of the NiCo2O4 electrode, showing that the mobility of electrolyte ions on the surface of the g-C3N4/NiCo2O4 electrode is higher than on the surface of the NiCo2O4 electrode. In Figure 3(d), it can be clearly observed that g-C3N4 forms a physical mixing structure with NiCo2O4. From the above discussion, the electron mobility in the g-C3N4/NiCo2O4 composite is improved, and its electrochemical impedance spectrum shows a lower impedance. Figure 6 Nyquist diagram of (a) NiCo2O4 and (b) g-C3N4/NiCo2O4 electrode, illustrated with the fitted equivalent circuit and enlarged diagram.
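To make the role of the CPE concrete, the sketch below evaluates the impedance of a simple Rs + (Rct in parallel with a CPE) circuit, with Z_CPE = 1/(Q(jω)^n). Only Rs (0.374 Ω for g-C3N4/NiCo2O4) is taken from the text; the circuit topology and the Rct, Q, and n values are illustrative assumptions, not the fitted parameters behind Figure 6.

```python
# Hedged sketch of the equivalent-circuit elements discussed above.
import numpy as np

def z_cpe(omega, q, n):
    """Constant phase element: Z = 1 / (Q * (j*omega)^n)."""
    return 1.0 / (q * (1j * omega) ** n)

def z_cell(omega, rs, rct, q, n):
    """Solution resistance in series with (charge-transfer R || CPE)."""
    z_parallel = 1.0 / (1.0 / rct + 1.0 / z_cpe(omega, q, n))
    return rs + z_parallel

freqs = np.logspace(5, -1, 61)        # 100 kHz down to 0.1 Hz, as measured
omega = 2.0 * np.pi * freqs
z = z_cell(omega, rs=0.374, rct=1.5, q=0.05, n=0.8)  # assumed Rct, Q, n

# Nyquist plot coordinates are Re(Z) versus -Im(Z); as n -> 0.5 the CPE
# response approaches the 45-degree line of a Warburg-like element.
for f, zi in zip(freqs[::20], z[::20]):
    print(f"{f:10.2f} Hz   Z' = {zi.real:6.3f} ohm   -Z'' = {-zi.imag:6.3f} ohm")
```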
Table 1 compares the mass-specific capacitance values of metal oxide/carbon nitride composite nanomaterials. The g-C3N4/NiCo2O4 prepared in this study exhibits a high mass-specific capacitance. Table 1 Specific capacitance comparison with CNs and metal oxides.

| No. | Electrode | Electrolyte | Current density (A/g) | Capacitance (F/g) | Ref. |
| --- | --- | --- | --- | --- | --- |
| 1 | g-C3N4/TNS | 4 M KOH | 0.25 | 332 | [31] |
| 2 | Functionalized g-C3N4/CNF/TNS | 4 M KOH | 0.25 | 817 | [31] |
| 3 | MnO2@pg-C3N4 | – | 1 | 348.4 | [32] |
| 4 | Ce-SnO2@g-C3N4 | 2 M KOH | 1 | 274 | [33] |
| 5 | Co3O4-rGO-gC3N4 | 0.5 M H2SO4 | 1 | 675 | [35] |
| 6 | Co3O4@g-C3N4 | 3 M KOH | 1 | 457.2 | [37] |
| 7 | NiMoO4/g-C3N4 | 6 M KOH | 1 | 510 | [40] |
| 8 | g-C3N4/NiCo2O4 | 6 M KOH | 1 | 1,127.71 | This work |

## 4. Conclusions In this paper, we synthesized g-C3N4 and NiCo2O4 by a thermal polymerization method and a hydrothermal method, respectively, and then synthesized the NiCo2O4/g-C3N4 nanomaterial by fully mixing, grinding, and calcining g-C3N4 and NiCo2O4. Owing to the effective combination of the g-C3N4 and NiCo2O4 nanomaterials, the electrochemical energy storage performance of the NiCo2O4/g-C3N4 supercapacitor electrode is better than that of the NiCo2O4 supercapacitor electrode. At a current of 1 A/g, the mass-specific capacitances of NiCo2O4 and NiCo2O4/g-C3N4 are 98.86 and 1,127.71 F/g, respectively. At a current of 10 A/g, the NiCo2O4/g-C3N4 supercapacitor electrode retains 70.5% of its capacitance after 3,000 cycles. Moreover, the NiCo2O4/g-C3N4 electrode shows a lower electrochemical impedance than the single NiCo2O4 electrode. The excellent electrochemical performance of the NiCo2O4/g-C3N4 electrode, which may result from the physical mixing of NiCo2O4 and g-C3N4, gives it broad application prospects. This research is of importance for the development of materials for high-performance energy storage devices, catalysis, sensors, and other applications. --- *Source: 1023109-2022-12-05.xml*
2022
# A Mathematical Model of a Direct Propane Fuel Cell **Authors:** Hamidreza Khakdaman; Yves Bourgault; Marten Ternan **Journal:** Journal of Chemistry (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/102313 --- ## Abstract A rigorous mathematical model for direct propane fuel cells (DPFCs) was developed. Compared to previous models, it provides better values for the current density and the propane concentration at the exit from the anode. This is the first DPFC model to correctly account for proton transport based on the combination of the chemical potential gradient and the electrical potential gradient. The force per unit charge from the chemical potential gradient (concentration gradient) that pushes protons from the anode to the cathode is greater than that from the electrical potential gradient that pushes them in the opposite direction. By including the chemical potential gradient, we learn that the proton concentration gradient is much different from that predicted using previous models that neglected the chemical potential gradient. Inclusion of the chemical potential gradient also makes this the first model in which the overpotential gradient (calculated from the electrical potential gradient) has the correct slope. That is important because the overpotential is exponentially related to the reaction rate (current density). The model described here provides a relationship between the conditions inside the fuel cell (proton concentration, overpotential) and its performance as measured externally by current density and propane concentration. --- ## Body ## 1. Introduction The focus of this study is the direct propane fuel cell (DPFC), which belongs to the polymer electrolyte membrane fuel cell (PEMFC) family but consumes propane instead of hydrogen as its feedstock. The generation of electrical energy in rural areas is our primary target application for DPFCs. The cost of delivering electrical energy to rural areas is substantially greater than to urban areas, because longer transmission lines are required to serve a comparatively small number of customers. Therefore, more costly fuel cells can be justified for use in rural areas than in urban areas. In addition, the infrastructure for delivering liquefied petroleum gas (LPG), or propane, to rural areas already exists. Two major advantages of DPFCs over hydrogen fuel cells are that the expenses of hydrogen production plants and of hydrogen transport/storage are eliminated from the fuel cell energy production cycle. However, a drawback of DPFCs is that the propane reaction rate is much slower than that of hydrogen. Liebhafsky and Cairns [1], Bockris and Srinivasan [2], and Cairns [3] reviewed the majority of the DPFC experimental research done in the 1960s. Only a little work has been performed since then. Polarization curves were reported both by Cheng et al. [4] and by Savadogo and Rodriguez Varela [5] for low-temperature PEMFCs. Intermediate-temperature (100–300°C) proton-conducting fuel cells were investigated by Heo et al. [6]. Solid oxide fuel cell (SOFC) studies were performed with propane at 550–650°C by Feng et al. [7] and at 900°C by Yang et al. [8]. The approaches to the modeling of fuel cells are summarized below.
Weber and Newman [9] have reviewed four groups of fuel cell models that consider transport of water and protons in the electrolyte phase: simple models, diffusive models, hydraulic models, and combination models. The simple models [10–13] describe proton transfer using Ohm's law with a constant ionic conductivity. These models cannot predict phenomena such as membrane dehydration, in which the water content and thus the ionic conductivity are variables. For water movement, a numerical value of the net water flux has to be specified as the boundary condition at the interface between the catalyst layers and the membrane. The diffusive models [14–19] predict the movement of dissolved water and protons within the membrane as a result of concentration and electrical potential gradients. They are applicable to electrolyte systems with low water content (λ < 14, where λ is the number of moles of water per mole of sulfonic acid sites in the Nafion membrane), where liquid water does not exist. The diffusive models are referred to as single-phase models of membranes and can predict the proton distribution in the electrolyte phase and membrane dehydration. At high water contents, membrane pores are completely filled with liquid water, and the water content is assumed to be uniform everywhere. Water diffusion therefore does not occur, and the convection mechanism causes proton and water transport. The hydraulic models [20–23] were developed for membranes with high water content. Two phases, liquid water and membrane, are described by the hydraulic models. The water velocity is calculated by Schlögl's equation [23], which is a function of the electrical potential gradient and the pressure gradient. Finally, the hydraulic and diffusive models are merged in the combination models [24–26] when calculations covering the whole range of water content are desired. This approach considers concentration and pressure gradients as driving forces for water and proton transport. There are two possible approaches to dealing with the transport properties in the diffusive models, namely dilute solution theory and concentrated solution theory [27]. Mass transport in dilute electrolyte systems is usually described by the Nernst-Planck equation [27], in which the flux of a charged species is a function of the concentration gradient of that species as well as the electrical potential gradient. For a noncharged species, the potential gradient term in the Nernst-Planck equation disappears. The membrane transport properties are not required to be constant in this approach. Employing concentrated solution theory leads to rigorous models that consider the interactions between all species. Krishna [19] used the Generalized Maxwell-Stefan (GMS) equations to implement this approach for multicomponent electrolyte systems in general. Wöhr et al. [17] also used Maxwell-Stefan (MS) equations to model proton and water transport in PEM fuel cells, in which the MS diffusion coefficients are modified as a function of temperature and humidity. Fuller and Newman [28] used the electrochemical potential of each species as the driving force in the MS equations. Fimrite et al. [16] developed a transport model for water and protons based on the binary friction model. The mole fraction and potential gradients were considered in the electrochemical potential gradient expression. Baschuk and Li [15] also used the MS equations, but they calculated the MS diffusion coefficients from experimental data available in the literature.
Then, they validated those coefficients with experimental data for the electroosmotic drag coefficient. A diffusive model has been developed in the present study to investigate the movement of water and protons in the electrolyte phase of a DPFC, where the operating temperature is above the boiling point of water. One possible strategy for increasing the reaction rate in a DPFC is to operate at temperatures of 150°C or higher. A membrane that can resist high temperature and show acceptable conductivity (5.0 S m−1) has been developed in our research group [29]. This membrane is composed of porous polytetrafluoroethylene (PTFE) that contains zirconium phosphate (Zr(HPO4)2·H2O, or ZrP) in its pores. ZrP is a known proton conductor [30]. Concentrated solution theory was used, in which the binary interactions between water, protons, and ZrP species were described. We are developing mathematical models of DPFCs in order to understand these phenomena and hopefully to enhance DPFC performance. The results reported here are major improvements over our previous model [10]. Our previous model, like the vast majority of fuel cell models, only used an electrical potential gradient to describe migration and neglected the proton concentration gradient in accounting for proton transport through the electrolyte layer. As we noted previously [10], neglect of the proton concentration gradient caused the overpotential gradient and the electrical potential gradient in the electrolyte phase to be incorrect. The model described here, unlike the majority of fuel cell models, includes both a valid electrical potential gradient and a proton concentration gradient to account for proton transport by a combination of migration and diffusion. This model accounts for the influence of the proton concentration in the electrolyte phase and thereby overcomes the deficiencies mentioned above. ## 2. Model Development This model solved the governing equations for the Membrane Electrode Assembly (MEA), consisting of the membrane layer, anode layer, and cathode layer. A schematic of a typical DPFC is shown in Figure 1. The cell is composed of two bipolar plates, two catalyst layers, and a membrane layer. Each bipolar plate has two sets of channels: one for reactants and one for products. The channels are connected to each other through the catalyst layer. Figure 1 shows these channels for the anode bipolar plate. The interdigitated flow fields show a symmetric geometry with repetitive pieces. In order to increase the computational speed, only one of these pieces was considered as the modeling domain. Therefore, the modeling domain can be defined as the part of the MEA that is located between the middle of a feed channel and the middle of its adjacent product channel (cross section in Figure 1). That cross section is shown in Figure 2 as the modeling domain. Its boundaries are shown as a dashed black line. Figure 1 A direct propane fuel cell with interdigitated flow field. Figure 2 Boundaries in the modeling domain. Previously, it was shown that neglecting proton diffusion in the proton conservation equation (an assumption used in many fuel cell models) led to incorrect results for the electrolyte potential and overpotential profiles, even though the polarization curve was predicted correctly [10]. The present model includes both proton diffusion and migration. ### 2.1. Governing Equations Three phases are present in the anode and cathode catalyst layers.
They are the "gas phase" containing reactants and products, the "solid catalyst phase" containing the carbon support and platinum, and the "solid electrolyte phase." The latter consists of a stationary ZrP matrix, $[\mathrm{Zr(HPO_4)_2 \cdot H_2O}]$, containing mobile H2O, $[\mathrm{Zr(HPO_4)_2 \cdot 2H_2O}]$, and mobile H+, $[\mathrm{Zr(HPO_4)_2 \cdot H_3O^+}]$, species that can be transported. The membrane layer contains the ZrP electrolyte phase as well as PTFE. Conservation equations for momentum, total mass, and mass of noncharged species were solved for the gas phase in each of the catalyst layers. The equations used for the gas phase of both anode and cathode catalyst layers are as follows. Conservation of mass in the gas phase:

$$\nabla \cdot \left( \varepsilon_G \rho_G \vec{u} \right) + \sum_{i=1}^{n} \frac{\nu_i\, MW_i\, j}{zF} = 0, \tag{1}$$

where i = C3H8, H2O, and CO2 for the anode and O2 and H2O for the cathode. Conservation of momentum in the gas phase:

$$-\nabla P = \frac{150\, \mu_G \left(1 - \varepsilon_G\right)^2}{D_p^2\, \varepsilon_G^3}\, \vec{u}. \tag{2}$$

Conservation of noncharged species in the gas phase:

$$\nabla \cdot \left( \varepsilon_G c_G \vec{u}\, y_i \right) - \nabla \cdot \left( \varepsilon_G c_G D_i \nabla y_i \right) + \frac{\nu_i\, j}{zF} = 0, \tag{3}$$

where i = C3H8 and CO2 for the anode and O2 and H2O for the cathode. Conservation of species in the electrolyte phase, for water:

$$-\nabla \cdot \left[ c_{ELY} \left( B'_{\mathrm{H_2O\text{-}H_2O}} - B'_{\mathrm{H_2O\text{-}H^+}} \right) \nabla x_{H^+} \right] + \nabla \cdot \left[ c_{ELY}\, B'_{\mathrm{H_2O\text{-}H^+}} \frac{F x_{H^+}}{RT} \nabla \phi_{ELY} \right] - \frac{j}{zF} = 0; \tag{4}$$

for protons:

$$\nabla \cdot \left[ c_{ELY} \left( B'_{\mathrm{H^+\text{-}H^+}} - B'_{\mathrm{H^+\text{-}H_2O}} \right) \nabla x_{H^+} \right] + \nabla \cdot \left[ c_{ELY}\, B'_{\mathrm{H^+\text{-}H^+}} \frac{F x_{H^+}}{RT} \nabla \phi_{ELY} \right] + \frac{j}{zF} = 0. \tag{5}$$

Butler-Volmer equation in the anode:

$$j_A = j_A^0 A_{Pt} \left[ \exp\left( \frac{\alpha_A F \eta_A}{RT} \right) - \exp\left( -\frac{\alpha_C F \eta_A}{RT} \right) \right], \tag{6}$$

where

$$j_A^0 = j_{\mathrm{C_3Ox}}^{0,\mathrm{ref}} \left( \frac{p_{C_3}}{p_{C_3}^{\mathrm{ref}}} \right) \exp\left[ \frac{\Delta G_{\mathrm{C_3Ox}}^{\ddagger}}{R} \left( \frac{1}{T^{\mathrm{ref}}} - \frac{1}{T} \right) \right], \tag{7}$$

$$\eta_A = \Delta\phi_A - \Delta\phi_A^{EQ} = \left( \phi_{Pt}^A - \phi_{ELY}^A \right) - \left( \phi_{Pt}^{A,EQ} - \phi_{ELY}^{EQ} \right). \tag{8}$$

Butler-Volmer equation in the cathode:

$$j_C = j_C^0 A_{Pt} \left[ \exp\left( \frac{\alpha_A F \eta_C}{RT} \right) - \exp\left( -\frac{\alpha_C F \eta_C}{RT} \right) \right], \tag{9}$$

where

$$j_C^0 = j_{\mathrm{O_2Rd}}^{0,\mathrm{ref}} \left( \frac{p_{O_2}}{p_{O_2}^{\mathrm{ref}}} \right) \exp\left[ \frac{\Delta G_{\mathrm{O_2Rd}}^{\ddagger}}{R} \left( \frac{1}{T^{\mathrm{ref}}} - \frac{1}{T} \right) \right], \tag{10}$$

$$\eta_C = \Delta\phi_C - \Delta\phi_C^{EQ} = \left( \phi_{Pt}^C - \phi_{ELY}^C \right) - \left( \phi_{Pt}^{C,EQ} - \phi_{ELY}^{EQ} \right). \tag{11}$$

Equation (1) describes the total mass conservation in the gas phase of the catalyst layers. The second term in this equation is the sink or source term describing the mass consumption or production in the gas phase caused by electrochemical reactions. Equation (2) is the linear form of the Ergun equation. It was used to calculate the pressure profiles in the gas phase of the catalyst layers because they are packed beds. At the conditions used in this study, the magnitude of the quadratic velocity term in the Ergun equation was much smaller than the linear term. Hence, only the linear term in velocity was used in (2). Equations (1) and (2) were solved together to calculate the velocity and pressure profiles in the gas phase of the catalyst layers. Mass balances for each of the individual gas phase species account for convection, diffusion, and reaction, as shown in (3). Equations (4) and (5) describe, respectively, water and proton conservation in the electrolyte phase of the membrane and catalyst layers. Diffusion was described by concentrated solution theory through the use of the GMS equations. The following paragraphs illustrate the derivation of (4) and (5).
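Before turning to that derivation, the short sketch below evaluates the anode source term of Equations (6)-(8) numerically. The charge transfer coefficients (1.0) and the temperature come from Table 1; the reference exchange current density, activation energy, catalyst area, and overpotential are illustrative placeholders, not the paper's fitted values.

```python
# Hedged sketch of the Butler-Volmer source term, Equations (6)-(8).
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def exchange_current_density(j0_ref, p, p_ref, dG_act, T, T_ref):
    """Equations (7)/(10): pressure- and temperature-corrected j0."""
    return j0_ref * (p / p_ref) * math.exp((dG_act / R) * (1.0 / T_ref - 1.0 / T))

def butler_volmer(j0, a_pt, eta, T, alpha_a=1.0, alpha_c=1.0):
    """Equations (6)/(9): volumetric current from the overpotential eta (V)."""
    return j0 * a_pt * (math.exp(alpha_a * F * eta / (R * T))
                        - math.exp(-alpha_c * F * eta / (R * T)))

T = 423.0                                    # K, lower end of the Table 1 range
j0 = exchange_current_density(j0_ref=1.0e-8, p=10.0, p_ref=101.3,
                              dG_act=1.0e5, T=T, T_ref=423.0)  # assumed values
eta_A = 0.20                                 # assumed anode overpotential, V
print(f"j_A = {butler_volmer(j0, a_pt=6.6e4, eta=eta_A, T=T):.3e} (A per unit volume)")
```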
A general procedure for the calculation of mass fluxes in multicomponent electrolyte systems was presented by Krishna [19]. It has been proven that the Nernst-Planck equation is a limiting case of the GMS equations. The GMS equations can be written as

$$\vec{d}_i = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i \vec{J}_j - x_j \vec{J}_i}{c_{ELY}\, Đ_{ij}}, \quad i = 1, 2, \ldots, n-1, \tag{12}$$

where $\vec{d}_i$ is a generalized driving force for mass transport of species i. Because the summation of the n driving forces is equal to zero due to the Gibbs-Duhem limitation [31], only n − 1 driving forces are independent. The equation to calculate the generalized driving force has been derived based on nonequilibrium thermodynamics [31]. A simplified expression for a solid stationary electrolyte (no convection term) [19] can be written as

$$\vec{d}_i = \nabla x_i + \frac{x_i z_i F}{RT} \nabla \phi_{ELY}. \tag{13}$$

For a noncharged species such as water, $z_i$ is equal to zero and, according to (13), the concentration gradient will be the only driving force. The migration term in (13) was obtained by representing ion mobility by the Nernst-Einstein relation ($D_i = RT u_i$). This relation is applicable only at infinite dilution. However, it can be used in concentrated solutions if additional composition-dependent transport parameters, such as the B′ parameters in (19), are used to calculate the flux of ions [27]. It will be shown in the following paragraphs that (18) represents these composition-dependent parameters. Equation (12) results in n − 1 independent equations that can be written in matrix form for convenience:

$$c_{ELY} \begin{bmatrix} \vec{d}_1 \\ \vdots \\ \vec{d}_{n-1} \end{bmatrix} = - \begin{bmatrix} B_{11} & \cdots & B_{1,n-1} \\ \vdots & \ddots & \vdots \\ B_{n-1,1} & \cdots & B_{n-1,n-1} \end{bmatrix} \begin{bmatrix} \vec{J}_1 \\ \vdots \\ \vec{J}_{n-1} \end{bmatrix}, \tag{14}$$

where the elements of the matrix of inverted diffusion coefficients [B] are given by

$$B_{ii} = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_j}{Đ_{ij}}, \qquad B_{ij} = -\frac{x_i}{Đ_{ij}} \quad (i \neq j). \tag{15}$$

The fluxes of the species, $\vec{J}_i$, can be calculated from (16), which is the inversion of (14):

$$\begin{bmatrix} \vec{J}_1 \\ \vdots \\ \vec{J}_{n-1} \end{bmatrix} = -c_{ELY} \begin{bmatrix} B_{11} & \cdots & B_{1,n-1} \\ \vdots & \ddots & \vdots \\ B_{n-1,1} & \cdots & B_{n-1,n-1} \end{bmatrix}^{-1} \begin{bmatrix} \vec{d}_1 \\ \vdots \\ \vec{d}_{n-1} \end{bmatrix}. \tag{16}$$

For the present electrolyte system containing three species, mobile H2O and H+ plus immobile solid ZrP, (16) may be written as

$$\begin{bmatrix} \vec{J}_{H_2O} \\ \vec{J}_{H^+} \end{bmatrix} = -c_{ELY} \begin{bmatrix} B'_{\mathrm{H_2O\text{-}H_2O}} & B'_{\mathrm{H_2O\text{-}H^+}} \\ B'_{\mathrm{H^+\text{-}H_2O}} & B'_{\mathrm{H^+\text{-}H^+}} \end{bmatrix} \begin{bmatrix} \vec{d}_{H_2O} \\ \vec{d}_{H^+} \end{bmatrix}, \tag{17}$$

where [B′] is the inverse of the matrix of inverted diffusion coefficients. Because $Đ_{\mathrm{H_2O\text{-}H^+}} = Đ_{\mathrm{H^+\text{-}H_2O}}$, the elements of [B′] are calculated using (18), which are functions of the GMS diffusivities and the species mole fractions in the electrolyte phase:

$$B'_{\mathrm{H_2O\text{-}H_2O}} = \frac{x_{H_2O}\, Đ_{\mathrm{H^+\text{-}ZrP}} + Đ_{\mathrm{H_2O\text{-}H^+}}}{x_{H^+} + \left( Đ_{\mathrm{H^+\text{-}ZrP}} / Đ_{\mathrm{H_2O\text{-}ZrP}} \right) x_{H_2O} + Đ_{\mathrm{H_2O\text{-}H^+}} / Đ_{\mathrm{H_2O\text{-}ZrP}}},$$
$$B'_{\mathrm{H_2O\text{-}H^+}} = \frac{x_{H_2O}\, Đ_{\mathrm{H^+\text{-}ZrP}}}{x_{H^+} + \left( Đ_{\mathrm{H^+\text{-}ZrP}} / Đ_{\mathrm{H_2O\text{-}ZrP}} \right) x_{H_2O} + Đ_{\mathrm{H_2O\text{-}H^+}} / Đ_{\mathrm{H_2O\text{-}ZrP}}},$$
$$B'_{\mathrm{H^+\text{-}H_2O}} = \frac{x_{H^+}\, Đ_{\mathrm{H_2O\text{-}ZrP}}}{x_{H_2O} + \left( Đ_{\mathrm{H_2O\text{-}ZrP}} / Đ_{\mathrm{H^+\text{-}ZrP}} \right) x_{H^+} + Đ_{\mathrm{H_2O\text{-}H^+}} / Đ_{\mathrm{H^+\text{-}ZrP}}},$$
$$B'_{\mathrm{H^+\text{-}H^+}} = \frac{x_{H^+}\, Đ_{\mathrm{H_2O\text{-}ZrP}} + Đ_{\mathrm{H_2O\text{-}H^+}}}{x_{H_2O} + \left( Đ_{\mathrm{H_2O\text{-}ZrP}} / Đ_{\mathrm{H^+\text{-}ZrP}} \right) x_{H^+} + Đ_{\mathrm{H_2O\text{-}H^+}} / Đ_{\mathrm{H^+\text{-}ZrP}}}. \tag{18}$$

Combining (17) and (13) results in two independent equations that can be used to calculate the fluxes of the mobile species ($\vec{J}_{H_2O}$ and $\vec{J}_{H^+}$) within the electrolyte phase:

$$\vec{J}_{H_2O} = -c_{ELY}\, B'_{\mathrm{H_2O\text{-}H_2O}} \nabla x_{H_2O} - c_{ELY}\, B'_{\mathrm{H_2O\text{-}H^+}} \left( \nabla x_{H^+} + \frac{F x_{H^+}}{RT} \nabla \phi_{ELY} \right), \tag{19}$$

$$\vec{J}_{H^+} = -c_{ELY}\, B'_{\mathrm{H^+\text{-}H_2O}} \nabla x_{H_2O} - c_{ELY}\, B'_{\mathrm{H^+\text{-}H^+}} \left( \nabla x_{H^+} + \frac{F x_{H^+}}{RT} \nabla \phi_{ELY} \right). \tag{20}$$

Equations (19) and (20) show that the diffusion flux of each species is a function of the concentration gradients of all species as well as of the potential gradient. There are five unknowns in (19) and (20): $\vec{J}_{H_2O}$, $\vec{J}_{H^+}$, $x_{H_2O}$, $x_{H^+}$, and $\phi_{ELY}$. Therefore, three more equations are required.
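As a numerical sanity check on Equations (14)-(18), the sketch below builds the 2×2 [B] matrix for the H2O/H+/ZrP system with ZrP stationary, inverts it, and compares one entry against the closed form in (18). The diffusivities are the Table 1 values (with Đ_H2O-ZrP set equal to Đ_H+-ZrP, as discussed in Section 2.3); the electrolyte composition is an illustrative assumption, and the ZrP friction term is weighted by 1 (rather than a ZrP mole fraction), which is the reading under which the inverse matches the closed forms in (18) given x_H2O + x_H+ = 1.

```python
# Sketch: invert the GMS coefficient matrix, Equations (14)-(18).
import numpy as np

D_wp = 2.9e-10   # water-proton GMS diffusivity, m^2/s (Table 1)
D_pz = 3.1e-12   # proton-ZrP diffusivity, m^2/s (Table 1)
D_wz = 3.1e-12   # water-ZrP, set equal to D_pz (electroosmotic drag argument)

x_w, x_p = 0.9, 0.1   # assumed mole fractions, x_H2O + x_H+ = 1 (Eq. (21))

# Equation (15) for the two mobile species, with a unit-weight ZrP term:
B = np.array([[x_p / D_wp + 1.0 / D_wz, -x_w / D_wp],
              [-x_p / D_wp,              x_w / D_wp + 1.0 / D_pz]])
Bp = np.linalg.inv(B)   # Equations (16)-(17): [B'] = [B]^(-1)

# Closed form of B'_{H2O-H2O} from Equation (18), for comparison:
bp_ww = (x_w * D_pz + D_wp) / (x_p + (D_pz / D_wz) * x_w + D_wp / D_wz)
print(f"inverted: {Bp[0, 0]:.4e}   closed form (18): {bp_ww:.4e}")  # should agree
```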
ZrP is immobile. As a result, the diffusion phenomenon will effectively be the interchange of the H+ and H2O species. Therefore, for diffusion purposes we will only consider the domain of the mobile species, H+ and H2O, and will ignore the immobile species, ZrP. On that basis, (21) can be used as a third equation. Nevertheless, the presence of ZrP is important because of its interaction with the mobile species. Specifically, the values of the B′ coefficients for H+ and H2O were influenced by the presence of ZrP:

$$x_{H_2O} + x_{H^+} = 1.0. \tag{21}$$

The differential equations for H2O and H+ mass conservation in the electrolyte phase can be expressed in molar units as

$$\nabla \cdot \vec{J}_{H_2O} = -\frac{j}{zF}, \qquad \nabla \cdot \vec{J}_{H^+} = \frac{j}{zF}, \tag{22}$$

where j is the volumetric current production. This quantity, which appears in (1), (3) to (5), and (22), is the rate of production of protons in the anode. Therefore, it is positive in the anode, $j_A$, and negative in the cathode, $j_C$. It was calculated using the Butler-Volmer equation for the anode and cathode, (6) and (9), respectively. The exchange current densities at the anode and cathode are functions of the reactants' partial pressures and the operating temperature, as shown in (7) and (10). The Butler-Volmer equation and its parameters for both propane oxidation and oxygen reduction were described in our previous communication [10]. Complete conversion of C3H8 to CO2 was reported in experiments by Grubb and Michalske [34]. Equations (19) to (22) were combined and are shown as (4) and (5). ### 2.2. Numerical Procedure The numerical solution procedure is illustrated in Figure 3. Equations (1)–(11) define the problem at steady state. However, a time derivative was appended to each partial differential equation, and a backward Euler time stepping method was used to increase stability while converging to the steady-state solution. The Finite Element Method was used to discretize the partial differential equations in space, with all dependent variables discretized by linear finite elements except for the pressure, which is taken as quadratic. Figure 3 Modeling procedure. FreeFEM++ software was used to solve the two-dimensional partial differential equations (1)–(11). It is open-source software based on the Finite Element Method, developed by Hecht et al. [32]. The calculated results from FreeFEM++ were exported to the ParaView visualization software [35] for postprocessing. ParaView is also open-source software. There is no proton loss through the exterior boundaries of the domain (Figure 2). Therefore, the total rate of proton production in the anode, $\int_{Anode} j\, dV$, has to be equal to the total rate of proton consumption in the cathode, $\int_{Cathode} (-j)\, dV$. In each case, the electrical potential of the catalyst phase of the anode, $\phi_{Pt}^A$, and that of the cathode, $\phi_{Pt}^C$, had individual constant values. Then all the variables in the whole domain were calculated. However, having fixed electrical potentials of the anode and cathode catalyst phases does not guarantee that the proton production at the anode will equal the proton consumption at the cathode. The difference between the rates of proton production and consumption can be minimized by shifting $\phi_{ELY}$ by a constant value, because the production and consumption rates are functions of the electrical potential in both of their respective catalyst phases, $\phi_{Pt}^A$ and $\phi_{Pt}^C$, and in the electrolyte phase, $\phi_{ELY}$. Therefore, the Newton method was used to force equal proton production and consumption. In other words, balancing $\int_{Anode} j\, dV$ and $\int_{Cathode} (-j)\, dV$ acts as a constraint for the conservation of protons in the electrolyte phase.
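A minimal sketch of this balancing step is given below, with the cell reduced to two lumped Butler-Volmer rates so that the Newton iteration on the constant shift of φ_ELY is easy to follow. All numerical values (overpotentials, rate prefactors) are illustrative assumptions; the actual model evaluates the two integrals over the FreeFEM++ solution.

```python
# Sketch: Newton iteration on a constant shift of phi_ELY that balances
# total proton production (anode) against consumption (cathode).
import math

F, R, T = 96485.0, 8.314, 423.0  # C/mol, J/(mol K), K

def production(shift):   # lumped anode rate; eta_A falls as phi_ELY rises
    eta = 0.25 - shift
    return 1e-6 * (math.exp(F * eta / (R * T)) - math.exp(-F * eta / (R * T)))

def consumption(shift):  # lumped cathode rate; eta_C rises with the shift
    eta = 0.30 + shift
    return 1e-6 * (math.exp(F * eta / (R * T)) - math.exp(-F * eta / (R * T)))

def residual(shift):     # vanishes when production equals consumption
    return production(shift) - consumption(shift)

shift, h = 0.0, 1e-7
for _ in range(50):
    r = residual(shift)
    if abs(r) < 1e-12:
        break
    slope = (residual(shift + h) - r) / h   # numerical derivative
    shift -= r / slope
print(f"phi_ELY shift = {shift * 1000:.2f} mV, residual = {residual(shift):.2e}")
```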
The equations for the conservation of momentum, total mass, and individual species in the gas phase of the anode and cathode were solved by assuming there was no species crossover through the membrane. The electrical potential and the proton and water concentrations in the electrolyte phase of the anode, cathode, and membrane layers are coupled to each other. These variables were calculated by solving (4), (5), and (21) iteratively in each layer. Then, the Robin method [10] was used to couple the solutions between layers. In the Robin method, both of the following transfer conditions are progressively satisfied on the anode catalyst/membrane interface and the membrane/cathode catalyst interface through iterations: (a) the continuity of the variable (e.g., potential) and (b) the continuity of the flux (e.g., electrical current). Figure 2 shows four types of boundary conditions for the modeling domain, that is, the inlet, the outlet, the wall of the land, and the midchannel symmetry boundaries. The flux of species in the gas phase is zero at the walls because there is no transfer through walls. The zero flux condition also holds at the midchannel symmetry boundaries. The compositions of the gaseous species are known at the inlets of the anode and cathode catalyst layers. It was assumed that no change in the composition of the gas mixture occurred after leaving the catalyst bed. Therefore, the composition gradients are zero in the direction normal to the catalyst layer at the outlet boundaries. The zero flux condition is applied at all exterior boundaries for the species in the electrolyte phase. ### 2.3. Input Parameters The parameters used for the simulations are shown in Table 1. The GMS diffusivities, $Đ_{ij}$, which are used in (18), have to be calculated from the Fickian diffusion coefficients, $D_{ij}$. For ideal solutions, the Fickian diffusivities $D_{ij}$ can be used as $Đ_{ij}$ in the Stefan-Maxwell equations [26] because the concentration dependence of the Fickian diffusion coefficients is ignored. Experimental values for $D_{\mathrm{H^+\text{-}ZrP}}$ and $D_{\mathrm{H_2O\text{-}H^+}}$ are given in Table 1. Note that the diffusivity of protons in ZrP is approximately two orders of magnitude smaller than the diffusivity of protons in water. The movement of protons causes the electroosmotic flow of water [9]. It was assumed that one water molecule is dragged by each proton, H3O+, that travels from the anode to the cathode. Therefore, the diffusivity of water in ZrP was set equal to the diffusivity of protons in ZrP [36], the smaller of the two proton diffusivities in Table 1. Proton diffusivity and proton mobility are different quantities. The three diffusivities in Table 1 were the ones used to calculate the B′ parameters in (18). Table 1 Operational, electrochemical, and design parameters for simulations.
| Property | Value |
| --- | --- |
| Temperature, T | 423–503 K |
| Pressure, P | 101.3 kPa |
| Proton–ZrP diffusivity, D(H+–ZrP) | 3.1 × 10−12 m2 s−1 [29] |
| Proton–water diffusivity, D(H2O–H+) | 2.9 × 10−10 m2 s−1 [12] |
| Ionic conductivity in membrane, σ(ZrP/PTFE) | 5.0 S m−1 [24] |
| Electrical resistivity in membrane, R(PTFE) | 1.0 × 1016 Ω m |
| Charge transfer coefficients, αA and αC | 1.0 [30] |
| Equilibrium potential of catalyst phase at the anode, φPt(A, EQ) | 0.136 V [1] |
| Equilibrium potential of catalyst phase at the cathode, φPt(C, EQ) | 1.229 V |
| Equilibrium potential of electrolyte phase, φELY(EQ) | 0.136 V |
| Apparent bulk density of carbon catalyst support, ρCAT | 0.259 g mL−1 |
| Specific surface area of carbon catalyst support in the anode and cathode, ACAT | 255 m2 g−1 |
| Gas phase volume fraction in anode and cathode, εG | 0.5 |
| Electrolyte phase volume fraction in anode and cathode, εELY | 0.4 |
| Effective particle diameter in anode and cathode, Dp | 5 μm |
| Land width, LW | 2–8 mm |
| Anode and cathode thickness, ThA, ThC | 200–400 μm |
| Membrane thickness, ThM | 100–200 μm |
| Fluid channel width in bipolar plates | 0.4 mm |

### 2.4. Model Validation The model predicts the performance of a DPFC that (i) has interdigitated flow fields, (ii) has zirconium phosphate as the electrolyte, and (iii) operates over a temperature range of 150–230°C. As there are no experimental data for DPFCs having zirconium phosphate electrolytes and interdigitated flow fields, the model results have been compared to published results for DPFCs with other types of electrolytes and flow fields. Figure 4 compares the modeling results for the zirconium phosphate electrolyte with the experimental data for other types of electrolytes [34, 37]. The figure shows that the polarization curve for the ZrP-PTFE electrolyte is somewhat comparable to those for the other electrolytes. The difference between the polarization curves can be partially explained by the difference between the conductivities of the electrolytes. The proton conductivity of a nonmodified Nafion 117 approaches 10 S m−1 at 80°C [38]. The conductivity of the 95% H3PO4 electrolyte is 35 S m−1 at 200°C [39]. However, the proton conductivity of the best ZrP-PTFE that has been developed in our laboratory is about 5 S m−1 at 150°C. Figure 4 Polarization curves of a direct propane/oxygen fuel cell using Pt anode and cathode. (a) Experimental results [31] using Nafion 117 at 95°C. (b) Experimental results [32] using 95% H3PO4 at 200°C. (c) The present proton migration and diffusion model results for a solid ZrP-PTFE electrolyte at 150°C.
The equations for the conservation of momentum, total mass, and individual species in the gas phase of the anode and cathode were solved by assuming that there was no species crossover through the membrane. The electrical potential and the proton and water concentrations in the electrolyte phase of the anode, cathode, and membrane layers are coupled to each other. These variables were calculated by solving (4), (5), and (21) iteratively in each layer. Then, the Robin method [10] was used to couple the solutions between layers. In the Robin method, both of the following transfer conditions are progressively satisfied at the anode catalyst/membrane interface and the membrane/cathode catalyst interface through iteration: (a) continuity of the variable (e.g., potential) and (b) continuity of the flux (e.g., electrical current).
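A schematic sketch of such a Robin iteration for a single interface is given below. The per-layer solvers are hypothetical stand-ins for the layer subproblems; each is assumed to accept a Robin datum g = λu + q on the shared interface (q being the outward flux) and to return its interface trace and outward flux. At convergence the two continuity conditions hold: u_left = u_right and q_left = −q_right.

```python
def robin_couple(solve_left, solve_right, lam=1.0, tol=1e-8, max_it=200):
    """Schematic Robin coupling of two adjacent layers across one interface.

    `solve_left(g)` / `solve_right(g)` are hypothetical per-layer solvers
    that impose lam*u + q = g on the shared interface and return the
    interface trace u and outward flux q.
    """
    g_left = g_right = 0.0
    for _ in range(max_it):
        u_l, q_l = solve_left(g_left)
        u_r, q_r = solve_right(g_right)
        if abs(u_l - u_r) < tol and abs(q_l + q_r) < tol:
            return u_l, q_l                  # variable and flux are both continuous
        # Each layer receives the neighbour's combination lam*u - q; at the
        # fixed point this forces u_l = u_r and q_l = -q_r simultaneously.
        g_left, g_right = lam * u_r - q_r, lam * u_l - q_l
    raise RuntimeError("Robin iteration did not converge")
```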
Figure 2 shows the four types of boundary conditions for the modeling domain: inlet, outlet, wall of the land, and midchannel symmetry boundaries. The flux of species in the gas phase is zero at the walls because there is no transfer through the walls. The zero-flux condition also applies at the midchannel symmetry boundaries. The compositions of the gaseous species are known at the inlets of the anode and cathode catalyst layers. It was assumed that no change in the composition of the gas mixture occurred after leaving the catalyst bed; therefore, the composition gradients are zero in the direction normal to the catalyst layer at the outlet boundaries. The zero-flux condition is applied at all exterior boundaries for the species in the electrolyte phase.

## 2.3. Input Parameters

The parameters used for the simulations are shown in Table 1. The GMS diffusivities, Ð_ij, used in (18) have to be calculated from the Fickian diffusion coefficients, D_ij. For ideal solutions, the Fickian coefficients D_ij can be used as Ð_ij in the Stefan-Maxwell equations [26] because the concentration dependence of the Fickian diffusion coefficients is ignored. Experimental values for D_H+–ZrP and D_H2O–H+ are given in Table 1. Note that the diffusivity of protons in ZrP is approximately two orders of magnitude smaller than the diffusivity of protons in water. The movement of protons causes the electroosmotic flow of water [9]. It was assumed that one water molecule is dragged by each proton, as H3O+, that travels from the anode to the cathode. Therefore, the diffusivity of water in ZrP was set equal to the diffusivity of protons in ZrP [36], the smaller of the two proton diffusivities in Table 1. (Proton diffusivity and proton mobility are different quantities.) The three diffusivities in Table 1 were used to calculate the B′ parameters in (18).

Table 1. Operational, electrochemical, and design parameters for simulations.

| Property | Value |
| --- | --- |
| Temperature, T | 423–503 K |
| Pressure, P | 101.3 kPa |
| Proton–ZrP diffusivity, D_H+–ZrP | 3.1 × 10⁻¹² m² s⁻¹ [29] |
| Proton–water diffusivity, D_H2O–H+ | 2.9 × 10⁻¹⁰ m² s⁻¹ [12] |
| Ionic conductivity in membrane, σ_ZrP/PTFE | 5.0 S m⁻¹ [24] |
| Electrical resistivity in membrane, R_PTFE | 1.0 × 10¹⁶ Ω m |
| Charge transfer coefficients, α_A and α_C | 1.0 [30] |
| Equilibrium potential of catalyst phase at the anode, ϕ_Pt^A,EQ | 0.136 V [1] |
| Equilibrium potential of catalyst phase at the cathode, ϕ_Pt^C,EQ | 1.229 V |
| Equilibrium potential of electrolyte phase, ϕ_ELY^EQ | 0.136 V |
| Apparent bulk density of carbon catalyst support, ρ_CAT | 0.259 g_catalyst mL_catalyst⁻¹ |
| Specific surface area of carbon catalyst support in the anode and cathode, A_CAT | 255 m²_catalyst g_catalyst⁻¹ |
| Gas-phase volume fraction in anode and cathode, ε_G | 0.5 |
| Electrolyte-phase volume fraction in anode and cathode, ε_ELY | 0.4 |
| Effective particle diameter in anode and cathode, D_p | 5 μm |
| Land width, L_W | 2–8 mm |
| Anode and cathode thickness, Th_A, Th_C | 200–400 μm |
| Membrane thickness, Th_M | 100–200 μm |
| Fluid channel width in bipolar plates | 0.4 mm |

## 2.4. Model Validation

The model predicts the performance of a DPFC that (i) has interdigitated flow fields, (ii) has zirconium phosphate as the electrolyte, and (iii) operates over a temperature range of 150–230°C. As there are no experimental data for DPFCs having zirconium phosphate electrolytes and interdigitated flow fields, the model results have been compared to published results for DPFCs with other types of electrolytes and flow fields.

Figure 4 compares the modeling results for the zirconium phosphate electrolyte with experimental data for other types of electrolytes [34, 37]. The figure shows that the polarization curve for the ZrP-PTFE electrolyte is broadly comparable to those for the other electrolytes. The difference between the polarization curves can be partially explained by the difference between the conductivities of the electrolytes. The proton conductivity of nonmodified Nafion 117 approaches 10 S m⁻¹ at 80°C [38]. The conductivity of the 95% H3PO4 electrolyte is 35 S m⁻¹ at 200°C [39]. In comparison, the proton conductivity of the best ZrP-PTFE developed in our laboratory is about 5 S m⁻¹ at 150°C.

Figure 4. Polarization curves of direct propane/oxygen fuel cells using Pt anode and cathode. (a) Experimental results [31] using Nafion 117 at 95°C. (b) Experimental results [32] using 95% H3PO4 at 200°C. (c) The present proton migration and diffusion model results for a solid ZrP-PTFE electrolyte at 150°C.

## 3. Results and Discussion

Figure 5(a) shows the two-dimensional variation of the proton concentration in the electrolyte phase of the entire domain, that is, the anode catalyst layer (AN), the membrane layer (ML), and the cathode catalyst layer (CA). The proton concentration is highest at the anode inlet, close to the feed gas channel. This is expected because the propane partial pressure is highest at the anode inlet, which causes a higher propane oxidation rate according to the Butler-Volmer equation (6). Because protons are produced in the anode catalyst layer and consumed in the cathode catalyst layer, the proton concentration is greater in the anode than in the cathode. The resulting proton concentration gradient is the driving force for protons to diffuse from the anode to the cathode.

Figure 5. (a) Proton concentration in the electrolyte phase of the anode, membrane, and cathode layers. (b) Electrical potential profile for the electrolyte phase of the anode, membrane, and cathode layers. (c) Protonic flux from anode to cathode in the electrolyte phase. The vector lengths indicate the flux magnitude, which varies from 0 to 17 mA cm⁻² in this case.

The electrical potential variation in the electrolyte phase of the catalyst layers and membrane is shown in Figure 5(b). Because the reaction rate in the catalyst layers is not uniform, the current density and electrical potential vary with position. Figure 5(b) shows that the electrical potential is higher in the cathode electrolyte phase than in the anode electrolyte phase. That electrical potential gradient is a driving force for protons to migrate from the cathode to the anode. This proton migration (caused by the electrical potential gradient) is in the direction opposite to the proton diffusion (caused by the proton concentration gradient) discussed above. In reality, protons are known to be transported from the anode to the cathode. Therefore the dominant driving force is the proton concentration gradient, not the electrical potential gradient.
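To make the competition between these two driving forces concrete, the sketch below evaluates the two contributions to the proton flux in (20), the composition-gradient (diffusion) part and the potential-gradient (migration) part, at a single point. All numerical values are illustrative placeholders chosen only to reproduce the signs discussed above; they are not model results.

```python
F, R, T = 96485.0, 8.314, 423.0        # C mol^-1, J mol^-1 K^-1, K (150 degrees C)

# Illustrative local state and gradients along y (anode -> cathode); placeholders:
c_ELY      = 1.0e3     # total electrolyte concentration, mol m^-3
B_Hp_H2O   = 1.0e-11   # B'_{H+,H2O}, m^2 s^-1
B_Hp_Hp    = 5.0e-11   # B'_{H+,H+},  m^2 s^-1
x_Hp       = 0.3       # proton mole fraction
grad_x_H2O = +100.0    # d(x_H2O)/dy, m^-1  (water-rich toward the cathode)
grad_x_Hp  = -100.0    # d(x_H+)/dy,  m^-1  (proton-rich at the anode)
grad_phi   = +2.0      # d(phi_ELY)/dy, V m^-1 (potential higher at the cathode)

# Equation (20):
# J_H+ = -c B'_{H+,H2O} grad(x_H2O) - c B'_{H+,H+} [grad(x_H+) + F x_H+/(R T) grad(phi)]
diffusion = -c_ELY * (B_Hp_H2O * grad_x_H2O + B_Hp_Hp * grad_x_Hp)
migration = -c_ELY * B_Hp_Hp * (F * x_Hp / (R * T)) * grad_phi
print(f"diffusion part: {diffusion:+.2e} mol m^-2 s^-1")   # positive: anode -> cathode
print(f"migration part: {migration:+.2e} mol m^-2 s^-1")   # negative: cathode -> anode
print(f"net proton flux: {diffusion + migration:+.2e} mol m^-2 s^-1")
```

With these placeholder gradients the diffusive contribution outweighs the opposing migration contribution, which is the behaviour the model predicts in Figure 5(c).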
Figure 5(c) shows the magnitude and direction of the protonic flux in the electrolyte phase of the anode, cathode, and membrane layers. Protons are produced in the anode and travel through the membrane layer to the cathode, where they are consumed. As discussed above, the concentration driving force for the proton flux in Figure 5(a) points from the anode to the cathode, while the electrical potential driving force in Figure 5(b) points in the opposite direction, from the cathode to the anode. Figure 5(c) demonstrates that the net flux of protons is from the anode toward the cathode. Since the net flux is the sum of two contributions acting in opposite directions, one can again conclude that proton diffusion dominates proton migration. For the fuel cell to operate, the net transport of protons must be from the anode to the cathode; therefore, the rate of proton diffusion must exceed the rate of proton migration. Figure 5(c) also shows that the arrows lengthen (the proton flux increases) in the y-direction from the anode land/anode catalyst interface to the anode catalyst/membrane interface, as more protons are produced throughout the anode catalyst layer. Similarly, the arrows shorten (the proton flux decreases) in the y-direction from the membrane/cathode catalyst interface to the cathode catalyst/cathode land interface.

There are two routes by which electrons can flow from the anode to the cathode. The electron flux through the electrolyte is shown in Figure 6. The electron flow rate through the electrolyte is many orders of magnitude smaller than that through the external circuit. Although the vast majority of electrons flow through the external circuit, the production and consumption of the minuscule number of electrons that flow through the electrolyte have a distribution (Figure 6) that is similar to the distribution of protons (Figure 5(c)).

Figure 6. Electronic flux from anode to cathode in the electrolyte phase. The vector lengths indicate the flux magnitude, which varies from 0 to 1 × 10⁻¹¹ mA cm⁻² in the same case as in Figure 5(c).

It is instructive to compare this model (migration plus diffusion) with a migration-only model [10]. A cross section of Figure 5(b) along the y-direction at the middle of the domain (x = L_W/2) is shown in Figure 7(a), where the electrical potential of the electrolyte phase in the migration plus diffusion model (left axis in Figure 7(a), solid line) is compared with that in the two solid catalyst phases (right axis in Figure 7(a), dashed lines). The electrical potentials in each of the two solid catalyst phases (dashed lines) are almost constant throughout their layers because these phases have high electrical conductivities. The greater electrical potential at the cathode than at the anode (in both the catalyst phases and the electrolyte phase) provides a driving force that (a) pushes positively charged protons from the cathode to the anode via the electrolyte and (b) pushes negatively charged electrons from the anode to the cathode via both the external circuit (almost all of the electrons) and the electrolyte (a minuscule quantity of electrons).

Figure 7. Electrical potential profiles in the y-direction for the electrolyte and catalyst phases located at the middle of the domain in the x-direction, for the cathode and anode catalyst layers and the membrane layer. The arrows point toward the ordinate scale that applies to each of the three curves. (a) Proton migration plus diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].

The results of the migration plus diffusion model shown in Figure 7(a) correctly describe these phenomena. In contrast, the results from the migration-only model [10] are shown in Figure 7(b). Those calculations showed that the migration-only model produced incorrect results: the electrical potential gradient in the electrolyte has the wrong slope. The slope predicted by the migration-only model would incorrectly drive the positively charged protons in the electrolyte from the cathode to the anode; in reality, they move from the anode to the cathode.

Figure 8 compares the anodic and cathodic overpotentials for the two cases. The solid lines in Figure 8 are the results from the migration plus diffusion model; the dashed lines are the results from the migration-only model. The dashed lines (migration only) have a negative slope, whereas the solid lines (migration plus diffusion) have a positive slope. Since the overpotential is the electrochemical driving force for the reaction (see (6) and (9)), it will always have its largest value adjacent to the anode land and decrease toward the membrane. In summary, the migration plus diffusion model predicted the correct behaviour, while the migration-only model predictions were incorrect.

Figure 8. Overpotential profiles in the anode and cathode along the y-axis at the middle of the modeling domain. Solid lines: migration plus diffusion. Dashed lines: migration only [5].

Figure 9 shows the propane mole fraction in the gas phase of the anode catalyst layer along the x-direction. For similar operating conditions, the migration plus diffusion model predicted different propane concentrations than the migration-only model. This difference is caused by the different overpotential profiles predicted by the two models, shown in Figure 8. Those overpotential differences are small, but they enter exponential terms, as shown in (6) and (9), and it is the exponential terms that cause the large differences in concentration shown in Figure 9. If proton diffusion in the electrolyte phase is ignored, the predicted species distribution within the gas phase of the catalyst layers becomes incorrect. In other words, the migration-only model cannot correctly calculate either the proton concentration in the electrolyte phase or the propane concentration in the gas phase.

Figure 9. Propane mole fraction in the gas phase of the anode catalyst layer along the x-direction at the middle of the anode catalyst layer. (a) Proton migration plus diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].
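As a rough worked example of that exponential sensitivity (the overpotential shift used here is illustrative, not model output), the factor by which the anodic exponential in (6) changes for a small overpotential difference can be computed directly:

```python
import math

F, R, T = 96485.0, 8.314, 423.0   # C mol^-1, J mol^-1 K^-1, K (150 degrees C)
alpha_A = 1.0                     # anodic charge transfer coefficient (Table 1)

d_eta = 0.025                     # an illustrative 25 mV overpotential difference
factor = math.exp(alpha_A * F * d_eta / (R * T))
print(f"rate ratio for a 25 mV shift: {factor:.2f}")  # about 2
```

So even a 25 mV difference between the two overpotential profiles roughly doubles the local reaction rate, which is why the small differences in Figure 8 produce the large concentration differences in Figure 9.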
In Figure 10, the polarization curves for the migration plus diffusion model are compared with those of the migration-only model. At a given cell potential, the cell current density predicted by the migration plus diffusion model is lower than that of the migration-only model, because the steady-state concentrations appear in the expressions for the exchange current densities, (7) and (10). This deviation may appear small at some conditions; in Figure 10, at a cell potential of 0.4 V, the migration plus diffusion model predicts a current density near 50 mA cm⁻², whereas the migration-only model predicts nearly 70 mA cm⁻². A difference of that magnitude means one cannot conclude that a reasonable prediction of the overall fuel cell performance can be obtained from simple models that ignore proton diffusion in the electrolyte. In addition, there are other phenomena for which the migration-only model predicts results that are completely erroneous.

Figure 10. Modeling results for polarization curves of direct propane/oxygen fuel cells using a solid ZrP-PTFE electrolyte at 150°C. (a) Proton migration and diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].

It would be desirable to extend the polarization curves in Figure 10 to greater current densities and smaller cell potentials. Many attempts to obtain such a wider range of values were made, but all were unsuccessful: as the current density increased, convergence to an acceptable numerical solution became progressively more difficult, and convergence was not obtained at current densities greater than those shown in Figure 10. The difficulty is caused by the exponential nature of the Butler-Volmer equation in combination with the complex Generalized Maxwell-Stefan equations; small changes in cell potential cause the current density calculated from the Butler-Volmer equation to vary enormously. The search for superior convergence techniques is being actively pursued in our laboratory.

Activation overpotential and ohmic polarization are the major sources of potential drop in a direct propane fuel cell. Any change in the operating conditions or cell design that decreases the activation overpotential or the ohmic polarization will improve the cell performance. Figure 11 shows the performance of a DPFC predicted by the model at different operating temperatures, together with the performance of a hydrogen PEM fuel cell at 80°C [40] and that of a DPFC at 200°C having a phosphoric acid electrolyte [34]. As the temperature is increased from 150°C to 230°C, the rate of reaction increases according to (7) and (10); the sketch below illustrates this temperature dependence of the exchange current density. This leads to a decrease in the overpotential term in the Butler-Volmer equation and a major improvement in cell performance. It can be concluded that the predicted performance of a DPFC operating at 230°C can approach that of a hydrogen PEMFC at 80°C when both operate at current densities below 40 mA cm⁻².

Figure 11. (a), (b), and (c) Predicted polarization curves for a direct propane/oxygen fuel cell at different operating temperatures; (d) experimental data for a typical hydrogen/oxygen PEMFC [33]; (e) experimental data for the best-performing DPFC at 200°C [32].
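The sketch below evaluates the temperature factor in (7) and (10). The activation energy is not given in this section, so the value used here is purely an assumption for illustration; the actual kinetic parameters are those of [10].

```python
import math

R, T_ref = 8.314, 423.0           # J mol^-1 K^-1; reference temperature (150 degrees C)

def j0_ratio(dG_act, T):
    """Temperature factor of the exchange current density in (7)/(10):
    j0(T)/j0(T_ref) at fixed reactant partial pressure."""
    return math.exp(dG_act / R * (1.0 / T_ref - 1.0 / T))

dG_act = 100e3                    # J mol^-1: assumed activation barrier, illustration only
for T in (423.0, 473.0, 503.0):   # 150, 200, 230 degrees C
    print(f"T = {T:.0f} K: j0(T)/j0(423 K) = {j0_ratio(dG_act, T):.1f}")
```

With this assumed barrier, raising the temperature from 150°C to 230°C increases the exchange current density by roughly two orders of magnitude, consistent with the strong temperature effect seen in Figure 11.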
## 4. Conclusions

The migration plus diffusion model described in this work was shown to be superior to the migration-only model that is used in many fuel cell modeling studies. Specifically, the migration-only model predicted erroneous values for the electrical potential in the electrolyte: the gradient of the electrolyte electrical potential was in the wrong direction. The incorrect electrical potential values caused the overpotential values to be incorrect, and the incorrect overpotential values in turn caused the calculated propane concentrations to be incorrect. This work has shown that the predicted steady-state current density and steady-state propane concentration become substantially different when the effect of proton diffusion in the electrolyte is included in the model. The migration plus diffusion model described here is therefore a major improvement over the migration-only model used in earlier studies.

Many important phenomena that occur in fuel cells are not described by polarization curves. Meaningful values for variables internal to the fuel cell, for example, the overpotential and the reactant concentrations, are essential for understanding fuel cell performance. At some operating conditions, variables external to the fuel cell, for example, the current density and the exit concentration of propane, are substantially different when proton diffusion in the electrolyte is included in the model. The insight obtained using the migration plus diffusion model is far more useful than that obtained from the migration-only model.

---
*Source: 102313-2015-12-21.xml*
--- ## Abstract A rigorous mathematical model for direct propane fuel cells (DPFCs) was developed. Compared to previous models, it provides better values for the current density and the propane concentration at the exit from the anode. This is the first DPFC model to correctly account for proton transport based on the combination of the chemical potential gradient and the electrical potential gradient. The force per unit charge from the chemical potential gradient (concentration gradient) that pushes protons from the anode to the cathode is greater than that from the electrical potential gradient that pushes them in the opposite direction. By including the chemical potential gradient, we learn that the proton concentration gradient is really much different than that predicted using the previous models that neglected the chemical potential gradient. Also inclusion of the chemical potential gradient made this model the first one having an overpotential gradient (calculated from the electrical potential gradient) with the correct slope. That is important because the overpotential is exponentially related to the reaction rate (current density). The model described here provides a relationship between the conditions inside the fuel cell (proton concentration, overpotential) and its performance as measured externally by current density and propane concentration. --- ## Body ## 1. Introduction The focus of this study is on the direct propane fuel cell (DPFC), which belongs to the polymer membrane electrolyte fuel cell (PEMFC) family but consumes propane instead of hydrogen as its feedstock. The generation of electrical energy in rural areas is our primary target application for DPFCs. The cost of delivering electrical energy to rural areas is substantially greater than to urban areas, because longer transmission lines are required to serve a comparatively small number of customers. Therefore more costly fuel cells can be justified for use in rural areas compared to urban areas. In addition, the infrastructure for delivering liquefied petroleum gas (LPG) or propane to rural areas already exists. Two major advantages of DPFCs over hydrogen fuel cells are that the expense of hydrogen production plants and of hydrogen transport/storage will be eliminated from the fuel cell energy production cycle. However, a drawback associated with DPFCs is that the propane reaction rate is much slower than that of hydrogen. Liebhafsky and Cairns [1], Bockris and Srinivasan [2], and Cairns [3] reviewed the majority of the DPFC experimental research that had been done in the 1960s. Only a little work has been performed since then. Polarization curves were reported both by Cheng et al. [4] and by Savadogo and Rodriguez Varela [5] for low temperature PEMFCs. Intermediate temperature (100–300°C) proton conducting fuel cells were investigated by Heo et al. [6]. Solid oxide fuel cell (SOFC) studies were performed with propane at 550–650°C by Feng et al. [7] and at 900°C by Yang et al. [8].The approaches to the modeling of fuel cells are summarized below. Weber and Newman [9] have reviewed four groups of fuel cell models that consider transport of water and protons in the electrolyte phase: simple models, diffusive models, hydraulic models, and combination models. The simple models [10–13] describe proton transfer using Ohm’s law with a constant ionic conductivity. These models cannot predict phenomena such as membrane dehydration, in which water content and thus ionic conductivity are variables. 
For water movement, a numerical value of the net water flux has to be determined as the boundary condition at the interface between the catalyst layers and the membrane.The diffusive models [14–19] predict the movement of dissolved water and protons within the membrane as a result of concentration and electrical potential gradients. They are applicable for the electrolyte systems with low water content (λ < 14, where λ is moles of water per mole of sulfonic acid sites in the Nafion membrane) where liquid water does not exist. The diffusive models are referred to as single phase models of membranes and can predict proton distribution in the electrolyte phase and membrane dehydration.At high water contents, membrane pores are completely filled with liquid water, and the water content is assumed to be uniform everywhere. Therefore, water diffusion does not occur and the convection mechanism causes proton and water transport. The hydraulic models [20–23] were developed for membranes with high water content. Two phases, liquid water and membrane, are described by the hydraulic models. Water velocity is calculated by Schlögl’s equation [23] which is a function of electrical potential gradient and pressure gradient. Finally, the hydraulic and diffusive models are merged in the combination models [24–26] when calculations covering the whole range of water content are desirable. This approach considers concentration and pressure gradients as driving forces for water and proton transport.There are two possible approaches to dealing with the transport properties in the diffusive models, that is, dilute solution theory and concentrated solution theory [27]. Mass transport in dilute electrolyte systems is usually described by the Nernst-Planck equation [27] in which the flux of a charged species is a function of the concentration gradient of that species as well as the electrical potential gradient. For a noncharged species, the potential gradient term in the Nernst-Planck equation disappears. The membrane transport properties are not required to be constant in this approach.Employing concentrated solution theory leads to rigorous models that consider the interactions between all species. Krishna [19] used Generalized Maxwell-Stefan (GMS) equations to implement this approach for multicomponent electrolyte systems in general. Wöhr et al. [17] also used Maxwell-Stefan (MS) equations to model proton and water transport in PEM fuel cells in which the MS diffusion coefficients are modified as a function of temperature and humidity. Fuller and Newman [28] used the electrochemical potential of each species as the driving force in the MS equations. Fimrite et al. [16] developed a transport model for water and protons based on the binary friction model. The mole fraction and potential gradients were considered in the electrochemical potential gradient expression. Baschuk and Li [15] also used MS equation but they calculated the MS diffusion coefficients based on experimental data available in the literature. Then, they validated those coefficients with experimental data for the electroosmotic drag coefficient.A diffusive model has been developed in the present study to investigate the movement of water and protons in the electrolyte phase of a DPFC where the operation temperature is above the boiling point of water. One possible strategy for increasing the reaction rate in DPFC is to operate at temperatures of 150°C or higher. 
A membrane that can resist high temperature and show acceptable conductivity (5.0 S m−1) has been developed in our research group [29]. This membrane is composed of porous polytetrafluoroethylene (PTFE) that contains zirconium phosphate (Z r H P O 4 2 · H 2 O or ZrP) in its pores. ZrP is a known proton conductor [30]. Concentrated solution theory was used in which the binary interactions between water, protons, and ZrP species were described.We are developing mathematical models of DPFCs in order to understand this phenomenon and hopefully to enhance their performance. The results reported here are major improvements over our previous model [10]. Our previous model, like the vast majority of fuel cell models, only used an electrical potential gradient to describe migration and neglected the proton concentration gradient in accounting for proton transport through the electrolyte layer. As we noted previously [10], neglect of the proton concentration gradient caused the overpotential gradient and the electrical potential gradient in the electrolyte phase to be incorrect. The model being described here, unlike the majority of fuel cell models, includes both a valid electrical potential gradient and a proton concentration gradient to account for proton transport by a combination of migration and diffusion. This model accounts for the influence of the proton concentration in the electrolyte phase and thereby overcomes the deficiencies mentioned above. ## 2. Model Development This model solved the governing equations for the Membrane Electrode Assembly (MEA), consisting of the membrane layer, anode layer, and cathode layer. A schematic of a typical DPFC is shown in Figure1. The cell is composed of two bipolar plates, two catalyst layers, and a membrane layer. Each bipolar plate has two sets of channels: one for reactants and one for products. The channels are connected to each other through the catalyst layer. Figure 1 shows these channels for the anode bipolar plate. The interdigitated flow fields show a symmetric geometry with repetitive pieces. In order to increase the computational speed, only one of these pieces was considered as the modeling domain. Therefore, the modeling domain can be defined as the part of the MEA that is located between the middle of a feed channel and the middle of its adjacent product channel (cross section in Figure 1). That cross section is shown in Figure 2 as the modeling domain. Its boundaries are shown as a dashed black line.Figure 1 A direct propane fuel cell with interdigitated flow field.Figure 2 Boundaries in the modeling domain.Previously, it was shown that neglecting proton diffusion in the proton conservation equation (an assumption used in many fuel cell models) led to incorrect results for the electrolyte potential and overpotential profiles even though the polarization curve was predicted correctly [10]. The present model includes both proton diffusion and migration. ### 2.1. Governing Equations Three phases are present in the anode and cathode catalyst layers. They are the “gas phase” containing reactants and products, the “solid catalyst phase” containing the carbon support and platinum, and the “solid electrolyte phase.” The latter consists of a stationary ZrPm a t r i x = [ Z r H P O 4 2 · H 2 O ] containing mobile H2O = [Zr(HPO4)2  ·  2H2O] and mobile H + = [ Z r H P O 4 2 · H 3 O + ] species that can be transported. 
The membrane layer contains the ZrP electrolyte phase as well as PTFE.Conservation equations for momentum, total mass, and mass of noncharged species were solved for the gas phase in each of the catalyst layers. A list of equations that were used for the gas phase of both anode and cathode catalyst layers is shown as follows:Conservation of mass in gas phase:(1) ∇ · ε G ρ G u → + ∑ i n ν i M W i j z F = 0 , wherei = C3H8, H2O, and CO2 for the anode and O2 and H2O for the cathode. Conservation of momentum in gas phase:(2) - ∇ P = 150 μ G 1 - ε G 2 D p 2 ε G 3 u → . Conservation of noncharged species in gas phase:(3) ∇ · ε G c G u → y i - ∇ · ε G c G D i ∇ y i + ν i j z F = 0 , wherei = C3H8 and CO2 for the anode and O2 and H2O for the cathode. Conservation of species in the electrolyte phase: for water,(4) - ∇ · c E L Y B H 2 O – H 2 O ′ - B H 2 O – H + ′ ∇ x H + + ∇ · c E L Y B H 2 O – H + ′ F x H + R T ∇ ϕ E L Y - j z F = 0 ; for proton,(5) ∇ · c E L Y B H + – H + ′ - B H + – H 2 O ′ ∇ x H + + ∇ · c E L Y B H + – H + ′ F x H + R T ∇ ϕ E L Y + j z F = 0 . Butler-Volmer equation in the anode:(6) j A = j A 0 A P t exp ⁡ α A F η A R T - exp ⁡ - α C F η A R T , where(7) j A 0 = j C 3 O x 0 r e f p C 3 p C 3 r e f exp ⁡ Δ G C 3 O x ‡ R 1 T r e f - 1 T , (8) η A = Δ ϕ A - Δ ϕ A E Q = ϕ P t A - ϕ E L Y A - ϕ P t A E Q - ϕ E L Y E Q . Butler-Volmer equation in the cathode:(9) j C = j C 0 A P t exp ⁡ α A F η C R T - exp ⁡ - α C F η C R T , where(10) j C 0 = j O 2 R d 0 r e f p O 2 p O 2 r e f exp ⁡ Δ G O 2 R d ‡ R 1 T r e f - 1 T , (11) η C = Δ ϕ C - Δ ϕ C E Q = ϕ P t C - ϕ E L Y C - ϕ P t C E Q - ϕ E L Y E Q .Equation (1) describes the total mass conservation in the gas phase of the catalyst layers. The second term in this equation is the sink or source term describing the mass consumption or production in the gas phase caused by electrochemical reactions. Equation (2) is the linear form of the Ergun equation. It was used to calculate the pressure profiles in the gas phase of the catalyst layers because they are packed beds. At the conditions used in this study, the magnitude of the quadratic velocity term in the Ergun equation was much smaller than the linear term. Hence only the linear term in velocity was used in (2). Equations (1) and (2) were solved together to calculate the velocity and pressure profiles in the gas phase of the catalyst layers. Mass balances for each of the individual gas phase species account for convection, diffusion, and reaction, as shown in (3).Equations (4) and (5) describe, respectively, water and proton conservation in the electrolyte phase of the membrane and catalyst layers. Diffusion was described by concentrated solution theory through the use of the GMS equations. The following paragraphs illustrate the derivation of (4) and (5).A general procedure for the calculation of mass fluxes in multicomponent electrolyte systems was presented by Krishna [19]. It has been proven that the Nernst-Planck equation is a limiting case of the GMS equations. The GMS equations can be written as follows:(12) d → i = ∑ j = 1 j ≠ i n x i J → j - x j J → i c E L Y Ð i j i = 1,2 , … , n - 1 ,where d → i is a generalized driving force for mass transport of species i. Because the summation of the n driving forces is equal to zero due to the Gibbs-Duhem limitation [31], only n - 1 driving forces are independent. The equation to calculate the generalized driving force has been derived based on nonequilibrium thermodynamics [31]. 
A simplified expression for a solid stationary electrolyte (no convection term) [19] can be written as(13) d → i = ∇ x i + x i z i F R T ∇ ϕ E L Y .For a noncharged species such as water z i is equal to zero, and, according to (13), the concentration gradient will be the only driving force.The migration term in (13) was obtained by representing ion mobility by the Nernst-Einstein relation (D i = R T u i). This equation is applicable only at infinite dilution. However, it can be used in concentrated solutions if additional composition-dependent transport parameters, such as the B ′ parameters in (19), are used to calculate the flux of ions [27]. It will be shown in the following paragraphs that (18) represent the composition-dependent parameters.Equation (12) results in (n - 1) independent equations that can be written in matrix form for convenience:(14) c E L Y d → 1 ⋮ d → n - 1 = - B 1 1 ⋯ B 1 , n - 1 ⋮ ⋱ ⋮ B n - 1 , 1 ⋯ B n - 1 , n - 1 J → 1 ⋮ J → n - 1 ,where the elements of the matrix of inverted diffusion coefficients [B] are given by(15) B i i = ∑ j = 1 j ≠ i n x i Ð i j i = 1,2 , … , n - 1 , B i j = - x i Ð i j i = 1,2 , … , n - 1 ; i ≠ j .The fluxes of species, J → i, can be calculated from (16) which is the inversion of (14):(16) J → 1 ⋮ J → n - 1 = - c E L Y B 11 ⋯ B 1 , n - 1 ⋮ ⋱ ⋮ B n - 1,1 ⋯ B n - 1 , n - 1 - 1 d → 1 ⋮ d → n - 1 .For the present electrolyte system containing three species, mobile H2O and H+ plus immobile solid ZrP, (16) may be written as(17) J → H 2 O J → H + = - c E L Y B H 2 O – H 2 O ′ B H 2 O – H + ′ B H + – H 2 O ′ B H + – H + ′ d → H 2 O d → H + ,where [B ′] is the inverse of the matrix of inverted diffusion coefficients. Because Ð H 2 O – H + = Ð H + – H 2 O, the elements of [B ′] are calculated using (18) which are functions of the GMS diffusivities and the species mole fractions in the electrolyte phase:(18) B H 2 O – H 2 O ′ = x H 2 O Ð H + –ZrP + Ð H 2 O – H + x H + + Ð H + –ZrP / Ð H 2 O –ZrP x H 2 O + Ð H 2 O – H + / Ð H 2 O –ZrP , B H 2 O – H + ′ = x H 2 O Ð H + –ZrP x H + + Ð H + –ZrP / Ð H 2 O –ZrP x H 2 O + Ð H 2 O – H + / Ð H 2 O –ZrP , B H + – H 2 O ′ = x H + Ð H 2 O –ZrP x H 2 O + Ð H 2 O –ZrP / Ð H + –ZrP x H + + Ð H 2 O – H + / Ð H + –ZrP , B H + – H + ′ = x H + Ð H 2 O –ZrP + Ð H 2 O – H + x H 2 O + Ð H 2 O –ZrP / Ð H + –ZrP x H + + Ð H 2 O – H + / Ð H + –ZrP .Combining sets of (17) and (13) results in two independent equations that can be used to calculate the fluxes of mobile species (J → H 2 O and J → H ↑ + ↓ ↑) within the electrolyte phase:(19) J → H 2 O = - c E L Y B H 2 O – H 2 O ′ ∇ x H 2 O - c E L Y B H 2 O – H + ′ ∇ x H + + F x H + R T ∇ ϕ E L Y , (20) J → H + = - c E L Y B H + – H 2 O ′ ∇ x H 2 O - c E L Y B H + – H + ′ ∇ x H + + F x H + R T ∇ ϕ E L Y .Equations (19) and (20) show that diffusion flux of each species is a function of the concentration gradient of all species as well as of the potential gradient. There are five unknowns in (19) and (20): J → H 2 O, J → H +, x H 2 O, x H +, and ϕ E L Y. Therefore, three more equations are required.ZrP is immobile. As a result, the diffusion phenomenon will effectively be the interchange of H+ and H2O species. Therefore, for diffusion purposes we will only consider the domain of the mobile species, H+ and H2O, and will ignore the immobile species, ZrP. On that basis, (21) can be used as a third equation. Nevertheless, the presence of ZrP is important because of its interaction with the mobile species. 
Specifically, the values of the B ′ coefficients for H+ and H2O were influenced by the presence of ZrP:(21) x H 2 O + x H + = 1.0 .The differential equations for H2O and H+ mass conservation in the electrolyte phase can be expressed in molar units as(22) ∇ · J → H 2 O = - j z F , ∇ · J → H + = j z F ,where j is the volumetric current production. This quantity which appears in (1), (3) to (5) and (22) is the rate of production of protons in the anode. Therefore, it is positive in the anode, j A, and negative in the cathode, j C. It was calculated using the Butler-Volmer equation for the anode and cathode, (6) and (9), respectively. Exchange current densities at the anode and cathode are a function of the reactants’ partial pressure and the operating temperature as shown in (7) and (10). The Butler-Volmer equation and its parameters for both propane oxidation and oxygen reduction were described in our previous communication [10]. Complete conversion of C3H8 to CO2 was reported in experiments by Grubb and Michalske [34]. Equations (19) to (22) were combined and are shown as (4) and (5). ### 2.2. Numerical Procedure The numerical solution procedure is illustrated in Figure3. Equations (1)–(11) define the problem at steady state. However, a time derivative was appended to each partial differential equation and a backward Euler time stepping method was used to increase stability while converging to the steady-state solution. The Finite Element Method was used to discretize the partial differential equations in space, with all dependent variables discretized by a linear finite element except for the pressure that is taken as a quadratic.Figure 3 Modeling procedure.FreeFEM++ software has been successfully used to solve two-dimensional partial differential equations (1)–(11). It is open-source software and is based on the Finite Element Method developed by Hecht et al. [32]. The calculated results from FreeFEM++  were exported to ParaView visualization software [35] for postprocessing. ParaView is also open-source software.There is no proton loss through the exterior boundaries of the domain (Figure2). Therefore, the total rate of proton production in the anode, ∫ A n o d e j d V, has to be equal to the total rate of proton consumption in the cathode, ∫ C a t h o d e ( - j ) d V. In each case, the electrical potential of the catalyst phase of the anode, ϕ P t A, and that of the cathode, ϕ P t C, had individual constant values. Then all the variables in the whole domain were calculated. However, having fixed electrical potentials of the anode and cathode catalyst phases does not guarantee that the proton production at the anode will equal the proton consumption at the cathode. The difference between the rate of proton production and consumption can be minimized by shifting ϕ E L Y by a constant value because the production and consumption rates are functions of the electrical potential in both of their respective catalyst phases, ϕ P t A and ϕ P t C, and in the electrolyte phase, ϕ E L Y. Therefore, the Newton method was used to force equal proton production and consumption. In other words, balancing ∫ A n o d e j d V and ∫ C a t h o d e ( - j ) d V acts as a constraint for the conservation of protons in the electrolyte phase.The equations for the conservation of momentum, total mass, and individual species in the gas phase of the anode and cathode were solved by assuming there was no species crossover through the membrane. 
Electrical potential, proton, and water concentrations in the electrolyte phase of the anode, cathode, and membrane layers were coupled to each other. These variables were calculated by solving (4), (5), and (21) iteratively in each layer. Then, the Robin method [10] was used to couple the solutions between layers. In the Robin method, both of the following transfer conditions are progressively satisfied on the anode catalyst/membrane interface and the membrane/cathode catalyst interface through iterations of (a) the continuity of the variable (e.g., potential) and (b) the continuity of the flux (e.g., electrical current).Figure2 shows four types of boundary conditions for the modeling domain, that is, inlet, outlet, wall of the land, and the midchannel symmetry boundaries. The flux of species in the gas phase is zero at the walls because there is no transfer through walls. The zero flux condition is also true at the midchannel symmetry boundaries. The compositions of the gaseous species are known at the inlet of the anode and cathode catalyst layers. It was assumed that no change in the composition of gas mixture occurred after leaving the catalyst bed. Therefore, the composition gradients are zero in the direction normal to the catalyst layer at the outlet boundaries. The zero flux condition is applied at all exterior boundaries for the species in the electrolyte phase. ### 2.3. Input Parameters The parameters used for the simulations are shown in Table1. The GMS diffusivities, Ð i j, which are used in (18) have to be calculated from the Fickian diffusion coefficients, D i j. For ideal solutions, the Fickian diffusion, D i j, can be used as Ð i j in the Stefan-Maxwell equations [26] because the concentration dependence of Fickian diffusion coefficients is ignored. Experimental values for D H + –ZrP and D H 2 O – H + are given in Table 1. Note that the diffusivity of protons in ZrP is approximately two orders of magnitude smaller than the diffusivity of protons in water. The movement of protons causes the electroosmotic flow of water [9]. It was assumed that one water molecule is dragged by each proton, H3O+, that travels from anode to cathode. Therefore, the diffusivity of water in ZrP was set equal to the diffusivity of protons in ZrP [36], the smaller of the two proton diffusivities in Table 1. Proton diffusivity and proton mobility are different quantities. The three diffusivities in Table 1 were the ones used to calculate the B ′ parameters in (18).Table 1 Operational, electrochemical, and design parameters for simulations. 
Property Value Temperature,T 423–503 K Pressure,P 101.3 k Pa Proton–ZrP diffusivity,D H + –ZrP 3.1 × 10−12 m2 s−1 [29] Proton–water diffusivity,D H 2 O– H + 2.9 × 10−10 m2 s−1 [12] Ionic conductivity in membrane,σ ZrP / PTFE 5.0 S m−1 [24] Electrical resistivity in membrane,R PTFE 1.0 × 1016 Ω m Charge transfer coefficients,α A and α C 1.0 [30] Equilibrium potential of catalyst phase at the anode,ϕ Pt A EQ 0.136 V [1] Equilibrium potential of catalyst phase at the cathode,ϕ Pt C EQ 1.229 V Equilibrium potential of electrolyte phase,ϕ ELY EQ 0.136 V Apparent bulk density of carbon catalyst support,ρ CAT ⁡ 0.259 gcatalyst mL catalyst - 1 Specific surface area of carbon catalyst support in the anode and cathode,A CAT ⁡ 255m catalyst 2 g catalyst - 1 Gas phase volume fraction in anode and cathode,ε G 0.5 Electrolyte phase volume fraction in anode and cathode,ε ELY 0.4 Effective particle diameter in anode and cathode,D p 5μm Land width,L W 2–8 mm Anode and cathode thickness, ThA, ThC 200–400μm Membrane thickness,T h M 100–200μm Fluid channels width in bipolar plates 0.4 mm ### 2.4. Model Validation The model predicts the performance of a DPFC that (i) has interdigitated flow fields, (ii) has zirconium phosphate as the electrolyte, and (iii) operates over a temperature range of 150–230°C. As there are no experimental data for DPFCs having zirconium phosphate electrolytes and interdigitated flow fields, the model results have been compared to published results for DPFCs with other types of electrolytes and flow fields.Figure4 compares the modeling results for zirconium phosphate electrolyte with the experimental data for other types of electrolytes [34, 37]. The figure shows that the polarization curve for ZrP-PTFE electrolyte is somewhat comparable to that for the other electrolytes. The difference between the polarization curves can be partially explained by the difference between conductivities of the electrolytes. The proton conductivity of a nonmodified Nafion 117 approaches 10 S m−1 at 80°C [38]. The conductivity of the 95% H3PO4 electrolyte is 35 S m−1 at 200°C [39]. However, the proton conductivity for the best ZrP-PTFE that has been developed in our laboratory is about 5 S m−1 at 150°C.Figure 4 Polarization curves of direct propane/oxygen fuel cell using Pt anode and cathode. (a) Experimental results [31] using Nafion 117 at 95°C. (b) Experimental results [32] using 95% H3PO4 at 200°C. (c) The present proton migration and diffusion model results for a solid ZrP-PTFE electrolyte at 150°C. ## 2.1. Governing Equations Three phases are present in the anode and cathode catalyst layers. They are the “gas phase” containing reactants and products, the “solid catalyst phase” containing the carbon support and platinum, and the “solid electrolyte phase.” The latter consists of a stationary ZrPm a t r i x = [ Z r H P O 4 2 · H 2 O ] containing mobile H2O = [Zr(HPO4)2  ·  2H2O] and mobile H + = [ Z r H P O 4 2 · H 3 O + ] species that can be transported. The membrane layer contains the ZrP electrolyte phase as well as PTFE.Conservation equations for momentum, total mass, and mass of noncharged species were solved for the gas phase in each of the catalyst layers. A list of equations that were used for the gas phase of both anode and cathode catalyst layers is shown as follows:Conservation of mass in gas phase:(1) ∇ · ε G ρ G u → + ∑ i n ν i M W i j z F = 0 , wherei = C3H8, H2O, and CO2 for the anode and O2 and H2O for the cathode. 
Conservation of momentum in gas phase:(2) - ∇ P = 150 μ G 1 - ε G 2 D p 2 ε G 3 u → . Conservation of noncharged species in gas phase:(3) ∇ · ε G c G u → y i - ∇ · ε G c G D i ∇ y i + ν i j z F = 0 , wherei = C3H8 and CO2 for the anode and O2 and H2O for the cathode. Conservation of species in the electrolyte phase: for water,(4) - ∇ · c E L Y B H 2 O – H 2 O ′ - B H 2 O – H + ′ ∇ x H + + ∇ · c E L Y B H 2 O – H + ′ F x H + R T ∇ ϕ E L Y - j z F = 0 ; for proton,(5) ∇ · c E L Y B H + – H + ′ - B H + – H 2 O ′ ∇ x H + + ∇ · c E L Y B H + – H + ′ F x H + R T ∇ ϕ E L Y + j z F = 0 . Butler-Volmer equation in the anode:(6) j A = j A 0 A P t exp ⁡ α A F η A R T - exp ⁡ - α C F η A R T , where(7) j A 0 = j C 3 O x 0 r e f p C 3 p C 3 r e f exp ⁡ Δ G C 3 O x ‡ R 1 T r e f - 1 T , (8) η A = Δ ϕ A - Δ ϕ A E Q = ϕ P t A - ϕ E L Y A - ϕ P t A E Q - ϕ E L Y E Q . Butler-Volmer equation in the cathode:(9) j C = j C 0 A P t exp ⁡ α A F η C R T - exp ⁡ - α C F η C R T , where(10) j C 0 = j O 2 R d 0 r e f p O 2 p O 2 r e f exp ⁡ Δ G O 2 R d ‡ R 1 T r e f - 1 T , (11) η C = Δ ϕ C - Δ ϕ C E Q = ϕ P t C - ϕ E L Y C - ϕ P t C E Q - ϕ E L Y E Q .Equation (1) describes the total mass conservation in the gas phase of the catalyst layers. The second term in this equation is the sink or source term describing the mass consumption or production in the gas phase caused by electrochemical reactions. Equation (2) is the linear form of the Ergun equation. It was used to calculate the pressure profiles in the gas phase of the catalyst layers because they are packed beds. At the conditions used in this study, the magnitude of the quadratic velocity term in the Ergun equation was much smaller than the linear term. Hence only the linear term in velocity was used in (2). Equations (1) and (2) were solved together to calculate the velocity and pressure profiles in the gas phase of the catalyst layers. Mass balances for each of the individual gas phase species account for convection, diffusion, and reaction, as shown in (3).Equations (4) and (5) describe, respectively, water and proton conservation in the electrolyte phase of the membrane and catalyst layers. Diffusion was described by concentrated solution theory through the use of the GMS equations. The following paragraphs illustrate the derivation of (4) and (5).A general procedure for the calculation of mass fluxes in multicomponent electrolyte systems was presented by Krishna [19]. It has been proven that the Nernst-Planck equation is a limiting case of the GMS equations. The GMS equations can be written as follows:(12) d → i = ∑ j = 1 j ≠ i n x i J → j - x j J → i c E L Y Ð i j i = 1,2 , … , n - 1 ,where d → i is a generalized driving force for mass transport of species i. Because the summation of the n driving forces is equal to zero due to the Gibbs-Duhem limitation [31], only n - 1 driving forces are independent. The equation to calculate the generalized driving force has been derived based on nonequilibrium thermodynamics [31]. A simplified expression for a solid stationary electrolyte (no convection term) [19] can be written as(13) d → i = ∇ x i + x i z i F R T ∇ ϕ E L Y .For a noncharged species such as water z i is equal to zero, and, according to (13), the concentration gradient will be the only driving force.The migration term in (13) was obtained by representing ion mobility by the Nernst-Einstein relation (D i = R T u i). This equation is applicable only at infinite dilution. 
However, it can be used in concentrated solutions if additional composition-dependent transport parameters, such as the B ′ parameters in (19), are used to calculate the flux of ions [27]. It will be shown in the following paragraphs that (18) represent the composition-dependent parameters.Equation (12) results in (n - 1) independent equations that can be written in matrix form for convenience:(14) c E L Y d → 1 ⋮ d → n - 1 = - B 1 1 ⋯ B 1 , n - 1 ⋮ ⋱ ⋮ B n - 1 , 1 ⋯ B n - 1 , n - 1 J → 1 ⋮ J → n - 1 ,where the elements of the matrix of inverted diffusion coefficients [B] are given by(15) B i i = ∑ j = 1 j ≠ i n x i Ð i j i = 1,2 , … , n - 1 , B i j = - x i Ð i j i = 1,2 , … , n - 1 ; i ≠ j .The fluxes of species, J → i, can be calculated from (16) which is the inversion of (14):(16) J → 1 ⋮ J → n - 1 = - c E L Y B 11 ⋯ B 1 , n - 1 ⋮ ⋱ ⋮ B n - 1,1 ⋯ B n - 1 , n - 1 - 1 d → 1 ⋮ d → n - 1 .For the present electrolyte system containing three species, mobile H2O and H+ plus immobile solid ZrP, (16) may be written as(17) J → H 2 O J → H + = - c E L Y B H 2 O – H 2 O ′ B H 2 O – H + ′ B H + – H 2 O ′ B H + – H + ′ d → H 2 O d → H + ,where [B ′] is the inverse of the matrix of inverted diffusion coefficients. Because Ð H 2 O – H + = Ð H + – H 2 O, the elements of [B ′] are calculated using (18) which are functions of the GMS diffusivities and the species mole fractions in the electrolyte phase:(18) B H 2 O – H 2 O ′ = x H 2 O Ð H + –ZrP + Ð H 2 O – H + x H + + Ð H + –ZrP / Ð H 2 O –ZrP x H 2 O + Ð H 2 O – H + / Ð H 2 O –ZrP , B H 2 O – H + ′ = x H 2 O Ð H + –ZrP x H + + Ð H + –ZrP / Ð H 2 O –ZrP x H 2 O + Ð H 2 O – H + / Ð H 2 O –ZrP , B H + – H 2 O ′ = x H + Ð H 2 O –ZrP x H 2 O + Ð H 2 O –ZrP / Ð H + –ZrP x H + + Ð H 2 O – H + / Ð H + –ZrP , B H + – H + ′ = x H + Ð H 2 O –ZrP + Ð H 2 O – H + x H 2 O + Ð H 2 O –ZrP / Ð H + –ZrP x H + + Ð H 2 O – H + / Ð H + –ZrP .Combining sets of (17) and (13) results in two independent equations that can be used to calculate the fluxes of mobile species (J → H 2 O and J → H ↑ + ↓ ↑) within the electrolyte phase:(19) J → H 2 O = - c E L Y B H 2 O – H 2 O ′ ∇ x H 2 O - c E L Y B H 2 O – H + ′ ∇ x H + + F x H + R T ∇ ϕ E L Y , (20) J → H + = - c E L Y B H + – H 2 O ′ ∇ x H 2 O - c E L Y B H + – H + ′ ∇ x H + + F x H + R T ∇ ϕ E L Y .Equations (19) and (20) show that diffusion flux of each species is a function of the concentration gradient of all species as well as of the potential gradient. There are five unknowns in (19) and (20): J → H 2 O, J → H +, x H 2 O, x H +, and ϕ E L Y. Therefore, three more equations are required.ZrP is immobile. As a result, the diffusion phenomenon will effectively be the interchange of H+ and H2O species. Therefore, for diffusion purposes we will only consider the domain of the mobile species, H+ and H2O, and will ignore the immobile species, ZrP. On that basis, (21) can be used as a third equation. Nevertheless, the presence of ZrP is important because of its interaction with the mobile species. Specifically, the values of the B ′ coefficients for H+ and H2O were influenced by the presence of ZrP:(21) x H 2 O + x H + = 1.0 .The differential equations for H2O and H+ mass conservation in the electrolyte phase can be expressed in molar units as(22) ∇ · J → H 2 O = - j z F , ∇ · J → H + = j z F ,where j is the volumetric current production. This quantity which appears in (1), (3) to (5) and (22) is the rate of production of protons in the anode. Therefore, it is positive in the anode, j A, and negative in the cathode, j C. 
It was calculated using the Butler-Volmer equation for the anode and cathode, (6) and (9), respectively. Exchange current densities at the anode and cathode are a function of the reactants’ partial pressure and the operating temperature as shown in (7) and (10). The Butler-Volmer equation and its parameters for both propane oxidation and oxygen reduction were described in our previous communication [10]. Complete conversion of C3H8 to CO2 was reported in experiments by Grubb and Michalske [34]. Equations (19) to (22) were combined and are shown as (4) and (5). ## 2.2. Numerical Procedure The numerical solution procedure is illustrated in Figure3. Equations (1)–(11) define the problem at steady state. However, a time derivative was appended to each partial differential equation and a backward Euler time stepping method was used to increase stability while converging to the steady-state solution. The Finite Element Method was used to discretize the partial differential equations in space, with all dependent variables discretized by a linear finite element except for the pressure that is taken as a quadratic.Figure 3 Modeling procedure.FreeFEM++ software has been successfully used to solve two-dimensional partial differential equations (1)–(11). It is open-source software and is based on the Finite Element Method developed by Hecht et al. [32]. The calculated results from FreeFEM++  were exported to ParaView visualization software [35] for postprocessing. ParaView is also open-source software.There is no proton loss through the exterior boundaries of the domain (Figure2). Therefore, the total rate of proton production in the anode, ∫ A n o d e j d V, has to be equal to the total rate of proton consumption in the cathode, ∫ C a t h o d e ( - j ) d V. In each case, the electrical potential of the catalyst phase of the anode, ϕ P t A, and that of the cathode, ϕ P t C, had individual constant values. Then all the variables in the whole domain were calculated. However, having fixed electrical potentials of the anode and cathode catalyst phases does not guarantee that the proton production at the anode will equal the proton consumption at the cathode. The difference between the rate of proton production and consumption can be minimized by shifting ϕ E L Y by a constant value because the production and consumption rates are functions of the electrical potential in both of their respective catalyst phases, ϕ P t A and ϕ P t C, and in the electrolyte phase, ϕ E L Y. Therefore, the Newton method was used to force equal proton production and consumption. In other words, balancing ∫ A n o d e j d V and ∫ C a t h o d e ( - j ) d V acts as a constraint for the conservation of protons in the electrolyte phase.The equations for the conservation of momentum, total mass, and individual species in the gas phase of the anode and cathode were solved by assuming there was no species crossover through the membrane. Electrical potential, proton, and water concentrations in the electrolyte phase of the anode, cathode, and membrane layers were coupled to each other. These variables were calculated by solving (4), (5), and (21) iteratively in each layer. Then, the Robin method [10] was used to couple the solutions between layers. 
In the Robin method, both of the following transfer conditions are progressively satisfied on the anode catalyst/membrane interface and the membrane/cathode catalyst interface through iterations of (a) the continuity of the variable (e.g., potential) and (b) the continuity of the flux (e.g., electrical current).Figure2 shows four types of boundary conditions for the modeling domain, that is, inlet, outlet, wall of the land, and the midchannel symmetry boundaries. The flux of species in the gas phase is zero at the walls because there is no transfer through walls. The zero flux condition is also true at the midchannel symmetry boundaries. The compositions of the gaseous species are known at the inlet of the anode and cathode catalyst layers. It was assumed that no change in the composition of gas mixture occurred after leaving the catalyst bed. Therefore, the composition gradients are zero in the direction normal to the catalyst layer at the outlet boundaries. The zero flux condition is applied at all exterior boundaries for the species in the electrolyte phase. ## 2.3. Input Parameters The parameters used for the simulations are shown in Table1. The GMS diffusivities, Ð i j, which are used in (18) have to be calculated from the Fickian diffusion coefficients, D i j. For ideal solutions, the Fickian diffusion, D i j, can be used as Ð i j in the Stefan-Maxwell equations [26] because the concentration dependence of Fickian diffusion coefficients is ignored. Experimental values for D H + –ZrP and D H 2 O – H + are given in Table 1. Note that the diffusivity of protons in ZrP is approximately two orders of magnitude smaller than the diffusivity of protons in water. The movement of protons causes the electroosmotic flow of water [9]. It was assumed that one water molecule is dragged by each proton, H3O+, that travels from anode to cathode. Therefore, the diffusivity of water in ZrP was set equal to the diffusivity of protons in ZrP [36], the smaller of the two proton diffusivities in Table 1. Proton diffusivity and proton mobility are different quantities. The three diffusivities in Table 1 were the ones used to calculate the B ′ parameters in (18).Table 1 Operational, electrochemical, and design parameters for simulations. Property Value Temperature,T 423–503 K Pressure,P 101.3 k Pa Proton–ZrP diffusivity,D H + –ZrP 3.1 × 10−12 m2 s−1 [29] Proton–water diffusivity,D H 2 O– H + 2.9 × 10−10 m2 s−1 [12] Ionic conductivity in membrane,σ ZrP / PTFE 5.0 S m−1 [24] Electrical resistivity in membrane,R PTFE 1.0 × 1016 Ω m Charge transfer coefficients,α A and α C 1.0 [30] Equilibrium potential of catalyst phase at the anode,ϕ Pt A EQ 0.136 V [1] Equilibrium potential of catalyst phase at the cathode,ϕ Pt C EQ 1.229 V Equilibrium potential of electrolyte phase,ϕ ELY EQ 0.136 V Apparent bulk density of carbon catalyst support,ρ CAT ⁡ 0.259 gcatalyst mL catalyst - 1 Specific surface area of carbon catalyst support in the anode and cathode,A CAT ⁡ 255m catalyst 2 g catalyst - 1 Gas phase volume fraction in anode and cathode,ε G 0.5 Electrolyte phase volume fraction in anode and cathode,ε ELY 0.4 Effective particle diameter in anode and cathode,D p 5μm Land width,L W 2–8 mm Anode and cathode thickness, ThA, ThC 200–400μm Membrane thickness,T h M 100–200μm Fluid channels width in bipolar plates 0.4 mm ## 2.4. 
Model Validation The model predicts the performance of a DPFC that (i) has interdigitated flow fields, (ii) has zirconium phosphate as the electrolyte, and (iii) operates over a temperature range of 150–230°C. As there are no experimental data for DPFCs having zirconium phosphate electrolytes and interdigitated flow fields, the model results have been compared to published results for DPFCs with other types of electrolytes and flow fields.Figure4 compares the modeling results for zirconium phosphate electrolyte with the experimental data for other types of electrolytes [34, 37]. The figure shows that the polarization curve for ZrP-PTFE electrolyte is somewhat comparable to that for the other electrolytes. The difference between the polarization curves can be partially explained by the difference between conductivities of the electrolytes. The proton conductivity of a nonmodified Nafion 117 approaches 10 S m−1 at 80°C [38]. The conductivity of the 95% H3PO4 electrolyte is 35 S m−1 at 200°C [39]. However, the proton conductivity for the best ZrP-PTFE that has been developed in our laboratory is about 5 S m−1 at 150°C.Figure 4 Polarization curves of direct propane/oxygen fuel cell using Pt anode and cathode. (a) Experimental results [31] using Nafion 117 at 95°C. (b) Experimental results [32] using 95% H3PO4 at 200°C. (c) The present proton migration and diffusion model results for a solid ZrP-PTFE electrolyte at 150°C. ## 3. Results and Discussion Figure5(a) shows the two-dimensional variation of the proton concentration in the electrolyte phase of the entire domain, that is, the anode catalyst layer (AN), the membrane layer (ML), and the cathode catalyst layer (CA). The proton concentration at the anode inlet close to the feed gas channel has the highest value. This would be expected because the propane’s partial pressure is higher at the anode inlet and that causes a higher propane oxidation reaction rate, according to Butler-Volmer equation (6). Because protons are produced in the anode catalyst layer and consumed in the cathode catalyst layer, the proton concentration is greater at the anode than the cathode. The resulting proton concentration gradient is the driving force for protons to diffuse from the anode to the cathode.Figure 5 (a) Proton concentration in the electrolyte phase of the anode, membrane, and cathode layers. (b) Electrical potential profile for the electrolyte phase of the anode, membrane, and cathode layers. (c) Protonic flux from anode to cathode in the electrolyte phase. The vectors lengths indicate the flux magnitude which varies from 0 to 17 mA cm−2 in this case. (a) (b) (c)The electrical potential variation in the electrolyte phase of the catalyst layers and membrane is shown in Figure5(b). As the reaction rate in the catalyst layers is not uniform, current density and electrical potential will be variable. Figure 5(b) shows that the electrical potential is higher at the cathode electrolyte phase than at the anode electrolyte phase. That electrical potential gradient is the driving force for protons to migrate from the cathode to the anode. This proton migration (caused by the electrical potential gradient) is in the opposite direction to the proton diffusion (caused by the proton concentration gradient) that was discussed above. In reality, protons are known to be transported from the anode to the cathode. Therefore the dominant driving force is the proton concentration gradient. 
It follows that the electrical potential gradient is not the dominant driving force for proton transport.

Figure 5(c) shows the magnitude and direction of the protonic flux in the electrolyte phase of the anode, cathode, and membrane layers. Protons are produced in the anode and travel from the anode, through the membrane layer, to the cathode, where they are consumed. As discussed above, in Figure 5(a) the concentration driving force for the proton flux points from the anode to the cathode, and in Figure 5(b) the electrical potential driving force points in the opposite direction, from the cathode to the anode. Finally, Figure 5(c) demonstrates that the net flux of protons is from the anode toward the cathode. Because the net flux is the sum of the fluxes produced by two opposing driving forces, one can again conclude that proton diffusion dominates proton migration. For the fuel cell to operate, the net transport of protons must be from the anode to the cathode; therefore, the rate of proton diffusion must exceed the rate of proton migration. Figure 5(c) also shows that the arrows become longer (the proton flux increases) in the y-direction from the anode land/anode catalyst interface to the anode catalyst/membrane interface, as more protons are produced throughout the anode catalyst layer. Similarly, the arrows become shorter (the proton flux decreases) in the y-direction from the membrane/cathode catalyst interface to the cathode catalyst/cathode land interface.

There are two routes by which electrons can flow from the anode to the cathode. The electron flux through the electrolyte is shown in Figure 6. The electron flow rate through the electrolyte is many orders of magnitude smaller than the electron flow rate through the external circuit. Although the vast majority of electrons flow through the external circuit, the production and consumption of the minuscule number of electrons that flow through the electrolyte have a distribution (Figure 6) that is similar to the distribution of protons (Figure 5(c)).

Figure 6: Electronic flux from anode to cathode in the electrolyte phase. The vector lengths indicate the flux magnitude, which varies from 0 to 1 × 10⁻¹¹ mA cm⁻² in the same case as in Figure 5(c).

It is instructive to compare this model (migration plus diffusion) with a migration-only model [10]. A cross section of Figure 5(b) along the y-direction at the middle of the domain (x = L_W/2) is shown in Figure 7(a), where the electrical potential for the migration plus diffusion model in the electrolyte phase (the left axis in Figure 7(a), solid line) is compared with that in the two solid catalyst phases (the right axis in Figure 7(a), dashed lines). The electrical potentials in each of the two solid catalyst phases (dashed lines) are almost constant throughout their layers because these phases have high electrical conductivities. The greater electrical potential at the cathode than at the anode (in both the catalyst phases and the electrolyte phase) provides a driving force that (a) pushes positively charged protons from the cathode to the anode via the electrolyte and (b) pushes negatively charged electrons from the anode to the cathode via both the external circuit (almost all of the electrons) and the electrolyte (a minuscule quantity of electrons).
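The competition between proton diffusion and proton migration described in this section can be made concrete with the standard Nernst–Planck flux expression. The sketch below is illustrative only and is not the paper's code: the equation is the textbook form, the diffusivity is taken from Table 1, and the concentration and the two gradients are hypothetical order-of-magnitude values chosen to show the sign convention, not results from the model.

```python
# Illustrative Nernst-Planck flux decomposition for protons in the electrolyte:
#   N = -D * dc/dy - (z * F / (R * T)) * D * c * dphi/dy
# D is D_H+-ZrP from Table 1; c, dc/dy, and dphi/dy are hypothetical values.
F = 96485.0   # Faraday constant, C mol^-1
R = 8.314     # gas constant, J mol^-1 K^-1
T = 423.0     # temperature, K (150 C, the lower end of the Table 1 range)
z = 1         # proton charge number
D = 3.1e-12   # proton diffusivity in ZrP, m^2 s^-1 (Table 1)

c = 1.0e3        # hypothetical proton concentration, mol m^-3
dc_dy = -5.0e6   # hypothetical concentration gradient, mol m^-4 (higher at anode)
dphi_dy = 50.0   # hypothetical potential gradient, V m^-1 (higher at cathode)

diffusion = -D * dc_dy                            # > 0: anode -> cathode
migration = -(z * F / (R * T)) * D * c * dphi_dy  # < 0: cathode -> anode
net = diffusion + migration

print(f"diffusive flux: {diffusion:+.3e} mol m^-2 s^-1")
print(f"migrative flux: {migration:+.3e} mol m^-2 s^-1")
print(f"net flux      : {net:+.3e} mol m^-2 s^-1 (positive = anode to cathode)")
```

With these illustrative gradients the diffusive term exceeds the opposing migrative term, so the net flux is positive (anode to cathode), reproducing the qualitative conclusion above; in the paper the actual balance comes from the full numerical solution.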
As noted above, the flow rate of negatively charged electrons through the electrolyte phase from the anode to the cathode is minuscule.

Figure 7: Electrical potential profiles in the y-direction for the electrolyte and catalyst phases, located at the middle of the domain in the x-direction, for the cathode and anode catalyst layers and the membrane layer. The arrows point in the direction of the ordinate scale that applies to each of the three curves. (a) Proton migration plus diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].

The results of the migration plus diffusion model shown in Figure 7(a) correctly describe these phenomena. In contrast, the results from the migration-only model [10] are seen in Figure 7(b). Those calculations showed that the migration-only model produced incorrect results. Specifically, the electrical potential gradient in the electrolyte has the wrong slope. The slope (gradient) predicted by the migration-only model incorrectly drives the positively charged protons in the electrolyte from the cathode to the anode. In reality, they move from the anode to the cathode in the electrolyte.

Figure 8 compares the anodic and cathodic overpotentials for two cases. The solid lines in Figure 8 are the results from the migration plus diffusion model. The dashed lines are the results from a migration-only model. The dashed lines (migration only) have a negative slope, whereas the solid lines (migration plus diffusion) have a positive slope. Since the overpotential is the electrochemical driving force for the reaction (see (6) and (9)), it will always have its largest value adjacent to the anode land and decrease toward the membrane. In summary, the migration plus diffusion model predicted the correct behaviour, while the migration-only model predictions were incorrect.

Figure 8: Overpotential profiles in the anode and cathode along the y-axis at the middle of the modeling domain. Solid lines: migration plus diffusion. Dashed lines: migration only [5].

Figure 9 shows the propane mole fraction in the gas phase of the anode catalyst layer along the x-direction. For similar operating conditions, the migration plus diffusion model predicted different propane concentrations than the migration-only model did. This difference is caused by the different overpotential profiles predicted by the two models, shown in Figure 8. Those differences in overpotential are small, but they appear in exponential terms, as shown in (6) and (9), and it is these exponential terms that cause the large differences in concentration shown in Figure 9. If proton diffusion in the electrolyte phase is ignored, the predicted species distribution within the gas phase of the catalyst layers becomes incorrect. In other words, the migration-only model cannot correctly calculate either the proton concentration in the electrolyte phase or the propane concentration in the gas phase.

Figure 9: Propane mole fraction in the gas phase of the anode catalyst layer along the x-direction at the middle of the anode catalyst layer. (a) Proton migration plus diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].

In Figure 10, the polarization curves for the migration plus diffusion model are compared with those for the migration-only model.
At a specific cell potential, the cell current density predicted by the migration plus diffusion model is lower than that predicted by the migration-only model, because the steady-state concentration values enter the expression for the exchange current density ((7) and (9)). At some conditions this deviation may appear small. In Figure 10, at a cell potential of 0.4 V, the migration plus diffusion model predicts a current density near 50 mA cm⁻², whereas the migration-only model predicts nearly 70 mA cm⁻². A difference of that size indicates that a reasonable prediction of the overall fuel cell performance cannot be obtained from simple models that ignore the proton diffusion phenomenon in the electrolyte. In addition, there are other phenomena for which the migration-only model predicts results that are completely erroneous.

Figure 10: Modeling results for polarization curves of direct propane/oxygen fuel cells using a solid ZrP–PTFE electrolyte at 150°C. (a) Proton migration and diffusion within the electrolyte phase (the present model). (b) Proton migration only within the electrolyte phase [5].

It would be desirable to extend the polarization curves in Figure 10 to greater current densities and smaller cell potentials. Many attempts were made to obtain such a wider range of values; unfortunately, they were all unsuccessful. As the current density increased, convergence to an acceptable numerical solution of the equations became progressively more difficult, and convergence was not obtained at current densities greater than those shown in Figure 10. The difficulty was caused by the exponential nature of the Butler–Volmer equation in combination with the complex generalized Maxwell–Stefan equations: small changes in cell potential cause the current density calculated from the Butler–Volmer equation to vary enormously. The search for superior convergence techniques is being actively pursued in our laboratory.
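The exponential sensitivity responsible for this convergence difficulty can be seen directly in the standard Butler–Volmer form. The sketch below is illustrative only: it uses the textbook equation with the charge transfer coefficients from Table 1, and the exchange current density is a hypothetical placeholder, since the paper's equations (6), (7), and (9) are not reproduced in this excerpt.

```python
import math

# Textbook Butler-Volmer expression (not a quotation of the paper's (6)/(9)):
#   i = i0 * [exp(alpha_A*F*eta/(R*T)) - exp(-alpha_C*F*eta/(R*T))]
F, R, T = 96485.0, 8.314, 423.0  # C mol^-1, J mol^-1 K^-1, K (150 C)
alpha_A = alpha_C = 1.0          # charge transfer coefficients (Table 1)
i0 = 1.0e-4                      # hypothetical exchange current density, A m^-2

def butler_volmer(eta: float) -> float:
    """Current density (A m^-2) as a function of overpotential eta (V)."""
    return i0 * (math.exp(alpha_A * F * eta / (R * T))
                 - math.exp(-alpha_C * F * eta / (R * T)))

# At 150 C, F/(R*T) is about 27 V^-1, so a 50 mV change in overpotential
# multiplies the anodic term by roughly e^1.4, i.e., about a factor of four.
# That steepness is why small potential perturbations during iteration
# produce very large residuals in the coupled equation system.
for eta in (0.30, 0.35, 0.40):
    print(f"eta = {eta:.2f} V -> i = {butler_volmer(eta):.3e} A m^-2")
```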
Activation overpotential and ohmic polarization are the major sources of potential drop in a direct propane fuel cell. Any change in the operating conditions or cell design that decreases the activation overpotential or the ohmic polarization will improve the cell performance. Figure 11 shows the performance of a DPFC predicted by the model at different operating temperatures. It also shows the performance of a hydrogen PEM fuel cell at 80°C [40] and that of a DPFC at 200°C having a phosphoric acid electrolyte [34]. As the temperature is increased from 150°C to 230°C, the rate of reaction increases according to (7) and (10). This leads to a decrease in the overpotential term in the Butler–Volmer equation and a major improvement in cell performance. It can be concluded that the predicted performance of a DPFC operating at 230°C can approach that of a hydrogen PEMFC at 80°C when both operate at current densities less than 40 mA cm⁻².

Figure 11: (a), (b), and (c) Predicted polarization curves for a direct propane/oxygen fuel cell at different operating temperatures, (d) experimental data for a typical hydrogen/oxygen PEMFC [33], and (e) experimental data for the best performing DPFC at 200°C [32].

## 4. Conclusions

The migration plus diffusion model described in this work was shown to be superior to the migration-only model used in many fuel cell modeling studies. Specifically, the migration-only model predicted erroneous values of the electrical potential in the electrolyte: the gradient of the electrolyte electrical potential that it predicted was in the wrong direction. The incorrect values of the electrical potential in the electrolyte made the calculated overpotentials incorrect, and the incorrect overpotentials in turn made the calculated propane concentrations incorrect. This work has shown that the predicted values for the steady-state current density and the steady-state propane concentration become substantially different when the effect of proton diffusion in the electrolyte is included in the model. The migration plus diffusion model described here is therefore a major improvement over the migration-only model used in earlier studies.

Many important phenomena that occur in fuel cells are not described by polarization curves. Meaningful values for variables internal to the fuel cell, for example, the overpotential and the reactant concentration, are essential for understanding fuel cell performance. At some operating conditions, variables external to the fuel cell, for example, the current density and the exit concentration of propane, are substantially different when proton diffusion in the electrolyte is included in the model. The insight obtained using the migration plus diffusion model is far more useful than that obtained from the migration-only model.

---

*Source: 102313-2015-12-21.xml*
# Antimicrobial Activity of the Essential Oil of Plectranthus neochilus against Cariogenic Bacteria

**Authors:** Eduardo José Crevelin; Soraya Carolina Caixeta; Herbert Júnior Dias; Milton Groppo; Wilson Roberto Cunha; Carlos Henrique Gomes Martins; Antônio Eduardo Miller Crotti

**Journal:** Evidence-Based Complementary and Alternative Medicine (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102317

---

## Abstract

This work used the broth microdilution method to investigate the antimicrobial activity of the essential oil obtained from the leaves of Plectranthus neochilus (PN-EO) against a representative panel of oral pathogens. We assessed the antimicrobial activity of this oil in terms of the minimum inhibitory concentration (MIC). PN-EO displayed moderate activity against Enterococcus faecalis (MIC = 250 μg/mL) and Streptococcus salivarius (MIC = 250 μg/mL); significant activity against Streptococcus sobrinus (MIC = 62.5 μg/mL), Streptococcus sanguinis (MIC = 62.5 μg/mL), Streptococcus mitis (MIC = 31.25 μg/mL), and Lactobacillus casei (MIC = 31.25 μg/mL); and interesting activity against Streptococcus mutans (MIC = 3.9 μg/mL). GC-FID and GC-MS helped to identify thirty-one compounds in PN-EO; α-pinene (1, 14.1%), β-pinene (2, 7.1%), trans-caryophyllene (3, 29.8%), and caryophyllene oxide (4, 12.8%) were the major chemical constituents of this essential oil. When tested alone, compounds 1, 2, 3, and 4 were inactive (MIC > 4000 μg/mL) against all the microorganisms. These results suggested that the essential oil extracted from the leaves of Plectranthus neochilus displays promising activity against most of the evaluated cariogenic bacteria, especially S. mutans.

---

## Body

## 1. Introduction

Dental caries is associated with acidogenic and aciduric bacteria that adhere to the tooth surface as an oral biofilm (dental plaque) [1]. Because this pathology can destroy dental hard tissues [2–4], it has become a major public health concern worldwide. The most efficient way to prevent caries and periodontal diseases is to reduce and eliminate bacterial accumulation on the top of and between teeth by brushing the teeth on a daily basis and conducting periodic dental cleaning or prophylaxis. Unfortunately, most people fail to maintain a sufficient level of oral hygiene [5], which has called for the use of oral products containing antimicrobial ingredients as a complementary measure to diminish biofilm formation on the tooth surface [6].

Chlorhexidine has been the most effective antiplaque agent tested to date, but some reversible local side effects have led dentists to recommend its use for short periods only [7]. Several other antimicrobial agents, including fluorides, phenol derivatives, ampicillin, erythromycin, penicillin, tetracycline, and vancomycin, can inhibit bacterial growth [8]. Nevertheless, excessive use of these chemicals can disturb the oral and intestinal flora and cause microorganism susceptibility, vomiting, diarrhea, and tooth staining [8]. To find an alternative to the substances currently employed to prevent caries and to control plaque, researchers have investigated the antimicrobial activities of natural products, especially essential oils [1, 3, 7, 9–11].

The herbaceous and aromatic plant Plectranthus neochilus is popularly known as “boldo-rasteiro” in Brazil [12].
In folk medicine, this plant has been used to treat disturbed digestion, skin infections, respiratory ailments [13], hepatic insufficiency, and dyspepsia [14]. The essential oil of P. neochilus displays antischistosomal [12] and insecticidal [15] activities. Recently, researchers applied the agar disc diffusion method to assess the antimicrobial activity of the essential oil of a specimen of P. neochilus collected in Portugal against Bacillus cereus, Bacillus subtilis, Staphylococcus aureus, Listeria monocytogenes, Escherichia coli, Pseudomonas aeruginosa, Helicobacter pylori, and Saccharomyces cerevisiae [16]. The authors reported that the activity of this essential oil against the selected microorganisms was between low and moderate.

As part of our ongoing research on the antimicrobial activities of essential oils [1, 17–19], in this work we used the broth microdilution method to evaluate the in vitro antimicrobial activity of the essential oil obtained from the leaves of Plectranthus neochilus (Lamiaceae) against a representative panel of cariogenic bacteria.

## 2. Materials and Methods

### 2.1. Plant Material

Adult P. neochilus Schltr. (Lamiaceae) leaves were collected at “May 13th Farm” (20°26′S 47°27′W, 977 m) in May 2011. The collection site was located near the city of Franca, state of São Paulo, Brazil. The species was identified by Professor Dr. Milton Groppo; a voucher specimen (SPFR 12323) was deposited at the Herbarium of the Department of Biology (Herbarium SPFR), University of São Paulo, Brazil.

### 2.2. Essential Oil Extraction

Fresh leaves of P. neochilus were submitted to hydrodistillation in a Clevenger-type apparatus for 3 h. To this end, 1200 g of the plant material was divided into three samples of 400 g each, and 500 mL of distilled water was added to each sample. Condensation of the steam, followed by accumulation of the essential oil/water system in the graduated receiver of the apparatus, separated the essential oil from the water and allowed manual collection of the organic phase. Anhydrous sodium sulfate was used to remove traces of water. Samples were stored in an amber bottle and kept in the refrigerator at 4°C until analysis. Yields were calculated from the weight of the fresh leaves.

### 2.3. Gas Chromatography (GC-FID) Analyses

The essential oil of P. neochilus (PN-EO) was analyzed by gas chromatography (GC) on a Hewlett-Packard G1530A 6890 gas chromatograph fitted with an FID and a data-handling processor. An HP-5 (Hewlett-Packard, Palo Alto, CA, USA) fused-silica capillary column (length = 30 m, i.d. = 0.25 mm, film thickness = 0.33 μm) was employed. The column temperature was programmed to rise from 60 to 240°C at 3°C/min and then held at 240°C for 5 min. The carrier gas was H2 at a flow rate of 1.0 mL/min. The equipment was set to the split injection mode; the injection volume was 0.1 μL (split ratio of 1:10). The injector and detector temperatures were 240 and 280°C, respectively. The relative concentrations of the components were obtained by peak area normalization (%). The relative areas were the average of triplicate GC-FID analyses.

### 2.4. Gas Chromatography-Mass Spectrometry (GC-MS) Analyses

GC-MS analyses were carried out on a Shimadzu QP2010 Plus (Shimadzu Corporation, Kyoto, Japan) system equipped with an AOC-20i autosampler. The column was an Rtx-5MS (Restek Co., Bellefonte, PA, USA) fused-silica capillary column (length = 30 m, i.d. = 0.25 mm, film thickness = 0.25 μm). The electron ionization (EI-MS) mode at 70 eV was employed.
Helium (99.999%) at a constant flow of 1.0 mL/min was the carrier gas. The injection volume was 0.1 μL (split ratio of 1:10). The injector and ion source temperatures were set at 240 and 280°C, respectively. The oven temperature program was the same as the one used for GC-FID. The mass spectra were registered with a scan interval of 0.5 s in the mass range of 40 to 600 Da.

### 2.5. Identification of the PN-EO Constituents

PN-EO components were identified on the basis of their retention indices relative to a homologous series of n-alkanes (C8–C24). To this end, an Rtx-5MS capillary column was employed under the same operating conditions as in the GC-FID analyses. The retention index (RI) of each PN-EO constituent was determined as described previously [20]. The chemical structures were computer-matched with the Wiley 7, NIST 08, and FFNSC 1.2 spectral libraries of the GC-MS data system; their fragmentation patterns were compared with the literature data [21].

### 2.6. Bacterial Strains and Antimicrobial Assays

The in vitro antimicrobial activity of PN-EO and its major constituents was assessed through minimum inhibitory concentration (MIC) values determined by the broth microdilution method in 96-well microplates. The following standard ATCC strains were used: Streptococcus salivarius (ATCC 25975), Streptococcus sanguinis (ATCC 10556), Streptococcus mitis (ATCC 49456), Streptococcus mutans (ATCC 25175), Streptococcus sobrinus (ATCC 33478), Enterococcus faecalis (ATCC 4082), and Lactobacillus casei (ATCC 11578). Individual 24-h colonies from blood agar (Difco Labs, Detroit, MI, USA) were suspended in 10.0 mL of tryptic soy broth (Difco). Standardization of each microorganism suspension was carried out on a spectrophotometer (Femto, São Paulo, Brazil) operating at a wavelength (λ) of 625 nm, to match a transmittance of 81% (equivalent to the 0.5 McFarland scale, or 1.5 × 10⁸ CFU/mL). The microorganism suspension was then diluted to a final concentration of 5 × 10⁵ CFU/mL. PN-EO was dissolved in DMSO (Merck, Darmstadt, Germany) at 16.0 mg/mL and diluted in tryptic soy broth (Difco) to yield concentrations between 4000 and 3.9 μg/mL. Compounds 1 (α-pinene), 2 (β-pinene), 3 (trans-caryophyllene), and 4 (caryophyllene oxide) were purchased from Sigma-Aldrich (St. Louis, MO, USA) and evaluated by the same methodology and at the same concentrations as PN-EO. A 1 μM solution of each compound was also tested individually. In the case of the mixture 1 + 2 + 3 + 4, the constituents were mixed in the same proportion in which they occurred in PN-EO. After the dilutions, the DMSO concentrations were between 4% and 0.0039% (v/v). Three inoculated wells containing DMSO at concentrations ranging from 4% to 1% were used as negative controls. One inoculated well was included to verify the adequacy of the broth for organism growth. One noninoculated well, free of antimicrobial agent, was also included to assess the medium sterility. Twofold serial dilutions of chlorhexidine dihydrochloride (CHD) (Sigma-Aldrich, St. Louis) were performed in tryptic soy broth (Difco) to achieve concentrations ranging from 5.9 to 0.115 μg/mL; these dilutions were used as the positive control. The microplates (96-well) were sealed with parafilm and incubated at 37°C for 24 h. After that, 30 μL of a 0.02% resazurin (Sigma-Aldrich, St. Louis, MO, USA) aqueous solution was added to each microplate well to indicate microorganism viability [22].
The MIC value (i.e., the lowest concentration of a sample capable of inhibiting microorganism growth) was determined as the lowest concentration of the essential oil or of its major constituents capable of preventing a colour change of the resazurin solution [23]. Three replicates were conducted for each microorganism.
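As a check on the assay arithmetic described above, the tested concentration ladder follows from repeated twofold dilution of the 4000 μg/mL stock; eleven concentrations reach the reported lower bound of 3.9 μg/mL. A minimal sketch (illustrative, not the authors' code):

```python
# Twofold serial dilution ladder used in the broth microdilution assay:
# starting at 4000 ug/mL and halving until the reported floor of 3.9 ug/mL.
def dilution_series(start_ug_ml: float = 4000.0, floor_ug_ml: float = 3.9):
    """Return the concentrations tested, highest to lowest, in ug/mL."""
    series = [start_ug_ml]
    while series[-1] / 2 >= floor_ug_ml:
        series.append(series[-1] / 2)
    return series

print([round(c, 2) for c in dilution_series()])
# [4000.0, 2000.0, 1000.0, 500.0, 250.0, 125.0, 62.5, 31.25, 15.62, 7.81, 3.91]
# The MIC values reported below in Table 1 (e.g., 250, 62.5, 31.3, and 3.9
# ug/mL) correspond to rounded members of this ladder.
```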
## 3. Results and Discussion

This work relied on minimum inhibitory concentration (MIC) values to evaluate the antimicrobial activity of the essential oil of P. neochilus (PN-EO) against a panel of cariogenic bacteria; chlorhexidine dihydrochloride (CHD) was the positive control. Samples with MIC values lower than 100 μg/mL, between 100 and 500 μg/mL, and between 500 and 1000 μg/mL were considered to be promising, moderately active, and weak antimicrobials, respectively. Samples with MIC values greater than 1000 μg/mL were deemed inactive [11, 24–26].

Table 1 summarizes the MIC values. PN-EO displayed moderate activity against S. salivarius (MIC = 250 μg/mL) and E. faecalis (MIC = 250 μg/mL) and significant antimicrobial activity against Streptococcus sobrinus (MIC = 62.5 μg/mL), Streptococcus sanguinis (MIC = 62.5 μg/mL), Streptococcus mitis (MIC = 31.25 μg/mL), Lactobacillus casei (MIC = 31.25 μg/mL), and Streptococcus mutans (MIC = 3.9 μg/mL). The antimicrobial activity of PN-EO against S. mutans was a particularly interesting result: this microorganism is considered to be the main cariogenic agent [10, 27], and very few natural compounds can inhibit it [26].

Table 1: Minimum inhibitory concentration (MIC) values (μg/mL) obtained for the essential oil of P. neochilus (PN-EO), compounds 1, 2, 3, and 4, and the mixture 1 + 2 + 3 + 4 against selected cariogenic bacteria.

| Microorganism | PN-EO | CHD^a | 1^b | 2^b | 3^b | 4^b | 1 + 2 + 3 + 4^c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| E. faecalis | 250.0 | 14.8 | >4000 | >4000 | >4000 | >4000 | >4000 |
| S. salivarius | 250.0 | 7.4 | 4000 | 4000 | 4000 | 4000 | 4000 |
| S. mutans | 3.9 | 1.8 | >4000 | >4000 | >4000 | >4000 | 1000 |
| S. mitis | 31.3 | 14.8 | 4000 | 4000 | 4000 | 4000 | 4000 |
| S. sobrinus | 62.5 | 1.8 | >4000 | >4000 | >4000 | >4000 | 4000 |
| S. sanguinis | 62.5 | 7.4 | 4000 | >4000 | >4000 | >4000 | >4000 |
| L. casei | 31.3 | 3.7 | 4000 | 4000 | 4000 | 4000 | 500 |

^a Chlorhexidine dihydrochloride. ^b For compounds 1, 2, 3, and 4, the concentration of 4000 μg/mL corresponds to 29.4, 29.4, 19.6, and 18.1 mM, respectively. ^c 1 + 2 + 3 + 4: mixture of α-pinene, β-pinene, trans-caryophyllene, and caryophyllene oxide.

Hydrodistillation of P. neochilus leaves afforded PN-EO in 0.03 ± 0.01% (w/w) yield. Gas chromatography revealed the presence of 31 compounds in PN-EO, namely, fifteen monoterpenes (36.0%), fifteen sesquiterpenes (63.5%), and one aliphatic alcohol (0.2%). The major PN-EO constituents were α-pinene (1; 14.1%), β-pinene (2; 7.1%), trans-caryophyllene (3; 29.8%), and caryophyllene oxide (4; 12.8%), as shown in Table 2. The chemical composition of PN-EO differed significantly from that of P. neochilus specimens collected in South Africa, whose major constituents were citronellol (29.0%), citronellyl formate (11.0%), linalool (9.8%), and isomenthone (9.2%) [28], but it resembled the chemical composition previously reported for P. neochilus specimens collected in Brazil [12, 15]. These different chemical compositions may be associated with environmental factors or growing conditions, which can greatly affect the chemical composition of volatile oils [29, 30].

Table 2: Chemical composition of the essential oil from the leaves of P. neochilus as identified by GC-MS.
| Chemical compound | RT [min]^a | RI_exp^b | RI_lit^c | Content [%]^d | Identification^e |
| --- | --- | --- | --- | --- | --- |
| α-Thujene | 5.00 | 921 | 924 | 6.3 | RL, MS |
| α-Pinene (1) | 5.19 | 929 | 932 | 14.1 | RL, MS |
| Thuja-2,4(10)-diene | 5.43 | 939 | 941 | 0.2 | RL, MS |
| Camphene | 5.62 | 943 | 947 | 0.1 | RL, MS |
| Sabinene | 6.21 | 966 | 971 | 1.9 | RL, MS |
| β-Pinene (2) | 6.37 | 975 | 977 | 7.1 | RL, MS |
| β-Myrcene | 6.65 | 985 | 988 | 0.3 | RL, MS |
| Octan-3-ol | 6.90 | 993 | 996 | 0.2 | RL, MS |
| α-Terpinene | 7.53 | 1015 | 1016 | 0.5 | RL, MS |
| o-Cymene | 7.79 | 1022 | 1023 | 0.3 | RL, MS |
| Limonene | 7.94 | 1026 | 1027 | 0.2 | RL, MS |
| (Z)-β-Ocimene | 8.14 | 1030 | 1033 | 0.4 | RL, MS |
| (E)-β-Ocimene | 8.50 | 1040 | 1043 | 1.8 | RL, MS |
| γ-Terpinene | 8.94 | 1052 | 1055 | 1.4 | RL, MS |
| α-Terpinolene | 9.94 | 1080 | 1084 | 0.2 | RL, MS |
| 4-Terpineol | 13.75 | 1177 | 1179 | 1.2 | RL, MS |
| α-Cubebene | 20.78 | 1341 | 1344 | 0.5 | RL, MS |
| α-Copaene | 21.97 | 1366 | 1372 | 1.2 | RL, MS |
| β-Bourbonene | 22.29 | 1380 | 1379 | 1.1 | RL, MS |
| β-Cubenene | 22.50 | 1378 | 1384 | 0.3 | RL, MS |
| trans-Caryophyllene (3) | 23.80 | 1412 | 1415 | 29.8 | RL, MS |
| α-Humulene | 25.25 | 1448 | 1450 | 1.5 | RL, MS |
| Germacrene D | 26.30 | 1470 | 1476 | 6.2 | RL, MS |
| Eremophilene | 27.51 | 1504 | 1505 | 3.9 | RL, MS |
| α-Amorphene | 27.61 | 1506 | 1508 | 0.4 | RL, MS |
| δ-Cadinene | 27.83 | 1514 | 1513 | 1.9 | RL, MS |
| (E)-Nerolidol | 29.63 | 1554 | 1559 | 0.3 | RL, MS |
| Caryophyllene oxide (4) | 30.26 | 1571 | 1575 | 12.8 | RL, MS |
| Unknown | 32.06 | 1622 | – | 0.3 | – |
| epi-α-Cadinol | 32.62 | 1634 | 1637 | 1.5 | RL, MS |
| δ-Cadinol | 32.70 | 1636 | 1639 | 0.8 | RL, MS |
| α-Cadinol | 33.12 | 1647 | 1650 | 1.3 | RL, MS |
| Monoterpene hydrocarbons | | | | 34.8 | |
| Oxygenated monoterpenes | | | | 1.2 | |
| Sesquiterpene hydrocarbons | | | | 46.8 | |
| Oxygenated sesquiterpenes | | | | 16.7 | |
| Others | | | | 0.2 | |
| Not identified | | | | 0.3 | |

^a RT: retention time determined on the Rtx-5MS capillary column. ^b RI_exp: retention index determined on the Rtx-5MS column relative to n-alkanes (C8–C20). ^c RI_lit: retention index reported in the literature. ^d Calculated from the peak area relative to the total peak area. ^e RL: comparison of the retention index with the literature [21]; MS: comparison of the mass spectrum with the literature.
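The experimental retention indices in Table 2 were obtained relative to the n-alkane series, as described in Section 2.5. Reference [20] is not reproduced here, but the standard temperature-programmed (van den Dool and Kratz) formula commonly used for this calculation is:

```latex
% Van den Dool-Kratz retention index for temperature-programmed GC
% (standard formula, assumed here since reference [20] is not reproduced).
% t_x: retention time of the analyte; t_n, t_{n+1}: retention times of the
% n-alkanes with n and n+1 carbon atoms that bracket the analyte.
RI \;=\; 100\,n \;+\; 100\,\frac{t_x - t_n}{t_{n+1} - t_n}
```

For example, an analyte eluting 12% of the way between the C14 and C15 alkanes receives RI = 1412, matching the experimental entry for trans-caryophyllene (3) in Table 2.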
The antimicrobial activity of essential oils has been associated with the lipophilicity of their chemical constituents, mainly monoterpenes and sesquiterpenes, which are often their main components [31]. The hydrophobicity of terpenoids allows these compounds to diffuse easily across cell membranes and to kill microorganisms by affecting the metabolic pathways or organelles of the pathogen. In addition, synergistic interactions between essential oil components could enhance their activity [31]. For this reason, the major chemical constituents of some essential oils deserve antimicrobial evaluation alone or as a mixture [32–34]. This study evaluated the individual antimicrobial activity of α-pinene (1), β-pinene (2), trans-caryophyllene (3), and caryophyllene oxide (4). Alone, all of these compounds were much less effective against the selected cariogenic bacteria than PN-EO; their MIC values were higher than 4000 μg/mL (Table 1). A mixture containing compounds 1 + 2 + 3 + 4, in the same relative proportions as their relative areas in the GC-FID chromatogram of PN-EO, displayed moderate activity against L. casei (MIC = 500 μg/mL) and weak activity against S. mutans (MIC = 1000 μg/mL), but it was inactive against the other bacteria (MIC > 4000 μg/mL). Although the MIC values obtained for the mixture suggest a very modest synergism between compounds 1, 2, 3, and 4, the mixture 1 + 2 + 3 + 4 was much less active than PN-EO. Hence, the presence of compounds 1, 2, 3, and 4 alone does not account for the antimicrobial activity of PN-EO. In fact, the antimicrobial activity of PN-EO may also be related to the other, minor chemical constituents identified in the oil, which may underlie or even enhance the activity of its major chemical constituents.

## 4. Conclusions

The essential oil of P. neochilus (PN-EO) displays promising antimicrobial activity against some cariogenic bacteria, including Streptococcus mutans, which is one of the main causative agents of dental caries. Taken together, our results suggest that this essential oil might be promising for the development of new oral care products. Further studies to identify the active chemical constituents of PN-EO are underway.

---

*Source: 102317-2015-06-16.xml*
--- ## Abstract This work used the broth microdilution method to investigate the antimicrobial activity of the essential oil obtained from the leaves ofPlectranthus neochilus (PN-EO) against a representative panel of oral pathogens. We assessed the antimicrobial activity of this oil in terms of the minimum inhibitory concentration (MIC). PN-EO displayed moderate activity against Enterococcus faecalis (MIC = 250 μg/mL) and Streptococcus salivarus (MIC = 250 μg/mL), significant activity against Streptococcus sobrinus (MIC = 62.5 μg/mL), Streptococcus sanguinis (MIC = 62.5 μg/mL), Streptococcus mitis (MIC = 31.25 μg/mL), and Lactobacillus casei (MIC = 31.25 μg/mL), and interesting activity against Streptococcus mutans (MIC = 3.9 μg/mL). GC-FID and GC-MS helped to identify thirty-one compounds in PN-EO; α-pinene (1, 14.1%), β-pinene (2, 7.1%), trans-caryophyllene (3, 29.8%), and caryophyllene oxide (4, 12.8%) were the major chemical constituents of this essential oil. When tested alone, compounds 1, 2, 3, and 4 were inactive (MIC > 4000 μg/mL) against all the microorganisms. These results suggested that the essential oil extracted from the leaves of Plectranthus neochilus displays promising activity against most of the evaluated cariogenic bacteria, especially S. mutans. --- ## Body ## 1. Introduction Dental caries is associated with acidogenic and aciduric bacteria that adhere to the tooth surface as an oral biofilm (dental plaque) [1]. Because this pathology can destroy dental hard tissues [2–4], it has become a major public health concern worldwide. The most efficient way to prevent caries and periodontal diseases is to reduce and eliminate bacterial accumulation on the top of and between teeth by brushing the teeth on a daily basis and conducting periodic dental cleaning or prophylaxis. Unfortunately, most people fail to maintain a sufficient level of oral hygiene [5], which has called for the use of oral products containing antimicrobial ingredients as a complementary measure to diminish biofilm formation on the tooth surface [6].Chlorhexidine has been the most effective antiplaque agent tested to date, but some reversible local side effects have led dentists to recommend its use for short periods only [7]. Several other antimicrobial agents including fluorides, phenol derivatives, ampicillin, erythromycin, penicillin, tetracycline, and vancomycin can inhibit bacterial growth [8]. Nevertheless, excessive use of these chemicals can disturb the oral and intestinal flora and cause microorganism susceptibility, vomiting, diarrhea, and tooth staining [8]. To find an alternative to the substances currently employed to prevent caries and to control plaques, researchers have investigated the antimicrobial activities of natural products, especially essential oils [1, 3, 7, 9–11].The herbaceous and aromatic plantPlectranthus neochilus is popularly known as “boldo-rasteiro” in Brazil [12]. In folk medicine, this plant has helped to treat disturbed digestion, skin infection, respiratory ailments [13], hepatic insufficiency, and dyspepsia [14]. The essential oil ofP. neochilus displays antischistosomal [12] and insecticidal [15] activities. Recently, researchers have applied diffusion in agar disc to assess the antimicrobial activity of the essential oil of a specimen ofP. neochilus collected in Portugal againstBacillus cereus,Bacillus subtilis,Staphylococcus aureus,Listeria monocytogenes,Escherichia coli,Pseudomonas aeruginosa,Helicobacter pylori, andSaccharomyces cerevisiae [16]. 
The authors reported that the activity of this essential oil against the selected microorganisms was between low and moderate.As part of our ongoing research on the antimicrobial activities of essential oils [1, 17–19], in this work, we used the broth microdilution method to evaluate thein vitro antimicrobial activity of the essential oil obtained from the leaves ofPlectranthus neochilus (Lamiaceae) against a representative panel of cariogenic bacteria. ## 2. Materials and Methods ### 2.1. Plant Material AdultP. neochilus Schltr. (Lamiaceae) leaves were collected at “May 13th Farm” (20°26′S 47°27′W 977 m) in May 2011. The collection site was located near the city of Franca, state of São Paulo, Brazil. This species was identified by Professor Dr. Milton Groppo; one voucher specimen (SPFR 12323) was deposited at the Herbarium of the Department of Biology (Herbarium SPFR), University of São Paulo, Brazil. ### 2.2. Essential Oil Extraction Fresh leaves ofP. neochilus were submitted to hydrodistillation in a Clevenger-type apparatus for 3 h. To this end, 1200 g of the plant material was divided into three samples of 400 g each, and 500 mL of distilled water was added to each sample. Condensation of the steam followed by accumulation of the essential oil/water system in the graduated receiver of the apparatus separated the essential oil from the water, which allowed for further manual collection of the organic phase. Anhydrous sodium sulfate was used to remove traces of water. Samples were stored in an amber bottle and kept in the refrigerator at 4°C until analysis. Yields were calculated from the weight of the fresh leaves. ### 2.3. Gas Chromatography (GC-FID) Analyses The essential oil ofP. neochilus (PN-EO) was analyzed by gas chromatography (GC) on a Hewlett-Packard G1530A 6890 gas chromatograph fitted with FID and a data-handling processor. An HP-5 (Hewlett-Packard, Palo Alto, CA, USA) fused-silica capillary column (length = 30 m, i.d. = 0.25 mm, and film thickness = 0.33 μm) was employed. The column temperature was programmed to rise from 60 to 240°C at 3°C/min and then held at 240°C for 5 min. The carrier gas was H2 at a flow rate of 1.0 mL/min. The equipment was set to the injection mode; the injection volume was 0.1 μL (split ratio of 1 : 10). The injector and detector temperatures were 240 and 280°C, respectively. The relative concentrations of the components were obtained by peak area normalization (%). The relative areas were the average of triplicate GC-FID analyses. ### 2.4. Gas Chromatography-Mass Spectrometry (GC-MS) Analyses GC-MS analyses were carried out on a Shimadzu QP2010 Plus (Shimadzu Corporation, Kyoto, Japan) system equipped with an AOC-20i autosampler. The column consisted of Rtx-5MS (Restek Co., Bellefonte, PA, USA) fused-silica capillary (length = 30 m, i.d. = 0.25 mm, and film thickness = 0.25μm). The electron ionization (EI-MS) mode at 70 eV was employed. Helium (99.999%) at a constant flow of 1.0 mL/min was the carrier gas. The injection volume was 0.1 μL (split ratio of 1 : 10). The injector and the ion source temperatures were set at 240 and 280°C, respectively. The oven temperature program was the same as the one used for GC-FID. The mass spectra were registered with a scan interval of 0.5 s in the mass range of 40 to 600 Da. ### 2.5. Identification of the PN-EO Constituents PN-EO components were identified on the basis of their retention indices relative to a homologous series ofn-alkanes (C8–C24). 
To this end, an Rtx-5MS capillary column was employed under the same operating conditions as in the case of GC. The retention index (RI) of each PE-EO constituent was determined as described previously [20]. The chemical structures were computer-matched with the Wiley 7, NIST 08, and FFNSC 1.2 spectral libraries of the GC-MS data system; their fragmentation patterns were compared with the literature data [21]. ### 2.6. Bacterial Strains and Antimicrobial Assays Thein vitro antimicrobial activity of PN-EO and its major constituents was assessed by minimum inhibitory concentration (MIC) values calculated by means of the broth microdilution method accomplished in 96-well microplates. The following standard ATCC strains were used:Streptococcus salivarius (ATCC 25975),Streptococcus sanguinis (ATCC 10556),Streptococcus mitis (ATCC 49456),Streptococcus mutans (ATCC 25175),Streptococcus sobrinus (ATCC 33478),Enterococcus faecalis (ATCC 4082), andLactobacillus casei (ATCC 11578). Individual 24-h colonies from blood agar (Difco Labs, Detroit, MI, USA) were suspended in 10.0 mL of tryptic soy broth (Difco). Standardization of each microorganism suspension was carried out on a spectrophotometer (Femto, São Paulo, Brazil) operating at a wavelength (λ) of 625 nm, to match the transmittance of 81 (equivalent to 0.5 McFarland scale or 1.5 × 108 CFU/mL). The microorganism suspension was diluted to a final concentration of 5 × 105 CFU/mL. PN-EO was dissolved in DMSO (Merck, Darmstadt, Germany) at 16.0 mg/mL and diluted in tryptic soy broth (Difco), to yield concentrations between 4000 and 3.9 μg/mL. Compounds1 (α-pinene),2 (β-pinene),3 (trans-caryophyllene), and4 (caryophyllene oxide) were purchased from Sigma-Aldrich (St. Louis, MA) and evaluated by means of the same methodology and at the same concentrations as PN-EO. A 1 μM solution of each compound was tested individually. In the case of the mixture1 +2 +3 +4, the constituents were mixed in the same proportion that they occurred in PN-EO. After dilutions, the DMSO concentrations were between 4% and 0.0039% (v/v). Three inoculated wells containing DMSO at concentrations ranging from 4% to 1% were used as negative controls. One inoculated well was included to control the adequacy of the broth for organism growth. One noninoculated well free of antimicrobial agent was also included to assess the medium sterility. Twofold serial dilutions of chlorhexidine dihydrochloride (CHD) (Sigma-Aldrich, St. Louis) were performed in tryptic soy broth (Difco) to achieve concentrations ranging from 5.9 to 0.115 μg/mL. These dilutions were used as positive control. The microplates (96-well) were sealed with parafilm and incubated at 37°C for 24 h. After that, 30 mL of 0.02% resazurin (Sigma-Aldrich, St. Louis, MO, USA) aqueous solution was poured into each microplate reservoir to indicate microorganism viability [22]. The MIC value (i.e., the lowest concentration of a sample capable of inhibiting microorganism growth) was determined as the lowest concentration of the essential oil and or major constituents capable of preventing a colour change of the resazurin solution [23]. Three replicates were conducted for each microorganism. ## 2.1. Plant Material AdultP. neochilus Schltr. (Lamiaceae) leaves were collected at “May 13th Farm” (20°26′S 47°27′W 977 m) in May 2011. The collection site was located near the city of Franca, state of São Paulo, Brazil. This species was identified by Professor Dr. 
Milton Groppo; one voucher specimen (SPFR 12323) was deposited at the Herbarium of the Department of Biology (Herbarium SPFR), University of São Paulo, Brazil. ## 2.2. Essential Oil Extraction Fresh leaves ofP. neochilus were submitted to hydrodistillation in a Clevenger-type apparatus for 3 h. To this end, 1200 g of the plant material was divided into three samples of 400 g each, and 500 mL of distilled water was added to each sample. Condensation of the steam followed by accumulation of the essential oil/water system in the graduated receiver of the apparatus separated the essential oil from the water, which allowed for further manual collection of the organic phase. Anhydrous sodium sulfate was used to remove traces of water. Samples were stored in an amber bottle and kept in the refrigerator at 4°C until analysis. Yields were calculated from the weight of the fresh leaves. ## 2.3. Gas Chromatography (GC-FID) Analyses The essential oil ofP. neochilus (PN-EO) was analyzed by gas chromatography (GC) on a Hewlett-Packard G1530A 6890 gas chromatograph fitted with FID and a data-handling processor. An HP-5 (Hewlett-Packard, Palo Alto, CA, USA) fused-silica capillary column (length = 30 m, i.d. = 0.25 mm, and film thickness = 0.33 μm) was employed. The column temperature was programmed to rise from 60 to 240°C at 3°C/min and then held at 240°C for 5 min. The carrier gas was H2 at a flow rate of 1.0 mL/min. The equipment was set to the injection mode; the injection volume was 0.1 μL (split ratio of 1 : 10). The injector and detector temperatures were 240 and 280°C, respectively. The relative concentrations of the components were obtained by peak area normalization (%). The relative areas were the average of triplicate GC-FID analyses. ## 2.4. Gas Chromatography-Mass Spectrometry (GC-MS) Analyses GC-MS analyses were carried out on a Shimadzu QP2010 Plus (Shimadzu Corporation, Kyoto, Japan) system equipped with an AOC-20i autosampler. The column consisted of Rtx-5MS (Restek Co., Bellefonte, PA, USA) fused-silica capillary (length = 30 m, i.d. = 0.25 mm, and film thickness = 0.25μm). The electron ionization (EI-MS) mode at 70 eV was employed. Helium (99.999%) at a constant flow of 1.0 mL/min was the carrier gas. The injection volume was 0.1 μL (split ratio of 1 : 10). The injector and the ion source temperatures were set at 240 and 280°C, respectively. The oven temperature program was the same as the one used for GC-FID. The mass spectra were registered with a scan interval of 0.5 s in the mass range of 40 to 600 Da. ## 2.5. Identification of the PN-EO Constituents PN-EO components were identified on the basis of their retention indices relative to a homologous series ofn-alkanes (C8–C24). To this end, an Rtx-5MS capillary column was employed under the same operating conditions as in the case of GC. The retention index (RI) of each PE-EO constituent was determined as described previously [20]. The chemical structures were computer-matched with the Wiley 7, NIST 08, and FFNSC 1.2 spectral libraries of the GC-MS data system; their fragmentation patterns were compared with the literature data [21]. ## 2.6. Bacterial Strains and Antimicrobial Assays Thein vitro antimicrobial activity of PN-EO and its major constituents was assessed by minimum inhibitory concentration (MIC) values calculated by means of the broth microdilution method accomplished in 96-well microplates. 
The following standard ATCC strains were used:Streptococcus salivarius (ATCC 25975),Streptococcus sanguinis (ATCC 10556),Streptococcus mitis (ATCC 49456),Streptococcus mutans (ATCC 25175),Streptococcus sobrinus (ATCC 33478),Enterococcus faecalis (ATCC 4082), andLactobacillus casei (ATCC 11578). Individual 24-h colonies from blood agar (Difco Labs, Detroit, MI, USA) were suspended in 10.0 mL of tryptic soy broth (Difco). Standardization of each microorganism suspension was carried out on a spectrophotometer (Femto, São Paulo, Brazil) operating at a wavelength (λ) of 625 nm, to match the transmittance of 81 (equivalent to 0.5 McFarland scale or 1.5 × 108 CFU/mL). The microorganism suspension was diluted to a final concentration of 5 × 105 CFU/mL. PN-EO was dissolved in DMSO (Merck, Darmstadt, Germany) at 16.0 mg/mL and diluted in tryptic soy broth (Difco), to yield concentrations between 4000 and 3.9 μg/mL. Compounds1 (α-pinene),2 (β-pinene),3 (trans-caryophyllene), and4 (caryophyllene oxide) were purchased from Sigma-Aldrich (St. Louis, MA) and evaluated by means of the same methodology and at the same concentrations as PN-EO. A 1 μM solution of each compound was tested individually. In the case of the mixture1 +2 +3 +4, the constituents were mixed in the same proportion that they occurred in PN-EO. After dilutions, the DMSO concentrations were between 4% and 0.0039% (v/v). Three inoculated wells containing DMSO at concentrations ranging from 4% to 1% were used as negative controls. One inoculated well was included to control the adequacy of the broth for organism growth. One noninoculated well free of antimicrobial agent was also included to assess the medium sterility. Twofold serial dilutions of chlorhexidine dihydrochloride (CHD) (Sigma-Aldrich, St. Louis) were performed in tryptic soy broth (Difco) to achieve concentrations ranging from 5.9 to 0.115 μg/mL. These dilutions were used as positive control. The microplates (96-well) were sealed with parafilm and incubated at 37°C for 24 h. After that, 30 mL of 0.02% resazurin (Sigma-Aldrich, St. Louis, MO, USA) aqueous solution was poured into each microplate reservoir to indicate microorganism viability [22]. The MIC value (i.e., the lowest concentration of a sample capable of inhibiting microorganism growth) was determined as the lowest concentration of the essential oil and or major constituents capable of preventing a colour change of the resazurin solution [23]. Three replicates were conducted for each microorganism. ## 3. Results and Discussion This work relied on minimum inhibitory concentration (MIC) values to evaluate the antimicrobial activity of the essential oil ofP. neochilus (PN-EO) against a panel of cariogenic bacteria; chlorhexidine dihydrochloride (CHD) was the positive control. Samples with MIC values lower than 100 μg/mL, between 100 and 500 μg/mL, and between 500 and 1000 μg/mL were considered to be promising, moderately active, and weak antimicrobials, respectively. Samples with MIC values greater than 1000 μg/mL were deemed inactive [11, 24–26].Table1 summarizes the MIC values. PN-EO displayed moderate activity againstS. salivarius (MIC = 250 μg/mL) andS. faecalis (MIC = 250 μg/mL) and significant antimicrobial activity againstStreptococcus sobrinus (MIC = 62.5 μg/mL),Streptococcus sanguinis (MIC = 62.5 μg/mL),Streptococcus mitis (MIC = 31.25 μg/mL),Lactobacillus casei (MIC = 31.25 μg/mL), andStreptococcus mutans (MIC = 3.9 μg/mL). The antimicrobial activity of PN-EO againstS. 
mutans was an interesting result: this microorganism is considered to be the main cariogenic agent [10, 27], and very few natural compounds can inhibit it [26].Table 1 Minimum inhibitory concentration (MIC) values (μg/mL) obtained for the essential oil of P. neochilus (PN-EO), compounds 1, 2, 3, and 4, and the mixture 1 + 2 + 3 + 4 against selected cariogenic bacteria. Microorganisms PN-EO CHDa 1b 2b 3b 4b 1 + 2 + 3 + 4c E. faecalis 250.0 14.8 >4000 >4000 >4000 >4000 >4000 S. salivarius 250.0 7.4 4000 4000 4000 4000 4000 S. mutans 3.9 1.8 >4000 >4000 >4000 >4000 1000 S. mitis 31.3 14.8 4000 4000 4000 4000 4000 S. sobrinus 62.5 1.8 >4000 >4000 >4000 >4000 4000 S. sanguinis 62.5 7.4 4000 >4000 >4000 >4000 >4000 L. casei 31.3 3.7 4000 4000 4000 4000 500 aChlorhexidine dihydrochloride. bFor compounds 1, 2, 3, and 4, the concentration of 4000 μg/mL corresponds to 29.4, 29.4, 19.6, and 18.1 mM, respectively. c1 + 2 + 3 + 4: mixture of α-pinene, β-pinene, trans-caryophyllene, and caryophyllene oxide.Hydrodistillation ofP. neochilus leaves afforded PN-EO in 0.03% ± 0.01 (w/w) yield. Gas chromatography revealed the presence of 31 compounds in PN-EO, namely, fifteen monoterpenes (36.0%), fifteen sesquiterpenes (63.5%), and aliphatic alcohol (0.2%). The major PN-EO constituents were α-pinene (1; 14.1%), β-pinene (2; 7.1%),trans-caryophyllene (3; 29.8%), and caryophyllene oxide (4; 12.8%), as shown in Table 2. The chemical composition of PN-EO differed significantly from the chemical composition ofP. neochilus specimens collected in South Africa, whose major constituents were citronellol (29.0%), citronellyl formate (11.0%), linalool (9.8%), and isomenthone (9.2%) [28], but it resembled the chemical composition previously reported forP. neochilus specimens collected in Brazil [12, 15]. These different chemical compositions may be associated with environmental factors or growing conditions, which can greatly affect the chemical composition of volatile oils [29, 30].Table 2 Chemical composition of the essential oil from the leaves ofP. neochilus as identified by GC/MS. 
Chemical compound RT [min]a RI exp ⁡ b R I lit c Content [%]d Identificatione α-Thujene 5.00 921 924 6.3 RL MS α-Pinene (1) 5.19 929 932 14.1 RL MS Thuja-2,4(10)-diene 5.43 939 941 0.2 RL MS Camphene 5.62 943 947 0.1 RL MS Sabinene 6.21 966 971 1.9 RL MS β-Pinene (2) 6.37 975 977 7.1 RL MS β-Myrcene 6.65 985 988 0.3 RL MS Octan-3-ol 6.90 993 996 0.2 RL MS α-Terpinene 7.53 1015 1016 0.5 RL MS o-Cymene 7.79 1022 1023 0.3 RL MS Limonene 7.94 1026 1027 0.2 RL MS (Z)-β-Ocimene 8.14 1030 1033 0.4 RL MS (E)-β-Ocimene 8.50 1040 1043 1.8 RL MS γ-Terpinene 8.94 1052 1055 1.4 RL MS α-Terpinolene 9.94 1080 1084 0.2 RL MS 4-Terpineol 13.75 1177 1179 1.2 RL MS α-Cubebene 20.78 1341 1344 0.5 RL MS α-Copaene 21.97 1366 1372 1.2 RL MS β-Bourbonene 22.29 1380 1379 1.1 RL MS β-Cubenene 22.50 1378 1384 0.3 RL MS trans-Caryophyllene (3) 23.80 1412 1415 29.8 RL MS α-Humulene 25.25 1448 1450 1.5 RL MS Germacrene D 26.30 1470 1476 6.2 RL MS Eremophilene 27.51 1504 1505 3.9 RL MS α-Amorphene 27.61 1506 1508 0.4 RL MS δ-Cadinene 27.83 1514 1513 1.9 RL MS (E)-Nerolidol 29.63 1554 1559 0.3 RL MS Caryophyllene oxide (4) 30.26 1571 1575 12.8 RL MS Unknown 32.06 — 1622 0.3 — epi-α-Cadinol 32.62 1634 1637 1.5 RL MS δ-Cadinol 32.70 1636 1639 0.8 RL MS α-Cadinol 33.12 1647 1650 1.3 RL MS Monoterpenes hydrocarbons 34.8 Oxygenated monoterpenes 1.2 Sesquiterpenes hydrocarbons 46.8 Oxygenated sesquiterpenes 16.7 Others 0.2 Not identified 0.3 aRT: retention time determined on the Rtx-5MS capillary column. bRI exp ⁡: retention index determined on the Rtx-5MS column relative to n-alkanes (C8–C20). cR I lit: retention index. dCalculated from the peak area relative to the total peak area. eRL: comparison of the retention index with the literature [21]; MS: comparison of the mass spectrum with the literature.The antimicrobial activity of essential oils has been associated with the lipophilicity of their chemical constituents, mainly monoterpenes and sesquiterpenes, which are often the main chemicals thereof [31]. The hydrophobicity of terpenoids would allow these compounds to diffuse across the cell membranes easily and to kill microorganisms by affecting the metabolic pathways or organelles of the pathogen. In addition, synergistic interactions between essential oil components could enhance their activity [31]. For this reason, the major chemical constituents of some essential oils deserve antimicrobial evaluation alone or as a mixture [32–34]. This study evaluated the individual antimicrobial activity of α-pinene (1), β-pinene (2),trans-caryophyllene (3), and caryophyllene oxide (4). Alone, all of these compounds were much less effective against the selected cariogenic bacteria than PN-EO; their MIC values were higher than 4000 μg/mL (Table 1). The antimicrobial activity of a mixture containing compounds1 +2 +3 +4 in the same relative proportion compared to their relative areas in the CG-FID chromatogram of PN-EO displayed moderate activity againstL. casei (MIC = 500 μg/mL) and weak activity againstS. mutans (MIC = 1000 μg/mL), but it was inactive against the other bacteria (MIC > 4000 μg/mL). Although the MIC values obtained for the mixture suggested a very discrete synergism between compounds1,2,3, and4, the mixture1 +2 +3 +4 was much less active than PN-EO. Hence, only the presence of compounds1,2,3, and4 does not account for the antimicrobial activity of PN-EO. 
The antimicrobial activity of essential oils has been associated with the lipophilicity of their chemical constituents, mainly monoterpenes and sesquiterpenes, which are often their main components [31]. The hydrophobicity of terpenoids allows these compounds to diffuse easily across cell membranes and to kill microorganisms by affecting the metabolic pathways or organelles of the pathogen. In addition, synergistic interactions between essential oil components could enhance their activity [31]. For this reason, the major chemical constituents of some essential oils deserve antimicrobial evaluation alone or as a mixture [32–34]. This study evaluated the individual antimicrobial activity of α-pinene (1), β-pinene (2), trans-caryophyllene (3), and caryophyllene oxide (4). Alone, all of these compounds were much less effective against the selected cariogenic bacteria than PN-EO; their MIC values were higher than 4000 μg/mL (Table 1). A mixture containing compounds 1, 2, 3, and 4 in the same relative proportions as their relative areas in the GC-FID chromatogram of PN-EO displayed moderate activity against L. casei (MIC = 500 μg/mL) and weak activity against S. mutans (MIC = 1000 μg/mL), but it was inactive against the other bacteria (MIC > 4000 μg/mL). Although the MIC values obtained for the mixture suggested a very slight synergism between compounds 1, 2, 3, and 4, the mixture 1 + 2 + 3 + 4 was much less active than PN-EO. Hence, the presence of compounds 1, 2, 3, and 4 alone does not account for the antimicrobial activity of PN-EO. In fact, the antimicrobial activity of PN-EO may also be related to the other minor chemical constituents identified in the oil, which may underlie or even increase the activity of its major chemical constituents.

## 4. Conclusions

The essential oil of P. neochilus (PN-EO) displays promising antimicrobial activity against some cariogenic bacteria, including Streptococcus mutans, which is one of the main causative agents of dental caries. Taken together, our results suggest that this essential oil might be promising for the development of new oral care products. Further studies to identify the active chemical constituents of PN-EO are underway.

---
*Source: 102317-2015-06-16.xml*
2015
# The K-Size Edge Metric Dimension of Graphs **Authors:** Tanveer Iqbal; Muhammad Naeem Azhar; Syed Ahtsham Ul Haq Bokhary **Journal:** Journal of Mathematics (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1023175 --- ## Abstract In this paper, a new concept, the k-size edge resolving set of a connected graph G, is defined in the context of resolvability of graphs. Some properties and realizable results on k-size edge resolvability of graphs are studied. The existence of this new parameter in different graphs is investigated, and the k-size edge metric dimension of the path, the cycle, and the complete bipartite graph is computed. It is shown that these families have unbounded k-size edge metric dimension. Furthermore, the k-size edge metric dimension of the graphs Pm□Pn and Pm□Cn for m, n ≥ 3 and of the generalized Petersen graph is determined. It is shown that these families of graphs have constant k-size edge metric dimension. --- ## Body ## 1. Introduction Kelenc et al. [1] recently defined the concept of edge resolvability in graphs and initiated the study of its mathematical properties. The edge metric dimension of a graph G is the minimum cardinality of an edge resolving set, say X, and is denoted by βe(G). An edge metric generator for G of cardinality βe(G) is an edge metric basis for G [1]. This concept of an edge metric generator may have a weakness with respect to the possible uniqueness of the edge identifying a pair of different vertices of the graph. Consider, for example, a network in which a vertex x is identified by a unique edge e of a metric basis X; if at some point the communication between the vertex x and the edge e is blocked, then x can no longer be accessed through the edge metric basis X. To avoid this situation, one can think of defining a metric edge basis in which every vertex can be identified by at least two edges. Inspired by the idea of k-size resolving sets in graphs introduced by Naeem et al. [2], we present a new concept in the context of edge resolvability, called the k-size edge resolving set in graphs.

For an undirected, simple, and connected graph G, the vertex set is V(G) and the edge set is E(G). The distance parameter in graphs has been used to distinguish (resolve or determine) the vertices or edges of G. The distance between a vertex α and an edge β = α1α2 in a graph G is given by dG(β, α) = min{dG(α1, α), dG(α2, α)}. Two edges β1 and β2 are resolved by a vertex α of a graph G whenever dG(α, β1) ≠ dG(α, β2). A set of vertices X is an edge metric generator for a graph G whenever every two edges of G are resolved by some vertex of X. The edge metric dimension of G is the minimum cardinality of such a set X and is denoted by βe(G). An edge metric generator for G of cardinality βe(G) is an edge metric basis for G [1].

Definition 1. A set of vertices W is said to be a k-size edge resolving set of a graph G of order n ≥ 2 if W is an edge resolving set and the size of the subgraph induced by W is equal to k. The k-size edge metric dimension of G, denoted by βkse(G), is the minimum cardinality of a k-size edge resolving set of G. Moreover, a k-size edge resolving set of cardinality βkse(G) is called a kser-set, where k ≥ 1 is a natural number.
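To make Definition 1 concrete, here is a minimal Python sketch (an illustration, not code from the paper) that computes edge codes by breadth-first search and tests whether a candidate vertex set W is a k-size edge resolving set; on the path P5 with W = {l1, l2} it reproduces the value βkse(Pn) = k + 1 of Lemma 1 below for k = 1.

```python
# Minimal check of Definition 1: is W a k-size edge resolving set of G?
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` via BFS."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def k_size_edge_resolving(adj, edges, W):
    """Return (is_edge_resolving, k) for a candidate vertex set W.

    The code of an edge e = xy w.r.t. W is (min(d(x,w), d(y,w)) for w in W);
    W is edge resolving when all edge codes are distinct, and k counts the
    edges of the subgraph induced by W.
    """
    dist = {w: bfs_distances(adj, w) for w in W}
    codes = {tuple(min(dist[w][x], dist[w][y]) for w in W) for x, y in edges}
    k = sum(x in W and y in W for x, y in edges)
    return len(codes) == len(edges), k

# Example: the path P5 with W = {l1, l2}; vertices are numbered 1..5.
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
adj = {v: [] for v in range(1, 6)}
for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)
print(k_size_edge_resolving(adj, edges, [1, 2]))  # (True, 1): beta_1se(P5) <= 2
```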
Now, we discuss the existence of this new parameter in some simple, nontrivial connected graphs. Let H be a connected graph having the vertex set V(H) = {l1, l2, …, lp, s1, s2, …, sq} ∪ {t1, t2} and edge set E(H) = {l1l2, lit1, sjt1, sjt2 : 1 ≤ i ≤ p, 1 ≤ j ≤ q}, as shown in Figure 1. The set W = {l1, l2, …, lp, s2, s3, …, sq} is a 1ser-set for H. It can be seen that, for 3 ≤ p ≤ q, a k-size edge resolving set for the graph H exists only for k = 1. We observe that the set S = {q1, q2, q3} is a minimum 2ser-set for the graph G of Figure 1. Moreover, for the graph G, no 1ser-set exists. We have the following remark from these two examples.

Figure 1. The graphs G and H.

Remark 1. (i) The existence of βkse(G) does not imply the existence of β(k+t)se(G) for t ≥ 1, and vice versa, in any nontrivial connected graph G. (ii) 2 ≤ βkse(G) ≤ n for any simple graph G of order n, where 1 ≤ k ≤ n(n − 1)/2.

In this paper, we compute the k-size edge metric dimension of several well-known families of graphs, the Cartesian product graphs Pm□Pn and Pm□Cn, and the generalized Petersen graphs GP(q, 1). Moreover, we present some realizable results on the k-size edge metric dimension of graphs for k = 1, 2.

## 2. Applications

Resolvability in graphs has diverse applications related to the navigation of robots in networks [3], pattern identification, and image processing. It also has many applications in pharmaceutical chemistry and drugs [4–6]. A few interesting connections between metric generators in graphs and the mastermind game or the coin weighing problem have been presented in [7]. Other important results about the metric and edge metric dimension can be found in [8–12].

## 3. Existence of K-Size Edge Resolving Sets in Well-Known Classes of Graphs

We first study the existence of this new parameter in some basic families of graphs and compute their k-size edge metric dimension.

Lemma 1. For a path graph G = Pn with n ≥ 2 and 1 ≤ k ≤ n − 1, βkse(G) = k + 1.

Proof. Consider a path graph Pn with vertex set V(Pn) = {l1, l2, …, ln} and edge set E(Pn) = {lili+1 : 1 ≤ i ≤ n − 1}. Let W = {l1, l2, l3, …, lk+1} be a subset of the vertex set of Pn. The code of each edge lsls+1 (1 ≤ s ≤ k) with respect to W is distinct because each such edge has a 0 entry at its sth and (s + 1)th places. The code of the edge lsls+1 is cW(lsls+1) = (s − 1, s − 2, s − 3, …, s − (k + 1)) for k + 2 ≤ s ≤ n − 1, and cW(lsls+1) = (s − 1, s − 2, s − 3, …, 2, 1, 0) for s = k + 1. Thus, W is an edge resolving set for Pn. Since the size of Pn is n − 1, the subgraph induced by W has k edges. Therefore, W is a k-size edge resolving set for Pn. Hence, βkse(Pn) = k + 1 for 1 ≤ k ≤ n − 1.

Lemma 2. For a simple and connected graph G of order n ≥ 6, βkse(G) = k + 1 if G = Cn and 1 ≤ k ≤ n − 2.

Proof. Let Cn : t1, t2, t3, …, tn, t1 be a cycle graph of order n ≥ 6, and let W = {t1, t2, t3, …, tk+1}. We define l = ⌊n/2⌋. The code of each edge tptp+1 for 1 ≤ p ≤ k (when 1 ≤ k ≤ l) and for 1 ≤ p ≤ l (when l + 1 ≤ k ≤ n − 2) with respect to W has a 0 entry at its pth and (p + 1)th places. The code of the edge tptp+1 is cW(tptp+1) = (p − 1, p − 2, p − 3, …, p − (k + 1)) for k + 1 ≤ p ≤ l (1 ≤ k ≤ l − 1). The qth and (q + 1)th entries of the code of the edge tptp+1 with p = l + q are equal to l − 1 when n is even (1 ≤ q ≤ k for 1 ≤ k ≤ l; 1 ≤ q ≤ l for l + 1 ≤ k ≤ n − 2) and are equal to l − 2 and l − 1, respectively, when n is odd (1 ≤ q ≤ k for 1 ≤ k ≤ l; 1 ≤ q ≤ l − 1 for l + 1 ≤ k ≤ n − 2). The code of each remaining edge tptp+1 is (n − p, n + 1 − p, n + 2 − p, …, n + k − p) for l + k + 1 ≤ p ≤ n (for 1 ≤ k ≤ l − 1 when n is even and for 1 ≤ k ≤ l − 2 when n is odd). We note that the codes of all the edges are distinct. Therefore, W is a k-size edge resolving set for Cn. Hence, βkse(Cn) = k + 1 for 1 ≤ k ≤ n − 2.

The k-size edge resolving sets of a complete bipartite graph Ks1,s2 exist only for the values of k given in the following result.

Lemma 3. For the complete bipartite graph Ks1,s2 with s1 = s2 and s1, s2 ≥ 4,

(1) βkse(Ks1,s2) = 2s1 − 2 if k = (s1 − 1)(s2 − 1); 2s1 − 1 if k = s1s2 − s1 or k = s1s2 − 1; 2s1 if k = s1s2,

while, for s1 ≥ s2 ≥ 1, we have

(2) βkse(Ks1,s2) = s1 + k1 − 1 if k = k1(s1 − 1), 1 ≤ k1 ≤ s1 − 1; s1 + k2 if k = k2s1, 1 ≤ k2 ≤ s1 − 1.

Observation. A k-size edge resolving set need not have at least k vertices. To justify this observation, we consider the graph G = K4.
The set W = {f1, f2, f3, f4} is the whole vertex set of G = K4. One can observe that V(K4) is a 6-size edge resolving set for K4. Therefore, β6se(K4) = 4.

## 4. K-Size Edge Metric Dimension of the Cartesian Product of Graphs

Let G = Pm□Pn be the Cartesian product of two path graphs Pm and Pn for m, n ≥ 4. Let E1(G) = {rghrg(h+1) : 1 ≤ g ≤ m, 1 ≤ h ≤ n − 1} be the set of horizontal edges and E2(G) = {sghs(g+1)h : 1 ≤ g ≤ m − 1, 1 ≤ h ≤ n} be the set of vertical edges of Pm□Pn. The graph P5□P6 is shown in Figure 2. To find distances, we embed G into the xy plane so that each vertex is an ordered pair; the corner vertices of G are (0, 0), (m − 1, 0), (0, n − 1), and (m − 1, n − 1). In the next two lemmas, we discuss the size 1, size 2, and size 3 edge metric dimensions of G = Pm□Pn and G = Pm□Cn for m, n ≥ 5.

Figure 2. G = P5□P6.

Lemma 4. Let G be the Cartesian product graph Pm□Pn; then we have

(3) βkse(Pm□Pn) = 3 if k = 1; 4 if k = 2; 5 if k = 3.

Proof. Here, we prove this result for k = 3 only. Consider W = {a1, a2, a3, a4, a5}, where a1 = (0, 0), a2 = (1, 0), a3 = (m − 1, 0), a4 = (0, 1), and a5 = (0, 2); we prove that W is a 3ser-set for Pm□Pn. Note that d((r1, s1), (r2, s2)) = |r1 − r2| + |s1 − s2| is the distance between any two vertices of Pm□Pn. Let α = (r1, s1)(r2, s2) be an edge. The distances of the edge α from the vertices of W are calculated as follows: d(a1, α) = r1 + s1 and d(a3, α) = (m − 1 − r2) + s1; d(a2, α) = r1 + s1 − 1 when α ∈ {rghrg(h+1), sghs(g+1)h : 1 ≤ g ≤ m, 2 ≤ h ≤ n}, d(a2, α) = r1 + s1 when α ∈ {rghrg(h+1) : 1 ≤ g ≤ m, h = 1}, and d(a2, α) = r1 + s1 + 1 when α ∈ {sghs(g+1)h : 1 ≤ g ≤ m − 1, h = 1}; d(a4, α) = r1 + s1 + 1 whenever α ∈ {rghrg(h+1) : g = 1, 1 ≤ h ≤ n}, d(a4, α) = r1 + s1 − 1 whenever α ∈ {rghrg(h+1), sghs(g+1)h : 2 ≤ g ≤ m, 1 ≤ h ≤ n}, and d(a4, α) = r1 + s1 whenever α ∈ {sghs(g+1)h : g = 1, 1 ≤ h ≤ n}; d(a5, α) = r1 + s1 + 2 whenever α ∈ {rghrg(h+1) : g = 1, 1 ≤ h ≤ n}, d(a5, α) = r1 + s1 whenever α ∈ {rghrg(h+1) : g = 2, 1 ≤ h ≤ n}, d(a5, α) = r1 + s1 + 1 whenever α ∈ {sghs(g+1)h : g = 1, 1 ≤ h ≤ n}, d(a5, α) = r1 + s1 − 1 whenever α ∈ {sghs(g+1)h : g = 2, 1 ≤ h ≤ n}, and d(a5, α) = r1 + s1 − 2 whenever α ∈ {rghrg(h+1), sghs(g+1)h : 3 ≤ g ≤ m, 1 ≤ h ≤ n}. Suppose, to the contrary, that two edges α1 = (r1, s1)(r2, s2) and α2 = (t1, u1)(t2, u2) are at the same distance from every vertex of W. Then we have the following equalities:

(4) r1 + s1 = t1 + u1, (m − 1 − r2) + s1 = (m − 1 − t2) + u1.

These equalities imply that s1 − u1 = r2 − t2, and it follows that r1 + r2 = t1 + t2. In both cases (r1 = r2 or r1 = r2 − 1), we get r1 = t1 and r2 = t2. The equality r1 = t1 together with r1 + s1 = t1 + u1 implies that s1 = u1. Each of the values s2 and u2 equals either s1 or s1 + 1; if they took distinct values, one of α1 or α2 would not be an edge. So, finally, we have α1 = α2, which is a contradiction. Therefore, W is an edge resolving set for Pm□Pn. Moreover, the subgraph induced by W has 3 edges. Hence, we conclude that β3se(Pm□Pn) = 5. Similarly, we can prove the result for k = 1, 2. Hence, we conclude the result.

We present the following result on the k-size edge metric dimension of the Cartesian product graph Pm□Cn without proof.

Lemma 5. Let G be the Cartesian product graph Pm□Cn; then we have

(5) βkse(Pm□Cn) = 3 if k = 1, 2; 4 if k = 3.
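Lemmas 4 and 5 can be spot-checked computationally on small instances. The following brute-force Python sketch (illustrative, not the authors' code; the search cap max_size is an assumption to keep the run short) looks for a smallest vertex set that is edge resolving and induces exactly k edges:

```python
# Brute-force spot check of Lemma 4 on the small grid P5 x P5 (illustrative only).
from collections import deque
from itertools import combinations

def grid(m, n):
    """Adjacency of Pm x Pn on vertices (g, h) with 4-neighbour edges."""
    adj = {(g, h): [] for g in range(m) for h in range(n)}
    for g, h in list(adj):
        for w in ((g + 1, h), (g, h + 1)):
            if w in adj:
                adj[(g, h)].append(w)
                adj[w].append((g, h))
    return adj

def bfs(adj, source):
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def beta_kse(adj, k, max_size=6):
    """Smallest |W| such that W edge-resolves the graph and induces k edges."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    vertices = list(adj)
    for size in range(2, max_size + 1):
        for W in combinations(vertices, size):
            if sum(u in W and v in W for u, v in edges) != k:
                continue  # wrong induced size
            dist = [bfs(adj, w) for w in W]
            codes = {tuple(min(d[u], d[v]) for d in dist) for u, v in edges}
            if len(codes) == len(edges):  # all edge codes distinct
                return size
    return None

adj = grid(5, 5)
print([beta_kse(adj, k) for k in (1, 2, 3)])  # Lemma 4 predicts [3, 4, 5]
```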
## 5. K-Size Edge Metric Dimension of Generalized Petersen Graphs

The generalized Petersen graph GP(q, s) is a 3-regular graph with 2q vertices and 3q edges. The vertex set of GP(q, s) is V(GP(q, s)) = {ct, dt : 1 ≤ t ≤ q}, and the edge set is E(GP(q, s)) = {ctct+1, ctdt, dtdt+s : 1 ≤ t ≤ q, 1 ≤ s < q/2}, with indices taken modulo q. The edges ctdt for 1 ≤ t ≤ q are called the spokes of GP(q, s), and the outer cycle of GP(q, s) is called its principal cycle.

We compute the k-size edge metric dimension of GP(q, s) for k = 1, 2 and s = 1 in the following two sections. First, we find an upper bound for the 1-size edge metric dimension of GP(q, 1).

Lemma 6. For all q ≥ 7, we have β1se(GP(q, 1)) ≤ 3.

Proof. Let W = {c1, c3, d1} be a set of vertices of GP(q, 1).

Case 1. When q is odd. Here, we define t1 = (q + 1)/2. There is only one edge in the subgraph induced by W, and the codes of all the edges of GP(q, 1) are given in Tables 1–3.

Table 1. Codes of outer edges of GP(q, 1) when q is odd.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| c1c2 | 0 | 1 | 1 |
| ctct+1: 2 ≤ t ≤ 3 | t − 1 | 0 | t |
| ctct+1: 4 ≤ t ≤ t1 | t − 1 | t − 3 | t |
| ctct+1: t = 1 + t1 | q − t | q − t | 1 + q − t |
| ctct+1: 2 + t1 ≤ t ≤ q | q − t | 2 + q − t | 1 + q − t |

Table 2. Codes of spokes of GP(q, 1) when q is odd.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| c1d1 | 0 | 2 | 0 |
| c2d2 | 1 | 1 | 1 |
| c3d3 | 2 | 0 | 2 |
| ctdt: 4 ≤ t ≤ t1 | t − 1 | t − 3 | t − 1 |
| ctdt: t = 1 + t1 | 1 + q − t | q − t | 1 + q − t |
| ctdt: t = 2 + t1 | 1 + q − t | 2 + q − t | 1 + q − t |
| ctdt: 3 + t1 ≤ t ≤ q | 1 + q − t | 3 + q − t | 1 + q − t |

Table 3. Codes of inner edges of GP(q, 1) when q is odd.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| d1d2 | 1 | 2 | 0 |
| d2d3 | 2 | 1 | 1 |
| dtdt+1: 3 ≤ t ≤ t1 | t | t − 2 | t − 1 |
| dtdt+1: t = 1 + t1 | 1 + q − t | 1 + q − t | q − t |
| dtdt+1: 2 + t1 ≤ t ≤ q | 1 + q − t | 3 + q − t | q − t |

Case 2. When q is even. We define t2 = q/2. The subgraph induced by W again has only one edge, and the codes of all the edges of GP(q, 1) are given in Tables 4–6.

Table 4. Codes of outer edges of GP(q, 1) when q is even.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| c1c2 | 0 | 1 | 1 |
| ctct+1: 2 ≤ t ≤ 3 | t − 1 | 0 | t |
| ctct+1: 4 ≤ t ≤ t2 | t − 1 | t − 3 | t |
| ctct+1: t = 1 + t2 | q − t | q − t − 1 | 1 + q − t |
| ctct+1: t = 2 + t2 | q − t | 1 + q − t | 1 + q − t |
| ctct+1: 3 + t2 ≤ t ≤ q | q − t | 2 + q − t | 1 + q − t |

Table 5. Codes of spokes of GP(q, 1) when q is even.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| c1d1 | 0 | 2 | 0 |
| c2d2 | 1 | 1 | 1 |
| c3d3 | 2 | 0 | 2 |
| ctdt: 4 ≤ t ≤ t2 + 1 | t − 1 | t − 3 | t − 1 |
| ctdt: t = 2 + t2 | t2 − 1 | t2 − 1 | t2 − 1 |
| ctdt: 3 + t2 ≤ t ≤ q | 1 + q − t | 3 + q − t | 1 + q − t |

Table 6. Codes of inner edges of GP(q, 1) when q is even.

| d(·, ·) | c1 | c3 | d1 |
| --- | --- | --- | --- |
| d1d2 | 1 | 2 | 0 |
| d2d3 | 2 | 1 | 1 |
| dtdt+1: 3 ≤ t ≤ t2 | t | t − 2 | t − 1 |
| dtdt+1: t = 1 + t2 | 1 + q − t | q − t | q − t |
| dtdt+1: t = 2 + t2 | 1 + q − t | 2 + q − t | q − t |
| dtdt+1: 3 + t2 ≤ t ≤ q | q + 1 − t | 3 + q − t | q − t |

We observe that the codes of all the edges in both cases are distinct. So, W is a 1-size edge resolving set for GP(q, 1). Hence, we have β1se(GP(q, 1)) ≤ 3.

Next, we compute an upper bound for the size 2 edge metric dimension of the generalized Petersen graphs GP(q, 1).

Lemma 7. For all q ≥ 5, β2se(GP(q, 1)) ≤ 3.

Proof. Let W = {c1, c2, d1} be a set of vertices of GP(q, 1). Define k = ⌊q/2⌋. The codes of all the edges of GP(q, 1) with respect to W are given in Tables 7–9. For t = 1 + k, the code of the outer edge ctct+1 is cW(ctct+1) = (q − t, q − t, q − t + 1) when q is even and cW(ctct+1) = (q − t, q − t + 1, q − t + 1) when q is odd. One can check that the codes of all the edges are distinct. So, W is a size 2 edge resolving set for GP(q, 1). Hence, we have β2se(GP(q, 1)) ≤ 3.

Table 7. Codes of outer edges of GP(q, 1).

| d(·, ·) | c1 | c2 | d1 |
| --- | --- | --- | --- |
| c1c2 | 0 | 1 | 1 |
| ctct+1: 2 ≤ t ≤ k | t − 1 | t − 2 | t |
| ctct+1: k + 2 ≤ t ≤ q | q − t | 1 + q − t | 1 + q − t |

Table 8. Codes of spokes of GP(q, 1).

| d(·, ·) | c1 | c2 | d1 |
| --- | --- | --- | --- |
| c1d1 | 0 | 1 | 0 |
| ctdt: 2 ≤ t ≤ k + 1 | t − 1 | t − 2 | t − 1 |
| ctdt: k + 2 ≤ t ≤ q | q − t + 1 | q − t + 2 | q − t + 1 |

Table 9. Codes of inner edges of GP(q, 1).

| d(·, ·) | c1 | c2 | d1 |
| --- | --- | --- | --- |
| d1d2 | 1 | 1 | 0 |
| dtdt+1: 2 ≤ t ≤ k | t | t − 1 | t − 1 |
| dtdt+1: k + 2 ≤ t ≤ q | q − t + 1 | q − t + 2 | q − t |
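The case analysis behind Tables 7–9 can be cross-checked mechanically. The sketch below (illustrative; the labels follow the paper's ct/dt notation with 0-based indices) builds GP(q, 1), computes every edge's code with respect to W = {c1, c2, d1}, and reports whether all codes are distinct, as Lemma 7 asserts:

```python
# Cross-check of Lemma 7 / Tables 7-9: edge codes of GP(q, 1) w.r.t. {c1, c2, d1}.
from collections import deque

def gp_q1(q):
    """Adjacency of GP(q, 1): outer cycle ct, inner cycle dt, spokes ctdt."""
    adj = {("c", t): [] for t in range(q)}
    adj.update({("d", t): [] for t in range(q)})
    def link(u, v):
        adj[u].append(v)
        adj[v].append(u)
    for t in range(q):
        link(("c", t), ("c", (t + 1) % q))  # outer edge
        link(("d", t), ("d", (t + 1) % q))  # inner edge (s = 1)
        link(("c", t), ("d", t))            # spoke
    return adj

def bfs(adj, source):
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

q = 9                                        # any q >= 5 works per Lemma 7
adj = gp_q1(q)
W = [("c", 0), ("c", 1), ("d", 0)]           # c1, c2, d1 in the paper's labels
dists = [bfs(adj, w) for w in W]
edges = {(u, v) for u in adj for v in adj[u] if u < v}
codes = [tuple(min(d[u], d[v]) for d in dists) for u, v in edges]
print(len(set(codes)) == len(edges))         # expected True: W edge-resolves GP(q, 1)
```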
In the next lemma, we give the lower bound for the k-size edge metric dimension of GP(q, 1) for k = 1, 2.

Lemma 8. For all q ≥ 7, we have βkse(GP(q, 1)) ≥ 3.

Proof. First, we show that there is no edge resolving set of GP(q, 1) consisting of two vertices. Suppose, to the contrary, that X = {A, B} is a set of two vertices of V(GP(q, 1)). Then we have the following three possibilities.

Case 1. Both vertices A and B are on the principal cycle. Fix a vertex, say A = c1; then B is any other vertex ct (2 ≤ t ≤ q). For 2 ≤ t ≤ k − 1, we have cW(cq−1cq) = cW(d1dq) = (1, t). For t = k, we have cW(c1cq) = cW(c1d1) = (0, k − 1). For t = k + 1, we have cW(c2c3) = cW(cq−1cq) = (1, k − 2) when q is even, while cW(c2d2) = cW(d1dq) = (1, k − 1) when q is odd. For t = k + 2, we have cW(c2c3) = cW(d1dq) = (1, k − 1) when q is even, while cW(c2c3) = cW(d1d2) = (1, q − t + 2) when q is odd. For k + 3 ≤ t ≤ q, we have cW(c2c3) = cW(d1d2) = (1, q − t + 2).

Case 2. Both A and B are inner vertices. Fix a vertex, say A = d1; then B is any other vertex dt (2 ≤ t ≤ q). For 2 ≤ t ≤ k, we have cW(d1dq) = cW(c1d1) = (0, t − 1). For t = 1 + k, we have cW(d3d4) = cW(dq−2dq−1) = (2, k − 3) when q is even, while cW(d3d4) = cW(cq−1dq−1) = (2, k − 3) when q is odd. For k + 2 ≤ t ≤ q, we have cW(c1d1) = cW(d1d2) = (0, q − t + 1).

Case 3. A is a vertex of the principal cycle and B is an inner vertex. Fix a vertex, say A = c1; then B is any inner vertex dt (1 ≤ t ≤ q). For t = 1, we have cW(c1cq) = cW(c1c2) = (0, 1). For 2 ≤ t ≤ k, we have cW(c1c2) = cW(c1d1) = (0, t − 1). For t = 1 + k, we have cW(c1cq) = cW(c1d1) = (0, q − t + 1) when q is odd, while cW(c1c2) = cW(c1d1) = (0, t − 1) when q is even. For k + 2 ≤ t ≤ q, we have cW(c1cq) = cW(c1d1) = (0, q − t + 1). From the above three cases, we conclude that

(6) βe(GP(q, 1)) ≥ 3.

So, there is no size 1 or size 2 edge resolving set of cardinality 2 in GP(q, 1). Therefore, βkse(GP(q, 1)) ≥ 3 for k = 1, 2.

From Lemmas 6–8, we conclude the following main result.

Theorem 1. For all q ≥ 7, we have βkse(GP(q, 1)) = 3 when k = 1, 2.

## 6. Bounds and Some Realizable Results on βkse(G)

From the earlier discussion, one fundamental question arises: is the (k + 1)-size edge metric dimension strictly greater than the k-size edge metric dimension? To answer this question, we consider the following two examples.

Consider the graphs G1 and G2 depicted in Figure 3. It can be observed that the set W = {t5, t6, t7} is an edge resolving set of minimum cardinality for G1. Moreover, W1 = {t1, t5, t6, t7} is a 1ser-set, W2 = {t5, t2, t6, t7} is a 2ser-set, and W3 = {t1, t2, t5, t7} is a 3ser-set for G1. Thus, βe(G1) = 3 and β1se(G1) = β2se(G1) = β3se(G1) = 4.

Figure 3. The graphs G1 and G2.

For the graph G2 ≅ GP(5, 1), the set S = {p1, p2, q1} is an edge resolving set of minimum cardinality, where the outer vertices are p1, …, p5 and the inner vertices are q1, …, q5. It can easily be seen that the sets S1 = {p1, p2, p4}, S2 = {q1, q2, p1}, and S3 = {p1, p2, p3, p4} are a 1ser-set, a 2ser-set, and a 3ser-set, respectively. Thus, βe(GP(5, 1)) = β1se(GP(5, 1)) = β2se(GP(5, 1)) = 3 and β3se(GP(5, 1)) = 4.

From these two examples, it can be observed that if βkse(G) exists for 1 ≤ k ≤ t in a nontrivial connected graph G of order n, then

(7) βe(G) ≤ β1se(G) ≤ β2se(G) ≤ β3se(G) ≤ ⋯ ≤ βtse(G).

However, the following example shows that the above inequality is not true in general.

Example 1. Let G be a graph constructed from the two graphs C5 and K4. The vertex set of G is {l1, l2, l3, l4, l5, p1, p2} and the edge set is {l1l2, l2l3, l3l4, l4l5, l5l1, l1p1, p1p2, p2l5, p1l5, p2l1}, as shown in Figure 4. The set A = {l1, l5, p1, p2} is a 6ser-set of G. However, there is no set B of cardinality 4 that resolves all the edges of G and has size 5. Instead, the set W = {l1, l2, l3, l5, p1} is a 5ser-set of minimum cardinality for G. Hence, β5se(G) > β6se(G).

Next, we present some realizable results for 1ser-sets and 2ser-sets in graphs.

Figure 4. The graph in which β5se(G) > β6se(G).

Theorem 2. For a nontrivial, simple, and connected graph G of order t, we have β1se(G) = t if and only if G = P2.

Proof. Lemma 1 implies that if G = P2, then β1se(G) = 2 = t. Conversely, assume that G is a connected graph of order t ≥ 2 and β1se(G) = t, and let W be a 1ser-set of cardinality t. Since the subgraph induced by W has only one edge, it follows that |W| = |V(G)| = 2. Thus, G is a path graph of order 2.

The following result on the complete graph Ks was presented in [1].

Lemma 9 (see [1]). For any integer s ≥ 2, βe(Ks) = s − 1.

Theorem 3. Let G be a complete graph of order s ≥ 3; then β1se(G) exists if and only if G = K3. Moreover, β1se(G) = 2.

Proof. One can observe that if G is the complete graph K3, then the result holds. Conversely, let G be a complete graph of order s ≥ 4. By Lemma 9, any edge resolving set of G induces a subgraph with more than one edge. Therefore, β1se(G) does not exist. Hence, the proof is complete.

Theorem 4. Let G be a nontrivial connected graph of order s ≥ 3; then β1se(G) = s − 1 if and only if G = P3 or G = K3 ≅ C3.

Proof. Let G = P3 or G = K3 ≅ C3. From Lemmas 1 and 2 and Theorem 3, we have β1se(G) = 2 = s − 1. Conversely, assume that G is a connected graph of order s ≥ 3 and β1se(G) = s − 1. For s = 3, it is simple to prove that G = P3 or G = K3 ≅ C3. Now, we prove the result for s ≥ 4.
For this, suppose W is a 1ser-set of G of cardinality s − 1. It is easy to see that |V(G) − W| = 1; therefore, since G is connected, the subgraph induced by W surely has more than one edge, a contradiction. It follows that |W| = |V(G)| = 3, and thus G = P3 or G = K3 ≅ C3.

Remark 2. A two-size edge resolving set exists in a complete bipartite graph G = Ks,t if and only if G ∈ {K2,1, K2,2, K2,3, K3,1}. Moreover, β2se(K2,1) = 3 = s + t, β2se(K3,1) = β2se(K2,2) = 3 = s + t − 1, and β2se(K2,3) = 3 = s + t − 2.

Theorem 5. For a simple and connected graph G of order s ≥ 3, β2se(G) = s if and only if G = P3 ≅ K2,1.

Proof. By Lemma 1 and Remark 2, if G = P3 ≅ K2,1, then β2se(G) = 3 = s. Conversely, suppose that β2se(G) = s for a connected graph G of order s ≥ 3, and let W be a 2ser-set of cardinality s. Since G is connected and the subgraph induced by W has two edges, |W| = |V(G)| = 3; thus G = P3 ≅ K2,1.

Now, we give a sufficient condition for a pair (q, p) of positive integers to be realizable as the order and the k-size edge metric dimension of a connected graph, respectively.

Theorem 6. For a pair (q, p) of positive integers with q ≥ p ≥ 2, there exists a connected graph G of order q with βkse(G) = p, where k + 1 ≤ p ≤ q.

Proof. For k ≥ 1, we consider the following two cases according to the choice of p.

(i) For p = k + 1, let G = Pq be a path graph of order q ≥ 2. Lemma 1 implies that βkse(Pq) = p, where 1 ≤ k ≤ q − 1. For p = q, let G = Cq be a cycle of order q ≥ 3; then βkse(Cq) = p, where k = p − 1.

(ii) For k + 2 ≤ p ≤ q − 1, let H be a connected graph of order q ≥ 6 obtained from the path Pq−p : e1, e2, e3, …, eq−p (q − p ≥ 2), where p ≥ 4, the path Pk+1 : f1, f2, f3, …, fk+1 (1 ≤ k ≤ p − 3), and p − k − 1 vertices g1, g2, …, gp−k−1, with fi ∼ e1 and gj ∼ e1 for 1 ≤ i ≤ k + 1 and 1 ≤ j ≤ p − k − 1, as shown in Figure 5. First, we prove that βkse(H) ≤ p. For this, let S = {f1, f2, …, fk+1, g1, g2, …, gp−k−1} ⊆ V(H). Since the subgraph induced by S has k edges and cS(eiei+1) = (i, i, i, …, i) for each 1 ≤ i ≤ q − p − 1, it follows that S is a kser-set for H, and hence βkse(H) ≤ p. Now, to prove βkse(H) ≥ p, suppose to the contrary that βkse(H) ≤ p − 1 and that W is a kser-set for H with |W| ≤ p − 1. If W contains at least p − k − 2 of the vertices gi, say gi (1 ≤ i ≤ p − k − 2), and the size of W is equal to k, it follows that W = V(Pk+1) ∪ {g1, g2, …, gp−k−2}. However, we still have cW(e1e2) = cW(e1gp−k−1) = (1, 1, 1, …, 1). Thus, βkse(H) ≥ p, which yields βkse(H) = p.

Figure 5. The graph H.

## 7. Conclusions

In this work, we have introduced a new variant, namely, the k-size edge metric dimension of graphs, and initiated its study by finding the k-size edge metric dimension of several well-known classes of graphs. We have characterized the graphs having k-size edge metric dimension n and n − 1. Moreover, we have computed the k-size edge metric dimension of the Cartesian product graphs Pm□Pn and Pm□Cn for k = 1, 2, 3. In addition, we have proved that the k-size edge metric dimension of the generalized Petersen graphs GP(q, 1) is 3 for k = 1, 2. Some realizable results on k-size edge resolvability are also presented in this paper [8–12].

---
*Source: 1023175-2020-12-31.xml*
2020
# Research on Mathematical Model of Smart Service for the Elderly in Small- and Medium-Sized Cities Based on Image Processing **Authors:** Chunmei Feng **Journal:** Scientific Programming (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1023187 --- ## Abstract Image processing technology uses computers, cameras, and related technologies to compute on and process images, making them clearer and more convenient for quick information extraction. Image processing technology has entered an all-round development stage, and it plays a great role in the components of the smart service model for the elderly. Many countries in the world have now entered the aging stage, but elderly-care equipment is relatively backward and personnel management is not standardized. Based on these problems, this paper studies an intelligent model of elderly care in small- and medium-sized cities using image recognition methods. Based on an analysis of the present situation of smart elderly care, an intelligent system is proposed, which addresses the defects of elderly-care facilities and the insufficient comprehensive management of medical staff in some small- and medium-sized cities. This system has an RFID positioning system and an APP client, which can ensure the privacy of the elderly. Through real-time identification of images in elderly services, the rationality and layout optimization of existing elderly-care facilities are analyzed. A mathematical model is used to detect the regularity of participants' daily activities. The image experiment results show that the prediction accuracy is over 90%, and the optimal prediction effect is obtained. In addition, a questionnaire survey was conducted among many elderly people over 50 years old to investigate their willingness to use smart elderly-care products. --- ## Body ## 1. Introduction With the development of computers, image technology has been applied to all aspects of human life. In the construction of smart models for the elderly in certain small- and medium-sized cities, image processing technology has also played a great role. Literature [1] introduces the development of image processing technology and its connection with the smart service model for the elderly. We analyzed the results of previous research and related data. Because of physical aging, the social and physiological functions of the elderly decline with age, so they encounter various difficulties in daily life. Based on the daily activity habits of the elderly, the article creates scenes and provides specific solutions for the activities of the elderly at home. Literature [2] discusses the views and suggestions of service users and their families on alternatives to restraint and isolation, as well as the conditions under which these can be used for adults in short-term psychiatric care and residents in long-term care. Three nursing institutions related to middle-aged and elderly family members and five focus groups related to service quality were set up. These focus groups discussed and analyzed the records and found that more companionship and care for the elderly are the most effective way to address their mental disorders: listen to the inner thoughts of the elderly and give them more love and company. Immature elderly-care policies in big cities result in a high poverty rate among the elderly and a series of elderly-care problems.
Literature [3] studies providing the elderly with location-based, customized job-search services to actively support their participation in economic activities; it provides customized positions for the elderly according to their residence, physical condition, and working conditions, so that they can choose suitable positions according to their interests and living conditions, contributing to the expansion of the employment market for the elderly. Literature [4] aims to develop a smart healthcare glove system (SHGS) and a transcutaneous electrical nerve stimulator (TENS) based on electronic textiles as healthcare equipment for elderly hypertension. Using image processing technology, pictures generated from the patient's pulse and blood pressure can be transmitted to the computer for better observation. Observations of the blood pressure and pulse of multiple patients wearing the SHGS show that it can not only be used by the elderly to lower blood pressure and improve irregular blood circulation but can also be used by hypertensive patients of any age. The goal of literature [5] is to deploy an intelligent nursing service integration agent to provide individualized, integrated care for each elderly person. Every elderly person's needs and preferences are different, and even a robot with strong learning ability needs to spend a lot of time learning them. In order to provide basic nursing tasks, nursing templates are deployed in the cloud, with different scenes for each role. Literature [6] provides a theoretical basis for combining biophilic design and smart home technology and provides a framework for smart home services to ensure that elderly residents can have a biophilic experience. Smart home technology is not a merely mechanized achievement: it simulates nature and creates a high quality of life, which can support the physical and mental health of the elderly. It not only provides effective information for the elderly smart home industry but also contributes greatly to the trend of smart home services. Literature [7] studies the contradiction between the design of smart home products under the medical care model and the daily needs of the elderly and conducts case analyses of smart home products for the elderly. The research results show that smart homes can maximize the satisfaction of the elderly. Literature [8] introduces WITSCare, a research prototype of a web-based IoT smart home system designed using the intelligence of the Internet of Things, which can help the elderly live safely and independently at home. Literature [9] proposes an application architecture suitable for such online service network platforms, designed with advanced concepts such as server-side JavaScript, NoSQL databases, and machine learning. The rapid increase in the number of elderly people has brought great pressure on our country's medical system and society, and providing the elderly with a high-quality living environment has become an increasingly popular topic. Literature [10] proposes a conceptual model of an integrated and personalized system to solve this problem. Three healthcare experts were interviewed to better explore the needs of the elderly, and a conceptual model was customized for the elderly to live independently, centering on the concepts of comfort, safety, and environmental protection and providing good protection for the elderly living alone.
The purpose of literature [11] is to propose services based on behavioral patterns. The range of activities of the elderly is limited, and the living room is an important area, so a method of developing a smart living room specifically designed for the elderly is proposed. A number of behavior patterns of the elderly, including those for preventing falls and injuries, are put forward, and safety and health issues are explained to provide a comfortable living environment in all respects. There are many hidden safety hazards for the elderly living at home alone; to ensure their safety, hazards such as gas leaks in the kitchen, fire, and falls in the bathroom must be considered. Literature [12] deals with hazards and intelligent services in the kitchen according to the behavior of the elderly. Based on multiple kinds of accidents, such as fires and gas leaks, a smart service was set up that can not only detect safety problems at home but also provides functions such as automatic alarms and automatic ventilation to further protect the safety of the elderly at home. Studies have shown that the most common disease in the elderly is cardiovascular disease, which poses a great threat to their health. Literature [13] develops smart clothes that record a 3-lead electrocardiogram (ECG); using image processing technology, the analyzed data are quickly rendered as pictures. The system is composed of fiber clothes with electrodes, which acquire physiological signals and can analyze health data. Experimental results show that the accuracy of the ECG is as high as 86.82%. Elderly care proceeds in two main ways, family care and institutional care, and family care is the mode chosen by most people. The deficiency of traditional family care is that it is generally difficult for the elderly to receive professional and meticulous care, medical services, and spiritual and cultural services within the family. Against the background of "421 structure" families becoming the mainstream of urban society, social competition has intensified and the pace of life has accelerated. Social labor costs and people's work burdens are generally increasing, and family members and children do not have enough energy to take care of the elderly at home. Family care is facing severe challenges: the traditional family care model is increasingly difficult to maintain and to play its social function and role, and urban family care is gradually weakening and becoming socialized. Nursing homes charge high fees, so most families cannot bear the economic pressure, and families also have high requirements for medical staff. Based on these problems, this paper creates an intelligent mathematical model to help solve the problem of care for the aged and uses image processing technology to create an intelligent system. The system has many functions, such as detecting health problems of the elderly and real-time positioning. By analyzing the present situation of smart elderly care in China, this paper identifies problems such as backward elderly-care equipment and irregular personnel management. It uses image processing technology to build a smart elderly-care service model, which addresses the defects of elderly-care facilities and the insufficient comprehensive management of medical staff in some small- and medium-sized cities. Using this model, the problem of care for the aged in China can be greatly alleviated.
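The paper does not spell out the mathematical model it uses to detect the regularity of daily activities, so the following Python sketch is only a hypothetical illustration of one simple approach consistent with the description: score how typical a day's RFID zone-visit pattern is against the person's history (all names, data, and thresholds here are invented for the example):

```python
# Hypothetical sketch: flag irregular days from RFID zone-entry logs.
# Not the paper's model; a simple per-hour frequency baseline for illustration.
from collections import Counter

def hourly_profile(history):
    """history: list of days; each day is a list of (hour, zone) entries.
    Returns an estimate of P(zone | hour) from past days."""
    counts, totals = Counter(), Counter()
    for day in history:
        for hour, zone in day:
            counts[(hour, zone)] += 1
            totals[hour] += 1
    return {hz: c / totals[hz[0]] for hz, c in counts.items()}

def regularity_score(profile, day):
    """Mean probability the profile assigns to the day's observations;
    unseen (hour, zone) pairs score 0, pulling the mean down."""
    if not day:
        return 0.0
    return sum(profile.get((h, z), 0.0) for h, z in day) / len(day)

history = [[(7, "kitchen"), (9, "living room"), (13, "kitchen"), (22, "bedroom")]
           for _ in range(30)]                      # 30 very regular days
profile = hourly_profile(history)
typical = [(7, "kitchen"), (9, "living room"), (13, "kitchen"), (22, "bedroom")]
odd_day = [(3, "kitchen"), (4, "bathroom")]         # activity at unusual hours
print(regularity_score(profile, typical))           # 1.0
print(regularity_score(profile, odd_day) < 0.5)     # True -> alert caregivers
```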
Through testing, it can be found that the system has very high accuracy in health detection for the elderly.

### 1.1. Overview and Development of Digital Image Processing Technology

With the rise of the computer industry, people gradually began to pay attention to digital image processing technology. In 1964, the US Jet Propulsion Laboratory used a computer to process a large number of lunar photos sent back by the "Ranger" spacecraft, with very satisfactory results, and digital image technology became an emerging subject. From the 1990s onward, digital imaging developed rapidly. Today, various industries place high requirements on image processing technology, which has promoted its further development. Common processing methods include image transformation, image enhancement and restoration, and image segmentation. Image processing technology has the following characteristics: the diversity of image processing, the ever-increasing sharpness of processed images, and the large amount of data processed.

### 1.2. Research Background

The problem of population aging in our country is becoming more and more serious, and elderly care is one of the problems that our country must face in its development. According to the report of the civil affairs department, the elderly now account for 17.17% of our country's total population, and the problem of population aging is worsening. The imperfections of elderly-care facilities show that our country is not yet fully prepared for the problem of population aging. The number of elderly people living alone has increased rapidly in recent years: with the acceleration of social development and the pace of life, many young people have chosen to pursue careers in big cities, so they cannot stay with their elderly parents for long periods to take care of them. Elderly care in modern society is mainly divided into two types: home care and institutional care. Because of the early implementation of the family planning policy in our country, a married couple typically has only one child. This traditional family structure is called the "4-2-1" model: a family with four elderly parents, a husband and wife, and one child, meaning that one couple must take care of four elderly people and one child at the same time. Because both spouses must work for a living, there is no way to accompany the elderly at all times, and the traditional family care model is difficult to sustain. When the elderly live alone, there are also many hidden dangers: when an elderly person suddenly falls ill, he or she may be unable to seek medical treatment or deal with the sudden accident in time. These hidden dangers lead the children of many elderly people to turn to nursing care institutions. The elderly-care institutions in our country show a trend of polarization: high-end nursing homes are equipped with advanced equipment and highly qualified management personnel, but their fees are too high and most families cannot afford the expense, while general nursing homes have aging equipment and limited service quality and cannot provide high-quality services for the elderly.

### 1.3. Research Significance

With increasing age, the elderly may have problems in life such as hair and tooth loss, unsteady legs and feet, and memory loss.
They tend to fear loneliness, pay more attention to their own life and health problems, and depend heavily on family and affection. As modern Chinese elderly people decline in their cognitive abilities and physical activity, their ability to accept new things also declines. Most nursing homes in our country have imperfect medical service facilities and imperfect management, which makes it hard for many retired elderly people to enjoy high-quality medical care. At present, most nursing homes in our country monitor and care for the elderly mainly through video. Although video monitoring can directly observe the situation and behavior of the elderly at home and in nursing homes, such monitoring has blind spots, cannot provide full coverage, and easily exposes the personal privacy of the elderly. Therefore, this article proposes an intelligent elderly-care service system. On the premise that the privacy of the elderly is not leaked and their normal life is not affected, RFID equipment can monitor the elderly at home in real time and can likewise monitor conditions in medical sanatoriums.

### 1.4. Status Quo of Research on Smart Elderly Care

Population aging is an inevitable phenomenon. Many developed countries entered the era of population aging earlier than China and have accumulated much advanced experience, so when dealing with pension problems, our country should appropriately learn from the practices of developed countries. From the successful examples of the elderly-care service systems in these developed countries, we can find excellent, high-quality smart elderly-care products that integrate advanced technology with humanistic care suited to the characteristics of the elderly, so that the elderly can enjoy the convenience and fun of high technology while feeling genuinely cared for. With the development of computer technology and the Internet of Things, the convenience that automation brings to elderly care is of great significance, and it provides effective experience for the development of our country's smart elderly-care industry.
## 2. Related Technical and Theoretical Research

### 2.1. Positioning Technology

Positioning technology has two major components: indoor positioning and outdoor positioning. Outdoor positioning has been widely used in various scenarios; the outdoor systems currently in commercial operation internationally mainly include the United States' Global Positioning System (GPS), Europe's Galileo satellite positioning system, Russia's GLONASS navigation satellite system, and China's BeiDou satellite positioning system. All of these rely on satellite signals, but when a satellite signal travels indoors its strength is severely attenuated and the positioning error becomes large, making reliable reception impossible, so they are not suitable for indoor positioning. Since the daily life of the elderly takes place largely indoors, an indoor automatic positioning system can greatly benefit their daily life.

### 2.2. Radio Frequency Identification

RFID (radio frequency identification) is a technology that uses radio frequency signals to exchange information through a magnetic field. The basic structure is shown in Figure 1.

Figure 1 RFID structure diagram.

The development process of RFID technology is shown in Table 1.

Table 1 RFID technology development process.
| Year | Development process |
| --- | --- |
| 1941–1950 | RFID technology separated from radar technology and appeared as an independent technology |
| 1951–1960 | RFID technology separated from radar technology and appeared as an independent technology |
| 1961–1970 | The first RFID-related paper was published, and the successful application of EAS for electronic article surveillance marked the further development of RFID technology |
| 1971–1980 | A large number of RFID patents appeared, and RFID technology appeared in commodity applications for the first time |
| 1981–1990 | RFID came into official use in commercial production, and various large-scale applications began to appear |
| 1991–2000 | The standardization of RFID technology received more and more attention; RFID products became widely used and gradually part of people's daily lives |
| After 2000 | RFID product types became more abundant, production levels kept improving, the cost of electronic tags kept falling, and the scale of application industries expanded |

RFID tags fall into 3 categories, whose main characteristics are compared in Table 2.

Table 2 Comparison of characteristics of different types of RFID.

| Characteristic | Passive RFID | Semiactive RFID | Active RFID |
| --- | --- | --- | --- |
| Tag power supply | No battery | Partial built-in battery | Built-in battery |
| Range of action | Limited | General | Farther |
| Service life | Longer | General | Shorter |
| Tag cost | Lower | General | Higher |
| Suitability for harsh environments | Suitable | General | Inappropriate |

A comparison of RFID technology at different carrier frequencies is given in Table 3.

Table 3 Technical comparison at different frequencies.

| Item | Low frequency | High frequency | UHF |
| --- | --- | --- | --- |
| Carrier frequency | <125 kHz | 13.56 MHz | >433 MHz |
| General characteristics | High price, affected by the environment | Low price, suitable for short-distance, multi-target recognition | Advanced IC technology makes the cost the lowest; suitable for multi-target recognition |
| Data transfer rate | Low (8 kbit/s) | High (64 kbit/s) | High (64 kbit/s) |
| Recognition speed | Low (<1 m/s) | Medium (<5 m/s) | High (<50 m/s) |
| Tag structure | Coil | Printed coil | Dipole antenna |
| Directionality | None | None | Partial |
| Humid environment | No effect | No effect | Greater impact |
| Market share | 74% | 17% | 9% |
| Transmission performance | Penetrates conductors | Penetrates conductors | Line-of-sight propagation |
| Anti-interference performance | Limited | Good | Good |
| Existing standards | ISO 11784, ISO 11785 | ISO 18000-3, ISO 14443 | EPC G2, ISO 18000-6 |
| Recognition distance | <60 cm | 0.1–1 m | 1–6 m |
| Scope of application | Access control, fixed equipment, natural gas | Libraries, product tracking, transportation | Shelves, truck tracking, containers |
## 3. Image Preprocessing

### 3.1. Image Binarization

Image binarization is a common image segmentation method, applied here to the grayscaled image. Assume an image has $L$ gray levels and $T$ is the binarization threshold; the image can then be divided into two classes, $C_0$ and $C_1$. Let $n_i$ be the number of pixels with gray level $i$ and $N$ the total number of pixels in the image. Then

$$N=\sum_{i=0}^{L-1} n_i. \tag{1}$$

The probability of occurrence of gray level $i$ is

$$p_i=\frac{n_i}{N}. \tag{2}$$

The probabilities of the pixels of the two classes appearing in the image are

$$\omega_0=\sum_{i=0}^{T} p_i,\qquad \omega_1=\sum_{i=T+1}^{L-1} p_i=1-\omega_0. \tag{3}$$

The average gray levels of the two classes are

$$\mu_1=\frac{1}{\omega_1}\sum_{i=T+1}^{L-1} i\,p_i, \tag{4}$$

$$\mu_0=\frac{1}{\omega_0}\sum_{i=0}^{T} i\,p_i. \tag{5}$$

From formulas (4) and (5) the average gray value of the entire image is obtained. A color image consists of three components, R, G, and B, showing red, green, and blue, respectively; grayscaling is the process of making the R, G, and B components equal. Pixels with a large gray value are brighter (the maximum value, 255, is white), and pixels with a small gray value are darker (the minimum value, 0, is black). The overall mean and the between-class variance of background and target are

$$\mu=\sum_{i=0}^{L-1} i\,p_i=\omega_0\mu_0+\omega_1\mu_1,\qquad \sigma^2(T)=\omega_0(\mu_0-\mu)^2+\omega_1(\mu_1-\mu)^2=\omega_0\omega_1(\mu_0-\mu_1)^2. \tag{6}$$
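To make the threshold selection concrete, the following is a minimal Python/NumPy sketch that evaluates the between-class variance $\sigma^2(T)$ of equation (6) for every candidate threshold and keeps the maximizer. It is an illustration under our own assumptions (an 8-bit grayscale image, the function name, and the exhaustive search), not the paper's implementation.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray, levels: int = 256) -> int:
    """Return the threshold T maximizing the between-class variance of eq. (6)."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    p = hist / hist.sum()                            # p_i = n_i / N, eq. (2)
    i = np.arange(levels)
    best_t, best_var = 0, 0.0
    for t in range(levels - 1):
        w0 = p[: t + 1].sum()                        # omega_0, eq. (3)
        w1 = 1.0 - w0                                # omega_1
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (i[: t + 1] * p[: t + 1]).sum() / w0   # class mean, eq. (5)
        mu1 = (i[t + 1 :] * p[t + 1 :]).sum() / w1   # class mean, eq. (4)
        var = w0 * w1 * (mu0 - mu1) ** 2             # sigma^2(T), eq. (6)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Usage sketch: binary = (gray > otsu_threshold(gray)).astype(np.uint8) * 255
```

In practice an optimized routine such as OpenCV's `cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` computes the same threshold far faster.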
### 3.2. Image Morphological Filtering

Image morphological filtering is widely used in image processing; its common operations are of the following types.

(1) Dilation. Let $A$ be an image, $B$ a structuring element, and $\oplus$ the dilation operator illustrated in Figure 2; dilation is defined as

$$A\oplus B=\{z \mid (\hat{B})_z\cap A\neq\varnothing\}. \tag{7}$$

Figure 2 Schematic diagram of the dilation operation.

(2) Erosion. Likewise, let $A$ be an image, $B$ a structuring element, and $\ominus$ the erosion operator illustrated in Figure 3; erosion is defined as

$$A\ominus B=\{z \mid B_z\subseteq A\}. \tag{8}$$

Figure 3 Schematic diagram of the erosion operation.
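The set definitions in equations (7) and (8) translate directly into array operations on a 0/1 image. Below is a deliberately naive sketch (our own loops and names, assuming an odd-sized structuring element); an optimized library such as `scipy.ndimage` would be used in practice.

```python
import numpy as np

def dilate(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """A ⊕ B: mark z wherever the reflected B, shifted to z, overlaps A (eq. (7))."""
    kh, kw = B.shape
    pad = np.pad(A, ((kh // 2,) * 2, (kw // 2,) * 2))
    Bref = B[::-1, ::-1]                              # the reflection B̂ of B
    out = np.zeros_like(A)
    for y in range(A.shape[0]):
        for x in range(A.shape[1]):
            out[y, x] = np.any(pad[y : y + kh, x : x + kw] * Bref)
    return out

def erode(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """A ⊖ B: keep z only if B shifted to z lies entirely inside A (eq. (8))."""
    kh, kw = B.shape
    pad = np.pad(A, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(A)
    for y in range(A.shape[0]):
        for x in range(A.shape[1]):
            out[y, x] = np.all(pad[y : y + kh, x : x + kw][B == 1])
    return out
```

Composing the two gives the usual filters: erosion followed by dilation (opening) removes small bright noise, while dilation followed by erosion (closing) fills small holes.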
### 3.3. Using the Image Processing Data

#### 3.3.1. Extreme Learning Machine

When training an extreme learning machine for classification, a training set $\{(x_i,t_i)\}\subset\mathbb{R}^d\times\mathbb{R}^m$ is given, where $x_i$ is the input vector and $t_i$ encodes the category to which it belongs. Then

$$t_j=\sum_{i=1}^{L}\beta_i\,G(w_i\cdot x_j+b_i),\qquad j=1,\dots,N, \tag{9}$$

where $G(w,b,x)$ is the activation function. Equation (9) can be written compactly as

$$H\beta=T. \tag{10}$$

In equation (10),

$$H=\begin{pmatrix} G(w_1\cdot x_1+b_1) & \cdots & G(w_L\cdot x_1+b_L)\\ \vdots & & \vdots\\ G(w_1\cdot x_N+b_1) & \cdots & G(w_L\cdot x_N+b_L) \end{pmatrix}_{N\times L},\qquad \beta=\begin{pmatrix}\beta_1^{T}\\ \vdots\\ \beta_L^{T}\end{pmatrix}_{L\times m},\qquad T=\begin{pmatrix}t_1^{T}\\ \vdots\\ t_N^{T}\end{pmatrix}_{N\times m}. \tag{11}$$

The output weight matrix of the learning machine is then set to the regularized least-squares solution

$$\hat{\beta}=\left(H^{T}H+\lambda I\right)^{-1}H^{T}T,\qquad \lambda>0. \tag{12}$$

When an unknown sample $\tilde{x}$ is entered,

$$\tilde{t}=\arg\max\big(\tilde{h}\hat{\beta}\big), \tag{13}$$

where

$$\tilde{h}=\big(G(w_1\cdot\tilde{x}+b_1),\dots,G(w_L\cdot\tilde{x}+b_L)\big) \tag{14}$$

and $\tilde{t}$ is the predicted class of the unknown sample $\tilde{x}$.
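Equations (9)–(14) boil down to one random projection followed by one regularized least-squares solve. The sketch below is a minimal rendering under our own assumptions: a one-hot target matrix `T`, the activation `G = tanh`, and hypothetical function names.

```python
import numpy as np

def elm_fit(X, T, L=100, lam=1e-2, seed=0):
    """Solve beta_hat = (H^T H + lam*I)^(-1) H^T T as in eq. (12)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # random input weights w_i
    b = rng.standard_normal(L)                 # random biases b_i
    H = np.tanh(X @ W + b)                     # hidden-layer matrix H, eq. (11)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)                     # row i is h̃ for sample i, eq. (14)
    return np.argmax(H @ beta, axis=1)         # predicted class, eq. (13)
```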
#### 3.3.2. Support Vector Machine

Training a support vector machine amounts to solving a constrained minimization problem:

$$y_i(w\cdot x_i+b)\geq 1,\quad i=1,2,\dots,m,\qquad \min J(w)=\frac{\lVert w\rVert^{2}}{2}. \tag{15}$$

Introducing the Lagrangian

$$L(w,b,\alpha)=\frac{1}{2}\,w\cdot w-\sum_{i=1}^{m}\alpha_i\big[y_i(w\cdot x_i+b)-1\big], \tag{16}$$

taking the partial derivatives with respect to $w$ and $b$ and setting them to zero, we obtain

$$\frac{\partial L}{\partial w}=w-\sum_{i=1}^{m}\alpha_i y_i x_i=0,\qquad \frac{\partial L}{\partial b}=\sum_{i=1}^{m}\alpha_i y_i=0. \tag{17}$$

The optimal hyperplane must satisfy

$$\alpha_i\big[y_i(w\cdot x_i+b)-1\big]=0, \tag{18}$$

which transforms the problem into the dual

$$\min W(\alpha)=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}y_i y_j\alpha_i\alpha_j (x_i\cdot x_j)-\sum_{j=1}^{m}\alpha_j,\qquad \text{s.t. }\alpha_i\geq 0,\; i=1,\dots,m,\quad \sum_{i=1}^{m}\alpha_i y_i=0. \tag{19}$$

With the optimal solution $\alpha^{*}$, the optimal weight vector satisfies

$$w^{*}=\sum_{i\in SV}\alpha_i^{*}y_i x_i,\qquad \lVert w^{*}\rVert^{2}=\sum_{i,j\in SV}\alpha_i^{*}\alpha_j^{*}y_i y_j (x_i\cdot x_j). \tag{20}$$

The optimal decision function is

$$f(x)=\operatorname{sgn}\Big(\sum_{i\in SV}y_i\alpha_i^{*}(x_i\cdot x)+b^{*}\Big). \tag{21}$$

For the linearly inseparable case, the constraints are relaxed to

$$y_i(w\cdot x_i+b)-1+\varepsilon_i\geq 0,\qquad \varepsilon_i\geq 0,\quad i=1,\dots,m, \tag{22}$$

and the objective function becomes

$$J(w,\varepsilon)=\frac{1}{2}\,w\cdot w+C\sum_{i=1}^{m}\varepsilon_i. \tag{23}$$

In formula (23), $C$ is the penalty factor: the larger the value of $C>0$, the greater the loss an outlier causes in the objective function.
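Rather than solving the dual (19) by hand, a practical system would call an off-the-shelf soft-margin solver. Here is a hedged sketch using scikit-learn's `SVC`, where the feature matrix and labels are synthetic placeholders standing in for the features extracted from the processed images:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 4))   # placeholder feature vectors
y = np.repeat([0, 1], 20)          # placeholder activity-area labels

# C is the penalty factor of eq. (23): larger C makes outliers costlier.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.predict(X[:5]))          # sign-based decisions as in eq. (21)
```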
## 4. System Design

The smart elderly care system is designed using image processing technology, remote monitoring, and an information service platform, combined with the life needs of the elderly; the overall structure is shown in Figure 4.

Figure 4 The overall structure of the smart elderly care system.

The system has many functions. When an elderly person wears the ECG monitoring bracelet, heart rate and ECG data are collected and transmitted to the mobile phones of family members and medical staff, so that they can keep track of the elderly person's health; pictures generated by the system are also sent to family members' phones. Based on the actual management and service needs of nursing homes, a personnel positioning management system is developed, which realizes daily basic information management, real-time positioning and tracking of the elderly, vital signs monitoring, one-button alarms in dangerous situations, and other functions. When something unusual happens to an elderly person, the system can respond as soon as possible; it thus truly realizes the aims of intelligent elderly care and takes responsibility for the daily life and health of the elderly, rather than relying on the traditional care model with its shortage of nursing staff. The system also supports independent deployment within a nursing home.

### 4.1. Simulation Experiment Results

We investigated and collected the daily activity trajectories of many elderly people, as shown in Figure 5.

Figure 5 Activity trajectory diagram of the elderly.

The collected images are aggregated, and the statistics chart shows the real-time location of each person, as shown in Figure 6.

Figure 6 Statistics of the scope of activity.

One can also query a user's activity track over a day, as shown in Figures 7 and 8.

Figure 7 Statistics of the range of activities of the elderly.

Figure 8 Statistics of the scope of activities of the elderly.

There are many recreational facilities in nursing homes; they bring great fun to the elderly and enrich their daily life. However, some facilities are used by a large number of people, and the resource allocation is not reasonable. Nursing homes should strengthen facility construction to make elderly care services better.

### 4.2. Obtaining Experimental Results

To check the accuracy of the system, we deployed the RFID equipment and collected the daily data of 4 students for a month. The ReaderIDs of the two places are represented by 1 and 2, respectively. The specific data are shown in Table 4.

Table 4 Stored location data.

| TagID | Time | Date | ReaderID | AntID | ReaderRssi | AntRssi |
| --- | --- | --- | --- | --- | --- | --- |
| 82027158 | 10:35:00 | 2020-09-16 | 2 | 2 | −12 | −7 |
| 82028058 | 10:35:00 | 2020-09-16 | 2 | 2 | −16 | −17 |
| 82027159 | 10:35:00 | 2020-09-16 | 1 | 1 | −23 | −20 |
| 82032425 | 10:35:00 | 2020-09-16 | 2 | 2 | −21 | −14 |
| 82027158 | 10:36:00 | 2020-09-16 | 2 | 2 | −12 | −7 |
| 82028058 | 10:36:00 | 2020-09-16 | 2 | 2 | −16 | −17 |
| 82027159 | 10:36:00 | 2020-09-16 | 1 | 1 | −23 | −20 |
| 82032425 | 10:36:00 | 2020-09-16 | 2 | 2 | −21 | −14 |

The smart elderly care system generates many location records every day; to keep the stored data from becoming confused, these data should be processed to facilitate more accurate results. The data of each experimenter are accumulated in one place, and because there are 5 activity areas, the experimental data are divided into 5 groups.
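As an illustration of this grouping step, suppose the records of Table 4 are exported to a CSV file with the same column names; the file name and the pandas pipeline below are our own assumptions, not the paper's code.

```python
import pandas as pd

cols = ["TagID", "Time", "Date", "ReaderID", "AntID", "ReaderRssi", "AntRssi"]
df = pd.read_csv("location_data.csv", names=cols)  # hypothetical export of Table 4

# Accumulate each experimenter's records, then split by reader so that the
# activity areas form separate groups for model training.
per_person = df.groupby("TagID")
per_area = df.groupby(["TagID", "ReaderID"]).size()
print(per_area)
```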
### 4.3. Analysis of Simulation Experiment Results

The experiment lasted one month. We used the data of the 4 students from the first 25 days as the training set and the data of the last 5 days to test the extreme learning machine prediction model and the support vector machine prediction model. Over the five-day period for the four people there were only two errors, so the accuracy rate was as high as 90%. The specific results are shown in Figures 9 and 10.

Figure 9 Prediction results of the extreme learning machine.

Figure 10 Prediction results of the support vector machine.

The predicted value and actual value of the extreme learning machine are 70 and 60, respectively, and the accuracy rate is as high as 90%.

### 4.4. Research Methods

We conducted a questionnaire survey with elderly people aged 50 and above as the survey subjects, distributing questionnaires to elderly people of different regions, ages, and educational backgrounds. After removing invalid questionnaires, 284 valid questionnaires remained. The distribution results are shown in Table 5.

Table 5 Distribution results.

| Sample characteristic | Category | Frequency | Percentage (%) | Cumulative percentage (%) |
| --- | --- | --- | --- | --- |
| Gender | Male | 113 | 39.8 | 39.8 |
| | Female | 171 | 60.2 | 100.0 |
| Age | 50–69 years old | 172 | 60.6 | 60.6 |
| | 70 years old and above | 112 | 39.4 | 100.0 |
| Education | Elementary school and below | 144 | 50.7 | 50.7 |
| | Junior high school | 60 | 21.1 | 71.8 |
| | High school | 39 | 13.7 | 85.6 |
| | Bachelor degree and above | 41 | 14.4 | 100.0 |
| Monthly income | <1000 yuan | 85 | 30.0 | 30.0 |
| | 1000–2000 yuan | 80 | 28.3 | 58.3 |
| | 2000–3000 yuan | 53 | 18.7 | 77.0 |
| | >3000 yuan | 65 | 23.0 | 100.0 |

The willingness to use smart elderly care products is shown in Figure 6. From the experimental results in Figures 9 and 10, we can clearly observe the prediction accuracy by drawing images with image processing technology. For the questionnaire results, a regression analysis was performed; Table 6 reports the recoverable values.

Table 6 Regression analysis table.

| Variable | B (unstandardized) | Standard error | Standardized coefficient | t | Sig. | VIF |
| --- | --- | --- | --- | --- | --- | --- |
| Constant | 0.424 | 0.151 | | 2.801 | 0.005 | |
| Independent variable 1 | 0.293** | 0.048 | 0.291 | 6.104 | 0.000 | 1.583 |
| Independent variable 2 | 0.402** | 0.046 | 0.433 | 8.739 | 0.000 | 1.663 |
| Independent variable 3 | 0.210** | 0.043 | 0.211 | 4.879 | 0.000 | 1.299 |

Durbin–Watson = 1.758; R² = 0.597.

Through the results of the questionnaire survey, we find that the education level and salary of the elderly are related to their willingness to use smart elderly care products. In general, elderly people with higher wages also have a higher education level, and they use their knowledge to quickly learn how to use smart products. Therefore, in product development and promotion, we should focus on the elderly who are slow to accept new things.
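A regression of the kind summarized in Table 6 can be reproduced with statsmodels; in this sketch the data are synthetic placeholders (only the coefficient magnitudes from Table 6 are reused to generate them), and the variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.standard_normal((284, 3))   # placeholder predictors for the 284 respondents
y = 0.424 + X @ np.array([0.293, 0.402, 0.210]) + 0.1 * rng.standard_normal(284)

# OLS reports unstandardized B, standard errors, t, and Sig. as in Table 6.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```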
## 5. Conclusion

In view of the difficulty of providing elderly care in cities, this paper puts forward a system built around RFID positioning and an APP client, which can protect the privacy of the elderly. Through image processing technology it analyzes the care situation at each activity center and the living conditions of the elderly in different environments, supporting reasonable planning of urban elderly care facilities and activity centers and thereby improving satisfaction with elderly care services.

---

*Source: 1023187-2021-10-29.xml*
--- ## Abstract Image processing technology is to use computer, camera, and other technologies to calculate and process images and make the image clearer and convenient for quick extraction of information. Image processing technology has entered an all-round development stage. It also plays a great role in the components of the intelligent service model for the aged. Now many countries in the world have entered the aging stage, but old-age equipment is relatively backward and personnel management is not standardized. Based on these problems, this paper studies the intelligent model of old-age care in small- and medium-sized cities by using the image recognition method. Based on the analysis of the present situation of intelligent old-age care, an intelligent system is proposed, which solves the problems of defects in old-age care facilities and insufficient comprehensive management of medical staff in some small- and medium-sized cities. This system has RTID positioning system and APP client, which can ensure the privacy of the elderly. Through real-time identification of images in the elderly service, the rationality and layout optimization of existing old-age facilities are analyzed. The mathematical model is used to detect the regularity of participants’ daily activities. The image experiment results show that the prediction accuracy is over 90%, and the optimal prediction effect is obtained. In addition, a questionnaire survey was conducted among many elderly people over 50 years old to investigate their willingness to use smart old-age products. --- ## Body ## 1. Introduction With the development of computers, image technology has also been applied to all aspects of human life. In the construction of smart models for the elderly in certain small- and medium-sized cities, image processing technology has also played a great role. Literature [1] introduces the development of image processing technology and the connection with the smart service model for the elderly. We analyzed the results of previous research and related data. Because of the physical reasons of the elderly, their social and victory functions will decline with age, so they will encounter various difficulties in daily life. The article is based on the elderly. The daily activity habit of creating scenes provides a specific solution for the activities of the elderly at home. Literature [2] discusses the views and suggestions of alternatives to restraint and isolation for service users and their families, as well as the conditions that can be used in adults in short-term psychiatric care and residents in long-term care. Set up 3 nursing institutions related to middle-aged and elderly family members and 5 focus groups related to service quality. These small focus groups discussed and analyzed records and found that more companionship and caring for the elderly are the most effective way to solve the mental disorders of the elderly. Listen to the inner thoughts of the elderly and give them more love and company. Immature old-age care policies in big cities result in the high poverty rate of the elderly and a series of old-age care problems. 
Literature [3] studies the elderly to provide location-based, customized job search services to actively support the elderly to participate in economic activities, provide customized positions for the elderly according to their residence, physical condition, and working conditions, so that the elderly can choose suitable positions according to their interests and living conditions, and contribute to the expansion of the employment market for the elderly. Literature [4] aims to develop smart healthcare glove system (SHGS) and transcutaneous electrical nerve stimulator (TENS) based on electronic textiles as healthcare equipment for elderly hypertension. Using image processing technology, the patient’s pulse and blood pressure generated pictures can be transmitted to the computer for better observation. Observing the blood pressure and pulse of multiple patients wearing SHGS, the results show that SHGS can not only be used for the elderly to lower blood pressure and improve the irregular blood circulation but also be used for hypertensive patients of any age. The goal of literature [5] is to deploy an intelligent nursing service integration agent to provide individualization and integration for each elderly person. Every elderly person’s needs and preferences are different. In time, a robot with strong learning ability also needs to spend a lot of time learning everyone’s preferences and needs. In order to provide basic nursing tasks, we have deployed nursing templates in the cloud. There are different scenes for each character. Literature [6] provides a theoretical basis for combining biophilic and smart home technology and provides a framework for smart home services to ensure that elderly residents can have a biophilic experience. Smart home technology is not a mechanized achievement. It simulates nature and creates a high quality of life, which can support the physical and mental health of the elderly. It not only provides effective information for the elderly smart home industry but also makes a huge contribution to the trend of smart home services. Literature [7] studies the contradiction between the design of smart home products under the medical care model and the daily needs of the elderly and conducts case analysis of smart home products for the elderly. The research results show that smart homes can maximize the satisfaction of the elderly. Literature [8] introduces WITSCare, which is a research prototype of a web-based IoT smart home system. It is a smart home system designed using the intelligence of the Internet of Things, which can help the elderly live safely and independently at home. Literature [9] proposes an application architecture suitable for such online service network platforms and designed it with the most advanced concepts such as server-side JavaScript, NoSQL databases, and machine learning. The rapid increase in the number of elderly people has brought great pressure to our country’s medical treatment and society. It has become an increasingly popular topic to provide the elderly with a high-quality living environment. Literature [10] proposes a conceptual model of an integrated and personalized system to solve this problem. We interviewed three healthcare experts to better explore the needs of the elderly and customized a conceptual model for the elderly to live independently, centering on the concepts of comfort, safety, and environmental protection and providing good protection for the elderly to live alone. 
The purpose of literature [11] is to propose services based on behavioral patterns. The range of activities for the elderly is limited, and the living room is an important area. We have proposed a method of developing a smart living room specifically designed for the elderly. We put forward a number of behavior patterns of the elderly, including preventing falls and injuries, and explain safety issues and health issues to provide a comfortable living environment in all aspects. There are many hidden safety hazards for the elderly at home alone. To ensure the safety of the elderly at home, we must consider the hazards such as gas leaks in the kitchen, fire, and falls in the bathroom. Literature [12] deals with the hazards and intelligent services in the kitchen according to the behavior of the elderly. According to multiple accidents such as fires and gas leaks, a smart service has been set up. The smart service can not only detect safety problems at home but also has functions such as automatic alarms and automatic ventilation to further protect the safety of the elderly at home. Studies have shown that the most common disease in the elderly is cardiovascular disease, which also poses a great threat to the health of the elderly. Literature [13] develops smart clothes to record 3-lead electrocardiogram (ECG). Using the image elementary technology, the analyzed data are quickly generated into pictures. The system is composed of fiber clothes with electrodes which have the function of acquiring physiological signals and can analyze health data. Experimental results show that the accuracy of ECG is as high as 86.82%. Starting from the two ways of family pension and institutional pension, family pension is the pension mode chosen by most people. The deficiency of traditional family pension lies in that it is generally difficult for the elderly to get professional and meticulous care, medical care, and spiritual and cultural services in the family. Under the background that “421 structure” families have become the mainstream of urban society, social competition has intensified and the pace of life has accelerated. The social labor cost and people’s work burden are generally increasing, family members and children cannot have enough energy to take care of the elderly at home, family pension is facing severe challenges, the traditional family pension model is increasingly difficult to maintain and play its social function and role, and the urban family pension is gradually weakened and socialized. Nursing homes charge higher fees, so most families cannot bear the economic pressure, and they have higher requirements for medical staff. Based on these problems, this paper creates an intelligent mathematical model to help solve the problem of providing care for the aged and uses image processing technology to create an intelligent system. The system has many functions, such as detecting health problems of the elderly and real-time positioning. By analyzing the present situation of old-age wisdom in China, this paper finds some problems in old-age care in China, such as backward old-age care equipment and irregular personnel management. This paper uses image processing technology to build an old-age wisdom service model, which solves the problems of old-age care facility defects and insufficient comprehensive management of medical staff in some small- and medium-sized cities. Using this model, the problem of providing care for the aged in China has been greatly solved. 
Through the test, it can be found that the system has a very high accuracy for the health detection of the elderly. ### 1.1. Overview and Development of Digital Image Processing Technology With the rise of the computer industry, people gradually began to pay attention to digital image processing technology. In 1964, the US Jet Propulsion Laboratory used a computer to process a large number of lunar photos sent by the “Prowler” spacecraft. The results are very satisfactory. The digital image technology has also become an emerging subject. Until the 1990s, digital images have developed rapidly. Today, various industries have put forward high requirements on image processing technology, which has also promoted the better development of image processing technology. Common methods of processing technology include image transformation, image enhancement and restoration, and image segmentation. Image processing technology has the following characteristics: the diversity of image processing, the sharpness of the processed image is getting higher and higher, and the amount of data processing is large. ### 1.2. Research Background The problem of population aging in our country is becoming more and more serious. The issue of elderly care is one of the problems that our country must face in its development. According to the report of the civil affairs department, the elderly in our country now account for 17.17% of the total population, and the problem of population aging is aggravated. The imperfections of old-age facilities show that our country is not fully prepared for the problem of population aging again. The number of elderly people living alone has increased rapidly in recent years. Because with the acceleration of social development and the pace of life, young people in many big cities have chosen to develop in big cities, so they cannot accompany these old people for a long time to take care of their own old people. The elderly care in modern society is mainly divided into two types: one is home care and the other is “material home.” Because of the early implementation of the family planning policy in our country, a pair of husbands may have a single child every day. This traditional family management model is called “4-2-1” by us. It refers to a special family that requires 4 independent elderly people, a husband and a wife, and a child. This is for a couple of two people. In other words, it is necessary to take care of 4 independent old people and one child at the same time. Because of the livelihoods of working outside, there is no way to accompany the old people at all times. The traditional family care model is difficult to achieve. When the elderly live alone, there are also many hidden dangers. When the elderly suddenly falls ill and cannot seek medical treatment in time, they cannot deal with the sudden accident in time. These hidden dangers make children of many ages gravitate towards nursing care institutions. The elderly care institutions in our country are showing a trend of two levels of differentiation. Senior nursing homes are equipped with advanced equipment and high level of management personnel, but the fees are too high. Most families cannot afford the high expenses. The general nursing homes have aging equipment and limited service quality and cannot provide high-quality services for the elderly. ### 1.3. Research Significance With the increase of age, the elderly may also have some problems in life, such as falling hair and teeth, shaking legs and feet, and memory loss. 
They tend to have a strong sense of self-dependence and fear of loneliness. They pay more attention to their own life and health problems and their dependence on family and affection. As modern Chinese elderly people decline in their cognitive abilities and active actions, their ability to accept new things has also declined. Most nursing homes in our country have imperfect medical service facilities and imperfect management, which will cause many retired elderly people to have to enjoy high-quality medical care. At present, most nursing homes in our country monitor and care for the elderly mainly through videos. Although video monitoring can directly monitor the situation and behavior of the elderly in the life of the home and nursing homes, there are dead corners in this kind of monitoring and cannot provide full coverage to the elderly. This kind of monitoring easily reveals the personal privacy of the left-behind elderly. Therefore, this article proposes an intelligent elderly care service system. On the premise of ensuring that the privacy of the elderly in the family is not leaked and will not affect the normal life of the elderly in the family, the RFID technology equipment can monitor the elderly in the family in real time. The RFID technology equipment can monitor the situation of medical sanatorium in real time for the elderly at home. ### 1.4. Status Quo of Research on Smart Elderly Care Population aging is an inevitable phenomenon. Many developed countries have entered a new era of population aging earlier than our Chinese nation and have accumulated a lot of advanced experience. Therefore, when dealing with these pension problems, our country should appropriately learn from the practices of developed countries. From the successful examples of the elderly care service systems in these developed countries, we can find some more excellent high-quality smart elderly care products, which not only integrate advanced technology but also incorporate humanistic care that meets the characteristics of the elderly, so that the elderly can enjoy high tech while bringing convenience and fun and feel the humanistic care that meets your requirements. With the development of computer technology and the Internet of Things, the convenience brought by automation to the elderly care problem is of great significance, and it provides effective experience for the development of our country’s smart elderly care industry. ## 1.1. Overview and Development of Digital Image Processing Technology With the rise of the computer industry, people gradually began to pay attention to digital image processing technology. In 1964, the US Jet Propulsion Laboratory used a computer to process a large number of lunar photos sent by the “Prowler” spacecraft. The results are very satisfactory. The digital image technology has also become an emerging subject. Until the 1990s, digital images have developed rapidly. Today, various industries have put forward high requirements on image processing technology, which has also promoted the better development of image processing technology. Common methods of processing technology include image transformation, image enhancement and restoration, and image segmentation. Image processing technology has the following characteristics: the diversity of image processing, the sharpness of the processed image is getting higher and higher, and the amount of data processing is large. ## 1.2. 
Research Background The problem of population aging in our country is becoming more and more serious. The issue of elderly care is one of the problems that our country must face in its development. According to the report of the civil affairs department, the elderly in our country now account for 17.17% of the total population, and the problem of population aging is aggravated. The imperfections of old-age facilities show that our country is not fully prepared for the problem of population aging again. The number of elderly people living alone has increased rapidly in recent years. Because with the acceleration of social development and the pace of life, young people in many big cities have chosen to develop in big cities, so they cannot accompany these old people for a long time to take care of their own old people. The elderly care in modern society is mainly divided into two types: one is home care and the other is “material home.” Because of the early implementation of the family planning policy in our country, a pair of husbands may have a single child every day. This traditional family management model is called “4-2-1” by us. It refers to a special family that requires 4 independent elderly people, a husband and a wife, and a child. This is for a couple of two people. In other words, it is necessary to take care of 4 independent old people and one child at the same time. Because of the livelihoods of working outside, there is no way to accompany the old people at all times. The traditional family care model is difficult to achieve. When the elderly live alone, there are also many hidden dangers. When the elderly suddenly falls ill and cannot seek medical treatment in time, they cannot deal with the sudden accident in time. These hidden dangers make children of many ages gravitate towards nursing care institutions. The elderly care institutions in our country are showing a trend of two levels of differentiation. Senior nursing homes are equipped with advanced equipment and high level of management personnel, but the fees are too high. Most families cannot afford the high expenses. The general nursing homes have aging equipment and limited service quality and cannot provide high-quality services for the elderly. ## 1.3. Research Significance With the increase of age, the elderly may also have some problems in life, such as falling hair and teeth, shaking legs and feet, and memory loss. They tend to have a strong sense of self-dependence and fear of loneliness. They pay more attention to their own life and health problems and their dependence on family and affection. As modern Chinese elderly people decline in their cognitive abilities and active actions, their ability to accept new things has also declined. Most nursing homes in our country have imperfect medical service facilities and imperfect management, which will cause many retired elderly people to have to enjoy high-quality medical care. At present, most nursing homes in our country monitor and care for the elderly mainly through videos. Although video monitoring can directly monitor the situation and behavior of the elderly in the life of the home and nursing homes, there are dead corners in this kind of monitoring and cannot provide full coverage to the elderly. This kind of monitoring easily reveals the personal privacy of the left-behind elderly. Therefore, this article proposes an intelligent elderly care service system. 
On the premise of ensuring that the privacy of the elderly in the family is not leaked and will not affect the normal life of the elderly in the family, the RFID technology equipment can monitor the elderly in the family in real time. The RFID technology equipment can monitor the situation of medical sanatorium in real time for the elderly at home. ## 1.4. Status Quo of Research on Smart Elderly Care Population aging is an inevitable phenomenon. Many developed countries have entered a new era of population aging earlier than our Chinese nation and have accumulated a lot of advanced experience. Therefore, when dealing with these pension problems, our country should appropriately learn from the practices of developed countries. From the successful examples of the elderly care service systems in these developed countries, we can find some more excellent high-quality smart elderly care products, which not only integrate advanced technology but also incorporate humanistic care that meets the characteristics of the elderly, so that the elderly can enjoy high tech while bringing convenience and fun and feel the humanistic care that meets your requirements. With the development of computer technology and the Internet of Things, the convenience brought by automation to the elderly care problem is of great significance, and it provides effective experience for the development of our country’s smart elderly care industry. ## 2. Related Technical and Theoretical Research ### 2.1. Positioning Technology Positioning technology mainly includes two major components: indoor positioning and outdoor positioning. Outdoor positioning technology has been widely used in various scenarios. At present and internationally, outdoor positioning systems that can achieve commercial operation and normal operation mainly include the United States’ global positioning system, Europe’s Galileo satellite positioning system, and Russia’s global positioning system. The navigation satellite system and China’s BeiDou satellite positioning system are based on satellite signals, but when a satellite signal is transmitted indoors, the signal strength will be severely degraded and the error is large, making it impossible for anyone to receive. Therefore, it cannot be suitable for indoor positioning. However, the area of daily life of the elderly is basically indoors, so the indoor automatic positioning system can greatly promote the daily life of the elderly. ### 2.2. Frequency Radio Identification RFID wireless radio frequency technology is a technology that uses radio frequency signals to achieve information interaction through a magnetic field. The basic structure diagram is shown in Figure1.Figure 1 RFID structure diagram.The development process of RFID technology is shown in Table1.Table 1 RFID technology development process. 
Table 1 RFID technology development process.

| Year | Development process |
| --- | --- |
| 1941–1950 | RFID technology separated from radar technology and appeared as an independent technology |
| 1951–1960 | RFID technology separated from radar technology and appeared as an independent technology |
| 1961–1970 | The first RFID-related paper was published, and the successful application of EAS for electronic article surveillance marked the further development of RFID technology |
| 1971–1980 | A large number of RFID patents appeared, and RFID technology appeared in commodity applications for the first time |
| 1981–1990 | RFID was officially used in commercial production, and various large-scale applications began to appear |
| 1991–2000 | The standardization of RFID technology received more and more attention, RFID products were widely used, and they gradually became part of people's daily lives |
| After 2000 | RFID product types became more abundant, production levels improved continuously, the cost of electronic tags fell steadily, and the scale of application industries expanded |

There are 3 categories of RFID, and their main characteristics are compared in Table 2.

Table 2 Comparison of characteristics of different types of RFID.

| Characteristic | Passive RFID | Semiactive RFID | Active RFID |
| --- | --- | --- | --- |
| Tag power supply | No battery | Partly battery-powered | Built-in battery |
| Range of action | Limited | Moderate | Farther |
| Service life | Longer | Moderate | Shorter |
| Tag cost | Lower | Moderate | Higher |
| Adaptation to harsh environments | Suitable | Moderate | Unsuitable |

The comparison of RFID technology at different carrier frequencies is shown in Table 3.

Table 3 Technical comparison at different carrier frequencies.

| Item | Low frequency | High frequency | UHF |
| --- | --- | --- | --- |
| Carrier frequency | <125 kHz | 13.56 MHz | >433 MHz |
| General characteristics | High price, affected by the environment | Low price, suitable for short-distance, multitarget recognition | Advanced IC technology gives the lowest cost; suitable for multitarget recognition |
| Data transfer rate | Low (8 kbit/s) | High (64 kbit/s) | High (64 kbit/s) |
| Recognition speed | Low (<1 m/s) | Medium (<5 m/s) | High (<50 m/s) |
| Tag structure | Coil | Printed coil | Dipole antenna |
| Directionality | None | None | Partial |
| Humid environment | No effect | No effect | Greater impact |
| Market share | 74% | 17% | 9% |
| Transmission performance | Can penetrate conductors | Can penetrate conductors | Line-of-sight propagation |
| Anti-interference performance | Limited | Good | Good |
| Existing standards | ISO 11784, ISO 11785 | ISO 18000-3, ISO 14443 | EPC G2, ISO 18000-6 |
| Recognition distance | <60 cm | 0.1–1 m | 1–6 m |
| Scope of application | Access control, fixed equipment, natural gas | Libraries, product tracking, transportation | Shelves, truck tracking, containers |
## 3. Image Preprocessing

### 3.1. Image Binarization

Image binarization is a common image segmentation method; the grayscale image is further binarized. Assume that an image has $L$ gray levels and that $T$ is the binarization threshold; the image can then be divided into two classes, $C_0$ and $C_1$. Let $n_i$ be the number of pixels with gray level $i$, and let $N$ be the total number of pixels in the image.
Then,
$$N=\sum_{i=0}^{L-1} n_i. \tag{1}$$
The probability of occurrence of gray level $i$ is
$$p_i=\frac{n_i}{N}. \tag{2}$$
The probabilities of the pixels of the two classes appearing in the image are
$$\omega_0=\sum_{i=0}^{T} p_i,\qquad \omega_1=\sum_{i=T+1}^{L-1} p_i=1-\omega_0. \tag{3}$$
The average gray levels of the two classes are
$$\mu_1=\frac{1}{\omega_1}\sum_{i=T+1}^{L-1} i\,p_i, \tag{4}$$
$$\mu_0=\frac{1}{\omega_0}\sum_{i=0}^{T} i\,p_i. \tag{5}$$
From formulas (4) and (5), the average gray value of the entire image can be obtained. At the pixel level, a color image consists of three components, R, G, and B, which produce red, green, and blue, respectively; grayscaling is the process of making the R, G, and B components equal. Pixels with a large gray value are brighter (the maximum value, 255, is white), and pixels with a small gray value are darker (the minimum value, 0, is black).

The overall mean and the between-class variance separating background and target are then
$$\mu=\sum_{i=0}^{L-1} i\,p_i=\omega_0\mu_0+\omega_1\mu_1,\qquad \sigma^2(T)=\omega_0(\mu_0-\mu)^2+\omega_1(\mu_1-\mu)^2=\omega_0\omega_1(\mu_0-\mu_1)^2. \tag{6}$$

### 3.2. Image Morphological Filtering

Image morphological filtering is widely used in image processing; its common operations are of the following types.

(1) Dilation (Expansion) Algorithm. Let $A$ be an image, $B$ a structuring element, and $\oplus$ the dilation operator illustrated in Figure 2; dilation is defined as
$$A\oplus B=\{z \mid (\hat{B})_z\cap A\neq\emptyset\}. \tag{7}$$

Figure 2 Schematic diagram of the dilation (expansion) algorithm.

(2) Erosion (Corrosion) Algorithm. Similarly, let $A$ be an image, $B$ a structuring element, and $\ominus$ the erosion operator illustrated in Figure 3; erosion is defined as
$$A\ominus B=\{z \mid B_z\subseteq A\}. \tag{8}$$

Figure 3 Schematic diagram of the erosion (corrosion) algorithm.
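To make the preceding formulas concrete, here is a minimal, self-contained Python/NumPy sketch of threshold selection by maximizing the between-class variance of equations (1)–(6) and of binary dilation/erosion per equations (7) and (8). It is an illustration, not the paper's implementation; the 3×3 square structuring element and the synthetic test image are assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the T that maximizes the between-class variance of eq. (6)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / hist.sum()                          # eq. (2): p_i = n_i / N
    i = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = p[:t + 1].sum()                       # eq. (3)
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0   # eq. (5)
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1   # eq. (4)
        var = w0 * w1 * (mu0 - mu1) ** 2           # eq. (6)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def dilate(mask, k=3):
    """Binary dilation by a k x k square structuring element, eq. (7)."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion by a k x k square structuring element, eq. (8)."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=True)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# Synthetic grayscale frame standing in for a monitoring image.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
T = otsu_threshold(gray)
binary = gray > T                  # binarization with the selected threshold
cleaned = dilate(erode(binary))    # opening: erosion then dilation
```

Opening (erosion followed by dilation) is used at the end because it removes isolated foreground pixels while roughly preserving object shape.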
### 3.3. Use of Image Processing Data

#### 3.3.1. Extreme Learning Machine

When training an extreme learning machine for classification, a set $\{(x_i,t_i)\}\subset \mathbb{R}^d\times\mathbb{R}^m$ is given, where $x_i$ is the input vector and $t_i$ represents the category to which it belongs. Then,
$$t_j=\sum_{i=1}^{L}\beta_i\,G(w_i\cdot x_j+b_i),\quad j=1,\ldots,N, \tag{9}$$
where $G(w,b,x)$ is the activation function. Equation (9) can be written compactly as
$$H\beta=T, \tag{10}$$
where
$$H=\begin{pmatrix} G(w_1\cdot x_1+b_1) & \cdots & G(w_L\cdot x_1+b_L)\\ \vdots & & \vdots\\ G(w_1\cdot x_N+b_1) & \cdots & G(w_L\cdot x_N+b_L) \end{pmatrix}_{N\times L},\qquad \beta=\begin{pmatrix}\beta_1^{T}\\ \vdots\\ \beta_L^{T}\end{pmatrix}_{L\times m},\qquad T=\begin{pmatrix}t_1^{T}\\ \vdots\\ t_N^{T}\end{pmatrix}_{N\times m}. \tag{11}$$
The output weight matrix of the learning machine can then be set, in regularized form, as
$$\hat{\beta}=(H^{T}H+\lambda I)^{-1}H^{T}T,\quad \lambda>0. \tag{12}$$
When an unknown sample $\tilde{x}$ is entered,
$$\tilde{t}=\arg\max(\tilde{h}\hat{\beta}), \tag{13}$$
where
$$\tilde{h}=\bigl(G(w_1\cdot\tilde{x}+b_1),\ \cdots,\ G(w_L\cdot\tilde{x}+b_L)\bigr) \tag{14}$$
and $\tilde{t}$ is the predicted category of the unknown sample $\tilde{x}$.

#### 3.3.2. Support Vector Machine

Training a support vector machine amounts to solving a constrained minimum problem:
$$\min J(w)=\frac{\|w\|^{2}}{2}\quad \text{s.t.}\ y_i(w\cdot x_i+b)\geq 1,\ i=1,2,\ldots,m. \tag{15}$$
Solving by the Lagrangian function
$$L(w,b,\alpha)=\frac{1}{2}\,w\cdot w-\sum_{i=1}^{m}\alpha_i\bigl[y_i(x_i\cdot w+b)-1\bigr], \tag{16}$$
take the partial derivatives with respect to $w$ and $b$ and set them to zero. Then we obtain
$$\frac{\partial L}{\partial w}=w-\sum_{i=1}^{m}\alpha_i y_i x_i=0,\qquad \frac{\partial L}{\partial b}=\sum_{i=1}^{m}\alpha_i y_i=0. \tag{17}$$
The optimal hyperplane needs to satisfy
$$\alpha_i\bigl[y_i(x_i\cdot w+b)-1\bigr]=0, \tag{18}$$
and the problem is transformed into
$$\min W(\alpha)=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}y_i y_j\alpha_i\alpha_j(x_i\cdot x_j)-\sum_{j=1}^{m}\alpha_j,\quad \text{s.t.}\ \alpha_i\geq 0,\ i=1,\ldots,m,\ \sum_{i=1}^{m}\alpha_i y_i=0. \tag{19}$$
The value $\alpha^{*}$ is the optimal solution, and
$$w^{*}=\sum_{\mathrm{sv}}\alpha_i^{*}y_i x_i,\qquad \|w^{*}\|^{2}=\sum_{\mathrm{sv}}\alpha_i^{*}\alpha_j^{*}y_i y_j(x_i\cdot x_j). \tag{20}$$
The optimal decision function is
$$f(x)=\operatorname{sgn}\Bigl(\sum_{\mathrm{sv}}y_i\alpha_i^{*}(x_i\cdot x)+b^{*}\Bigr). \tag{21}$$
For the linearly inseparable case, the constraints are relaxed to
$$y_i(w\cdot x_i+b)-1+\varepsilon_i\geq 0,\quad i=1,\ldots,m, \tag{22}$$
and the objective function becomes
$$J(w,\varepsilon)=\frac{1}{2}\,w\cdot w+C\sum_{i=1}^{m}\varepsilon_i. \tag{23}$$
In formula (23), $C$ is the penalty factor; the larger $C>0$ is, the greater the loss an outlier causes to the objective function.
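The closed-form training of equations (9)–(14) is easy to express in code. Below is a minimal NumPy sketch of a regularized ELM classifier; the tanh activation for $G$, the hidden-layer size $L=100$, the regularization $\lambda$, and the synthetic two-class data are illustrative assumptions, not choices reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, T_onehot, L=100, lam=1e-2):
    """Random hidden layer plus the ridge solution for beta, eq. (12)."""
    W = rng.standard_normal((L, X.shape[1]))   # random input weights w_i
    b = rng.standard_normal(L)                 # random biases b_i
    H = np.tanh(X @ W.T + b)                   # hidden output matrix H, eq. (11)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ T_onehot)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predicted class = argmax of h~ beta, eqs. (13)-(14)."""
    return np.argmax(np.tanh(X @ W.T + b) @ beta, axis=1)

# Two Gaussian blobs standing in for two activity areas.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.repeat([0, 1], 50)
W, b, beta = elm_fit(X, np.eye(2)[y])
print((elm_predict(X, W, b, beta) == y).mean())  # training accuracy
```

For the soft-margin SVM of equations (15)–(23), a hand-rolled dual solver is unnecessary; a short sketch using scikit-learn's SVC (an assumed dependency, since the paper does not name its solver) shows where the penalty factor $C$ of equation (23) enters. It reuses the toy data `X`, `y` from the ELM sketch above.

```python
from sklearn.svm import SVC

clf = SVC(kernel="linear", C=1.0)  # C is the penalty factor of eq. (23)
clf.fit(X, y)
# clf.dual_coef_ holds alpha_i * y_i for the support vectors (eq. (20));
# clf.decision_function computes the sum inside sgn(...) of eq. (21).
print(clf.score(X, y))
```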
## 4. System Design

The smart elderly care system is designed using image processing technology, remote monitoring, and an information service platform, combined with the daily needs of the elderly; its overall structure is shown in Figure 4.

Figure 4 The overall structure of the smart elderly care system.

The system has many functions. When an elderly person wears the ECG monitoring bracelet, heart rate and heartbeat data are generated and transmitted to the mobile phones of family members and medical staff, so that they can follow the health of the elderly throughout the day; pictures generated by the system are also sent to the family members' mobile phones. Based on the actual management and service needs of nursing homes, a personnel positioning management system for nursing homes is developed, which realizes daily basic information management, real-time positioning and tracking of the elderly, vital signs monitoring, one-button alarms for dangerous situations, and other functions.
When the elderly encounter special circumstances, the system can respond as soon as possible; it thus truly realizes the aims of intelligent elderly care and takes responsibility for the daily life and health of the elderly. It is no longer the traditional way of providing care for the aged, which suffers from a shortage of nursing staff, and an independent nursing home is included in this system.

### 4.1. Simulation Experiment Results

We investigated and collected the daily activity trajectories of many elderly people, as shown in Figure 5.

Figure 5 Activity trajectory diagram of the elderly.

The collected images are aggregated, and the statistics chart can show the real-time location of each person, as shown in Figure 6.

Figure 6 Statistics of the scope of activity.

One can also query the activity track of a user over a day, as shown in Figures 7 and 8.

Figure 7 Statistics of the range of activities of the elderly.

Figure 8 Statistics of the scope of activities for the elderly.

There are many recreational facilities in nursing homes. These facilities bring great fun to the elderly and enrich their daily life; however, some facilities are used by large numbers of people, and the resource allocation is not reasonable. Nursing homes should strengthen facility construction to make elderly care services better.

### 4.2. Obtaining Experimental Results

In order to check the accuracy of the system, we arranged the RFID equipment and collected the daily data of 4 students for a month. The ReaderIDs of the two places are represented by 1 and 2, respectively. Sample data are shown in Table 4.

Table 4 Stored location data.

| TagID | Time | Date | ReaderID | AntID | ReaderRssi | AntRssi |
| --- | --- | --- | --- | --- | --- | --- |
| 82027158 | 10:35:00 | 2020-09-16 | 2 | 2 | −12 | −7 |
| 82028058 | 10:35:00 | 2020-09-16 | 2 | 2 | −16 | −17 |
| 82027159 | 10:35:00 | 2020-09-16 | 1 | 1 | −23 | −20 |
| 82032425 | 10:35:00 | 2020-09-16 | 2 | 2 | −21 | −14 |
| 82027158 | 10:36:00 | 2020-09-16 | 2 | 2 | −12 | −7 |
| 82028058 | 10:36:00 | 2020-09-16 | 2 | 2 | −16 | −17 |
| 82027159 | 10:36:00 | 2020-09-16 | 1 | 1 | −23 | −20 |
| 82032425 | 10:36:00 | 2020-09-16 | 2 | 2 | −21 | −14 |

The smart elderly care system generates many location records every day; to keep the data from becoming confused, these records should be preprocessed to facilitate more accurate results. The data of each experimenter are accumulated in one place, and because there are 5 activity areas, the experimental data are divided into 5 groups.
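The paper does not spell out how a raw record of the kind shown in Table 4 is mapped to an activity area. A common minimal heuristic, shown here purely as an assumption, is to assign each tag at each timestamp to the reader whose received signal strength (RSSI) is highest; RSSI values are negative dBm, so "highest" means closest to zero.

```python
from collections import defaultdict

# Records shaped like the rows of Table 4: (tag_id, time, date, reader_id, rssi).
records = [
    ("82027158", "10:35:00", "2020-09-16", 2, -12),
    ("82027159", "10:35:00", "2020-09-16", 1, -23),
    ("82027158", "10:36:00", "2020-09-16", 2, -12),
]

# Group readings of the same tag at the same instant, then pick the
# strongest reader as the inferred area (a hypothetical rule, not the paper's).
best = defaultdict(lambda: (None, float("-inf")))
for tag, time, date, reader, rssi in records:
    key = (tag, date, time)
    if rssi > best[key][1]:
        best[key] = (reader, rssi)

for (tag, date, time), (reader, rssi) in sorted(best.items()):
    print(f"{date} {time} tag {tag} -> area of reader {reader} (RSSI {rssi} dBm)")
```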
### 4.3. Analysis of Simulation Experiment Results

The experiment lasted one month. We used the data of the 4 students from the first 25 days as the training set and used the data of the last 5 days to test the robustness of the extreme learning machine prediction model and the support vector machine prediction model. Figure 9 shows the prediction results: in the five-day records of the four people there were only two errors, and the accuracy rate was as high as 90%. The specific results are shown in Figures 9 and 10.

Figure 9 Prediction results of the extreme learning machine.

Figure 10 Prediction results of the support vector machine.

The predicted and actual values of the extreme learning machine are 70 and 60, respectively, and the accuracy rate is as high as 90%.

### 4.4. Research Methods

We conducted a round of questionnaire surveys with elderly people aged 50 and above as the survey subjects and distributed questionnaires to elderly people of different regions, ages, and educational backgrounds. After removing invalid questionnaires, 254 valid questionnaires remained. The distribution results are shown in Table 5.

Table 5 Distribution results.

| Characteristic | Category | Frequency | Percentage (%) | Cumulative percentage (%) |
| --- | --- | --- | --- | --- |
| Gender | Male | 113 | 39.8 | 39.8 |
| | Female | 171 | 60.2 | 100.0 |
| Age | 50–69 years old | 172 | 60.6 | 60.6 |
| | 70 years old and above | 112 | 39.4 | 100.0 |
| Education | Elementary school and below | 144 | 50.7 | 50.7 |
| | Junior high school | 60 | 21.1 | 71.8 |
| | High school | 39 | 13.7 | 85.6 |
| | Bachelor degree and above | 41 | 14.4 | 100.0 |
| Monthly income | <1000 yuan | 85 | 30.0 | 30.0 |
| | 1000–2000 yuan | 80 | 28.3 | 58.3 |
| | 2000–3000 yuan | 53 | 18.7 | 77.0 |
| | >3000 yuan | 65 | 23.0 | 100.0 |

The willingness to use smart elderly care products is shown in Figure 6. According to the experimental results in Figures 9 and 10, the accuracy of prediction can be observed clearly by drawing images with image processing technology. For the questionnaire results in Table 6, image-based analysis likewise shows the different percentages of the statistical characteristics of the samples.

Table 6 Regression analysis: the three predictors have significant standardized coefficients (0.293**, 0.402**, and 0.210**, with t = 6.104, 8.048, and 4.879, all Sig. = 0.000, VIF < 2), and the constant term is 0.424 (t = 2.801, Sig. = 0.005).

The results of the questionnaire survey show that the education level and income of the elderly are related to their willingness to use smart elderly care products. In general, elderly people with higher wages also have higher education levels and will use their knowledge reserves to learn quickly how to use smart products. Therefore, in product development and promotion, we should pay particular attention to elderly people who are slow to accept new things.
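Table 6 reports ordinary least squares regression statistics (coefficients, t values, VIF, Durbin–Watson, R²). Since the survey data are not published, the sketch below only illustrates how such a table can be produced; it uses synthetic data and assumes the statsmodels library, which the paper does not mention.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 254  # matching the number of valid questionnaires

# Synthetic stand-ins for education, income, and attention-to-health scores.
X = rng.normal(size=(n, 3))
y = 0.4 + X @ np.array([0.29, 0.40, 0.21]) + rng.normal(scale=0.5, size=n)

Xc = sm.add_constant(X)            # adds the regression constant
res = sm.OLS(y, Xc).fit()

print(res.params)                  # constant and unstandardized coefficients (B)
print(res.tvalues, res.pvalues)    # the t and Sig. columns
print("R2:", res.rsquared)
print("Durbin-Watson:", durbin_watson(res.resid))
print("VIF:", [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])])
```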
## 5. Conclusion

In view of the difficulty of providing elderly care in cities, this paper puts forward an RFID positioning system with an APP client that can protect the privacy of the elderly. Through image processing technology, it analyzes the elderly care situation in each activity center and the living conditions of the elderly in different environments. The results can support the reasonable planning of urban elderly care facilities and activity centers, so as to improve satisfaction with elderly care services.

---
*Source: 1023187-2021-10-29.xml*
# Methicillin-Resistant Staphylococcus aureus Contamination of Frequently Touched Objects in Intensive Care Units: Potential Threat of Nosocomial Infections

**Authors:** Dharm Raj Bhatta; Sumnima Koirala; Abha Baral; Niroj Man Amatya; Sulochana Parajuli; Rajani Shrestha; Deependra Hamal; Niranjan Nayak; Shishir Gokhale

**Journal:** Canadian Journal of Infectious Diseases and Medical Microbiology (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1023241

---

## Abstract

Background. Bacterial contamination in intensive care units is an important risk factor associated with increasing incidences of nosocomial infections. This study was conducted to study the bacterial colonization on commonly touched objects of intensive care units and the antibiotic resistance patterns of the bacterial isolates. Methods. This study was conducted in different intensive care units of Manipal Teaching Hospital, Pokhara, Nepal. A total of 235 swabs were collected from surfaces of bed rails, monitors, door handles, IV stands, telephone sets, nursing stations, medicine trolleys, sphygmomanometers, wash basin taps, dressing drums, stethoscopes, pulse oximeters, ventilators, defibrillators, and stretchers. Isolation, identification, and antibiotic susceptibility testing of the bacteria were performed following standard microbiological techniques. Results. Of 235 samples, bacterial growth was observed in 152. A total of 90 Staphylococcus aureus isolates were recovered from the 235 samples, and most of the sampling sites included in this study were found contaminated with S. aureus. The highest number of S. aureus isolates was cultured from the surface of bed rails. Of the total S. aureus isolates, 54.4% (49/90) were methicillin-resistant Staphylococcus aureus (MRSA). Vancomycin resistance was detected among 8.1% of MRSA isolates (4/49). Acinetobacter species were the commonest Gram-negative bacterial isolate. Conclusion. Bacterial contamination of the objects/instruments of the ICU was recorded to be high. The most common contaminating bacterium was S. aureus, with a high percentage of MRSA and the emergence of VRSA. Periodic microbiological surveillance, detection of contaminated sites, and effective decontamination methods would minimize the colonization by potential pathogens and their transmission.

---

## Body

## 1. Background

Intensive care units (ICUs) are the most essential working spaces in a hospital setting. Patients admitted to the ICU are at a greater risk of nosocomial infections: the reported incidence of nosocomial infections among ICU patients is around 2 to 5 times higher than in the general wards [1]. Microbial agents colonizing the ICU environment and healthcare workers (HCWs) are important sources of nosocomial pathogens, and bacterial agents from the hands of HCWs and the ICU environment have been associated with outbreaks of hospital-acquired infections [2–4].

A wide range of bacteria, fungi, and viruses have been associated with nosocomial infections in the ICU. Potential bacterial pathogens include Staphylococcus aureus, Klebsiella species, Escherichia coli, Pseudomonas aeruginosa, Acinetobacter baumannii, and Enterococcus species. Methicillin-resistant Staphylococcus aureus (MRSA) has been significantly associated with nosocomial infections in developing countries [5–7]. Gram-positive bacteria, including MRSA, survive for many weeks on dry inanimate objects/surfaces of hospitals [8].
The increasing prevalence of MRSA among ICU patients has been a matter of great concern even in countries where standard infection control measures are regularly implemented. The environmental contamination rate by MRSA in the ICU varies from hospital to hospital depending on a number of factors such as ward setting, crowding, patients with MRSA infections, the carrier rate among staff, hand hygiene, and other infection control practices. MRSA infections among ICU patients are associated with extended stays, poor outcomes, and higher mortality and morbidity [9]. Approximately 20% of infected patients in the ICU die from invasive MRSA infections [10].

Frequently touched surfaces, instruments, and objects such as bed railings, floors, stethoscopes, and door handles are reported to be more frequently and heavily contaminated with bacterial agents [11, 12]. The microbes involved tend to be more difficult to eradicate due to the high prevalence of antibiotic resistance. This study was planned to assess bacterial contamination of inanimate objects which are frequently touched in critical care units. Despite increasing incidences of hospital-acquired infections in intensive care units, contamination monitoring in the hospitals of Nepal is poor. Bacteriological examination of frequently touched sites and identification of areas colonized by potential pathogens would help in formulating cleaning/disinfection strategies in critical care units to minimize nosocomial infections.

## 2. Materials and Methods

This hospital-based study was conducted in the intensive care units of Manipal Teaching Hospital, Pokhara, Nepal, over a duration of four months (May 2021 to August 2021). Ethical approval was obtained from the Institutional Review Committee (IRC) of Manipal Teaching Hospital, Pokhara, Nepal (MEMG/446/IRC). Manipal Teaching Hospital has a capacity of over 750 beds and is a referral center of the western region of Nepal. The hospital has different intensive care units such as a medical ICU, postoperative ICU, critical care unit, pediatric ICU, and neonatal ICU. The total beds available in the medical ICU, surgical ICU, and critical care unit are 14, 10, and 8, respectively.

Information about the routine cleaning/disinfection of various objects/instruments of the ICU was obtained from the different units. Healthcare professionals of the ICU maintain standard hand hygiene protocols. Floor cleaning/disinfection is performed by mopping two times a day using a detergent solution. Noninvasive instruments, the nursing station, table tops, and other objects are periodically cleaned with 75% alcohol swabs. Invasive instruments are sterilized by autoclave or disinfected with Cidex as per the manufacturer's instructions.

### 2.1. Specimen Collection

A total of 235 swabs were obtained from surfaces of bed rails (n = 75), monitors (n = 32), door handles (n = 21), IV stands (n = 14), telephone sets (n = 11), nursing stations (n = 11), medicine trolleys (n = 11), sphygmomanometers (n = 10), wash basin taps (n = 10), dressing drums (n = 09), stethoscopes (n = 08), pulse oximeters (n = 06), ventilators (n = 06), defibrillators (n = 06), and stretchers (n = 05). A majority of these sites are frequently touched either by healthcare professionals or patients. Samples were collected by rubbing sterile swabs moistened with peptone water.

### 2.2. Isolation and Identification of Bacteria

All the samples were inoculated in nutrient broth and incubated at 37°C for 18–24 hours.
A subculture from the broth was performed on 5% sheep blood agar and MacConkey agar. Culture plates were incubated at 37°C for 24–48 hours. The bacterial isolates were identified by standard bacteriological procedures such as colony morphology, Gram staining, biochemical reactions, and other phenotypic characteristics [13].

### 2.3. Antibiotic Susceptibility Test

Antibiotic sensitivity testing was performed by the Kirby–Bauer disc diffusion method using Mueller–Hinton agar plates (HiMedia, Mumbai, India) [14]. The antibiotics tested were ciprofloxacin (5 μg), penicillin (10 IU), gentamicin (10 μg), erythromycin (15 μg), clindamycin (2 μg), trimethoprim-sulfamethoxazole (1.25/23.75 μg), ceftazidime (30 μg), amikacin (30 μg), and imipenem (10 μg). Bacterial isolates resistant to at least one agent in three or more antibiotic groups were categorized as multidrug-resistant (MDR) [15]. Methicillin-resistant S. aureus (MRSA) isolates were detected by the cefoxitin (30 μg) disc diffusion method [14]. The minimal inhibitory concentration (MIC) of vancomycin was determined by the Epsilometer test (HiMedia, Mumbai, India) following CLSI guidelines [14].

### 2.4. Biofilm Detection

Biofilm formation among S. aureus and MRSA isolates was tested by the standard microtiter plate method [16].

## 3. Results

A total of 235 swabs were collected from different sites.
Bacterial growth was observed in 152 swabs on blood agar and MacConkey agar plates, while 83 samples did not show bacterial growth. A total of 90 S. aureus isolates were cultured from the 235 samples, and most of the sampling sites included in this study were found to be contaminated with S. aureus. Acinetobacter species were the commonest Gram-negative bacteria. Other bacterial isolates were Pseudomonas species, members of the family Enterobacteriaceae, coagulase-negative Staphylococcus species, Enterococci, Micrococci, non-diphtheriae Corynebacterium, and Bacillus species. The frequency of bacterial agents isolated from objects/instruments of the ICU is depicted in Table 1.

Table 1 Frequency of bacterial isolates colonizing objects of intensive care units (n = 152 isolates).

| Organism | ICU | CCU | SICU | Total |
| --- | --- | --- | --- | --- |
| Staphylococcus aureus | 31 (20.4%) | 19 (12.5%) | 40 (26.3%) | 90 (59.2%) |
| Coagulase-negative Staphylococci | 6 (3.9%) | 3 (1.9%) | 4 (2.6%) | 13 (8.5%) |
| Bacillus species | 2 (1.3%) | 2 (1.3%) | 3 (1.9%) | 7 (4.6%) |
| Micrococcus species | 3 (1.9%) | 1 (0.6%) | 2 (1.3%) | 6 (3.9%) |
| Non-diphtheriae Corynebacterium | 1 (0.6%) | 3 (1.9%) | 1 (0.6%) | 5 (3.2%) |
| Enterococcus species | 1 (0.6%) | — | — | 1 (0.6%) |
| Acinetobacter species | 6 (3.9%) | 9 (5.9%) | 6 (3.9%) | 21 (13.8%) |
| Pseudomonas species | 1 (0.6%) | 1 (0.6%) | — | 2 (1.3%) |
| Escherichia coli | — | 2 (1.3%) | 2 (1.3%) | 4 (2.6%) |
| Klebsiella pneumoniae | — | 1 (0.6%) | — | 1 (0.6%) |
| Enterobacter species | — | — | 1 (0.6%) | 1 (0.6%) |
| Proteus species | 1 (0.6%) | — | — | 1 (0.6%) |

ICU: intensive care unit; CCU: critical care unit; SICU: surgical intensive care unit.

All the sampling sites included in this study were contaminated with S. aureus (except medicine trolleys), with the highest number from the surface of bed rails. Among the total S. aureus isolates, 54.4% (49/90) were MRSA and 45.5% (41/90) were identified as MSSA. Details of the sampling sites and S. aureus isolates are shown in Table 2.

Table 2 Distribution of Staphylococcus aureus (MSSA and MRSA) isolated from environmental samples of intensive care units.

| Sampling site | Number of swabs | Number of S. aureus isolates | Number of MSSA isolates | Number of MRSA isolates |
| --- | --- | --- | --- | --- |
| Bed rail | 75 | 29 | 10 | 19 |
| Monitor | 32 | 8 | 3 | 5 |
| Door handle | 21 | 12 | 8 | 4 |
| IV stand | 14 | 8 | 5 | 3 |
| Telephone set | 11 | 5 | 4 | 1 |
| Nursing station | 11 | 5 | 2 | 3 |
| Medicine trolley | 11 | 0 | 0 | 0 |
| Sphygmomanometer | 10 | 5 | 2 | 3 |
| Wash basin tap | 10 | 3 | 1 | 2 |
| Dressing drum | 9 | 5 | 3 | 2 |
| Stethoscope | 8 | 2 | 0 | 2 |
| Pulse oximeter | 6 | 1 | 1 | 0 |
| Ventilator | 6 | 2 | 1 | 1 |
| Defibrillator | 6 | 3 | 0 | 3 |
| Stretcher | 5 | 2 | 1 | 1 |
| Total | 235 | 90 | 41 | 49 |

A high percentage of S. aureus isolates were found susceptible to vancomycin, gentamicin, and ciprofloxacin, while all the isolates were resistant to penicillin. Vancomycin resistance was detected among 8.1% of MRSA isolates (4/49), with a minimal inhibitory concentration (MIC) value of >256 μg/ml. Eleven isolates (11/49) of MRSA were intermediately susceptible to vancomycin, as shown in Figure 1. The antibiotic resistance patterns of S. aureus, MSSA, and MRSA isolates are shown in Table 3; resistance of MRSA isolates to ciprofloxacin, cotrimoxazole, erythromycin, and gentamicin was significantly higher than that of MSSA. Among the 90 S. aureus isolates, 20 (22.2%) were biofilm producers; no significant association was observed between the biofilm-forming properties of MSSA and MRSA isolates. Acinetobacter species were the commonest Gram-negative bacterial isolate (21/152), and their drug resistance pattern showed a high percentage of MDR isolates, with 38% (8/21) resistant to imipenem.

Figure 1 MIC results of vancomycin against MRSA isolates.

Table 3 Antibiotic resistance patterns of S. aureus, MSSA, and MRSA isolates.

| Antibiotic | S. aureus isolates (n = 90), frequency (%) | MSSA isolates (n = 41), frequency (%) | MRSA isolates (n = 49), frequency (%) | P value |
| --- | --- | --- | --- | --- |
| Ciprofloxacin | 25 (27.7%) | 6 (14.6%) | 19 (38.7%) | 0.017 |
| Cotrimoxazole | 33 (36.6%) | 09 (21.9%) | 24 (48.9%) | 0.009 |
| Clindamycin | 65 (72.2%) | 28 (68.3%) | 37 (75.5%) | 0.486 |
| Cefoxitin | 49 (54.4%) | 00 | 49 (100%) | 0.000 |
| Erythromycin | 77 (85.5%) | 30 (73.1%) | 47 (95.9%) | 0.003 |
| Gentamicin | 15 (16.6%) | 00 (0%) | 15 (30.6%) | 0.000 |
| Penicillin | 90 (100%) | 41 (100%) | 49 (100%) | — |
| Vancomycin | 04 (4.4%) | 00 (0%) | 04 (8.1%) | 0.123 |
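The paper does not name the statistical test behind the P values in Table 3; a chi-square test of independence on each 2×2 resistance table is the conventional choice and yields values of the same order. A minimal sketch (SciPy assumed available; counts taken from the ciprofloxacin row):

```python
from scipy.stats import chi2_contingency

# Ciprofloxacin row of Table 3: 6/41 resistant MSSA vs 19/49 resistant MRSA.
counts = [[6, 41 - 6],    # MSSA: resistant, susceptible
          [19, 49 - 19]]  # MRSA: resistant, susceptible
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p of the same order as the reported 0.017
```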
## 4. Discussion

Microbial colonization of objects/instruments in the ICU is considered a major factor in the increased incidence of nosocomial infections. The reported prevalence of nosocomial infections in the ICUs of developing countries is 2–20 times higher than that of developed countries [17]. Nonadherence to standard hand hygiene protocols by healthcare professionals contributes significantly to the contamination of inanimate objects and to cross-transmission during contact with the patient. S. aureus is one of the most common human pathogens and is significantly associated with nosocomial infections, particularly in the ICU. Increasing drug resistance among MRSA isolates and the emergence of vancomycin-resistant Staphylococcus aureus (VRSA) isolates have further exacerbated the problem. Identification of sites colonized by MRSA and other potential nosocomial pathogens would minimize transmission among patients and thus help in reducing the incidence of nosocomial infections in the ICU.

In our study, bacterial contamination of frequently touched objects/instruments in the ICU was high. The overall bacterial contamination rate was 64.7% (152/235), which is higher than the findings of some other studies [18, 19], while others have reported higher percentages of bacterial contamination than ours [20, 21]. This reflects that bacterial contamination rates vary from hospital to hospital within and outside the country. High rates of bacterial contamination in the ICU could be associated with the admission of patients with different clinical conditions referred from various units, higher bed occupancy, prolonged stays, and poor compliance with infection control. Rates of bacterial contamination also vary with the frequency of use of life-supporting equipment, the frequency of sterilization/disinfection, the type and concentration of the disinfectant used, fumigation, and other infection control practices in the ICU.

Our findings showed contamination of objects/instruments in the ICU with a diverse group of Gram-positive and Gram-negative bacteria, and contamination by Gram-positive bacteria was higher than that by Gram-negative bacteria. Similar findings are reported by other studies [19, 20]. In contrast, a study from India reported higher contamination rates with Gram-negative bacterial isolates [22].

S. aureus and MRSA are notorious nosocomial pathogens associated with a variety of clinical conditions in the ICU. Colonized hands of healthcare workers account for 20–40% of infections due to cross contamination [23, 24]. All the sites included in the study were contaminated with S. aureus except the medicine trolley. Among Gram-positive bacteria, S. aureus was the most common potential pathogen isolated, with 54.4% MRSA. Surfaces of bed rails yielded the highest number of S. aureus isolates as compared to other sites; bed rails are one of the most frequently touched surfaces by healthcare providers, patients, and visitors. Objects/instruments of different units of a hospital often remain contaminated with S. aureus due to its prolonged survival [8].
A study conducted on environmental samples of the neonatal ICU of Manipal Teaching Hospital reported S. aureus as one of the common potential pathogens, with 33.3% MRSA isolates [25]. Patients admitted to the ICU are often immunocompromised and vulnerable to nosocomial infection. Contamination of these sites with S. aureus and MRSA increases the risk of transmission among patients and may result in septicemia and pneumonia.

Infections associated with MRSA are difficult to treat due to limited therapeutic options. Among MRSA isolates, 8.1% (4/49) were resistant to vancomycin. Detection of vancomycin-resistant isolates is a serious matter of concern and requires special attention. Various studies from Nepal have reported the emergence of VRSA isolates in clinical samples [26, 27]. Contamination of ICU objects with VRSA poses a great risk of nosocomial infections. In addition, 22.4% (11/49) of MRSA isolates were intermediately susceptible to vancomycin, which may be an alarming sign of increased incidences of VRSA infections in the near future.

Biofilm formation among S. aureus isolates was studied: 22.2% (20/90) were biofilm producers. S. aureus isolates with biofilm-forming properties can survive longer on hospital surfaces, and this long-term survival makes them a potential source of nosocomial infections. A recent study from Manipal Teaching Hospital reported a higher percentage (31.8%) of biofilm producers among S. aureus isolates cultured from inanimate objects of the hospital [24].

Among Gram-negative bacteria, the potential pathogens isolated were Acinetobacter species, Pseudomonas species, and bacteria of the family Enterobacteriaceae, with Acinetobacter species the most common. Acinetobacter species are well-established pathogens among ICU patients due to their resistance to different groups of antibiotics and chemical disinfectants. A study from Manipal Teaching Hospital reported MDR Acinetobacter species as the most common bacterial pathogen associated with lower respiratory tract infections among ICU patients [28]. Contamination of ICU objects/instruments with Acinetobacter species is an additional risk factor for nosocomial pneumonia. The drug resistance pattern showed that a high percentage (38%) of isolates were resistant to imipenem. Resistance to imipenem is alarming and challenges clinicians in therapeutic management; increasing resistance to higher-generation antibiotics limits treatment options and imposes an additional financial burden and long-term hospitalization on patients.

Our study findings are important to generate awareness among the infection control team and healthcare professionals regarding contamination of the ICU with bacterial agents and their possible role in nosocomial infections. This was a single-center study, and the findings may not be generalizable. We did not study the association between contamination of objects/instruments and nosocomial infections, and molecular characterization of the isolates was not performed.

## 5. Conclusion

Our study results showed a high level of bacterial contamination of the frequently touched objects/instruments of the ICU. Isolation of MRSA and VRSA from these sites is a potential threat of nosocomial infections. The present study emphasizes the need for modification of the existing cleaning/disinfection procedures in order to minimize contamination by potential pathogens.
Periodic microbiological surveillance of the ICU environment, together with effective infection control practices, is expected to minimize bacterial contamination and transmission. Gentamicin may be used empirically in suspected cases of staphylococcal infections in the ICU.

---
*Source: 1023241-2022-05-21.xml*
--- ## Abstract Background. Bacterial contamination in intensive care units is an important risk factor associated with increasing incidences of nosocomial infections. This study was conducted to study the bacterial colonization on commonly touched objects of intensive care units and antibiotic resistance pattern of bacterial isolates. Methods. This study was conducted in different intensive care units of Manipal Teaching Hospital, Pokhara, Nepal. A total of 235 swabs were collected from surfaces of bed rails, monitors, door handles, IV stands, telephone sets, nursing stations, medicine trolleys, sphygmomanometers, wash basin taps, dressing drums, stethoscopes, pulse oximeters, ventilators, defibrillators, and stretchers. Isolation, identification, and antibiotic susceptibility tests of the bacteria were performed following standard microbiological techniques. Results. Of 235 samples, bacterial growth was observed in 152 samples. A total of 90 samples of Staphylococcus aureus were isolated from 235 samples. Most of the sampling sites included in this study were found contaminated with S. aureus. The highest number of S. aureus was cultured from the surface of bed rails. Of the total S. aureus isolates, 54.4% (49/90) were methicillin-resistant Staphylococcus aureus (MRSA). Vancomycin resistance was detected among 8.1% MRSA isolates (4/49). Acinetobacter species were the commonest Gram-negative bacterial isolate. Conclusion. Bacterial contamination of the objects/instruments of the ICU was recorded to be high. The most common contaminating bacteria were S. aureus with a high percentage of MRSA and emergence of VRSA. Periodic microbiological surveillance, detection of contaminated sites, and effective decontamination methods would minimize the colonization by potential pathogens and their transmission. --- ## Body ## 1. Background Intensive care units (ICUs) are the most essential working spaces in a hospital setting. Patients admitted in ICU are at a greater risk of nosocomial infections. The reported incidence of nosocomial infections in patients in ICU is around 2 to 5 times higher than in the general wards [1]. Microbial agents colonizing the environment of ICU and healthcare workers (HCWs) are important sources of nosocomial pathogens. Bacterial agents from the hands of HCWs and the ICU environment have been associated with outbreaks of hospital-acquired infections [2–4].A wide range of bacteria, fungi, and viruses have been associated with nosocomial infections in ICU. Potential bacterial pathogens includeStaphylococcus aureus, Klebsiella species, Escherichia coli, Pseudomonas aeruginosa, Acinetobacter baumannii, and Enterococcus species. Methicillin-resistant Staphylococcus aureus (MRSA) has been significantly associated with nosocomial infections in developing countries [5–7]. Gram-positive bacteria including MRSA survive for many weeks on dry inanimate objects/surfaces of the hospitals [8]. The increasing prevalence of MRSA among the patients in ICU has been a matter of great concern even in countries where standard infection control measures are regularly implemented. The environmental contamination rate by MRSA in ICU varies from hospital to hospital depending on a number of factors such as ward setting, crowding, patients with MRSA infections, carrier rate among staff, hand hygiene, and other infection control practices. The reported MRSA infections among ICU patients are associated with extended stay, poor outcome, and higher mortality and morbidity [9]. 
Approximately, 20% of infected patients in ICU die from invasive MRSA infections [10].Frequently touched surfaces, instruments, and objects such as bed railings, floor, stethoscopes, and door handles are reported to be more frequently and heavily contaminated with bacterial agents [11, 12]. The microbes involved tend to be more difficult to eradicate due to high prevalence of antibiotic resistance. This study was planned to assess bacterial contamination of inanimate objects which are frequently touched in critical care units. Despite increasing incidences of hospital-acquired infections in intensive care units, contamination monitoring in the hospitals of Nepal is poor. Bacteriological examination of frequently touched sites and identification of areas colonized by potential pathogens would help in formulating cleaning/disinfection strategies in critical care units to minimize nosocomial infections. ## 2. Materials and Methods This hospital-based study was conducted in intensive care units of Manipal Teaching Hospital, Pokhara, Nepal, over a duration of four months (May 2021 to August 2021). Ethical approval was obtained from the Institutional Review Committee (IRC) of Manipal Teaching Hospital, Pokhara, Nepal (MEMG/446/IRC). Manipal Teaching Hospital has over 750-bed capacity and is a referral center of the western region of Nepal. The hospital has different intensive care units such as medical ICU, postoperative ICU, critical care unit, pediatric ICU, and neonatal ICU. Total beds available in medical ICU, surgical ICU, and critical care unit are 14, 10, and 8, respectively.Information about routine cleaning/disinfection of various objects/instruments of ICU was obtained from different units. Healthcare professionals of ICU maintain standard hand hygiene protocols. The floor cleaning/disinfection procedures are performed by mopping two times a day using a detergent solution. Noninvasive instruments, nursing station, table tops, and other objects are periodically cleaned with 75% alcohol swabs. Invasive instruments are sterilized by autoclave or disinfected by Cidex as per the manufacturer’s instruction. ### 2.1. Specimen Collection A total of 235 swabs were obtained from surfaces of bed rails (n = 75), monitors (n = 32), door handles (n = 21), IV stands (n = 14), telephone sets (n = 11), nursing stations (n = 11), medicine trolleys (n = 11), sphygmomanometers (n = 10), wash basin taps (n = 10), dressing drums (n = 09), stethoscopes (08), pulse oximeters (n = 06), ventilators (n = 06), defibrillators (n = 06), and stretchers (n = 05). A majority of these sites are frequently touched either by healthcare professionals or patients. Samples were collected by rubbing sterile swabs moistened with peptone water. ### 2.2. Isolation and Identification of Bacteria All the samples were inoculated in a nutrient broth and incubated at 37°C for 18–24 hours. A subculture from the broth was performed on 5% sheep blood agar and MacConkey agar. Culture plates were incubated at 37°C for 24–48 hours. The bacterial isolates were identified by standard bacteriological procedures such as colony morphology, Gram staining, biochemical reactions, and other phenotypic characteristics [13]. ### 2.3. Antibiotic Susceptibility Test An antibiotic sensitivity test was performed by the Kirby–Bauer disc diffusion method using Mueller–Hinton agar plates (HI media, Mumbai, India) [14]. 
The antibiotics tested are ciprofloxacin (5 μg), penicillin (10 IU), gentamicin (10 μg), erythromycin (15 μg), clindamycin (2 μg) trimethoprim sulfamethoxazole (1.25/23.75 μg), ceftazidime (30 μg), amikacin (30 μg), and imipenem (10 μg). Bacterial isolates which are resistant to minimum one agent in three or more than three antibiotic groups were categorized as multidrug-resistant (MDR) [15]. Methicillin-resistant S. aureus (MRSA) isolates were detected by the cefoxitin (30 μg) disc diffusion method [14]. Minimal inhibitory concentration (MIC) of vancomycin was performed by the Epsilometer test (HI media, Mumbai, India) following CLSI guidelines [14]. ### 2.4. Biofilm Detection Detection of biofilm amongS. aureus and MRSA isolates was tested by the standard microtiter plate method [16]. ## 2.1. Specimen Collection A total of 235 swabs were obtained from surfaces of bed rails (n = 75), monitors (n = 32), door handles (n = 21), IV stands (n = 14), telephone sets (n = 11), nursing stations (n = 11), medicine trolleys (n = 11), sphygmomanometers (n = 10), wash basin taps (n = 10), dressing drums (n = 09), stethoscopes (08), pulse oximeters (n = 06), ventilators (n = 06), defibrillators (n = 06), and stretchers (n = 05). A majority of these sites are frequently touched either by healthcare professionals or patients. Samples were collected by rubbing sterile swabs moistened with peptone water. ## 2.2. Isolation and Identification of Bacteria All the samples were inoculated in a nutrient broth and incubated at 37°C for 18–24 hours. A subculture from the broth was performed on 5% sheep blood agar and MacConkey agar. Culture plates were incubated at 37°C for 24–48 hours. The bacterial isolates were identified by standard bacteriological procedures such as colony morphology, Gram staining, biochemical reactions, and other phenotypic characteristics [13]. ## 2.3. Antibiotic Susceptibility Test An antibiotic sensitivity test was performed by the Kirby–Bauer disc diffusion method using Mueller–Hinton agar plates (HI media, Mumbai, India) [14]. The antibiotics tested are ciprofloxacin (5 μg), penicillin (10 IU), gentamicin (10 μg), erythromycin (15 μg), clindamycin (2 μg) trimethoprim sulfamethoxazole (1.25/23.75 μg), ceftazidime (30 μg), amikacin (30 μg), and imipenem (10 μg). Bacterial isolates which are resistant to minimum one agent in three or more than three antibiotic groups were categorized as multidrug-resistant (MDR) [15]. Methicillin-resistant S. aureus (MRSA) isolates were detected by the cefoxitin (30 μg) disc diffusion method [14]. Minimal inhibitory concentration (MIC) of vancomycin was performed by the Epsilometer test (HI media, Mumbai, India) following CLSI guidelines [14]. ## 2.4. Biofilm Detection Detection of biofilm amongS. aureus and MRSA isolates was tested by the standard microtiter plate method [16]. ## 3. Results A total of 235 swabs were collected from different sites. Bacterial growth was observed in 152 swabs on both blood agar and MacConkey agar plates, while 83 samples did not show bacterial growth. A total of 90S. aureus isolates were cultured from 235 samples. Most of the sampling sites included in this study were found to be contaminated with S. aureus isolates. Acinetobacter species were the commonest Gram-negative bacteria. Other bacterial isolates were Pseudomonas species, members of family Enterobacteriaceae, Staphylococcus species (coagulase negative), Enterococci, Micrococci, non-diphtheriae Corynebacterium, and Bacillus species. 
The frequency of bacterial agents isolated from objects/instruments of the ICU is depicted in Table 1.

Table 1: Frequency of bacterial isolates colonized on objects of intensive care units (total isolates, n = 152).

| Organism | ICU | CCU | SICU | Total |
| --- | --- | --- | --- | --- |
| Staphylococcus aureus | 31 (20.4%) | 19 (12.5%) | 40 (26.3%) | 90 (59.2%) |
| Coagulase-negative Staphylococci | 6 (3.9%) | 3 (1.9%) | 4 (2.6%) | 13 (8.5%) |
| Bacillus species | 2 (1.3%) | 2 (1.3%) | 3 (1.9%) | 7 (4.6%) |
| Micrococcus species | 3 (1.9%) | 1 (0.6%) | 2 (1.3%) | 6 (3.9%) |
| Non-diphtheriae Corynebacterium | 1 (0.6%) | 3 (1.9%) | 1 (0.6%) | 5 (3.2%) |
| Enterococcus species | 1 (0.6%) | — | — | 1 (0.6%) |
| Acinetobacter species | 6 (3.9%) | 9 (5.9%) | 6 (3.9%) | 21 (13.8%) |
| Pseudomonas species | 1 (0.6%) | 1 (0.6%) | — | 2 (1.3%) |
| Escherichia coli | — | 2 (1.3%) | 2 (1.3%) | 4 (2.6%) |
| Klebsiella pneumoniae | — | 1 (0.6%) | — | 1 (0.6%) |
| Enterobacter species | — | — | 1 (0.6%) | 1 (0.6%) |
| Proteus species | 1 (0.6%) | — | — | 1 (0.6%) |

ICU: intensive care unit, CCU: critical care unit, SICU: surgical intensive care unit.

All the sampling sites included in this study were contaminated with S. aureus (except medicine trolleys), with the highest number from the surface of bed rails. Among the total S. aureus isolates, 54.4% (49/90) were MRSA and 45.5% (41/90) were identified as MSSA. Details of the sampling sites and S. aureus isolates are shown in Table 2.

Table 2: Distribution of Staphylococcus aureus (MSSA and MRSA) isolated from environmental samples of intensive care units.

| Sampling site | Number of swabs | S. aureus isolates | MSSA isolates | MRSA isolates |
| --- | --- | --- | --- | --- |
| Bed rail | 75 | 29 | 10 | 19 |
| Monitor | 32 | 8 | 3 | 5 |
| Door handle | 21 | 12 | 8 | 4 |
| IV stand | 14 | 8 | 5 | 3 |
| Telephone set | 11 | 5 | 4 | 1 |
| Nursing station | 11 | 5 | 2 | 3 |
| Medicine trolley | 11 | 0 | 0 | 0 |
| Sphygmomanometer | 10 | 5 | 2 | 3 |
| Wash basin tap | 10 | 3 | 1 | 2 |
| Dressing drum | 9 | 5 | 3 | 2 |
| Stethoscope | 8 | 2 | 0 | 2 |
| Pulse oximeter | 6 | 1 | 1 | 0 |
| Ventilator | 6 | 2 | 1 | 1 |
| Defibrillator | 6 | 3 | 0 | 3 |
| Stretcher | 5 | 2 | 1 | 1 |
| Total | 235 | 90 | 41 | 49 |

High percentages of S. aureus isolates were found susceptible to vancomycin, gentamicin, and ciprofloxacin. All the isolates of S. aureus were resistant to penicillin. Vancomycin resistance was detected among 8.1% of MRSA isolates (4/49), with a minimal inhibitory concentration (MIC) value of >256 μg/ml. Eleven isolates (11/49) of MRSA were intermediate-susceptible to vancomycin, as shown in Figure 1. The antibiotic resistance patterns of S. aureus, MSSA, and MRSA isolates are shown in Table 3. The resistance of MRSA isolates to ciprofloxacin, cotrimoxazole, erythromycin, and gentamicin was significantly higher than that of MSSA, as shown in Table 3. Among the 90 S. aureus isolates, 20 (22.2%) were biofilm producers. No significant association was observed in the biofilm-forming property of MSSA and MRSA isolates. Acinetobacter species were the commonest Gram-negative bacterial isolates (21/152). The drug resistance pattern of Acinetobacter species showed a high percentage of MDR isolates, with 38% (8/21) resistant to imipenem.

Figure 1: MIC results of vancomycin against MRSA isolates.

Table 3: Antibiotic resistance patterns of S. aureus, MSSA, and MRSA isolates.

| Antibiotic | S. aureus (n = 90), frequency (%) | MSSA (n = 41), frequency (%) | MRSA (n = 49), frequency (%) | P value |
| --- | --- | --- | --- | --- |
| Ciprofloxacin | 25 (27.7%) | 6 (14.6%) | 19 (38.7%) | 0.017 |
| Cotrimoxazole | 33 (36.6%) | 9 (21.9%) | 24 (48.9%) | 0.009 |
| Clindamycin | 65 (72.2%) | 28 (68.3%) | 37 (75.5%) | 0.486 |
| Cefoxitin | 49 (54.4%) | 0 | 49 (100%) | 0.000 |
| Erythromycin | 77 (85.5%) | 30 (73.1%) | 47 (95.9%) | 0.003 |
| Gentamicin | 15 (16.6%) | 0 (0%) | 15 (30.6%) | 0.000 |
| Penicillin | 90 (100%) | 41 (100%) | 49 (100%) | — |
| Vancomycin | 4 (4.4%) | 0 (0%) | 4 (8.1%) | 0.123 |
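The P values in Table 3 compare resistance frequencies between MSSA and MRSA isolates. As a hedged illustration (the paper does not name the statistical test used), a chi-square test of independence on the ciprofloxacin counts yields a P value close to the reported 0.017:

```python
# Hypothetical re-check of a Table 3 P value; a chi-square test of
# independence is assumed here, since the paper does not name its test.
from scipy.stats import chi2_contingency

# Ciprofloxacin resistance (Table 3): 6/41 MSSA and 19/49 MRSA isolates.
table = [[6, 41 - 6],    # MSSA: resistant, susceptible
         [19, 49 - 19]]  # MRSA: resistant, susceptible

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")  # P lands near the reported 0.017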
## 4. Discussion

Microbial colonization of objects/instruments in the ICU is considered a major factor for the increased incidence of nosocomial infections. The reported prevalence of nosocomial infections in the ICUs of developing countries is 2–20 times higher than that in developed countries [17]. Nonadherence to standard hand hygiene protocols by healthcare professionals contributes significantly to the contamination of inanimate objects and to cross-transmission during contact with the patient. S. aureus is one of the most common human pathogens and is significantly associated with nosocomial infections, particularly in the ICU. Increasing drug resistance among MRSA isolates and the emergence of vancomycin-resistant Staphylococcus aureus (VRSA) isolates have further exacerbated the problem. Identification of sites colonized by MRSA and other potential nosocomial pathogens would minimize transmission among patients and thus help in reducing the incidence of nosocomial infections in the ICU.

In our study, bacterial contamination of frequently touched objects/instruments in the ICU was high. The overall bacterial contamination rate in the ICU was 64.7% (152/235), which is higher than the findings of other studies [18, 19]. Some studies have reported a higher percentage of bacterial contamination than our findings [20, 21]. This reflects that bacterial contamination rates vary from hospital to hospital within and outside the country. High rates of bacterial contamination in the ICU could be associated with the admission of patients with different clinical conditions referred from various units, higher bed occupancy, prolonged stay, and poor compliance with infection control. Rates of bacterial contamination vary with the frequency of use of life-supporting equipment, the frequency of sterilization/disinfection, the type and concentration of the disinfectant used, fumigation, and other infection control practices in the ICU.

The findings of our study showed contamination of objects/instruments in the ICU with a diverse group of Gram-positive and Gram-negative bacteria. In our findings, contamination by Gram-positive bacteria was higher than that by Gram-negative bacteria. Similar findings are reported by other studies [19, 20]. In contrast to our findings, a study from India reported higher contamination rates with Gram-negative bacterial isolates [22].

S. aureus and MRSA are notorious nosocomial pathogens associated with a variety of clinical conditions in the ICU. Colonized hands of healthcare workers account for 20–40% of infections due to cross-contamination [23, 24]. All the sites included in the study were contaminated with S. aureus except the medicine trolley. Among Gram-positive bacteria, S. aureus was the most common potential pathogen isolated, with 54.4% MRSA. Surfaces of bed rails yielded the highest number of S. aureus isolates as compared to other sites. Bed rails are among the surfaces most frequently touched by healthcare providers, patients, and visitors. Objects/instruments of different units of a hospital often remain contaminated with S. aureus because of its prolonged survival [8]. A study conducted on environmental samples of the neonatal ICU of Manipal Teaching Hospital reported S. aureus as one of the common potential pathogens, with 33.3% MRSA isolates [25]. Patients admitted to the ICU are often immunocompromised and vulnerable to nosocomial infections. Contamination of these sites with S. aureus and MRSA increases the risk of transmission among patients and may result in septicemia and pneumonia.

Infections associated with MRSA are difficult to treat due to limited therapeutic options. Among MRSA isolates, 8.1% (4/49) were resistant to vancomycin.
Detection of vancomycin-resistant isolates is a serious matter of concern and requires special attention. Various studies from Nepal have reported the emergence of VRSA isolates in clinical samples [26, 27]. Contamination of objects of the ICU with VRSA poses a great risk of nosocomial infections. In addition, 22.4% (11/49) of MRSA isolates were intermediate-susceptible to vancomycin. This may be an alarming sign of increased incidences of VRSA infections in the near future.

Biofilm formation among S. aureus isolates was also studied. Among S. aureus isolates, 22.2% (20/90) were biofilm producers. S. aureus isolates with biofilm-forming ability can survive longer on hospital surfaces and are a potential source of nosocomial infections. A recent study from Manipal Teaching Hospital reported a higher percentage (31.8%) of biofilm producers among S. aureus isolates cultured from inanimate objects of the hospital [24].

Among Gram-negative bacteria, the potential pathogens isolated were Acinetobacter species, Pseudomonas species, and bacteria of the family Enterobacteriaceae. In our study, Acinetobacter species were the most common Gram-negative bacteria isolated. Acinetobacter species are well-established pathogens among patients admitted to the ICU owing to their resistance to different groups of antibiotics and chemical disinfectants. A study from Manipal Teaching Hospital reported MDR Acinetobacter species as the most common bacterial pathogen associated with lower respiratory tract infections among ICU patients [28]. Contamination of objects/instruments of the ICU with Acinetobacter species is an additional risk factor for nosocomial pneumonia. The drug resistance pattern showed that a high percentage (38%) of isolates were resistant to imipenem. Resistance to imipenem is alarming and poses a therapeutic challenge to clinicians. Increasing resistance to higher-generation antibiotics limits the treatment options, with an additional financial burden and long-term hospitalization for patients.

Our study findings are important to generate awareness among the infection control team and healthcare professionals regarding contamination of the ICU with bacterial agents and their possible role in nosocomial infections. This was a single-center study, and the findings may not be generalizable. We did not study the association between contamination of objects/instruments and nosocomial infections. Molecular characterization of the isolates was not performed.

## 5. Conclusion

Our study results showed a high level of bacterial contamination of the frequently touched objects/instruments of the ICU. Isolation of MRSA and VRSA from these sites is a potential threat of nosocomial infections. The present study emphasizes the need for modification of the existing cleaning/disinfection procedures in order to minimize contamination by potential pathogens. Periodic microbiological surveillance of the ICU environment together with effective infection control practices is expected to minimize bacterial contamination and transmission. Gentamicin may be used empirically in suspected cases of staphylococcal infections in the ICU.

---
*Source: 1023241-2022-05-21.xml*
# A Novel Cooperative ARQ Method for Wireless Sensor Networks

**Authors:** Haiyong Wang; Geng Yang; Yiran Gu; Jian Xu; Zhixin Sun

**Journal:** International Journal of Distributed Sensor Networks (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102326

---

## Abstract

In wireless sensor networks, cooperative communication can combat the effects of channel fading by exploiting the diversity gain achieved via cooperation among relay nodes. A cooperative automatic repeat request (ARQ) protocol based on two-relay node selection is proposed in this paper. A novel discrete-time Markov chain model is built to analyze throughput and energy efficiency, and the throughput and energy efficiency of the proposed protocol and the traditional ARQ protocol are studied based on this model. The numerical results reveal that the proposed protocol outperforms the traditional ARQ protocol in both throughput and energy efficiency.

---

## Body

## 1. Introduction

Recently, wireless sensor networks (WSNs) have become a fast-developing research area related to a wide range of applications, such as environmental surveillance, military operations, and patient monitoring. WSNs are composed of a large number of sensor nodes which are typically powered by small batteries. Moreover, it is undesirable or impossible to replace or recharge the sensor nodes in many situations. Hence, there is a great need for a reliable and energy-efficient transmission strategy that improves throughput and energy efficiency and prolongs the network lifetime while satisfying specific quality-of-service requirements.

Cooperative communication among sensor nodes has been considered to provide diversity in WSNs, where fading strongly affects point-to-point links; it can combat fading effectively and enhance the reliability of communication significantly, and it has therefore been studied extensively. On the other hand, the traditional automatic repeat request protocol (TA) is an effective method to improve transmission quality and combat poor channel conditions by retransmitting a data packet that was incorrectly received in the previous slot. Thus, the cooperative ARQ (CARQ) mechanism, which combines cooperative communication and the ARQ protocol, has been receiving more and more attention over the past decade or so [1–4]. The CARQ mechanism can simultaneously increase the rate of successful data reception at the destination node and combat channel attenuation. From the viewpoint of energy consumption, the CARQ mechanism can achieve higher energy efficiency because it makes the wireless communication between the source node and the destination node more successful than the traditional ARQ protocol in terms of data reception rate, and more reliable than normal cooperative communication.

In the past few years, cooperative communication has established itself as an effective and energy-conserving method for wireless sensor networks. One of the promising techniques is to use a relay to help the source node communicate with the destination node, where each node is equipped with only one antenna.
With the help of the relay node, a virtual MIMO antenna array is formed, which can provide spatial diversity without multiple antennas per terminal node [5, 6].

So-called single-relay cooperation has been considered, where the data packet sent by the relay node is received only by the destination node [7, 8]; however, due to the broadcast nature of the wireless channel, the signal can be received by the destination node as well as by the other relays. Departing from most previous works on cooperative communication, an alternative method to improve system throughput is to apply an ARQ protocol at the data link layer [9, 10].

The energy efficiency of cooperative communication has been studied in [11], which uses a hierarchical cooperative clustering scheme and compares it with a cooperative multiple-input multiple-output (CMIMO) clustering scheme and a traditional multihop single-input single-output (SISO) routing approach. Experimental results show an increase in network lifetime and significant energy conservation. In two-dimensional WSNs [12], the energy efficiency of cooperative and noncooperative transmissions has been studied under the same end-to-end throughput and at a certain outage probability; the simulation results show that the energy efficiency advantage increases with node density and distance.

Recently, cooperative communication has been proposed in connection with wireless sensor networks to improve energy efficiency, throughput, and reliability under fading conditions [13]. In [13], the authors propose a novel cooperative ARQ strategy in which cooperative communication and an ARQ scheme are combined for clustering-based WSNs. Through a generalized discrete-time Markov chain model for analyzing throughput and energy efficiency, simulation results show that the proposed cooperative ARQ strategy is much better than the traditional ARQ scheme. In [14], the spectral efficiency of a CARQ scheme in WSNs is investigated thoroughly, but the energy efficiency and throughput of the proposed system are not analyzed.

These works all focus on ARQ with no relay or a single relay node between the source node and the destination node. In this paper we turn our attention to a two-relay network. In contrast to the volume of former research focused on a single goal, such as energy conservation or spectral efficiency, we focus on the analysis of trade-offs between energy efficiency and throughput. Important questions include where to place the relay nodes. Our aim here is to optimize energy consumption per packet and throughput under different network geometries. This work bridges the current literature gap by jointly considering relay position, energy efficiency, and throughput optimization.

The contribution of this paper is twofold. (1) First, we develop a new cooperative ARQ protocol with two relay nodes in a wireless sensor network, called TRCAP (two-relay cooperative ARQ protocol), derived from two relays and CARQ, which significantly enhances the network throughput and energy efficiency compared to the traditional ARQ protocol. Furthermore, we also introduce a retransmission probability scheme, named RDFP (retransmit data frame probabilities), based on the network environment and performance requirements. (2) We propose a novel DTMC (discrete-time Markov chain) model to analyze the throughput and energy efficiency of TRCAP in wireless sensor networks.

The remainder of the paper is organized as follows. In Section 2, the system model of the two-relay network and the corresponding operation model are introduced.
The performance analyses of the throughput and energy efficiency of the two-relay cooperative ARQ protocol are provided in Sections 3 and 4, respectively. After that, in Section 5, the numerical simulation is conducted. Finally, we summarize the conclusions.

## 2. System Model and Operation Model

### 2.1. System Model

In this paper, we consider a typical model of WSNs which consists of some sensor nodes and a sink node. When the network operates, clusters are formed according to the LEACH protocol, where CH is short for cluster head and CN is short for cluster node. There are two transmission phases: first, each CN transmits its data frame to the corresponding CH according to some protocol; second, the CHs forward the received data frames to the sink node according to a certain protocol. That is to say, there are two different cooperative communication modes: intracluster cooperative communication, between CNs and the CH within the same cluster, and intercluster cooperative communication, between CHs or between a CH and the sink node across different clusters.

In this paper, we have considered the Nakagami-m distribution, with m=2 for line-of-sight (LOS) and m=1 for non-line-of-sight (NLOS, Rayleigh distribution) propagation. Meanwhile we suppose that the channel is in long-term quasi-static fading, which means that the channel remains constant for a long period and is correlated [15]. The channel gains of the S-D channel, S-R channel, and R-D channel are supposed to be mutually independent and unchanged during the period in which a data frame is successfully received. Meanwhile, all the channels are subject to flat Rayleigh slow fading, and the channel does not change between the first transmission period and the retransmission period. The channel state information (CSI) is well known by the corresponding receiver.

No matter which cooperative communication mode is used, for simplicity we consider that a two-relay cooperative ARQ model is equivalent to a four-node system with one source node (S), two relay nodes (R1, R2), and one destination node (D), as shown in Figure 1. We consider that the cooperative communication happens over a relay network equipped with two relay nodes for assisting the communication between S and D.

Figure 1: Simplified system model.

### 2.2. Operation Model

In this paper, we use space-time coding (STC), which means a data frame is encoded by a codebook: a set of codewords is formed by mapping every n bits. S and the two relay nodes transmit the first, the second, and the third row of the codebook, respectively. The system persists until the data frame is correctly received by D.

The two-relay cooperative ARQ protocol is as follows. First, S sends an information packet to both relay nodes (R1, R2) and D. Each receiver sends an ACK (acknowledgement) message or a NACK (negative acknowledgement) message indicating success or failure of decoding the packet, respectively; that is, R1 and R2 feed back to S, and D feeds back to both S and the two relay nodes (R1, R2).
All the ACK/NACK feedback messages are assumed to be received error-free and with no latency at the source node and the two relay nodes.

If the data frame is correctly decoded at the destination D, D feeds back an ACK message to both S and the two relay nodes (R1, R2), and the next data frame is transmitted in the following time slot.

If D incorrectly decodes the received data frame, it sends back a NACK message to both S and the two relay nodes (R1, R2), whereupon R1 and R2 will retransmit the data frame, provided it was correctly decoded in a former time slot, with probabilities pR1 and pR2, respectively. We assume that S simultaneously retransmits the data frame with probability pS.

If neither the two relay nodes (R1, R2) nor D is able to correctly decode the data frame, then S will retransmit the data frame with probability pS0.

Suppose the path loss exponent is denoted by α, the noise components are additive white Gaussian noise (AWGN) with variance N0, and the transmit power is represented by Pt, which is constant for all nodes. The average SNR σij can be expressed as

$$\sigma_{ij} = \frac{P_t\, r_{ij}^{-\alpha}}{N_0}, \tag{1}$$

where rij represents the distance between node i and node j, and the instantaneous received SNR γ has an exponential distribution with probability density function (PDF)

$$f(\gamma) = \frac{1}{\sigma_{ij}} \exp\!\left(-\frac{\gamma}{\sigma_{ij}}\right). \tag{2}$$

Assuming 16-QAM modulation, a closed-form formula for the average bit error rate (BER) is given by [16]

$$\mathrm{BER}_{ij} \approx \frac{3}{2}\left(1 - \sqrt{\frac{\sigma_{ij}}{10 + \sigma_{ij}}}\right). \tag{3}$$

Having the instantaneous received SNR γ and the BER, we can calculate the packet error rate (PER):

$$\mathrm{PER}_{ij} = 1 - \left(1 - \mathrm{BER}_{ij}\right)^{L}, \tag{4}$$

where L is the length of a packet. If a block code with k-bit error correction capability is used, PER(γ) can be expressed as [17]

$$\mathrm{PER}(\gamma) = 1 - \sum_{l=0}^{k} \binom{L}{l}\,\mathrm{BER}(\gamma)^{l}\left(1-\mathrm{BER}(\gamma)\right)^{L-l}. \tag{5}$$

Considering that the above PER(γ) formulations are too complicated for analysis, we adopt the following approximate expression in the subsequent analysis ([18], (5)):

$$\mathrm{PER}(\gamma) = \begin{cases} 1, & 0 < \gamma < \gamma_t, \\ a \exp(-g\gamma), & \gamma \ge \gamma_t, \end{cases} \tag{6}$$

where (a, g, γt) can be calculated for uncoded or convolutionally coded M-ary rectangular or square QAM modes; the threshold γt is constrained by

$$a \exp(-g\gamma_t) = 1. \tag{7}$$
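To make the link model concrete, the following is a minimal Python sketch of equations (1), (3), (4), and (6) (the paper's own experiments use MATLAB; the function names are ours, and the PER approximation is clamped to remain a valid probability):

```python
import math

def avg_snr(p_tx: float, r_ij: float, alpha: float, n0: float) -> float:
    """Average received SNR of link i-j, equation (1)."""
    return p_tx * r_ij ** (-alpha) / n0

def ber_16qam(sigma_ij: float) -> float:
    """Closed-form average BER for 16-QAM over the fading link, equation (3).
    The approximation is only meaningful at moderate-to-high SNR."""
    return 1.5 * (1.0 - math.sqrt(sigma_ij / (10.0 + sigma_ij)))

def per_exact(ber: float, length_bits: int) -> float:
    """PER of an L-bit packet without error correction, equation (4)."""
    return 1.0 - (1.0 - ber) ** length_bits

def per_approx(gamma: float, a: float = 58.7332, g: float = 0.1641,
               gamma_t: float = 13.9470) -> float:
    """Piecewise-exponential PER approximation, equation (6), using the
    (a, g, gamma_t) values quoted later in Section 5; clamped at 1."""
    return 1.0 if gamma < gamma_t else min(1.0, a * math.exp(-g * gamma))
```

For instance, under the Section 5 parameters (transmit power 10^-3 W, α = 4, N0 = 10^-13.5), avg_snr(1e-3, 150, 4, 10 ** -13.5) gives an average SNR of roughly 62 for a 150 m link, for which per_approx returns a PER on the order of 10^-3.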
## 3. Throughput Analysis

In order to analyze the throughput and energy efficiency of the TRCAP protocol, we model the transmission process with the DTMC illustrated in Figure 2.

Figure 2: The state transition of the DTMC model.

There are four states in the DTMC, as follows. State S0 represents that neither D nor the two relay nodes (R1, R2) correctly decode the received data frame. State S1 represents that D does not correctly decode the received data frame and only one of the two relay nodes correctly decodes it. State S2 represents that D does not correctly decode the received data frame while both R1 and R2 correctly decode it. State S3 represents that D correctly decodes the received data frame.

The state transitions of the DTMC model can be seen from Figure 2. What needs to be pointed out is that a relay node will store a correctly received data frame until the data frame is correctly decoded by D. In state S0, S retransmits the data frame with probability pS0. In state S1, R1 retransmits the data frame with probability pR1 if it was correctly decoded in a former time slot, and likewise R2 with probability pR2; meanwhile S retransmits the data frame with probability pS. In state S2, R1 and R2 retransmit the data frame with probabilities pR1 and pR2, respectively; meanwhile S retransmits the data frame with probability pS. In state S3, S transmits the next data frame.

The state transition probabilities are listed below, where pSD, pSR1, and pSR2 are the outage probabilities of the corresponding links and pij stands for the transition probability from state Si to state Sj (i, j ∈ {0, 1, 2, 3}):

$$\begin{aligned}
p_{00} &= p_{S0}\, p_{SD}\, p_{SR1}\, p_{SR2}, \\
p_{01} &= p_{S0}\, p_{SD} \left[ (1-p_{SR1})\, p_{SR2} + p_{SR1} (1-p_{SR2}) \right], \\
p_{02} &= p_{S0}\, p_{SD} (1-p_{SR1})(1-p_{SR2}), \\
p_{03} &= 1 - p_{00} - p_{01} - p_{02}, \\
p_{11} &= p\,\big(p_{R1} p_{R1D} + 1 - p_{R1}\big)\big(p_S\, p_{SD}\, p_{SR2} + 1 - p_S\big) \\
&\quad + (1-p)\,\big(p_{R2} p_{R2D} + 1 - p_{R2}\big)\big(p_S\, p_{SD}\, p_{SR1} + 1 - p_S\big), \\
p_{12} &= p\; p_S\, p_{SD} (1-p_{SR2})\, p_{R1} p_{R1D} + (1-p)\; p_S\, p_{SD} (1-p_{SR1})\, p_{R2} p_{R2D}, \\
p_{13} &= 1 - p_{11} - p_{12}, \\
p_{22} &= \big(p_{R1} p_{R1D} + 1 - p_{R1}\big)\big(p_{R2} p_{R2D} + 1 - p_{R2}\big)\big(p_S\, p_{SD} + 1 - p_S\big), \\
p_{23} &= 1 - p_{22}, \\
p_{30} &= p_{SD}\, p_{SR1}\, p_{SR2}, \\
p_{31} &= p_{SD} \left[ (1-p_{SR1})\, p_{SR2} + p_{SR1} (1-p_{SR2}) \right], \\
p_{32} &= p_{SD} (1-p_{SR1})(1-p_{SR2}), \\
p_{33} &= 1 - p_{SD}, \\
p_{10} &= p_{20} = p_{21} = 0,
\end{aligned} \tag{8}$$

where, in state S1, p = 1 if the frame was correctly received at node R1 and p = 0 otherwise. Moreover, pS0 = 1 when a mechanism is adopted to inform S that the previous data frame was not correctly received at D or at either relay node (R1, R2); otherwise pS0 = pS.

Suppose the transition probability matrix P of the DTMC is initiated from state S0, and let π̄ = (π0, π1, π2, π3) be the steady-state distribution of the DTMC; then π3 is the steady-state probability of state S3.

In this paper, the throughput is defined as the average number of data frames received successfully by the destination node D per time slot. It can be computed as the average fraction of time slots that the DTMC spends in state S3, which equals the steady-state probability of state S3. So the throughput can be acquired by solving

$$\bar{\pi} P = \bar{\pi}, \qquad \sum_{i=0}^{3} \pi_i = 1, \tag{9}$$

where π3 is the throughput and P is the transition probability matrix whose elements are given by (8).

Comparing the throughput of TRCAP and CA ([19], (18)), we can easily calculate the throughput gain as

$$G = \frac{T_{\mathrm{TRCAP}}}{T_{\mathrm{CA}}}, \tag{10}$$

where TTRCAP and TCA are the throughputs of TRCAP and CA, respectively.
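Solving (9) numerically is a small linear-algebra exercise. Below is a minimal Python sketch, assuming the 4×4 matrix P has already been filled in from (8); the toy matrix is made up purely to exercise the solver and is not taken from the paper:

```python
import numpy as np

def steady_state(P: np.ndarray) -> np.ndarray:
    """Solve pi P = pi with sum(pi) = 1, equation (9), by least squares."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy transition matrix respecting p10 = p20 = p21 = 0; rows sum to 1.
P = np.array([[0.2, 0.3, 0.1, 0.4],
              [0.0, 0.3, 0.2, 0.5],
              [0.0, 0.0, 0.4, 0.6],
              [0.1, 0.2, 0.1, 0.6]])
pi = steady_state(P)
print("steady state:", pi)
print("throughput T_TRCAP = pi_3 =", pi[3])
```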
## 4. Analysis of Energy Efficiency

The power consumption of the internal RF circuitry and the power amplifier accounts for the main energy consumption of a sensor node [20]. Assume that the total energy consumption of the system is composed of the power consumption of the power amplifiers and circuit blocks of the nodes. Let PPA denote the power consumption of the power amplifier, and let Pt and Pr represent the power consumption of the internal RF circuitry for transmitting and receiving, respectively:

$$P_{PA} = P_i (1+\varepsilon), \tag{11}$$

where Pi denotes the transmit power of node i and ε denotes the loss factor of the power amplifier, ε = ξ/η − 1, where ξ = 3(√M − 1)/(√M + 1) is the peak-to-average ratio (PAR) for M-QAM modulation and η is the drain efficiency of the amplifier.

Considering that a packet is composed of a header, payload, and trailer, the energy efficiency can be expressed as

$$\rho = \frac{(1-\mathrm{PER})\, L}{P}, \tag{12}$$

where PER denotes the average packet error rate under TA or TRCAP, L denotes the length of the payload in a data packet, and P denotes the energy consumption of the communication system. Thus the energy efficiency is the ratio of the number of packet bits received successfully to the total energy consumption.

(1) Traditional ARQ:

$$P_{TA} = \begin{cases} P_i(1+\varepsilon) + P_t + P_r, & \text{with probability } (1-P_{SD}), \\ 2P_i(1+\varepsilon) + 2P_t + 2P_r, & \text{with probability } P_{SD}. \end{cases} \tag{13}$$

In the first case of the above expression, when the data frame is received successfully by D with probability (1−PSD), the energy consumption is composed of the power consumed at node S, Pi(1+ε)+Pt, and the receiving power Pr at node D. The second case expresses the energy consumption of the system when node D has received the packet incorrectly and node S retransmits.

So the total energy consumption for transmitting a packet in the traditional strategy with ARQ is

$$E_{TA} = \left(P_i(1+\varepsilon) + P_t + P_r\right)(1-P_{SD}) + \left(2P_i(1+\varepsilon) + 2P_t + 2P_r\right)P_{SD} = \left(P_i(1+\varepsilon) + P_t + P_r\right)(1+P_{SD}). \tag{14}$$

The energy efficiency of the traditional strategy with ARQ is obtained by substituting (6) and (13) into (12):

$$\rho_{TA} = \frac{L\left(1-\mathrm{PER}(\gamma)\right)}{E_{TA}}. \tag{15}$$

(2) Two-relay cooperative ARQ protocol:

$$P_{CA} = \begin{cases} P_i(1+\varepsilon) + P_t + 3P_r, & \text{with probability } (1-P_{SD}), \\ 2P_i(1+\varepsilon) + 2P_t + 4P_r, & \text{with probability } P_{SD}(1-P_{SR1})P_{SR2}, \\ 2P_i(1+\varepsilon) + 2P_t + 4P_r, & \text{with probability } P_{SD}\,P_{SR1}(1-P_{SR2}), \\ 3P_i(1+\varepsilon) + 3P_t + 4P_r, & \text{with probability } P_{SD}(1-P_{SR1})(1-P_{SR2}), \\ 2P_i(1+\varepsilon) + 2P_t + 6P_r, & \text{with probability } P_{SD}\,P_{SR1}\,P_{SR2}, \end{cases} \tag{16}$$

$$E_{\mathrm{total}} = \pi_0 E_{\pi_0} + \pi_1 E_{\pi_1} + \pi_2 E_{\pi_2} + \pi_3 E_{\pi_3}. \tag{17}$$

In the first case in (16), when the data frame is correctly received by D with probability (1−PSD), the energy consumption consists of the power consumed at node S, Pi(1+ε)+Pt, the receiving power Pr at node D, and the receiving power 2Pr at nodes R1 and R2. The second and third cases express the energy consumption when either R1 or R2 has received the packet successfully and D has failed to receive it. The fourth case expresses the energy consumption when node D has failed to receive the packet while both R1 and R2 have correctly received it. The last case represents the energy consumption of the system when all three nodes have failed to receive the packet.

In the above expression, the probability of each state is represented by the steady-state distribution (π0, π1, π2, π3) of the DTMC. The total energy consumption for successfully transmitting a packet from S to D using TRCAP can be derived from our Markov model as

$$E_{\mathrm{total}} = \left(2P_i(1+\varepsilon)+2P_t+6P_r\right)\pi_0 + \left(2P_i(1+\varepsilon)+2P_t+4P_r\right)\pi_1 + \left(3P_i(1+\varepsilon)+3P_t+4P_r\right)\pi_2 + \left(P_i(1+\varepsilon)+P_t+3P_r\right)\pi_3. \tag{18}$$

The energy efficiency of TRCAP is obtained by substituting (9) and (18) into (12):

$$\rho = \frac{L\left(1-\pi_3\right)}{E_{\mathrm{total}}}. \tag{19}$$
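As a hedged sketch of how (14), (15), (18), and (19) combine (the helper names are ours; pi is the steady-state vector from (9)):

```python
def energy_ta(p_i: float, eps: float, p_t: float, p_r: float,
              p_sd: float) -> float:
    """Average energy per packet for traditional ARQ, equation (14)."""
    return (p_i * (1 + eps) + p_t + p_r) * (1 + p_sd)

def energy_trcap(p_i: float, eps: float, p_t: float, p_r: float,
                 pi) -> float:
    """Average energy per packet for TRCAP, equation (18), weighted by the
    DTMC steady-state probabilities pi = (pi0, pi1, pi2, pi3)."""
    base = p_i * (1 + eps) + p_t  # transmit-side cost of one sending node
    return ((2 * base + 6 * p_r) * pi[0]
            + (2 * base + 4 * p_r) * pi[1]
            + (3 * base + 4 * p_r) * pi[2]
            + (base + 3 * p_r) * pi[3])

def efficiency(l_bits: int, loss_term: float, energy: float) -> float:
    """Equations (15) and (19): delivered payload bits per unit energy,
    with loss_term = PER(gamma) for TA and loss_term = pi_3 for TRCAP,
    following the paper's (19) as written."""
    return l_bits * (1 - loss_term) / energy
```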
## 5. Numerical Results

In this section we numerically evaluate and compare the throughput and energy efficiency of the proposed TRCAP and the traditional TA. Throughout this simulation we assume that the length of a packet is 1024 bits and that the system parameters take the following values: α = 4, ε = 0.3, Pi = 10^−3 W, Pt = 10^−4 W, Pr = 5×10^−5 W, L = 1024 bits, N0 = 10^−13.5, and (a, g, γt) = (58.7332, 0.1641, 13.9470). The values of α, ε, Pi, Pt, and Pr are taken from the specifications of Mica2 motes. MATLAB is selected as the simulation tool and 16-QAM modulation is used. We consider S-D distances varying from 100 m to 300 m for the throughput and energy efficiency analysis.

We assume that the line connecting the relays R1 and R2 is perpendicular to the line connecting the source node S and the destination node D, with crossing point O. We also assume the S-R1 distance is DSR1 = qSR1 × DSD (0 < qSR1 < 1), and similarly for qSR2, qR1D, and qR2D.

Figure 3 depicts the throughput of the different ARQ protocols versus the S-D distance for different qSR1, qSR2, qR1D, and qR2D. From Figure 3, we can see that the throughput decreases as DSD increases no matter which ARQ protocol is adopted, because the SNR at the receiver falls as the distance increases. It can also be seen that TRCAP outperforms the traditional ARQ protocol at every S-D distance, and that the throughput improves when the relay nodes are close to the midpoint between the source node and the destination node D. The simulation results are very close to the theoretical results, which verifies the performance analysis in Section 3. From the above analytical and simulation results, we can see that TRCAP can significantly improve the throughput of the system when the communication distance is rather long.

Figure 3: The throughput performances of different ARQ protocols.

Figure 4 depicts the throughput gain of TRCAP compared with TA under different relay locations. From the figure, we can see that the throughput gain is best when the relay nodes are close to the midpoint between the source node and the destination node D, and that the throughput gain grows as the distance increases.

Figure 4: The throughput efficiency gain of TRCAP compared with TA.

Figure 5 depicts the energy efficiency of the system versus the distance between the source node S and the destination node D for the different ARQ protocols. The simulation results are very close to the theoretical results, which verifies the energy efficiency analysis in Section 4. From Figure 5, we can see that the energy efficiency decreases because the PER grows as the S-D distance increases. The energy efficiency of TRCAP is better than that of TA when the S-D distance is above 120 m. For S-D distances below 120 m, TA performs better than TRCAP, because node D usually receives the packet from the source node correctly when the destination node is near the source node. The probability of cooperative retransmission grows as the S-D distance increases.

Figure 5: The energy efficiency of different ARQ protocols.

Figure 6 depicts the energy efficiency gain of TRCAP compared with TA under different relay locations. We find that the biggest gain is achieved when the relay nodes are close to the midpoint between the source node and the destination node D. The energy efficiency gains are almost the same when the destination node is near the source node.

Figure 6: The energy efficiency gain of TRCAP compared with TA.
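Reusing the link-model and energy helpers sketched in Sections 2 and 4 above, a short sweep with the stated parameter values illustrates the trend behind Figures 3 and 5; the relay-placement fraction q = 0.5 is our assumption for illustration:

```python
# Distance sweep for the direct (TA) path; relays assumed halfway (q = 0.5).
# Requires avg_snr, per_approx, energy_ta, and efficiency from the sketches above.
alpha, eps = 4, 0.3
p_i, p_t, p_r = 1e-3, 1e-4, 5e-5
n0, l_bits = 10 ** -13.5, 1024
q = 0.5

for d_sd in range(100, 301, 50):
    per_sd = per_approx(avg_snr(p_i, d_sd, alpha, n0))      # S-D outage proxy
    per_sr = per_approx(avg_snr(p_i, q * d_sd, alpha, n0))  # S-R link, half range
    rho_ta = efficiency(l_bits, per_sd,
                        energy_ta(p_i, eps, p_t, p_r, per_sd))
    print(f"D_SD = {d_sd:3d} m  PER_SD = {per_sd:.3f}  "
          f"PER_SR = {per_sr:.3f}  rho_TA = {rho_ta:.3e}")
```

Even in this crude sketch, the relay links (at half the S-D range) stay reliable at distances where the direct link is already in outage, which is the geometric effect behind the gains in Figures 3–6.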
## 6. Conclusions

In this paper, the throughput and energy efficiency of the TRCAP and TA protocols in WSNs are studied and compared. The theoretical analysis and numerical results show that, when the source-destination distance is above a threshold, TRCAP achieves larger throughput and higher energy efficiency than TA, and two-relay gains can be achieved. Moreover, when DSR1 is approximately equal to DR1D and DSR2 is approximately equal to DR2D, the throughput gain and energy efficiency gain are the best among all the relay locations considered.

So far, we have only considered a simple four-node wireless network. In the future, we will study more complicated and practical wireless sensor networks. However, the focus and contribution of our work are the theoretical analysis of the throughput and energy efficiency of the proposed two-relay cooperative ARQ protocol in WSNs, based on a novel DTMC model.

---
*Source: 102326-2015-10-19.xml*
102326-2015-10-19_102326-2015-10-19.md
28,022
A Novel Cooperative ARQ Method for Wireless Sensor Networks
Haiyong Wang; Geng Yang; Yiran Gu; Jian Xu; Zhixin Sun
International Journal of Distributed Sensor Networks (2015)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/102326
102326-2015-10-19.xml
--- ## Abstract In wireless sensor networks, cooperative communication can combat the effects of channel fading by exploiting diversity gain achieved via cooperation communication among the relay nodes. A cooperative automatic retransmission request (ARQ) protocol based on two-relay node selection was proposed in this paper. A novel discrete time Markov chain model in order to analyze the throughput and energy efficiency was built, and system throughput and energy efficiency performance of proposed protocol and traditional ARQ protocol were studied based on such model. The numerical results reveal that the throughput and energy efficiency of the proposed protocol could perform better when compared with the traditional ARQ protocol. --- ## Body ## 1. Introduction Recently, wireless sensor networks (WSNs) are becoming a fast-developing research area which is related to a wide range of applications, such as environment surveillance, military, and patient monitoring. WSNs are composed of a large amount of sensor nodes which are typically powered by small batteries. Moreover, it is undesirable or impossible to replace or recharge the sensor nodes in many situations. Hence, there is a great need for a reliable and energy-efficiency transmission strategy to improve the throughput and energy efficiency and prolong the network lifetime while satisfying specific quality of service requirements.Cooperative communication among sensor nodes has been considered to provide diversity in WSNs where fading obviously affects point to point link, which can help combat fading effectively and enhance the reliability of the communication significantly, so cooperative communication has been studied extensively. On the other hand, traditional automatic-retransmission-request protocol (TA) is an effective method to improve transmission quality and combat poor channels condition in a radio channel by retransmission of the data packet which is incorrectly received in previous slot. Thus, cooperative ARQ (CARQ) mechanism, which combines the cooperative communication and ARQ protocol, is receiving more and more attention over the past decade or so [1–4]. CARQ mechanism can increase the successful rate of data receiving in destination node and combat channel attenuation simultaneously. As viewed from energy consumption, CARQ mechanism can achieve higher energy efficiency because it makes the wireless communication between source node and destination node more successful than that of traditional ARQ protocol in data receiving rate and more reliable than normal cooperative communications.In the past few years, cooperative communication has established itself as an effective and energy conserving method for wireless sensor networks. One of the promising techniques is to use a relay to help source node communicate with destination node, in which each node is equipped with only one antenna. With the help of the relay node, a virtual MIMO antenna array system is formed, which can provide spatial diversity without multiple antennas per terminal node [5, 6].The so-called single-relay cooperation is considered where the data packet sent by the relay node was only received by the destination node [7, 8], but due to the broadcast nature of wireless channel, the signal can be received by the destination node as well as the other relays. 
Departing from most previous works in cooperative communication, an alternative method to improve system throughput performance is applying ARQ protocol at the data link layer [9, 10].Energy efficiency of cooperative communication has been studied in [11] which uses the model as hierarchical cooperative clustering scheme and compared with cooperative multiple-input multiple-output (CMIMO) clustering scheme and traditional multihop Single-Input-Single-Output (SISO) routing approach. Experimental results show increase in network lifetime and significant energy conservation is acquired. In two-dimensional WSNs [12], the energy efficiency of cooperative and noncooperative transmissions is studied under the same end-to-end throughput and at a certain outage probability, the simulation results show that the energy efficiency advantage increases with the nodes density and distance.Recently, cooperative communication has been proposed in connection with wireless sensor networks to improve energy efficiency, throughput, and reliability in fading condition [13]. In [13], the authors propose a novel cooperative ARQ strategy where cooperative communication and ARQ scheme is combined for clustering-based WSNs. Through a generalized discrete time Markov chain model to analyze the throughput and energy efficiency, simulation results show that the proposed cooperative ARQ strategy is much better than the traditional ARQ scheme. The spectral efficiency for CARQ scheme in WSNs is investigated thoroughly but does not analyze the energy efficiency and throughput of the proposed system in [14].The works all focus on no-relay or single-relay node between source node and destination node with ARQ. In this paper we turn our attention to two-relay node network. Comparing with the volume of former research focused on the single goal, such as energy conservation or spectral efficiency, we focus on the analysis of trade-offs in energy efficiency and throughput. Important questions include where to place the relay nodes. Our aim here is to optimize energy consumption per packet and throughput under different network geometry. This work bridges the current literature gap by considering relay position, energy efficiency, and throughput optimization.The contribution of this paper is twofold.(1) First, we develop a new cooperative ARQ protocol of two relay nodes in wireless sensor network, called TRCAP (two-relay cooperative ARQ protocol), derived from two relays and CARQ that enhances significantly the network throughput and energy efficiency comparing to the traditional ARQ protocol. Furthermore, we have also introduced a retransmitting probabilities scheme, named RDFP (retransmit data frame probabilities) based on the network environment and performance require.(2) We propose a novel DTMC (discrete time Markov chain) model in order to analyze the throughput and energy efficiency of TRCAP in wireless sensor networks.The remainder of the paper is organized as follows. In Section2, a description of the system model of two-relay node and the corresponding model is introduced. The performance analysis of throughput and energy efficiency of two-relay node cooperative ARQ protocol is provided in Sections 3 and 4, respectively. After that, in Section 5, the numerical simulation is conducted. Finally, we summarize the conclusions. ## 2. System Model and Operation Model ### 2.1. System Model In this paper, we consider a typical model of WSNs which consists of some sensor nodes and a sink node. 
When the network operates, some clusters are formed according to LEACH protocol, where CH is short for cluster head and CN is short for cluster node. There exist two transmission phases: firstly, each CN transmits its data frame to the corresponding CH according to some protocol; secondly, the CHS forwards the received data frame to the sink node according to a certain protocol.That is to say, there are two different cooperative communication modes: intracluster cooperative communication and intercluster cooperative communication, which means cooperative communication between CN and CHS in the same cluster and cooperative between CHS and CHS or sink node from different clusters, respectively. In this paper, we have considered the Nakagami-m distribution,m=2 for line-of-sight (LOS) and m=1 for non-line-of-sight (NLOS, Rayleigh distribution). Meanwhile we suppose that the channel is in long-term quasi-static fading, which means that the channel remains constant for a long period and is correlated [15]. The channel gains of S-D channel, S-R channel, and R-D channel are supposed to be mutually independent and unchanged during a data frame successful received period. Meanwhile all the channels are subjected to flat Rayleigh slow fading and the channel does not change during the first period and retransmission period. The channel state information (CSI) is well known by the corresponding receiver.No matter in which cooperative communication mode, for simplicity, we consider that a two-relay node cooperative ARQ model is equivalent to a four-node system with one-source node (S), two-relay nodes (R1,R2), and one-destination node (D), as shown in Figure 1. We consider that the cooperative communication happens over a relay network equipped with two relay nodes for assisting the communication between S and D.Figure 1 Simplified system model. ### 2.2. Operation Model In this paper, we use space time encoding (STC) that means a data frame is encoded by a code book. A set of codewords is formed behind the mapping of everyn bit. S and two relay nodes transmit the first, the second, and the third row of the code book, respectively. The system persists until the data frame is correctly received by D.The two-relay cooperative ARQ protocol is as follows. First,S sends an information packet to both two-relay node (R1,R2) and D. The receiver sends an ACK (acknowledgement) message or a NACK (negative acknowledgement) message indicating success or failure of decoding the packet, respectively; that is, R1 and R2 feed back to S, and D feeds back to both S and two-relay node (R1,R2). All the ACK/NACK feedback messages are assumed to be received error-free and with no latency at the source node and two-relay node.If the data frame is correctly decoded at the destinationD, D feeds back an ACK message to both S and two-relay node (R1,R2), and the next data frame is transmitted in the following time slot.IfD incorrectly decodes the received data frame, it sends back a NACK message to both S and two-relay node (R1,R2), wherein R1 and R2 will retransmit the data frame with probabilities pR1 and pR2, respectively, that has been correctly decoded in the former time slot. 
We assumed that S retransmits data frame simultaneously with probability pS.If neither two-relay node (R1,R2) nor D is able to correctly decode the data frame, then S will retransmit the data frame with probability pS0.Suppose path loss exponent is denoted byα, noise components are additive white Gaussian noise (AWGN) with variance N0, path loss exponent is represented by α, and the transmit power is represented by Pt which is constant for all nodes. The average SNR (σi,j) can be expressed by (1)σij=Ptrij-αN0,where rij represents the distance between node i and node j, and the instantaneous received SNR γ has an exponential distribution by the probability distribution function (PDF):(2)fγ=1σijexp⁡⁡-γσij.Assume the modulation is 16-QAM and the closed-form formula is given for the average bit error rate (BER) by [16](3)BERij≈321-σij10+σij.Having the instantaneous received SNRγ and BER, we can calculate the packet error rate (PER):(4)PERij=1-1-BERijL,where L is the length of a packet. If k-bits error correction capacity is utilized to a block code, the PER(γ) can be expressed as [17](5)PERγ=1-∑l=0kLlBERγl1-BERγL-1.Considering the abovePER(γ) formulations are too complicated for analysis, we adapt the following formulation as approximate expression in the following analysis ([18], (5)):(6)PERγ=1if0<γ<γtaexp⁡⁡-gγifγ≥γt,where (a,g,γt) can be calculated by uncoded or convolutionally coded Mn–ary rectangular or square QAM modes; meanwhile threshold γt is constrained by(7)aexp⁡⁡-gγt=1. ## 2.1. System Model In this paper, we consider a typical model of WSNs which consists of some sensor nodes and a sink node. When the network operates, some clusters are formed according to LEACH protocol, where CH is short for cluster head and CN is short for cluster node. There exist two transmission phases: firstly, each CN transmits its data frame to the corresponding CH according to some protocol; secondly, the CHS forwards the received data frame to the sink node according to a certain protocol.That is to say, there are two different cooperative communication modes: intracluster cooperative communication and intercluster cooperative communication, which means cooperative communication between CN and CHS in the same cluster and cooperative between CHS and CHS or sink node from different clusters, respectively. In this paper, we have considered the Nakagami-m distribution,m=2 for line-of-sight (LOS) and m=1 for non-line-of-sight (NLOS, Rayleigh distribution). Meanwhile we suppose that the channel is in long-term quasi-static fading, which means that the channel remains constant for a long period and is correlated [15]. The channel gains of S-D channel, S-R channel, and R-D channel are supposed to be mutually independent and unchanged during a data frame successful received period. Meanwhile all the channels are subjected to flat Rayleigh slow fading and the channel does not change during the first period and retransmission period. The channel state information (CSI) is well known by the corresponding receiver.No matter in which cooperative communication mode, for simplicity, we consider that a two-relay node cooperative ARQ model is equivalent to a four-node system with one-source node (S), two-relay nodes (R1,R2), and one-destination node (D), as shown in Figure 1. We consider that the cooperative communication happens over a relay network equipped with two relay nodes for assisting the communication between S and D.Figure 1 Simplified system model. ## 2.2. 
Operation Model In this paper, we use space time encoding (STC) that means a data frame is encoded by a code book. A set of codewords is formed behind the mapping of everyn bit. S and two relay nodes transmit the first, the second, and the third row of the code book, respectively. The system persists until the data frame is correctly received by D.The two-relay cooperative ARQ protocol is as follows. First,S sends an information packet to both two-relay node (R1,R2) and D. The receiver sends an ACK (acknowledgement) message or a NACK (negative acknowledgement) message indicating success or failure of decoding the packet, respectively; that is, R1 and R2 feed back to S, and D feeds back to both S and two-relay node (R1,R2). All the ACK/NACK feedback messages are assumed to be received error-free and with no latency at the source node and two-relay node.If the data frame is correctly decoded at the destinationD, D feeds back an ACK message to both S and two-relay node (R1,R2), and the next data frame is transmitted in the following time slot.IfD incorrectly decodes the received data frame, it sends back a NACK message to both S and two-relay node (R1,R2), wherein R1 and R2 will retransmit the data frame with probabilities pR1 and pR2, respectively, that has been correctly decoded in the former time slot. We assumed that S retransmits data frame simultaneously with probability pS.If neither two-relay node (R1,R2) nor D is able to correctly decode the data frame, then S will retransmit the data frame with probability pS0.Suppose path loss exponent is denoted byα, noise components are additive white Gaussian noise (AWGN) with variance N0, path loss exponent is represented by α, and the transmit power is represented by Pt which is constant for all nodes. The average SNR (σi,j) can be expressed by (1)σij=Ptrij-αN0,where rij represents the distance between node i and node j, and the instantaneous received SNR γ has an exponential distribution by the probability distribution function (PDF):(2)fγ=1σijexp⁡⁡-γσij.Assume the modulation is 16-QAM and the closed-form formula is given for the average bit error rate (BER) by [16](3)BERij≈321-σij10+σij.Having the instantaneous received SNRγ and BER, we can calculate the packet error rate (PER):(4)PERij=1-1-BERijL,where L is the length of a packet. If k-bits error correction capacity is utilized to a block code, the PER(γ) can be expressed as [17](5)PERγ=1-∑l=0kLlBERγl1-BERγL-1.Considering the abovePER(γ) formulations are too complicated for analysis, we adapt the following formulation as approximate expression in the following analysis ([18], (5)):(6)PERγ=1if0<γ<γtaexp⁡⁡-gγifγ≥γt,where (a,g,γt) can be calculated by uncoded or convolutionally coded Mn–ary rectangular or square QAM modes; meanwhile threshold γt is constrained by(7)aexp⁡⁡-gγt=1. ## 3. Throughput Analysis In order to analyze the throughput and energy efficiency of the TRCAP protocol, we model the transmission process with a DTMC illustrated in Figure2.Figure 2 The state transition of the DTMC model.There are four states in the DTMC, as follows.StateS0 represents that both D and two-relay node (R1,R2) do not correctly decode the received data frame.StateS1 represents that D does not correctly decode the received data frame. Only one of two-relay nodes correctly decodes the received data frame.StateS2 represents that D does not correctly decode the received data frame. 
Both R1 and R2 correctly decode the data frame.StateS3 represents that D correctly decodes the received data frame.The state transition of the DTMC model can be seen from Figure2. What needs to be pointed is that the relay node will store the correctly received data frame until the data frame is correctly decoded by D.On stateS0, S retransmits data frame simultaneously with probability ps0.On stateS1, R1 retransmits data frame with probability pR1 if data frame has been correctly decoded in the former time slot. The same is to R2 with probability pR2. Meanwhile S retransmits data frame with probability pS.On stateS2, R1 and R2 will retransmit the data frame with probabilities pR1 and pR2, respectively. Meanwhile S retransmits data frame with probability pS.On stateS3, S transmits next data frame.By solving the state transition equations listed below, wherepSD and pSR1, pSR2 are defined as the outage probabilities on each link and pij stands for the transition probability of state Si to state Sj ((i,j∈{0,1,2,3})),(8)p00=ps0pSDpSR1pSR2,p01=ps0pSD1-pSR1pSR2+pSR11-pSR2,p02=ps0pSD1-pSR11-pSR2,p03=1-p00-p01-p02,p11=ppR1pR1D+1-pR1pSpSDpSR2+1-pS+1-ppR2pR2D+1-pR2·pSpSDpSR1+1-pS,p12=ppSpSD1-pSR2pR1pR1D+1-p·pSpSD1-pSR1pR2pR2D,p13=1-p11-p12,p22=pR1pR1D+1-pR1pR2pR2D+1-pR2·pSpSD+1-pS,p23=1-p22,p30=pSDpSR1pSR2,p31=pSD1-pSR1pSR2+pSR11-pSR2,p32=pSD1-pSR11-pSR2,p33=1-pSD,p10=p20=p21=0,where p=1 with correct frame reception at node R1; otherwise p=0 on state S1. And pS0=1 owing to a mechanism which is adopted to inform S of previous data frame which is not correctly received at both D and two-relay node (R1,R2); otherwise pS0=pS.Suppose the transition probability matrixP of the DTMC is initiated from state S0, and let π¯=(π0,π1,π2,π3) be the steady state distribution of the DTMC; then π3 is the steady probability of state S3.In this paper, the throughput is defined as the average number of data frames received successfully by destination nodeD per time slot and can be computed as the average number of time slots that the DTMC spends in state S3, which equals the probability of steady state S3. So the throughput can be acquired by solving the following formula:(9)π¯P=π¯,∑i=03πi=1,where π3 is the throughput and P is the transition probability matrix whose elements are given by (8).Comparing the throughput of TRCAP and CA ([19], (18)), we can easily calculate the throughput gain as follows: (10)G=TTRCAPTCA,where TTRCAP and TCA are the throughput of TRCAP and CA, respectively. ## 4. Analysis of Energy Efficiency The power consumption of the internal RF circuitry and the power amplifier is the main energy consumption of the sensor node [20]. Assume that the total energy consumption of the system is composed of the power consumption of the power amplifier and circuit blocks of the nodes. 
## 4. Analysis of Energy Efficiency

The power consumption of the internal RF circuitry and of the power amplifier constitutes the main energy consumption of a sensor node [20]. Assume that the total energy consumption of the system is composed of the power consumption of the power amplifier and of the circuit blocks of the nodes. Let PPA denote the power consumption of the power amplifier, and let Pt and Pr represent the power consumption of the internal RF circuitry when transmitting and receiving, respectively:

$$P_{PA}=P_i(1+\varepsilon),\tag{11}$$

where Pi denotes the transmit power of node i and ε denotes the loss factor of the power amplifier, ε = ξ/η − 1, in which ξ = 3(√M − 1)/(√M + 1) is the peak-to-average ratio (PAR) of M-QAM and η is the drain efficiency of the amplifier.

Considering that a packet is composed of a header, payload, and trailer, the energy efficiency can be expressed as

$$\rho=\frac{(1-\mathrm{PER})\,L}{P},\tag{12}$$

where PER denotes the average packet error rate under TA or TRCAP, L denotes the length of the payload in a data packet, and P denotes the energy consumption of the communication system. The energy efficiency is thus the ratio of the number of packet bits received successfully to the total energy consumption.

(1) Traditional ARQ:

$$P_{TA}=\bigl[P_i(1+\varepsilon)+P_t+P_r\bigr](1-P_{SD})+\bigl[2P_i(1+\varepsilon)+2P_t+2P_r\bigr]P_{SD}.\tag{13}$$

In the first term of the above expression, the data frame is received successfully by D with probability (1 − PSD); the energy consumption then consists of the power consumed at node S, Pi(1+ε) + Pt, and the receiving power Pr at node D. The second term expresses the energy consumption when node D has received the packet incorrectly and node S retransmits.

The total energy consumption for transmitting a packet under the traditional ARQ strategy is therefore

$$E_{TA}=\bigl[P_i(1+\varepsilon)+P_t+P_r\bigr](1-P_{SD})+\bigl[2P_i(1+\varepsilon)+2P_t+2P_r\bigr]P_{SD}=\bigl[P_i(1+\varepsilon)+P_t+P_r\bigr](1+P_{SD}).\tag{14}$$

The energy efficiency of the traditional ARQ strategy is obtained by substituting (6) and (14) into (12):

$$\rho_{TA}=\frac{L\bigl(1-\mathrm{PER}(\gamma)\bigr)}{E_{TA}}.\tag{15}$$

(2) Two-relay node cooperative ARQ protocol:

$$\begin{aligned}
P_{CA}&=\bigl[P_i(1+\varepsilon)+P_t+3P_r\bigr](1-P_{SD})\\
&\quad+\bigl[2P_i(1+\varepsilon)+2P_t+4P_r\bigr]P_{SD}(1-P_{SR_1})P_{SR_2}\\
&\quad+\bigl[2P_i(1+\varepsilon)+2P_t+4P_r\bigr]P_{SD}\,P_{SR_1}(1-P_{SR_2})\\
&\quad+\bigl[3P_i(1+\varepsilon)+3P_t+4P_r\bigr]P_{SD}(1-P_{SR_1})(1-P_{SR_2})\\
&\quad+\bigl[2P_i(1+\varepsilon)+2P_t+6P_r\bigr]P_{SD}\,P_{SR_1}P_{SR_2},
\end{aligned}\tag{16}$$

$$E_{total}=\pi_0E_{\pi_0}+\pi_1E_{\pi_1}+\pi_2E_{\pi_2}+\pi_3E_{\pi_3},\tag{17}$$

where Eπi denotes the energy consumed per time slot in state Si. In the first term of (16), when the data frame is correctly received by D with probability (1 − PSD), the energy consumption consists of the power consumed at node S, Pi(1+ε) + Pt, the receiving power Pr at node D, and 2Pr at nodes R1 and R2. The second and third terms express the energy consumption when either R1 or R2 has received the packet successfully while D has failed to receive it. The fourth term expresses the energy consumption when node D has failed to receive the packet while both R1 and R2 have correctly received it. The last term represents the energy consumption when all three nodes have failed to receive the packet.

In the above expression, the probability of each state is given by the steady-state distribution (π0, π1, π2, π3) of the DTMC. The total energy consumption of successfully transmitting a packet from S to D using TRCAP can thus be derived from our Markov model as

$$\begin{aligned}
E_{total}&=\bigl[2P_i(1+\varepsilon)+2P_t+6P_r\bigr]\pi_0+\bigl[2P_i(1+\varepsilon)+2P_t+4P_r\bigr]\pi_1\\
&\quad+\bigl[3P_i(1+\varepsilon)+3P_t+4P_r\bigr]\pi_2+\bigl[P_i(1+\varepsilon)+P_t+3P_r\bigr]\pi_3.
\end{aligned}\tag{18}$$

The energy efficiency of TRCAP is obtained by substituting (9) and (18) into (12):

$$\rho_{TRCAP}=\frac{L\,\pi_3}{E_{total}}.\tag{19}$$
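Continuing the illustrative Python sketches (the paper itself uses MATLAB), the snippet below evaluates (14)-(15) and (18)-(19) with the Section 5 power figures. The steady-state vector would come from the DTMC sketch above; the sample values used here are placeholders, and the circuit transmit power is renamed `Pt_c` to avoid clashing with the RF transmit power Pt of Eq. (1).

```python
# Power figures from Section 5 (Mica2-style settings).
Pi, Pt_c, Pr = 1e-3, 1e-4, 5e-5   # transmit power, TX/RX circuit power [W]
eps = 0.3                          # amplifier loss factor, Eq. (11)
L = 1024                           # payload length [bits]

def rho_ta(per_sd):
    """Energy efficiency of traditional ARQ, Eqs. (14)-(15);
    per_sd is the average PER of the S-D link."""
    e_ta = (Pi * (1 + eps) + Pt_c + Pr) * (1 + per_sd)
    return L * (1 - per_sd) / e_ta

def rho_trcap(pi):
    """Energy efficiency of TRCAP, Eqs. (18)-(19); pi = (pi0, ..., pi3)."""
    base = Pi * (1 + eps) + Pt_c
    e_total = ((2 * base + 6 * Pr) * pi[0] + (2 * base + 4 * Pr) * pi[1]
               + (3 * base + 4 * Pr) * pi[2] + (base + 3 * Pr) * pi[3])
    return L * pi[3] / e_total

if __name__ == "__main__":
    print("rho_TA    =", rho_ta(per_sd=0.3))                   # bits per joule
    print("rho_TRCAP =", rho_trcap([0.05, 0.10, 0.15, 0.70]))  # placeholder pi
```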
## 5. Numerical Results

In this section we numerically evaluate the throughput and energy efficiency of the presented protocol, comparing TRCAP with the traditional ARQ scheme (TA). Throughout the simulations, the length of a packet is set to 1024 bits and the system parameters take the following values: α = 4, ε = 0.3, Pi = 10^−3 W, Pt = 10^−4 W, Pr = 5 × 10^−5 W, L = 1024 bits, N0 = 10^−13.5, and (a, g, γt) = (58.7332, 0.1641, 13.9470). The values of α, ε, Pi, Pt, and Pr are taken from the specifications of Mica2 motes. MATLAB is used as the simulation tool, with 16-QAM modulation. We consider S-D distances varying from 100 m to 300 m for the throughput and energy efficiency analysis.

We assume that the line connecting the relays R1 and R2 is perpendicular to the line connecting the source node S and the destination node D, with vertical cross point O. We also write the S-R1 distance as DSR1 = qSR1 × DSD (0 < qSR1 < 1), and similarly define qSR2, qR1D, and qR2D.

Figure 3 depicts the throughput of the different ARQ protocols versus the S-D distance for different values of qSR1, qSR2, qR1D, and qR2D. From Figure 3 we can see that the throughput decreases as DSD increases, no matter which ARQ protocol is adopted, because the SNR at the receiver falls as the distance grows. The figure also shows that TRCAP outperforms the traditional ARQ protocol at every S-D distance, and that the throughput improves when the relay nodes are close to the midpoint between the source node and the destination node D. The simulation results closely match the theoretical results, which verifies the performance analysis in Section 3. From these analytical and simulation results, TRCAP can significantly improve the system throughput when the communication distance is long.

Figure 3 The throughput performances of different ARQ protocols.

Figure 4 depicts the throughput gain of TRCAP over TA for different relay locations. The gain is largest when the relay nodes are close to the midpoint between the source node and the destination node D, and it grows as the distance increases.

Figure 4 The throughput efficiency gain of TRCAP compared with TA.

Figure 5 depicts the energy efficiency of the system versus the distance between the source node S and the destination node D for the different ARQ protocols. The simulation results closely match the theoretical results, which verifies the energy efficiency analysis in Section 4. From Figure 5, the energy efficiency falls as the PER increases with the S-D distance. TRCAP outperforms TA when the S-D distance is above 120 m; below 120 m, TA performs better, because node D usually receives the packet directly from the source when the destination is near the source. The probability of cooperative retransmission grows as the S-D distance increases.

Figure 5 The energy efficiency of different ARQ protocols.

Figure 6 depicts the energy efficiency gain of TRCAP over TA for different relay locations. The largest gain is obtained when the relay nodes are close to the midpoint between the source node and the destination node D. The energy efficiency gain is almost the same across relay locations when the destination node is near the source node.

Figure 6 The energy efficiency gain of TRCAP compared with TA.

## 6. Conclusions

In this paper, the throughput and energy efficiency of the TRCAP and TA protocols in WSNs are studied and compared. The theoretical analysis and numerical results show that, when the source-destination distance is above a threshold, TRCAP achieves higher throughput and better energy efficiency than TA, so the gains of two-relay cooperation can be realized.
Moreover, when DSR1 is approximately equal to DR1D and, at the same time, DSR2 is approximately equal to DR2D, the throughput gain and the energy efficiency gain are the best among all of the relay locations considered.

So far, we have only considered a simple four-node wireless network. In the future, we will study more complicated and practical wireless sensor networks. However, the focus and contribution of our work are the theoretical analysis of the throughput and energy efficiency of the proposed two-relay cooperative ARQ protocol in WSNs, based on a novel DTMC model.

---
*Source: 102326-2015-10-19.xml*
# A Comparative Study of the Stress Distribution in Different Endodontic Post-Retained Teeth with and without Ferrule Design—A Finite Element Analysis

**Authors:** Lokanath Garhnayak; Hari Parkash; D. K. Sehgal; Veena Jain; Mirna Garhnayak
**Journal:** ISRN Dentistry (2011)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2011/102329

---

## Abstract

Purpose. To analyze the stress distribution in an endodontically treated maxillary central incisor restored with various post-core systems and to assess the benefit of the ferrule using finite element analysis. Material and Methods. Twelve models with a metal ceramic crown were created based on the combination of three types of post-core systems (titanium post with composite resin core, nickel-chromium post-core, and fiber reinforced composite resin post with composite resin core), two varieties of posts (tapered, parallel), and with or without ferrule. A 100 N load was applied in three directions, and the von Mises stresses were compared. Results. The ferrule made no difference in stress distribution for the titanium and nickel-chromium posts, though it showed some stress reduction in the fiber-reinforced composite resin posts. The nickel-chromium cast post-core transmitted the least stress to the dentin despite producing the maximum stress within itself. Conclusion. Incorporation of a ferrule offered some degree of stress reduction in the nonmetal post, and it increased the stresses within the cervical dentin.

---

## Body

## 1. Introduction

The restoration of endodontically treated teeth has been a concern for prosthodontists for more than 100 years [1]. The increasing effectiveness and predictability of endodontic therapy have only made the challenge more evident. If a root canal treated tooth is severely damaged beyond the realms of restoration by conventional means, the clinician is left with no alternative other than a post-core. The predominant function of an endodontic post is to retain the core [2], which subsequently provides a suitable foundation for the final restoration. The selection of a particular type of endodontic post is based on its mechanical properties, ease of fabrication, biocompatibility, availability in the market, and cost. The custom made nickel-chromium (Ni-Cr) cast post-core system is routinely used because of its precision and ease of fabrication in the laboratory. Among the prefabricated posts, titanium (Ti) posts are very popular owing to their proven biocompatibility. Newer post materials like ceramics and fiber reinforced composite resin (FRC) [3–5] are also gaining popularity due to their unique esthetic properties. The ability of the post-core system to sustain masticatory forces and remain firmly seated in the tooth is essential for the survival of a restoration. One other important design consideration in the restoration of an endodontically treated tooth is the ferrule. This has been described as a metal band that encircles the external dimension of the residual tooth [6]. A properly built ferrule significantly reduces the incidence of fracture in the nonvital tooth by reinforcing the tooth at its external surface [7, 8]. It resists the lateral forces from tapered dowels and the leverage from the crown in function [6]. It also increases the retention and resistance of the restoration [6].
Taken together, the pattern of stress distribution produced by the endodontic post and the ferrule design under masticatory load is of great importance in ensuring an optimal design for the prosthesis.

Stress analysis in dentistry has been a topic of interest for the past few decades. Traditional methods of experimental stress analysis include mechanical stress analysis [9], the photoelastic method [10], and electrical strain gauges [11]. Among these, the photoelastic method was favored for its ability to accommodate irregular geometry and the ready visualization of stress concentrations in the materials under load [12]. However, exact duplication of the material properties in the model was difficult, and hence the actual stresses in the real case were only approximated. An approach to stress analysis of dental structures that deals with the previously described complexities while avoiding the shortcomings of photoelastic analysis is the "finite element method" (FEM). The finite element method is a numerical tool popularly used to analyze very complex and irregular structures [13]. Since its beginning in 1956, the versatility and efficiency of this method have been recognized in various engineering fields such as civil, mechanical, and aeronautical engineering [13]. Besides this, it has wide application in biomechanical sciences such as dentistry and orthopedics.

This study is an attempt to use FEM to predict, analyze, and compare the stress distribution of different types of post-core systems, with and without ferrule, in an endodontically treated maxillary central incisor restored with a metal ceramic crown.

## 2. Material and Methods

The study was conducted in the Department of Dental Surgery, All India Institute of Medical Sciences, New Delhi, and the Department of Applied Mechanics, Indian Institute of Technology, New Delhi, India.

### 2.1. Finite Element Modeling

The dimensions of the various parts of the model of a maxillary central incisor (bucco-lingual view) were adopted from the standard dental literature (Table 1) [6, 14–17]. The finite element model geometry (Figure 1) was generated on the computer screen from entities such as grids, lines, and patches. Each model was divided into small elements interconnected at nodes, and a finite element mesh was superimposed on the model (Figures 2 and 3). The elements used in this analysis were four-noded quadrilateral elements and three-noded triangular elements. In total, 12 different 2-dimensional plane strain models were generated for the different varieties of post-core, with and without ferrule design.

Table 1 Dimensions of structures in FE model [6, 14–17].

| No. | Part of FE model | Dimensions |
|-----|------------------|------------|
| (1) | Tooth [14] | (a) Root length: 13.0 mm; (b) root diameter: 6.0 mm (cervical) |
| (2) | Periodontal ligament [15] | 0.2 mm (width) |
| (3) | Cortical bone [16] | 1.2 mm (thickness) |
| (4) | Gutta percha [17] | 4.5 mm (length) |
| (5) | Endodontic post [17] | Length: 10.5 mm; diameter: 1.5 mm (parallel post), 0.6–1.5 mm (tapered post); 10.5 mm (height) |
| (6) | Crown [17] | 1.2 mm (thickness at cervical margin); 2.0 mm (thickness at incisal edge) |
| (7) | Core [17] | 6.5 mm (height) + 2 mm residual dentin |
| (8) | Ferrule design [6] | 1.5 mm (length) |

Figure 1 Geometry of finite element (FE) model.

Figure 2 Mesh generated over ferrule model.

Figure 3 Mesh generated over nonferrule model.

The different parts of these models are illustrated in Figure 1. Three post materials, titanium (Ti), nickel-chromium (Ni-Cr), and fiber reinforced composite resin (FRC), were evaluated in tapered and parallel forms.
The length of the post was kept at 10.5 mm for both forms. The width of the parallel post was 1.5 mm throughout its length, while that of the tapered design decreased from 1.5 mm at the cervical portion to 0.6 mm at the apical portion (Table 1). While the core material for the custom made Ni-Cr cast post-core was the same alloy, that of the titanium and fiber reinforced composite resin posts was composite resin. The height of the core was 6.5 mm, built over 2 mm of residual dentin coronal to the crown margin (Table 1). The artificial crown in all cases was a metal ceramic crown, and the total height of the crown was 10.5 mm with an additional 1.5 mm cervical extension in the ferrule model (Figure 1, Table 1). The thickness of the crown margin at the cervical portion and at the incisal edge was 1.2 mm and 2.0 mm, respectively. The thickness of the metal coping (Ni-Cr) was modeled as 0.3 mm (Table 1).

The material properties, modulus of elasticity ("E") and Poisson's ratio ("v"), adopted from the standard literature (Table 2) [5, 18–23], were applied to the respective materials in the finite element model. The displacement boundary conditions were fixed at the alveolar bone trough surrounding the root.

Table 2 Material properties in FE model [5, 18–23].

| No. | Material | Modulus of elasticity ("E") (GPa) | Poisson's ratio ("v") |
|-----|----------|-----------------------------------|------------------------|
| (1) | Dentin [18] | 18.6 | 0.31 |
| (2) | Periodontal ligament [19, 20] | 0.0000689 | 0.45 |
| (3) | Cortical bone [21] | 13.7 | 0.30 |
| (4) | Gutta percha [19, 20] | 0.069 | 0.45 |
| (5) | Titanium [22] | 120.0 | 0.30 |
| (6) | Ni-Cr alloy [22] | 203.6 | 0.30 |
| (7) | Fiber reinforced composite resin [5] | 15.0 | 0.28 |
| (8) | Composite resin [20] | 8.3 | 0.28 |
| (9) | Porcelain [23] | 69.0 | 0.28 |

### 2.2. The Loading Condition and Analysis

A static load of 100 N was applied to the maxillary central incisor restored with post, core, and crown. Since the model in the current study was 2-dimensional (a plane strain model of 1 mm thickness) and the mesiodistal diameter of the maxillary central incisor was 8.5 mm, the force applied to the FE model was calculated as total force/mesiodistal width of tooth, that is, 100 N/8.5 mm = 11.76 N (approximated to 12 N).

Each model was evaluated under forces directed in three different directions: (1) vertical, at the incisal edge of the crown; (2) oblique, at 45° and at a distance of 2 mm from the incisal edge on the lingual aspect of the crown; and (3) horizontal, at the same point as the oblique force. The stress pattern and the maximum value of the generated von Mises stress were used to interpret the results of the study, as they indicate the sites where yielding of a ductile material is likely to occur.

All the modeling, meshing, and analysis were performed in NISA (numerically integrated element for system analysis).
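Although the paper does not write it out, the yield criterion behind the tabulated values is the standard von Mises equivalent stress; in terms of the principal stresses σ1, σ2, σ3 it reads

$$\sigma_{vM}=\sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2+(\sigma_2-\sigma_3)^2+(\sigma_3-\sigma_1)^2\right]}.$$

Yielding of a ductile material is predicted where σvM first reaches the material's yield strength, which is why the maxima reported in Tables 3–6 are read as the likely sites of failure.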
## 3. Results

Under vertical load, the overall stresses produced within the tapered posts were higher for the Ni-Cr cast post-core (4.470 MPa) and the Ti post with composite resin core (3.489 MPa; Table 3). The FRC tapered and parallel posts produced approximately the same stress values (0.702 MPa) within the post as well as in the adjacent dentin (Table 3).

Table 3 Comparison of stress* distribution between tapered and parallel posts of different materials with vertical load.

| Material | Region† | Tapered, ferrule | Tapered, nonferrule | Parallel, ferrule | Parallel, nonferrule |
|----------|---------|------------------|---------------------|-------------------|----------------------|
| Ti | A | 2.094 | 2.091 | 2.094 | 2.091 |
| Ti | B | 2.791 | 2.788 | 2.791 | 2.788 |
| Ti | C | 3.489 | 3.485 | 2.791 | 2.788 |
| Ni-Cr | A | 4.470 | 4.470 | 3.832 | 4.470 |
| Ni-Cr | B | 4.470 | 4.470 | 3.832 | 3.832 |
| Ni-Cr | C | 4.470 | 4.470 | 3.193 | 3.832 |
| FRC | A | 0.7024 | 0.7014 | 0.7024 | 0.7016 |
| FRC | B | 0.7024 | 0.7014 | 0.7024 | 0.7016 |
| FRC | C | 0.7024 | 0.7014 | 0.7024 | 0.7016 |

*Maximum von Mises stress in MPa. †A: cervical one-third; B: middle one-third; C: apical one-third.

Common to all types of post-core systems, when the direction of load was changed from vertical to horizontal, the stress levels increased as the forces became more oblique (Table 4) and reached their highest levels when the load was fully horizontal (Table 5). For horizontal loading, the Ni-Cr cast post-core produced the maximum stress (16.99 MPa) within the post-core system. Interestingly, the same system also transmitted the least stress to the surrounding dentin. Although the FRC posts recorded the lowest stress levels within the posts, the stresses transmitted to the surrounding dentin were higher (25.77 MPa, Table 6). The ferrule did not reduce the stress values in either the tapered or parallel forms of the Ti and Ni-Cr posts. However, there was some reduction in stress levels within the tapered and parallel FRC posts under oblique and horizontal load. Incorporation of a ferrule was found to increase the magnitude of the stresses in the cervical dentin, as given in Table 6.

Table 4 Comparison of stress* distribution between tapered and parallel posts of different materials with oblique load.

| Material | Region† | Tapered, ferrule | Tapered, nonferrule | Parallel, ferrule | Parallel, nonferrule |
|----------|---------|------------------|---------------------|-------------------|----------------------|
| Ti | A | 5.807 | 6.403 | 5.795 | 6.384 |
| Ti | B | 7.258 | 8.537 | 7.243 | 8.512 |
| Ti | C | 8.710 | 8.537 | 7.243 | 8.512 |
| Ni-Cr | A | 10.24 | 10.24 | 10.76 | 10.94 |
| Ni-Cr | B | 11.38 | 11.38 | 11.95 | 12.16 |
| Ni-Cr | C | 10.24 | 10.24 | 10.76 | 10.94 |
| FRC | A | 4.195 | 4.136 | 4.189 | 4.133 |
| FRC | B | 2.797 | 3.102 | 2.793 | 3.100 |
| FRC | C | 6.992 | 7.236 | 6.981 | 8.265 |

*Maximum von Mises stress in MPa. †A: cervical one-third; B: middle one-third; C: apical one-third.

Table 5 Comparison of stress* distribution between tapered and parallel posts of different materials with horizontal load.

| Material | Region† | Tapered, ferrule | Tapered, nonferrule | Parallel, ferrule | Parallel, nonferrule |
|----------|---------|------------------|---------------------|-------------------|----------------------|
| Ti | A | 9.804 | 10.26 | 9.802 | 10.24 |
| Ti | B | 11.76 | 13.19 | 13.72 | 13.17 |
| Ti | C | 13.72 | 13.19 | 11.76 | 11.71 |
| Ni-Cr | A | 14.08 | 14.07 | 15.08 | 15.29 |
| Ni-Cr | B | 15.65 | 15.63 | 16.76 | 16.99 |
| Ni-Cr | C | 14.08 | 14.07 | 15.08 | 15.29 |
| FRC | A | 5.525 | 5.626 | 5.516 | 5.625 |
| FRC | B | 3.684 | 5.626 | 3.678 | 5.626 |
| FRC | C | 9.206 | 11.25 | 9.192 | 12.65 |

*Maximum von Mises stress in MPa. †A: cervical one-third; B: middle one-third; C: apical one-third.
Table 6 Comparison of stress* distribution with different posts having ferrule and nonferrule design in the cervical dentin.

| Material | Load† | Tapered, ferrule | Tapered, nonferrule | Parallel, ferrule | Parallel, nonferrule |
|----------|-------|------------------|---------------------|-------------------|----------------------|
| Ti | V | 3.489 | 3.485 | 3.489 | 3.485 |
| Ti | O | 18.87 | 13.87 | 18.83 | 13.83 |
| Ti | H | 25.49 | 19.06 | 25.48 | 19.02 |
| Ni-Cr | V | 2.555 | 2.555 | 2.555 | 2.555 |
| Ni-Cr | O | 12.52 | 12.51 | 11.95 | 13.36 |
| Ni-Cr | H | 17.21 | 17.19 | 18.43 | 18.68 |
| FRC | V | 3.510 | 3.505 | 3.510 | 3.505 |
| FRC | O | 19.57 | 14.47 | 19.54 | 14.46 |
| FRC | H | 25.77 | 19.69 | 25.73 | 19.68 |

*Maximum von Mises stress in MPa. †V: vertical load; O: oblique load (45° angle); H: horizontal load.

## 4. Discussion

Stress plots (von Mises) under vertical load indicated that the overall stress produced by the tapered post was higher than that of the parallel post for both the ferrule and nonferrule groups in the case of the Ti and Ni-Cr posts. The difference in stress values was better appreciable in the middle one-third and clearly defined in the apical one-third of the post. This exemplifies the "wedging effect" (Figure 4) seen in the dentin adjacent to the apex of the tapered post, a finding consistent with the existing literature [24, 25]. The "wedging effect" is due to the reduction in the dimensions of the tapered post near the apical portion, which results in the same load being distributed over a smaller area compared with the parallel post (Figure 5). However, no such difference was observed for the FRC posts in either the ferrule or nonferrule group, a finding that can be explained by the difference in modulus of elasticity ("E" value). Since the "E" values of Ti and Ni-Cr are higher than that of dentin, stresses are mainly concentrated within the post and in the adjacent dentin. The difference in "E" values between dentin and FRC, by contrast, is minimal; thus the stresses are distributed approximately equally within the post and the dentin for the FRC post. The finding that the "wedging effect" was more pronounced for Ni-Cr than for Ti suggests a direct relationship between the "E" value of the material and the intensity of the "wedging effect."

Figure 4 "Wedging effect" with tapered post.

Figure 5 Poorly defined "wedging effect" with parallel post.

A common observation for all three materials (Ti, Ni-Cr, and FRC) was that stress values were maximal under horizontal load, followed by oblique load, and least under vertical load. This could be due to the greater leverage that occurs with oblique and horizontal loads.

There is mixed opinion regarding the efficacy of the ferrule in increasing the failure load threshold of an endodontically treated tooth. Some mechanical studies favor the placement of a ferrule, as it confers increased fracture resistance on endodontically treated teeth. Libman and Nicholls [26] reported that preliminary failure occurred at a lower number of load cycles (at 4.0 kg) in endodontically treated central incisors with ferrule lengths of 0.5 and 1.0 mm compared with 1.5 and 2.0 mm. Isidor et al. [7] likewise observed increased fracture resistance of endodontically treated teeth under cyclic loading with increasing ferrule length. Barkhordar et al. [8] reported that a metal collar significantly enhanced the resistance to root fracture of endodontically treated maxillary central incisors. In contrast, Tjan and Whang [27] found that the addition of a metal collar did not enhance resistance to root fracture. Al-Hazaimeh and Gutteridge [28] demonstrated that the additional use of a ferrule preparation (2 mm) had no benefit in terms of fracture resistance when composite cement and core materials were utilized with a prefabricated parapost system.
The results of the present study suggest that the placement of a ferrule at the cervical portion of the root reduced the stresses only in the case of the FRC post with composite resin core, and that the stress reduction was greater for horizontal load than for oblique load. This could be because a ferrule used with a nonmetal post-core concentrates more stress in and around the metal collar (because of its higher "E" value) in the cervical dentin and transmits less stress to the underlying FRC post. Placing an additional metal collar did not significantly reduce the stresses within the metal posts, that is, Ni-Cr and Ti. On the contrary, the metal collar increased the stresses in the cervical dentin around it (Figures 6 and 7). The lack of benefit of ferrule placement when the Ti post with composite resin core and the Ni-Cr cast post-core were used can be explained by their higher moduli of elasticity attracting greater stresses within the post. This agrees with the results of the photoelastic study conducted by Loney et al. [29], who reported that a metal collar (1.5 mm) produced higher stress values at the cervical and apical regions of the post. The FEM study of Ichim et al. [30] revealed that the ferrule increased the mechanical resistance of a post/core/crown restoration in a central incisor, but it also created a larger area of palatal dentin under tensile stress, a condition favorable for crack development on the palatal aspect of the root, eventually leading to an oblique root fracture. In comparison, a restoration without a ferrule was prone to fail primarily by debonding and subsequently by root fracture through the lever action of the loose post.

Figure 6 Increased stresses (von Mises) in cervical dentin with ferrule design.

Figure 7 Reduced stresses (von Mises) in cervical dentin without ferrule design.

The Ni-Cr cast post-core recorded minimal stress levels in the surrounding tissue because the stresses were concentrated along the Ni-Cr cast post-core owing to its higher "E" value. Eskitaşcioǧlu et al. [31] found that the cast post-core accumulated higher stresses within the post-core system and transmitted lower stresses to the supporting structures, whereas the fiber composite laminate (FCL) post-core produced lower stresses within the post-core system and transmitted greater stresses to the supporting structures. When stress patterns were compared between core materials with different posts, the overall maximum stress values were produced with Ni-Cr.

From the analysis of the stress plots obtained from the various FE models, the Ni-Cr cast post-core appears to be the most advantageous, since it transmitted less stress to the supporting structures. Although the stresses produced within the FRC post (with composite resin core) were lower than those in the Ni-Cr post, the former transmitted greater stresses to the surrounding dentin, mainly under oblique and horizontal load directions.
Martínez-Insua et al. [3] reported that although FRC posts ("E" value 15 GPa) fail at lower loads than stainless steel posts ("E" value 200.0 GPa), the failure occurred at loads greater than those occurring in the mouth. The failure was also not catastrophic, as debonding occurs between individual fibers and the matrix before frank fracture of the FRC post. This can be considered an advantage of the FRC post, because the reverse situation generally necessitates extraction of the tooth. Pegoretti et al. [32] reported that the incorporation of glass fiber, instead of carbon fiber, in the FRC resulted in the lowest peak stresses in the surrounding dentin, by virtue of its stiffness being much closer to that of dentin. Taking all these factors into consideration, FRC posts appear promising for the long-term success of endodontically treated teeth [3, 32].

## 5. Conclusions

(1) Under vertical load, the overall stresses produced within the tapered posts were higher for the Ni-Cr cast post-core and the Ti post with composite resin core. The FRC tapered and parallel posts produced approximately the same stress levels within the posts as well as in the adjacent dentin.

(2) For all the models evaluated, the maximum stresses in the posts and surrounding structures were recorded under horizontal load, followed by oblique and vertical loads.

(3) The Ni-Cr cast post-core produced the maximum stresses within the post-core system and transmitted the least stress to the surrounding dentin. Although the FRC posts recorded minimal stress levels within the posts, the stresses transmitted to the surrounding dentin were higher than those for the Ni-Cr cast post-core.

(4) The ferrule did not reduce the stress values in either the tapered or parallel posts of Ti and Ni-Cr. However, there was some degree of stress reduction within the tapered and parallel FRC posts under oblique and horizontal load.

(5) Incorporation of a ferrule increased the stresses in the cervical dentin in all the types of post-core systems evaluated in the study.

---
*Source: 102329-2011-06-14.xml*
102329-2011-06-14_102329-2011-06-14.md
25,757
A Comparative Study of the Stress Distribution in Different
Lokanath Garhnayak; Hari Parkash; D. K. Sehgal; Veena Jain; Mirna Garhnayak
ISRN Dentistry (2011)
Medical & Health Sciences
International Scholarly Research Network
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.5402/2011/102329
102329-2011-06-14.xml
--- ## Abstract Purpose. To analyze the stress distribution in an endodontically treated maxillary central incisor restored with various post-core systems and assess the benefit of ferrule using finite element analysis. Material and Methods. Twelve models with metal ceramic crown were created based on the combination of three types of post-core systems (titanium post-composite resin core, nickel-chromium post-core, and fiber reinforced composite resin post-composite resin core), two varieties of posts (tapered, parallel), and with or without ferrule. 100 N load was applied in three directions and the von Mises stress was compared. Results. Ferrule made no difference in stress distribution for the titanium and nickel-chromium posts, though it showed some stress reduction in fiber-reinforced composite resin posts. Nickel-chromium cast post-core transmitted the least amount of stresses to the dentin despite producing the maximum stress. Conclusion. Incorporation of ferrule offered some degree of stress reduction in nonmetal post, and it increased the stresses within cervical dentin. --- ## Body ## 1. Introduction The restoration of endodontically treated teeth has been a concern for prosthodontists for more than 100 years [1]. The increasing effectiveness and predictability of endodontic therapy have only made the challenge more evident. If a root canal treated tooth is severely damaged beyond the realms of restoration by conventional means, the clinician is left with no alternative other than a post-core. The predominant function of an endodontic post is to retain the core [2], which subsequently provides a suitable foundation for final restoration. The selection of a particular type of endodontic post is based on its mechanical properties, ease of fabrication, biocompatibility, availability in the market, and the cost factor. Custom made nickel-chromium (Ni-Cr) cast post-core system is routinely used because of its precision and ease of fabrication in the laboratory. Among the prefabricated posts, titanium (Ti) posts are very popular owing to their proven biocompatibility. Newer post materials like ceramics and fiber reinforced composite resin (FRC) [3–5] are also gaining popularity due to their unique esthetic properties. The ability of post-core system to sustain masticatory forces and remain firmly seated in the tooth is essential for the survival of a restoration. One other important design consideration in the restoration of an endodontically treated tooth is the ferrule. This has been described as a metal band that encircles the external dimension of the residual tooth [6]. A properly built ferrule significantly reduces the incidence of fracture in the nonvital tooth by reinforcing the tooth at its external surface [7, 8]. It resists the lateral forces from the tapered dowels and the leverage from the crown in function [6]. It also increases the retention and resistance of the restoration [6]. Combined together, the pattern of stress distribution by the endodontic posts and the ferrule design under masticatory load is of great importance in ensuring an optimal design for the prosthesis.Stress analysis in dentistry has been a topic of interest for the past few decades. Traditional methods of experimental stress analysis includes mechanical stress analysis [9], photoelastic method [10], and electrical strain gauge [11] and so forth. 
Among these, the photoelastic method was favored due to its ability to incorporate irregular geometry and the ready visualization of stress concentration in the materials under loads [12]. However, the exact duplication of material properties of the model was difficult, and hence the actual stresses in the real case was only approximated. An approach to stress analysis of dental structures which deals with the previously described complexities while avoiding the shortcomings of photoelastic analysis is the “finite element method” (FEM). Finite element method is a numerical tool, which is popularly used to analyze very complex and irregular structures [13]. Since its beginning in 1956, the versatility and efficiency of this method has been recognized in various engineering fields like civil, mechanical, and aeronautical engineering [13]. Besides this, it has a wide application in biomechanical sciences like dentistry and orthopedics.This study is an attempt to use FEM to predict, analyze, and compare stress distribution of different types of post-core systems with and without ferrule in metal ceramic crown restored endodontically treated maxillary central incisor. ## 2. Material and Methods The study was conducted in the Department of Dental Surgery, All India Institute of Medical Sciences, New Delhi and the Department of Applied Mechanics, Indian Institute of Technology, New Delhi, India. ### 2.1. Finite Element Modeling The dimensions of various parts of the model for maxillary central incisor (Bucco-lingual view) were adopted from standard dental literature (Table1) [6, 14–17]. The finite element model geometry (Figure 1) was generated on the computer screen by provision of various entities such as grids, lines and patches, and so forth. Each model was divided into small elements interconnected at nodes and a finite element mesh was superimposed on the model (Figures 2 and 3). The elements used in this analysis were four noded quadrilateral elements and three noded triangular elements. In total, 12 different, 2-dimensional plane strain models were generated for different variety of post-core, with and without ferrule design.Table 1 Dimensions of structures in FE model [6, 14–17]. No.Different parts in FE modelDimensions (mm)(1)Tooth [14](a) Root length13.0 mm(b) Root diameter6.0 mm (cervical)(2)Periodontal ligament [15]0.2 mm (width)(3)Cortical bone [16]1.2 mm (thickness)(4)Gutta Percha [17]4.5 mm (length)(5)Endodontic post [17]Length10.5 mmDiameter1.5 mm (parallel post)0.6–1.5 mm (tapered post)10.5 mm (height)(6)Crown [17]1.2 mm (thickness at cervical margin)2.0 mm (thickness at incisal edge)(7)Core [17]6.5 mm (height) + 2 mm residual dentin(8)Ferrule design [6]1.5 mm (length)Figure 1 Geometry of finite element (FE) model.Figure 2 Mesh generated over ferrule model.Figure 3 Mesh generated over nonferrule model.The different parts of these models are illustrated in Figure1. Three different post materials—titanium (Ti), nickel-chromium (Ni-Cr), and fiber reinforced composite resin (FRC) were evaluated in tapered and parallel forms. The length of the post was kept at 10.5 mm for both the forms. The width of the parallel post was 1.5 mm throughout its length and that of the tapered design decreased from 1.5 mm at the cervical portion to 0.6 mm at the apical portion (Table 1). While the core material for custom made Ni-Cr cast post-core was the same alloy, that of titanium and fiber reinforced composite resin posts, was composite resin. 
The height of core was 6.5 mm built over 2 mm of residual dentin coronally from crown margin (Table 1). The artificial crown in all cases was metal ceramic crown, and the total height of crown was 10.5 mm with an additional 1.5 mm cervical extension in the ferrule model (Figure 1, Table 1). The thickness of the crown margin at the cervical portion and at the incisal edge was 1.2 mm and 2.0 mm, respectively. The thickness of metal coping (Ni-Cr) was modeled to be 0.3 mm (Table 1).The properties of various materials like modulus of elasticity (“E”) and Poisson’s ratio (“v”), adopted from the standard literature (Table 2) [5, 18–23], were applied for the respective materials the finite element model. The displacement boundary conditions were fixed at the alveolar bone trough surrounding the root.Table 2 Material properties in FE model [5, 18–23]. No.MaterialModulus of elasticity (“E”) (GPa)Poisson’s ratio (“v”)(1)Dentin [18]18.60.31(2)Periodontal ligament [19, 20]0.00006890.45(3)Cortical bone [21]13.70.30(4)Gutta Percha [19, 20]0.0690.45(5)Titanium [22]120.00.30(6)Ni-Cr alloy [22]203.60.30(7)Fiber reinforced composite resin [5]15.00.28(8)Composite resin [20]8.30.28(9)Porcelain [23]69.00.28 ### 2.2. The Loading Condition and Analysis A static load of 100 N was planned to be applied to the maxillary central incisor tooth restored with post, core, and crown. Since the model in the current study was 2-dimensional (plane strain model with 1 mm thickness) and the mesiodistal diameter of maxillary central incisor was 8.5 mm, the force applied to the FE model was calculated as total force/mesiodistal width of tooth, that is, 100 N/8.5 = 11.76 N (approximated to 12 N).Each model was evaluated under forces directed in three different directions, (1) vertical—at the incisal edge of the crown, (2) oblique—at 45° and at a distance of 2 mm from the incisal edge on the lingual aspect of crown, and (3) horizontal—at the same point like oblique force. The stress pattern and maximum value of the generated von Mises stress were used to interpret the results of the study as it indicates the site where yielding of ductile material is likely to occur.All the modeling, meshing, and the analysis were performed in NISA (numerically integrated element for system analysis). ## 2.1. Finite Element Modeling The dimensions of various parts of the model for maxillary central incisor (Bucco-lingual view) were adopted from standard dental literature (Table1) [6, 14–17]. The finite element model geometry (Figure 1) was generated on the computer screen by provision of various entities such as grids, lines and patches, and so forth. Each model was divided into small elements interconnected at nodes and a finite element mesh was superimposed on the model (Figures 2 and 3). The elements used in this analysis were four noded quadrilateral elements and three noded triangular elements. In total, 12 different, 2-dimensional plane strain models were generated for different variety of post-core, with and without ferrule design.Table 1 Dimensions of structures in FE model [6, 14–17]. 
No.Different parts in FE modelDimensions (mm)(1)Tooth [14](a) Root length13.0 mm(b) Root diameter6.0 mm (cervical)(2)Periodontal ligament [15]0.2 mm (width)(3)Cortical bone [16]1.2 mm (thickness)(4)Gutta Percha [17]4.5 mm (length)(5)Endodontic post [17]Length10.5 mmDiameter1.5 mm (parallel post)0.6–1.5 mm (tapered post)10.5 mm (height)(6)Crown [17]1.2 mm (thickness at cervical margin)2.0 mm (thickness at incisal edge)(7)Core [17]6.5 mm (height) + 2 mm residual dentin(8)Ferrule design [6]1.5 mm (length)Figure 1 Geometry of finite element (FE) model.Figure 2 Mesh generated over ferrule model.Figure 3 Mesh generated over nonferrule model.The different parts of these models are illustrated in Figure1. Three different post materials—titanium (Ti), nickel-chromium (Ni-Cr), and fiber reinforced composite resin (FRC) were evaluated in tapered and parallel forms. The length of the post was kept at 10.5 mm for both the forms. The width of the parallel post was 1.5 mm throughout its length and that of the tapered design decreased from 1.5 mm at the cervical portion to 0.6 mm at the apical portion (Table 1). While the core material for custom made Ni-Cr cast post-core was the same alloy, that of titanium and fiber reinforced composite resin posts, was composite resin. The height of core was 6.5 mm built over 2 mm of residual dentin coronally from crown margin (Table 1). The artificial crown in all cases was metal ceramic crown, and the total height of crown was 10.5 mm with an additional 1.5 mm cervical extension in the ferrule model (Figure 1, Table 1). The thickness of the crown margin at the cervical portion and at the incisal edge was 1.2 mm and 2.0 mm, respectively. The thickness of metal coping (Ni-Cr) was modeled to be 0.3 mm (Table 1).The properties of various materials like modulus of elasticity (“E”) and Poisson’s ratio (“v”), adopted from the standard literature (Table 2) [5, 18–23], were applied for the respective materials the finite element model. The displacement boundary conditions were fixed at the alveolar bone trough surrounding the root.Table 2 Material properties in FE model [5, 18–23]. No.MaterialModulus of elasticity (“E”) (GPa)Poisson’s ratio (“v”)(1)Dentin [18]18.60.31(2)Periodontal ligament [19, 20]0.00006890.45(3)Cortical bone [21]13.70.30(4)Gutta Percha [19, 20]0.0690.45(5)Titanium [22]120.00.30(6)Ni-Cr alloy [22]203.60.30(7)Fiber reinforced composite resin [5]15.00.28(8)Composite resin [20]8.30.28(9)Porcelain [23]69.00.28 ## 2.2. The Loading Condition and Analysis A static load of 100 N was planned to be applied to the maxillary central incisor tooth restored with post, core, and crown. Since the model in the current study was 2-dimensional (plane strain model with 1 mm thickness) and the mesiodistal diameter of maxillary central incisor was 8.5 mm, the force applied to the FE model was calculated as total force/mesiodistal width of tooth, that is, 100 N/8.5 = 11.76 N (approximated to 12 N).Each model was evaluated under forces directed in three different directions, (1) vertical—at the incisal edge of the crown, (2) oblique—at 45° and at a distance of 2 mm from the incisal edge on the lingual aspect of crown, and (3) horizontal—at the same point like oblique force. 
The stress pattern and maximum value of the generated von Mises stress were used to interpret the results of the study as it indicates the site where yielding of ductile material is likely to occur.All the modeling, meshing, and the analysis were performed in NISA (numerically integrated element for system analysis). ## 3. Results Under vertical load, the overall stress produced within tapered posts were more for Ni-Cr cast post-core (4.470 MPa) and Ti post with composite resin core (3.489 MPa; Table3). The FRC tapered and parallel posts produced approximately the same stress values (0.702 MPa) within the post as well as in the adjacent dentin (Table 3).Table 3 Comparison of stress* distribution between tapered and parallel posts of different materials with vertical load. MaterialsRegion†Tapered postParallel postFerruleNonferruleFerruleNonferruleTiA2.0942.0912.0942.091B2.7912.7882.7912.788C3.4893.4852.7912.788Ni-CrA4.4704.4703.8324.470B4.4704.4703.8323.832C4.4704.4703.1933.832RCA0.70240.70140.70240.7016B0.70240.70140.70240.7016C0.70240.70140.70240.7016*Maximum Von Mises stress in MPa.†A—cervical one-third; B—middle one-third; C—apical one-third.Common to all the type of post-core systems, when the direction of load was changed from vertical to horizontal, the stress levels increased as the forces became more oblique (Table4) and finally reached the highest levels when they were absolutely horizontal (Table 5). For the horizontal loading, the Ni-Cr cast post-core produced the maximum stresses (16.99 MPa) within the post-core system. Interestingly, it was the same system that also transmitted the least of stresses to the surrounding dentin. Although FRC posts recorded the lowest stress level within the posts, the stresses transmitted to the surrounding dentin were more (25.77 MPa, Table 6). Ferrule did not reduce the stress values either in the tapered or parallel forms of Ti and Ni-Cr posts. However, there was some reduction in stress levels within the tapered and parallel FRC posts under oblique and horizontal load. Incorporation of ferrule was found to increase the magnitude of the stresses in the cervical dentin as given in Table 6.Table 4 Comparison of stress* distribution between tapered and parallel posts of different materials with oblique load. MaterialsRegion†Tapered postParallel postFerruleNonferruleFerruleNonferruleTiA5.8076.4035.7956.384B7.2588.5377.2438.512C8.7108.5377.2438.512Ni-CrA10.2410.2410.7610.94B11.3811.3811.9512.16C10.2410.2410.7610.94FRCA4.1954.1364.1894.133B2.7973.1022.7933.100C6.9927.2366.9818.265*Maximum Von Mises stress in MPa.†A—cervical one-third; B—middle one-third; C—apical one-third.Table 5 Comparison of stress* distribution between tapered and parallel posts of different materials with horizontal load. MaterialsRegion†Tapered postParallel postFerruleNonferruleFerruleNonferruleTiA9.80410.269.80210.24B11.7613.1913.7213.17C13.7213.1911.7611.71Ni-CrA14.0814.0715.0815.29B15.6515.6316.7616.99C14.0814.0715.0815.29FRCA5.5255.6265.5165.625B3.6845.6263.6785.626C9.20611.259.19212.65*Maximum Von Mises stress in MPa.†A—cervical one-third; B—middle one-third; C—apical one-third.Table 6 Comparison of stress* distribution with different posts having ferrule and nonferrule design in the cervical dentin. 
MaterialsLoad†Tapered postParallel postFerruleNonferruleFerruleNonferruleTiV3.4893.4853.4893.485O18.8713.8718.8313.83H25.4919.0625.4819.02Ni-CrV2.5552.5552.5552.555O12.5212.5111.9513.36H17.2117.1918.4318.68FRCV3.5103.5053.5103.505O19.5714.4719.5414.46H25.7719.6925.7319.68*Maximum Von Mises stress in MPa.†V—vertical load; O—oblique load (45° angle); H—horizontal load. ## 4. Discussion Stress plots (von Mises) under vertical load indicated that the overall stress produced by tapered post was more as compared to parallel post for both ferrule and nonferrule group, in case of Ti and Ni-Cr posts. The difference in the stress values were better appreciable in the middle and clearly defined in the apical one-third of the post. This exemplifies the “wedging effect” (Figure4) seen in the dentin adjacent to the apex of the tapered post, a finding consistent with the existing literature [24, 25]. The “wedging effect” is due to the reduction in the dimensions of the tapered post near the apical portion. This results in the same amount of load distributed over a smaller area as compared to the parallel post (Figure 5). However, no such difference was observed for FRC posts in case of both ferrule and nonferrule group, a finding that can be explained on the basis of difference in the modulus of elasticity (“E” value). Since, the “E” value of Ti and Ni-Cr are higher than the dentin, stresses are mainly concentrated within the post and in the adjacent dentin. But, the difference in “E” values between dentin and FRC was minimal. Thus, there is approximately equal distribution of stresses within the post as well as in the dentin in case of FRC post. The finding that the “wedging effect” was more pronounced in case of Ni-Cr as compared to Ti, suggests a direct relationship between “E” value of the material and the intensity of “wedging effect.”Figure 4 “Wedging effect” with tapered post.Figure 5 Poorly defined “wedging effect” with parallel post.A common observation noticed for all 3 types of materials that is, Ti, Ni-Cr, and FRC was that stress values were maximum under horizontal load, followed by oblique, and the least for vertical load. This could be due to the higher effect of leverage that occurs with oblique and horizontal loads.There is mixed opinion regarding the efficacy of ferrule in increasing the threshold of failure load in an endodontically treated tooth. Some mechanical studies favor the placement of ferrule as it confers increased fracture resistance to the endodontically treated teeth. Libman and Nicholls [26] have reported that the preliminary failure occurred with lower number of cyclic load (4.0 kg) in endodontically treated central incisors with ferrule length of 0.5 and 1.0 mm as compared to 1.5 and 2.0 mm.EvenIsidor et al.[7] have observed an increased fracture resistance of endodontically treated tooth to cyclic loading with increasing ferrule length.Barkhordar et al.[8] reported that a metal collar of approximately 3° significantly enhanced the resistance to root fracture of endodontically treated maxillary central incisor. In contrast, Tjan and Whang [27] found that addition of a metal collar did not enhance the resistance to root fracture. Al-Hazaimeh and Gutteridge [28] demonstrated that the additional use of a ferrule (2 mm) preparation had no benefits in terms of resistance to fracture when composite cement and core materials were utilized with a prefabricated parapost system. 
The results drawn from the present study, suggested that the placement of ferrule at the cervical portion of the root reduced the stresses in case of FRC post with composite resin core only and the stress reduction was greater for horizontal load than the oblique load. Probably, this could be due to the use of ferrule with nonmetal post-core concentrating more stresses in and around the metal collar (because of higher “E” value) in the cervical dentin and transmitting less stresses to the underlying FRC post. Placing an additional metal collar did not have much significance in stress reduction within metal post, that is, Ni-Cr, and Ti. To the contrary, the metal collar increased the stresses in the cervical dentin around it (Figures 6 and 7). The lack of benefit of ferrule placement when Ti post with composite resin core and Ni-Cr cast post-core were used could be explained on the basis of their higher modulus of elasticity attracting greater stresses within the post. This is in agreement with the results of the photoelastic study conducted byLoney et al.[29] who reported that the metal collar (1.5 mm) produced higher stress value at the cervical and apical regions of post. The FEM study ofIchim et al.[30] revealed that the ferrule increased the mechanical resistance of a post/core/crown restoration in central incisor. But, it also created a larger area of palatal dentin under tensile stress, a condition favorable for a crack development on the palatal aspect of the root eventually leading to an oblique root fracture. In comparison, a restoration without ferrule was prone to fail primarily by debonding and subsequently by root fracture through the lever action of the loose post.Figure 6 Increased stresses (von Mises) in cervical dentin with ferrule design.Figure 7 Reduced stresses (von Mises) in cervical dentin without ferrule design.Ni-Cr cast metal post-core recorded minimal stress levels in the surrounding tissue as they were concentrated along the Ni-Cr cast post-core owing to its higher “E” value. Eskitaşcioǧlu et al. [31] found that the cast post-core accumulated higher stresses within the post-core system and transmitted lower stresses to supportive structures whereas fiber composite laminate (FCL) post-core produced less stresses within post-core system and transmitted greater stresses to the supportive structures. When stress patterns were compared between core materials with different posts, it was observed that the overall maximum stress values were produced with Ni-Cr.From the analysis of the stress plots obtained from various FE models, Ni-Cr cast post-core appears to be most advantageous, since it transmitted less stresses to the supportive structures. Alhough the stresses produced within the FRC post (with composite resin core) were less as compared to Ni-Cr post, the former transmitted greater stresses to surrounding dentin mainly in oblique and horizontal load directions.Martínez-Insua et al.[3] reported that although the FRC posts (“E” value 15 GPa) fail at lower loads than stainless steel (“E” value 200.0 GPa), the failure occurred at loads greater than those occurring in the mouth. The failure was also not catastrophic as debonding occurs between individual fibers and the matrix before frank fracture of the FRC post. This can be considered as an advantage with FRC post because the reverse situation generally necessitates extraction of the tooth. Pegoretti et al. 
[32] reported that the incorporation of glass fiber in FRC, instead of carbon fiber, resulted in the lowest level of peak stresses to the surrounding dentin by virtue of its stiffness being much similar to the latter. Taking all these factors into consideration, FRC posts appear promising for the long-term success of endodontically treated teeth [3, 32]. ## 5. Conclusions (1) Under vertical load, the overall stresses produced within the tapered posts were more for Ni-Cr cast post-core and Ti post with composite resin core. The FRC tapered and parallel posts produced approximately the same stress levels within the posts as well as in the adjacent dentin.(2) For all the models evaluated, the maximum stresses in the posts and surrounding structures were recorded with horizontal load, followed by oblique and vertical loads.(3) Ni-Cr cast post-core produced maximum stresses within the post-core system and the least amount of stresses transmitted to the surrounding dentin. Although FRC posts recorded minimal stress level within the posts, the stresses transmitted to the surrounding dentin were more as compared to Ni-Cr cast post-core.(4) Ferrule did not reduce the stress values either in the tapered or parallel posts of Ti and Ni- Cr. However, there was some degree of stress reduction within the tapered and parallel FRC posts under oblique and horizontal load.(5) Incorporation of ferrule increased the stresses in the cervical dentin in all the types of post-core systems evaluated in the study. --- *Source: 102329-2011-06-14.xml*
# Policies and Problems of Modernizing Ethnomedicine in China: A Focus on the Yi and Dai Traditional Medicines of Yunnan Province

**Authors:** Zhiyong Li; Caifeng Li; Xiaobo Zhang; Shihuan Tang; Hongjun Yang; Xiuming Cui; Luqi Huang
**Journal:** Evidence-Based Complementary and Alternative Medicine (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1023297

---

## Abstract

Yunnan is a multiethnic province in southwest China, rich in Materia medica resources, and popularly known as the kingdom of plants. Biomedicine and the public health industry have been industrial pillars of Yunnan since 2016, and the province is an important pharmaceutical industrial base for Dai and Yi medicine in China. This review of the Yunnan ethnic medicine industry describes some of the problems to be solved in the development of sustainable ethnomedicine in China. We investigated Chinese patent medicines (CPMs) declared as ethnomedicine on their drug instructions and identified 28 Dai patent medicines (DPMs) and 73 Yi patent medicines (YPMs) that are approved for clinical use in China. In further research, the clinical indications of these CPMs were determined, and the quality standards of their medicinal materials and the materials' usage frequencies in DPMs and YPMs were investigated. We also collected and analyzed data on the use of botanical and animal sources of medicines, rare and endangered medicinal materials, and toxic medicines in DPMs and YPMs. The application of zootherapy in Yi traditional medicine is introduced on the basis of its abundant ancient documents and records; based on the "YaGei" theory of Dai traditional medicine, toxic medicines can be used relatively safely in DPMs. However, to promote the Yunnan traditional medicine industry, it is necessary to strengthen medical research, expand evidence-based clinical practice, and balance ethnomedicine production against the sustainable utilization of Materia medica resources, especially the animal sources of medicines, toxic medicines, and protected wild resources reported in this survey. Only in this way can the industrialization of ethnomedicine promote the improvement of human health.

---

## Body

## 1. Introduction

Evidence of the first human use of plants as medicines was observed in the fossil record of the Middle Paleolithic period, which began approximately 60,000 years ago [1]. Traditional medical knowledge and practices developed in different civilizations by the trial-and-error use of local botanicals and other biomaterial resources and accumulated slowly over long periods of time [2]. The World Health Organization (WHO) estimates that herbal medicines currently serve the health needs of approximately 80% of the world's population, especially millions of people living in the vast rural areas of developing countries [3]. The Chinese have one of the oldest and most distinctive medical systems in the world. Traditional Chinese Medicine (TCM) has a written history of nearly 3000 years and is widely practiced in China [4]. China is a multiracial country with 56 nationalities, 55 of which are officially recognized as ethnic minorities in 18 provinces of China. Each ethnic minority, e.g., the Tibetans, Mongols, Uygurs, Dai, Yi, and Miao, has its own traditional medicine, and each differs slightly in theory and practice from TCM. Ethnomedicine thus refers to the use of traditional medicine guided by the medical theory and practical experience of each ethnic minority [5].
Since 2017, the Law of the People's Republic of China on Traditional Chinese Medicine has given ethnomedicine a relatively independent status, although it is considered part of TCM. The importance of ethnomedicine has been increasing in China since 1951, as discussed in detail below. Chinese Traditional Medicine Statistics, published in 2015 by the National Administration of Traditional Chinese Medicine of China, and the Investigation and Analysis of Quality Standards of Ethnomedicines in Nine Provinces of China, published in 2015 by the National Medical Products Administration of China, listed 161 pharmaceutical enterprises that produced 4317 ethnic patent medicines (EPMs) and nearly 4000 in-hospital preparations of ethnomedicines used in 253 ethnic hospitals. A total of 39 EPMs were included in the Chinese Pharmacopoeia (2015 edition). The fourth national survey of Chinese Materia medica resources is underway with the objective of determining the status of the available resources and investigating the modern value of herbal medicine, including ethnic and folk medicines [6]. However, the national application of ethnic medicine in China is a complex issue that involves public policy, ethnic culture, livelihood status, regional economies, the protection of wild resources, and so on.

Yunnan is a multiethnic province in southwest China. In addition to the Han nationality, there are 25 ethnic minorities each with a population of more than 6,000, including the Yi, Hani, Bai, Dai, Zhuang, Miao, Hui, and Tibetan. The population of ethnic minorities is estimated at over 16 million, accounting for 33.4% of the total provincial population. The Dai and Yi traditional medicines are representative of the ethnomedicine practiced in Yunnan. Tibetan medicine as practiced in Shangri-La will be described in a subsequent review. Yunnan Province is an important pharmaceutical industry center for Dai medicine and Yi medicine. For example, Yunnan Baiyao, a highly effective patent medicine, originated from an ancient Yi prescription [5]. Biomedicine and the public health industry have been major industries in Yunnan since 2016, and more than 2000 ethnic medicinal resources and more than 10,000 folk prescriptions are native to Yunnan [7]. This review focuses on the Yi and Dai traditional medicine of Yunnan and the potential problems to be encountered in the development of policies favorable to ethnomedicine development.

## 2. Historical Changes of Chinese Ethnomedicine Policies

Since 1949, the Chinese government has successively introduced many policies to support and protect the development of ethnomedicine (Table 1). The Ethnic Minorities Health Work Plan of China, published in 1951, recommended that native doctors who used herbs to cure diseases should be united and supported to the greatest extent. The Decision on Health Reform and Development, published in 1997, discussed how ethnomedicine should be mined, organized, summarized, and improved. The Decision on Further Strengthening Rural Health Work, published in 2002, promoted the development and organization of ethnomedicine resources and technologies in rural regions. For many years, ethnic folk doctors in rural areas provided care in very much the same way as barefoot doctors, a nickname for part-time paramedical workers in rural areas who were trained for simple diagnoses and treatments in the 1960s and 1970s. During that time, China's health services could not cover all areas of the country, and the need for ethnic and folk medical practices was tacitly accepted.
At present, Tibetan, Mongolian, Uygur, Dai, Kazakh, Korean, Zhuang, and Hui medicine have set up physician certification systems for special skills or knowledge to support ethnic folk doctors in obtaining legal medical qualifications through certain procedures. The Outline of a Strategic Plan for Development of Traditional Chinese Medicine (2016–2030), published by the State Council of China, promotes the development of ethnomedicine.

Table 1 History of Chinese ethnomedicine policy.

| Time | Policy outline | Promulgator |
|---|---|---|
| 1951 | Ethnic minorities health work plan of China | Unknown |
| 1984 | Several opinions on strengthening ethnic minorities work | Ministry of Health and NEAC |
| 1997 | Decision on health reform and development | CPC Central Committee and the State Council |
| 2002 | Decision on further strengthening rural health work | CPC Central Committee and the State Council |
| 2003 | Regulations of the People's Republic of China on traditional Chinese medicine | The State Council |
| 2007 | Guiding opinions on strengthening the development of ethnomedicine | 11 ministries including NATCM |
| 2017 | The law of the People's Republic of China on traditional Chinese medicine | NPC Standing Committee |
| 2018 | Some opinions on strengthening the ethnomedicine work in the new era | 13 ministries including NATCM |

NEAC: National Ethnic Affairs Commission of PRC (China); CPC Central Committee: the Central Committee of the Communist Party of China; NATCM: National Administration of Traditional Chinese Medicine of PRC (China); NPC: National People's Congress.

The status of ethnomedicine in China experienced a cognitive change with the publication of the Regulations of the People's Republic of China on Traditional Chinese Medicine in 2003, which required that the administration of ethnomedicine be implemented in compliance with the regulations that apply to TCM and established a close relationship between ethnomedicine and TCM. The Law of the People's Republic of China on Traditional Chinese Medicine currently defines ethnomedicine as one part of TCM, sharing a history and development with TCM that conform to the united national culture of China. Yunnan Province has published policies and plans to regulate the development of the rich medicine resources in the Xishuangbanna Dai, Chuxiong Yi, and Diqing Tibetan autonomous prefectures between 2016 and 2030 [8].

## 3. Ethnic Hospitals and Pharmaceutical Enterprises in Yunnan

Yunnan Province has two Yi autonomous prefectures, Chuxiong Yi and Honghe Hani Yi, and two Dai autonomous prefectures, Xishuangbanna and Dehong Dai Jingpo. There are also 11 Yi autonomous counties and 7 Dai autonomous counties. The Yunnan Traditional Yi Medicine Hospital, the largest Yi medical hospital in China, is located in Chuxiong City, Chuxiong Yi Autonomous Prefecture. More than 10 counties have Yi medical hospitals or outpatient departments located in villages and towns of Yunnan. Each of the two Dai autonomous prefectures has a large traditional Dai medicine hospital: the hospital of Dai traditional medicine of Xishuangbanna Dai Autonomous Prefecture and the hospital of Dai traditional medicine of Dehong Dai Jingpo Autonomous Prefecture. There are at least 6 Dai hospitals in several counties.

A total of 42 corporations are licensed to produce Dai patent medicines (DPMs) and Yi patent medicines (YPMs) in China. Two corporations are located outside Yunnan Province.
Twenty corporations are in Kunming City, nine in Chuxiong Yi Autonomous Prefecture, three in Yuxi City, and two in Dali Bai Autonomous Prefecture. Wenshan City, Zhaotong City, and Xishuangbanna Dai Autonomous Prefecture have one pharmaceutical manufacturer each. The companies include the Yunnan Baiyao Group, which is best known for producing Baibaodan, the original name of Yunnan Baiyao, which was invented by Qu Huanzhang (AD 1880–1938). It also produces more than 300 other patent medicines in 19 dosage forms, including Shu Lie An Capsule, Qiancao Nao Tong Oral liquid, Gu Feng Ning Capsule, Shang Yi Aerosol, Tong Shu Capsule, and Zhong Tong Liniment. In 2014, Dihon Pharmaceutical Co. was purchased by Bayer, a large German pharmaceutical company, which marked a significant entry for Bayer into the TCM marketplace. Dihon produces Dan E Fu Kang Ointment, Gan Dan Qing Capsule, Yu Mai Kou Yan liquid, and Wei Fu Shu Capsule.

## 4. Clinical Indications of Yi and Dai Medicines

Based on the pharmaceutical instructions in which the properties of ethnic medicine were claimed, the clinical indications of DPMs and YPMs were surveyed and recorded, and 28 DPMs and 73 YPMs that are approved for clinical use in China under the drug regulatory laws were identified. Fourteen DPMs, such as Biao Re Qing Granule, Guan Tong Shu Oral liquid, and Hui Xue Sheng Capsule, have already been approved as over-the-counter (OTC) drugs and account for 50% of the DPMs. The YPMs included 24 prescriptions, such as Bai Bei Yi Fei Capsule, Chang Shu Zhi Xie Capsule, and Dan E Fu Kang Ointment, which have been approved as OTC drugs and account for 32.8% of the total YPMs. The information about these patent medicines is recorded in Tables S1 and S2. As shown in Figure 1, these DPMs and YPMs are used to treat respiratory, cardiovascular, mental, and neurological diseases, among others. One example is Dan Deng Tong Nao Capsule (DDTN), of which a component herb, Erigeron breviscapus (Vaniot) Hand.-Mazz. (Dengzhanxixin), has been recorded in the pattra-leaf scriptures of Dai traditional medicine for 2500 years. DDTN combined with rehabilitation training can improve the recovery of neurological function and the quality of life of stroke patients with cerebral infarction [9]. DDTN has also been found to prevent cerebral injury in rats with middle cerebral artery-induced ischemic stroke by decreasing the intracellular Ca2+ concentration and inhibiting the release of excitatory amino acids [10].

Figure 1 Clinical indications of DPMs and YPMs. DPMs, Dai patent medicines; YPMs, Yi patent medicines.

## 5. Application of Quality Standards for Yi Medicine and Dai Medicine

In China, the quality standards of ethnic medicines and their patent medicines are based on the national standards included in the Chinese Pharmacopoeia, which has covered ethnomedicines since 1977. Previous research on EPMs in the Chinese Pharmacopoeia (2015 edition) found that some traditional medicines have not established national quality standards, and that 71 traditional medicines, used in 39 EPMs, are not listed in the Chinese Pharmacopoeia [11]. A toy version of this kind of standards bookkeeping is sketched below.
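To make the tabulation behind these counts concrete, the following minimal Python sketch works over a handful of hypothetical formulation records: the CPM names are invented, while the material-to-standard mapping follows examples given in this survey. It tallies how often each material is used, classifies materials by the standard covering them, and flags formulations containing a material with no applicable standard.

```python
# Minimal sketch (hypothetical data): cross-check each patent medicine's
# component materials against the quality standards that cover them.
# The formulation names are invented for illustration; the standard
# assignments (Gancao/Sanqi -> ChP, Huobahuagen -> SYNP, Dabaijie -> none)
# follow examples mentioned in this survey.
from collections import Counter

# material -> standard covering it ("ChP", "SYNP", or None if no standard)
standards = {
    "Gancao": "ChP",
    "Sanqi": "ChP",
    "Huobahuagen": "SYNP",
    "Dabaijie": None,  # no applicable quality standard
}

# patent medicine -> component materials (hypothetical formulations)
formulations = {
    "DPM-example-1": ["Gancao", "Dabaijie"],
    "YPM-example-1": ["Sanqi", "Huobahuagen"],
    "YPM-example-2": ["Gancao", "Sanqi"],
}

# Usage frequency of each material across formulations (cf. Figure 3).
usage = Counter(m for parts in formulations.values() for m in parts)

# Count materials by the standard that covers them (cf. Figure 2).
coverage = Counter(standards.get(m) or "None" for m in usage)

# Flag "upside-down" cases: a marketed CPM containing a material
# with no applicable quality standard.
upside_down = [cpm for cpm, parts in formulations.items()
               if any(standards.get(m) is None for m in parts)]

print(usage.most_common())  # [('Gancao', 2), ('Sanqi', 2), ...]
print(coverage)             # counts by ChP / SYNP / None
print(upside_down)          # ['DPM-example-1']
```

Run over the actual survey records rather than these toy entries, the same bookkeeping would yield the coverage counts and usage frequencies reported in this section.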
This practice, called "upside-down standards," in which a quality standard exists for a Chinese patent medicine (CPM) itself but not for its component medicinal materials, affects the safety of CPMs and the healthy development of the Chinese pharmaceutical industry.

Provincial standards relating to the Tibet, Xinjiang Uygur, Inner Mongolia, and Guangxi Zhuang autonomous regions and to Qinghai, Sichuan, Yunnan, and Guizhou provinces also apply to the regulation of the quality of ethnic medicines in China [12]. Academic-group and enterprise standards are also applicable to the quality of ethnomedicine. The 28 DPMs identified in the survey include 101 traditional medicines with quality standards listed in the Chinese Pharmacopoeia (2015 edition). The quality standards of 30 traditional medicines are listed in the Standards for Chinese medicinal materials in Yunnan Province (SYNP) or other provincial quality standards (Figure 2). Four herbal medicines, including Aristolochia chuii Wu (Dabaijie), Michelia mahan C. Y. Wu (Mahan), Asparagus officinalis L. (Xiaobaibu), and the leaf and stem of Vitex trifolia L., have no applicable quality standards, or their standards are out of date. The 73 YPMs identified in the survey include 182 traditional medicines with quality standards in the Chinese Pharmacopoeia (2015 edition). The quality standards of 88 traditional medicines are included in the SYNP or other provincial pharmacopoeias. Eleven herbal medicines have no applicable quality standards, or their standards are out of date, including the root of Rosa odorata (Andr.) Sweet var. gigantea (Crép.) Rhed. et Wils. (Gugongguo), Fibraurea recisa Pierre (Dahuangteng), Dolichos falcata Klein (Damayao), Cyanotis arachnoidea G. B. Clarke (Lushuicao), Bulbophyllum reptans (Lindl.) Lindl. (Xiaolvji), Adenophora bulleyana Diels (Shashen), Cynoglossum officinale L. (Daotihu), Crepis lignea (Vant.) Babc. (Wanzhangsheng), Cymbopogon distans (Nees) Wats. (Yunxiangcao), Ziheche, and the extract of Hemsleya chinensis Cogn. ex Forbes et Hemsl (Xuedan extract). More information is provided in Tables S3 and S4.

Figure 2 Quality standards of herbal medicines in DPMs and YPMs. ChP, Chinese Pharmacopoeia; SYNP, Standards for Chinese medicinal materials in Yunnan Province; SPOP, Standards for Chinese medicinal materials in other provinces except Yunnan; NQS, no quality standard.

The usage frequencies of traditional medicines in DPMs and YPMs are shown in Figure 3. Glycyrrhiza uralensis Fisch. (Gancao), Panax notoginseng (Burk.) F. H. Chen (Sanqi), Angelica sinensis (Oliv.) Diels (Danggui), and Astragalus membranaceus (Fisch.) Bge. var. mongholicus (Bge.) Hsiao and Astragalus membranaceus (Fisch.) Bge. (Huangqi) are the most frequently used genuine medicinal materials in Yunnan Province. The other medicines included in the SYNP as Dai medicines or Yi medicines are listed in Table 2.

Figure 3 Usage frequencies of herbal medicines in DPMs and YPMs. DPM, Dai patent medicine; YPM, Yi patent medicine; Gancao, Glycyrrhiza glabra L. or Glycyrrhiza uralensis Fisch.; Sanqi, Panax notoginseng (Burk.) F. H. Chen; Danggui, Angelica sinensis (Oliv.) Diels; Huangqi, Astragalus propinquus Schischkin; Jishiteng, Paederia scandens (Lour.) Merr.; Dengzhanxixin, Erigeron breviscapus (Vaniot) Hand.-Mazz.; Sharen, Amomum villosum Lour.; Huzhang, Polygonum cuspidatum Sieb. et Zucc.; Jinqiaomai, Fagopyrum dibotrys (D. Don) Hara; Chonglou, Paris polyphylla Smith var. chinensis (Franch.) Hara; Gonglaomu, Mahonia bealei (Fort.)
Carr.; Dahongpao, Campylotropis hirtella (Franchet) Schindler; Gegen, Pueraria lobata (Willd.) Ohwi; Zidanshen, Salvia yunnanensis C. H. Wright; Chuanxiong, Ligusticum chuanxiong Hort.; Chaihu, Bupleurum scorzonerifolium Willd.; Yanhusuo, Corydalis yanhusuo W. T. Wang; Zhizi, Gardenia jasminoides Ellis; Baiji, Bletilla striata (Thunb.) Reichb. f.; Chenpi, Citrus reticulata Blanco; Honghua, Carthamus tinctorius L.

Table 2 Traditional medicines used in DPMs and YPMs and listed in the SYNP.

| No | Scientific name | Pinyin name | MP | EM | Frequency |
|---|---|---|---|---|---|
| 1 | Plumbago zeylanica Linn. | Baihuadan | Stem and leaf | Yi | 1 |
| 2 | Toddalia asiatica (L.) Lam. | Feilongzhangxue | Stem | Yi | 1 |
| 3 | Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | Root | Yi | 3 |
| 4 | Inula cappa (Buch.-Ham.) DC. | Yang'erju | Whole plant | Yi | 2 |
| 5 | Geum aleppicum Thunb. var. chinense Bolle | Wuqihuanyangcao | Whole plant | Yi | 3 |
| 6 | Rhodobryum giganteum (Hook.) Par. | Huixincao | Whole plant | Yi | 2 |
| 7 | Polygonum paleaceum Wall. ex Hook. | Caoxuejie | Rhizome | Yi | 2 |
| 8 | Polygala arillata Buch. Ham. ex D. Don | Jigen | Roots and rhizome | Yi | 1 |
| 9 | Salvia yunnanensis C. H. Wright | Zi Danshen | Root | Yi | 6 |
| 10 | Ampelopsis delavayana (Franch.) Planch. | Yuputao gen | Root | Yi | 3 |
| 11 | Swertia patens Burk. | Xiao'er futong cao | Whole plant | Yi | 3 |
| 12 | Polygonum cuspidatum Sieb. et Zucc. | Huzhangye | Leaf | Yi | 1 |
| 13 | Cynodon dactylon (L.) Pers. | Qianxiancao | Whole plant | Yi | 1 |
| 14 | Potentilla fulgens Wall. ex Hook | Guanzhong | Root | Yi | 2 |
| 15 | Ainsliaea pertyoides Franch. var. albo-tomentosa Beauv. | Yexiahua | Whole plant | Yi | 1 |
| 16 | Valeriana jatamansi Jones | Matixiang | Roots and rhizome | Yi | 1 |
| 17 | Speranskia tuberculata (Bunge) Baillon | Tougucao | Aerial part | Yi | 3 |
| 18 | Arthromeris mairei (Brause) Ching | Diwugong | Rhizome | Yi | 1 |
| 19 | Schefflera venulosa (Wight et Arn.) Harms | Qiyelian | Whole plant, stem, and leaf | Yi | 3 |
| 20 | Boenninghausenia sessilicarpa Levl. | Shijiaocao | Whole plant | Yi | 2 |
| 21 | Oxalis corniculata Linn. | Zajiacao | Whole plant | Yi | 1 |
| 22 | Anemone rivularis Buch. Ham. ex DC. | Huzhangcao | Root | Yi | 2 |
| 23 | Opuntia stricta (Haw.) Haw. var. dillenii (Ker Gawl.) Benson | Xianrencao | Stem | Yi | 1 |
| 24 | Dysosma versipellis (Hance) M. Cheng ex Ying | Bajiaolian | Rhizome | Yi | 4 |
| 25 | Jatropha curcas L. | Gaotong | Root bark, stem bark | Yi | 1 |
| 26 | Ficus tikoua Bur. | Dibanteng | Cane | Yi | 1 |
| 27 | Kadsura longipedunculata Finet et Gagnep. | Wuxiangxueteng | Cane | Yi | 3 |
| 28 | Leycesteria formosa Wall. var. stenosepala Rehd. | Dazuifeng | Aerial part | Yi | 1 |
| 29 | Anaphalis bulleyana (J. F. Jeffr.) Chang | Wuxiangcao | Whole plant | Yi | 1 |
| 30 | Craibiodendron yunnanense W. W. Smith | Jinyezi | Leaf | Yi | 1 |
| 31 | Phyllanthus urinaria L. | Yexiazhu | Aerial part | Dai | 2 |
| 32 | Brassica integrifolia (West) O. E. Schulz ex Urb. | Kucaizi | Seed | Dai | 1 |
| 33 | Zingiber purpureum Rosc. | Zisejiang | Rhizome | Dai | 1 |
| 34 | Polyrhachis dives Smith | Weimayi | Body | Dai | 4 |
| 35 | Phyllanthus niruri L. | Zhuzicao | Whole plant | Dai | 1 |
| 36 | Tacca chantrieri Andre | Jiangenshu | Stem tuber | Dai | 2 |
| 37 | Stephania epigaea H. S. Lo | Diburong | Root tuber | Dai | 1 |
| 38 | Streptocaulon juventas (Lour.) Merr. | Tengkushen | Root | Dai | 1 |
| 39 | Inula cappa (Buch.-Ham.) DC. | Yangerjugen | Root | Dai | 1 |
| 40 | Benincasa hispida (Thunb.) Cogn. | Kudonggua | Fruit | Dai | 1 |

DPMs, Dai patent medicines; YPMs, Yi patent medicines; SYNP, Standards for Chinese medicinal materials in Yunnan Province; MP, medicinal parts; EM, ethnic medicine.

## 6. Application of Medicinal Resources in Yi and Dai Medicine

### 6.1. Botanical, Animal, and Mineral Sources of Medicines Used in DPMs and YPMs

Differences in geographical and climatic conditions are reflected in the distinct lifestyles, customs, cultures, and usage of medicinal resources of the residents of the various regions of China. In general, botanicals are the most widely used traditional medicines. DPMs and YPMs include 361 botanical medicines, 22 animal sources of medicines, and 9 mineral medicines, as shown in Figure 4. YPMs contain more animal sources of medicines than DPMs.
Yi people have a long history of hunting, and many ancient documents attest to their use of animal sources of medicines. The Yi Nationality Offering Medicine Scriptures (Yi Zu Xian Yao Jing), written in the early Qing Dynasty, notes that up to 92.8% of its prescriptions used animal sources of medicines, divided into 12 types including insects, meat, bones, gallbladders, fat, blood, fish gall bladders, and hair. The Book of Good Medicines for Treating Diseases (Yi Bing Hao Yao Shu, AD 1737) describes 152 animal sources of medicines that accounted for 35.7% of Yi medicines [13].

Figure 4 The use of botanical, animal, and mineral resources in DPMs and YPMs. DPMs, Dai patent medicines; YPMs, Yi patent medicines.

Animals are therapeutic arsenals with a significant role in healing. Zootherapeutic remedies are derived from products of animal metabolism (e.g., corporal secretions and excrement) or from nonanimal materials such as nests and cocoons [14]. The reasonableness of zootherapy cannot be denied, but the evidence supporting its use should be strengthened by modern scientific research. The animal sources of medicines in DPMs and YPMs are listed in Table 3. The use of some animal products in DPMs and YPMs is controversial, e.g., Cordyceps sinensis, Moschus deer musk, and bear bile, because these medicinal materials originate from protected wild animals or from harvesting activities that harm the ecological environment. Accordingly, the production and use of such animal sources of medicines are restricted in China. China has joined and complied with many conventions on biological protection, such as CITES (Convention on International Trade in Endangered Species of Wild Fauna and Flora) and the Convention on Biological Diversity, and has implemented domestic animal protection regulations. However, the medicinal value of these and other animal sources of medicines cannot be ignored. Fortunately, substitutes are available through the cultivation of Cordyceps and the industrial production of artificial musk [15, 16]. Bile can be obtained from living, farmed bears but this is ethically controversial [17]. Artificial bear bile has been reported to have anticonvulsant, sedative, and choleretic effects [18].

Table 3 Medicinal sources from the animals used in DPMs and YPMs that are listed in the SYNP.

| Scientific name | Pinyin name | MP | Standard | DPM | YPM |
|---|---|---|---|---|---|
| Cervus nippon Temminck | Lurong | Antler | ChP | 1 | 0 |
| Cryptotympana pustulata Fabricius | Chantui | Slough | ChP | 2 | 0 |
| Gallus gallus domesticus Brisson | Jineijing | Gizzard | ChP | 1 | 1 |
| Polyrhachis dives Smith | Heimayi | Body | SYNP | 1 | 4 |
| Gekko gecko Linnaeus | Gejie | Body | ChP | 0 | 4 |
| Pheretima aspergillum (E. Perrier) | Dilong | Body | ChP | 0 | 4 |
| Bufo bufo gargarizans Cantor | Chansu | Secretion | ChP | 0 | 1 |
| Aspongopus chinensis Dallas | Jiuxiangchong | Body | ChP | 0 | 2 |
| Selenarctos thibetanus Cuvier | Xiongdanfen | Bile | SYNP | 0 | 1 |
| Bombyx mori Linnaeus | Jiangchan | Body | ChP | 0 | 1 |
| Periplaneta japonica Linnaeus | Feilie | Body | SYNP | 0 | 1 |
| Sepiella maindroni de Rochebrune | Haipiaoqiao | Shell | ChP | 0 | 1 |
| Moschus berezovskii Flerov | Shexiang | Secretion | ChP | 0 | 1 |
| Armadillidium vulgare Latreille | Shufuchong | Body | SSDP | 0 | 2 |
| Cervus nippon Temminck | Lujiaoshuang | Antler colloid | ChP | 1 | 1 |
| Cordyceps sinensis (BerK.) Sacc. | Dongchongxiacao | Bacterial and insect complex | ChP | 2 | 2 |

EPM, ethnic patent medicine; MP, medicinal parts; ChP, Chinese Pharmacopoeia; SSDP, Standards for Chinese medicinal materials in Shandong Province (2012).

### 6.2. Medicinal Parts of Botanical Medicines

Yunnan is rich in Chinese Materia medica resources and is known as the kingdom of plants.
The plant parts used in herbal medicines include seeds, berries, roots, leaves, fruits, barks, and flowers, as well as the whole plant itself. From ancient times to the present, people have used crude botanical materials as medicines to maintain vitality and cure disease [19]. The medicinal parts of the botanical medicines in DPMs and YPMs are shown in Figure 5. The plant parts included in DPMs and YPMs are similar, with roots and rhizomes, the whole plant, and fruits and seeds being the most frequent. The various parts of medicinal plants contain the active components that are responsible for their effectiveness [20] and the physical properties that determine their names [21]. For example, Huangqin (Scutellaria baicalensis Georgi) is called "Rijishi" in the Yi language, in which "Ri" means herbaceous plant, "Ji" means root (the medicinal part of the plant), and "Shi" indicates that the color is yellow [22]. The continuing usage of herbal medicines prepared from wild roots and rhizomes, fruits and seeds, and whole plants is not sustainable. The best strategy for balancing industrialization and resource protection is replacing wild with cultivated resources [23].

Figure 5 Relative contribution of the parts of medicinal plants to DPMs and YPMs.

## 7. Rare and Endangered Medicinal Materials

The rapidly increasing demand for CPMs is likely to challenge the sustainability of herbal resources in China. At present, 80% of the most frequently used species cannot meet medical demand, and 1,800–2,100 medicinal species are facing extinction [24].
In the China Plant Red Data Book, published in 1992, 388 plant species were listed as threatened, with 121 species endangered and needing first-grade national protection, 110 species rare and needing second-grade national protection, and 157 species vulnerable and needing third-grade national protection. Of these plant species, 77 are herbal medicines, accounting for 19.86% of the threatened species [25]. The national key protection name list of wild animals in China includes 257 animal sources. The shortage of medicinal plants available to pharmaceutical companies can be partially reduced by the cultivation of at least 200 herbs, while some special herbs used in ethnomedicine are still obtained by continuous wild collection without planned scientific cultivation. The rare medicinal materials used in DPMs and YPMs are listed in Table 4.

Table 4 Rare medicinal materials used in DPMs and YPMs.

| Herbal name | Scientific name | NPWP | IUCN | Proprietary | NPWM | UF |
|---|---|---|---|---|---|---|
| Gancao | Glycyrrhiza uralensis Fisch | II | LC | — | II | 26 |
| | Glycyrrhiza inflata Bat. | II | LC | — | II | |
| | Glycyrrhiza glabra L. | II | LC | — | II | |
| Renshen | Panax ginseng C. A. Mey | I | CR | — | II | 4 |
| Lianqiao | Forsythia suspensa (Thunb.) Vahl | — | — | — | III | 4 |
| Huangqin | Scutellaria baicalensis Georgi | — | — | — | III | 4 |
| Wuweizi | Schisandra chinensis (Turcz.) Baill. | II | LC | — | III | 1 |
| Lurong | Cervus nippon Temminck | — | — | — | I | 1 |
| | Cervus elaphus Linnaeus | — | — | — | I | |
| Rouchongrong | Cistanche deserticola Y. C. Ma | II | EN | — | III | 1 |
| | Cistanche tubulosa (Schenk) Wight | II | — | — | — | |
| Huangbai | Phellodendron chinense Schneid | — | — | — | II | 2 |
| Duzhong | Eucommia ulmoides Oliv. | — | — | — | II | 1 |
| Longdan | Gentiana manshurica Kitag. | — | — | — | III | 3 |
| | Gentiana scabra Bge | — | — | — | III | |
| | Gentiana triflora Pall. | — | — | — | III | |
| | Gentiana rigescens Franch. | — | — | — | III | |
| Huanglian | Coptis chinensis Franch | — | — | Unique to China | II | 1 |
| | Coptis deltoidea C. Y. Cheng et Hsiao | — | VU | Unique to China | II | |
| | Coptis teetoides C. Y. Cheng | — | — | — | II | |
| Houpu | Magnolia officinalis Rehd. et Wils | II | NT | Unique to China | II | 1 |
| | Magnolia officinalis Rehd. et Wils. var. biloba Rehd. et Wils | II | — | Unique to China | II | |
| Zicao | Arnebia euchroma (Royle) Johnst | — | — | — | III | 1 |
| Qinjiao | Gentiana macrophylla Pall. | — | — | — | III | 1 |
| | Gentiana straminea Maxim. | — | — | — | III | |
| | Gentiana crassicaulis Duthie ex Burk. | — | — | — | III | |
| | Gentiana dahurica Fisch | — | — | — | III | |
| Shexiang | Moschus berezovskii Flerov | — | — | — | II | 1 |
| | Moschus sifanicus Przewalski | — | — | — | II | |
| | Moschus moschiferus Linnaeus | — | — | — | II | |
| Chonglou | Paris polyphylla Smith var. chinensis (Franch.) Hara | II | — | — | — | 7 |

NPWP, National Key Protected Wild Plants of China (August 4, 1999); NPWM, National Key Protected Species of Wild Medicinal Materials of China (December 1, 1987); IUCN, International Union for Conservation of Nature (CR, critically endangered; LC, least concern; EN, endangered; VU, vulnerable; NT, near threatened); UF, usage frequency in DPMs and YPMs.

These plants are protected by the Chinese government and some international nongovernmental organizations such as the International Union for Conservation of Nature. Cistanche deserticola Y. C. Ma (Rouchongrong), Panax ginseng C. A. Mey (Renshen), Glycyrrhiza inflata Bat. (Gancao), and the other rare medicinal materials listed in the catalogs are protected and utilized sustainably in China. However, the number of endangered ethnic-specific medicines is far larger than that recorded in the catalogs; for example, more than 3000 tons of Rodgersia sambucifolia Hemsl. (Yantuo) are collected annually to produce YPMs. The availability of wild Rodgersia plants has sharply declined, and resources are severely damaged in Luquan, Yongsheng, Yulong, Heqing, and Ninglang counties of Yunnan Province [26], while research on its cultivation began only in recent years. Figure 6 shows the planting situation of Rodgersia sambucifolia Hemsl.
in the Meizi test ground, which is subordinate to the Institute of Alpine Economics and Botany, Yunnan Academy of Agricultural Sciences. Consequently, 30 traditional medicines have been listed in the Rare Traditional Chinese Herbs of Yunnan Province in Urgent Needs (RTCHYN), and those used in DPMs and YPMs are shown in Table 5 [27].

Figure 6 Rodgersia sambucifolia Hemsl. cultivated in the Meizi test ground (Lijiang, Yunnan) (a, b).

Table 5 Traditional medicines listed in the RTCHYN.

| Scientific name | Pinyin name | Medicinal parts | Regional distribution* | Standard | UF |
|---|---|---|---|---|---|
| Erigeron breviscapus (Vaniot) Hand.-Mazz. | Dengzhanxixin | Whole plant | Areas except southwest Yunnan | ChP | 7 |
| Cordyceps sinensis (BerK.) Sacc. | Dongchongxiacao | Bacterial and insect complex | Deqin, Shangri-La, Lijiang, Binchuan, Lvfeng, Guangtong | ChP | 4 |
| Dracaena cochinchinensis (Lour.) S. C. Chen | Longxuejie | Resin | Jinping, Menglian, Pu'er, Jinghong, Zhenkang | SGZP | 2 |
| Cyanotis arachnoidea C. B. Clarke | Lushuicao | Whole plant | Menghai, Menglian, Jinghong, Jingdong, Mengzi, Anning, Kunming, Pingbian | No | 1 |
| Swertia mileensis T. N. He et W. L. Shi | Qingyedan | Whole plant | Mile | ChP | 2 |
| Anisodus acutangulus C. Y. Wu et C. Chen | Sanfensan | Roots | Lijiang | SYNP | 1 |
| Hemsleya amabilis Diels | Xuedan | Roots | Kunming, Chongming, Binchuan, Eryuan, Dali, Heqing | No | 1 |
| Bergenia purpurascens (Hook. f. et Thoms.) Engl. var. delavayi (Franch.) Engl. et Irm. | Yanbaicai | Rhizome | Deqin, Weixi, Shangri-La, Lijiang, Dali, Qujing, Ludian, Zhaotong, Gongshan, Fugong | ChP | 1 |

RTCHYN, Rare Traditional Chinese Herbs of Yunnan Province in Urgent Needs. *Regional distribution is from the Flora of Yunnan (Science Press of China, 2006). UF, usage frequency; ChP, Chinese Pharmacopoeia; SGZP, Standards for Chinese medicinal materials in Guizhou Province (2009); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005).

Medicines with pharmacological activities that are present in traditional ethnomedicine are likely to be clinically useful, but they may also be toxic, especially if used incorrectly or in the wrong amounts. Unlike for modern drugs, the efficacy and toxicity assessments of these ethnomedicines are based on traditional knowledge and clinical experience rather than on laboratory evaluation [28]. Toxicity associated with the use of Chinese ethnomedicine may occur because of the environment, religious beliefs, or medical practices. The Chinese Pharmacopoeia includes 83 traditional medicines that are considered toxic [29]; some medicines listed in provincial standards of herbal medicines are also considered toxic [30].

Ten toxic medicines are used in 11 DPMs; six of these are included in the Chinese Pharmacopoeia (2015 edition) and four in the SYNP. One toxic medicine is used in Dai traditional medicine, and two toxic medicines are used in Yi traditional medicine. The 40 YPMs include 24 toxic herbs; 12 of these toxic medicines are in the Chinese Pharmacopoeia (2015 edition) and 12 are in the SYNP. Four are known as Yi medicines. Although some toxic ethnomedicines are used in DPMs and YPMs, the proprietary medicines are considered safe and approved for use in China because pharmaceutical processing, combining, and decocting contribute to reducing toxicity and enhancing efficacy. In traditional Dai medicine (TDM), "YaGei" herbs are used to reduce toxicity. The "YaGei" detoxification theory is a unique component of TDM [31], and "YaGei" medicines are used as antidotes to relieve adverse reactions caused by food poisoning, drug poisoning, and other substances [32].
Dai people consume antidotes regularly to eliminate microtoxins from the body, reduce the chance of illness, and prolong life.

Because little pharmaceutical information has been disclosed and basic research is lacking, the safety information on these DPMs and YPMs, including their toxic medicines, is insufficient. The modern toxicological evidence on these toxic medicines is collected and summarized in Tables 6 and 7, focusing on Dai and Yi toxic medicines. The root of Tripterygium hypoglaucum (Levl.) Hutch (Huobahuagen) soaked in wine as an oral medicine is an ancient Yi medicine described in the Ailao Materia Medica for the treatment of arthritis, joint swelling, pain, bruises, and sprains [59]. Boenninghausenia sessilicarpa Levl. (Shijiaocao) is described in the Materia Medica in South Yunnan (Dian Nan Ben Cao, AD 1396–1476), written by Lan Mao, as a bitter, pungent, and warm medicine used to treat chest pain, heartache, stomachache, and abdominal distension. Shijiaocao is also described in the Ailao Materia Medica as a treatment for sore throat, gastric pain, and dysentery. It is described as a cure for acute gastroenteritis in combination with the parasite of Zanthoxylum bungeanum in the Wa Die Yi Medical Book, which was written at the end of the Qing Dynasty [60]. Ancient documents describe the usage of the toxic herbs included in the medical practice of the Yi, Dai, and other ethnic minorities in Yunnan. Modern toxicological study can ensure their safe and effective use, and further study is warranted to determine how toxicity can be reduced while the prescription remains effective.

Table 6 Toxic medicines in DPMs.

| Scientific name | Pinyin name | Toxicity degree | Standard | DPM | Modern toxicology | References |
|---|---|---|---|---|---|---|
| Paris polyphylla Smith var. chinensis (Franch) Hara | Chonglou | LT | ChP | RBQC | Toxic to the digestive system, with cardiotoxicity and neurotoxicity; LD50 = 2.68 g/kg (mice, p.o.) | [33] |
| Curculigo orchioides Gaertn. | Xianmao | MT | ChP | LXBST | LD50 = 215.9 g/kg (ethanol extract, rats, p.o.); damages the liver, kidney, and reproductive organs with oral administration of 120 g/kg (ethanol extract, rats, 6 months) | [34] |
| Cnidium monnieri (L.) Cuss. | Shechuangzi | LT | ChP | LXBST | Nausea and vomiting, decreased spontaneous activity, shortness of breath, unstable gait, and tremor (ethanol extract); LD50 = 17.45 g/kg (mice, p.o.), MTD = 1.50 g/kg; LD50 = 3.45 g/kg (osthol, mice, p.o.) | [35–37] |
| Zanthoxylum nitidum (Roxb.) DC. | Liangmianzhen | LT | ChP | 7-JDHXO | Nitidine chloride damages liver and kidney cells and decreases the heart rate of zebrafish | [38] |
| Pinellia ternata (Thunb.) Breit. | Banxia | MT | ChP | SBZKG | LD50 = 42.7 ± 1.27 g/kg (mice, p.o.); damages the kidney and liver, causes serious damage to the gastric mucosa, and shows significant toxicity to pregnant mice and embryos (total alkaloids) | [39] |
| Prunus armeniaca L. var. ansu Maxim | Kuxinren | LT | ChP | SBZKG | LD50 of amygdalin is 25 g/kg (mice, i.v.) and 887 mg/kg (mice, p.o.); hydrocyanic acid produced from amygdalin inhibits the activity of cytochrome oxidase, leading to inhibition of cell respiration and cell death | [40] |
| Plumbago zeylanica Linn. | Baihuadan | LT | SYNP | DLBSC | Skin redness, swelling, and peeling on contact; antiovulation activity in female rats (alcohol extract) | [41, 42] |
| Tacca chantrieri Andre | Jiangenshu | MT | SYNP | YGT | Diarrhea and vomiting in mild intoxication; intestinal mucosal exfoliation and hemorrhage in severely poisoned patients | [43] |
| Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | LT | SYNP | GTSL | LD50 = 79 g/kg (male mice, p.o.) and 100 g/kg (female mice, p.o.); reversible antifertility effect | [44, 45] |
| Erythrina variegata L. var. orientalis (L.) Merr | Haitongpi | MT | SSCP | GTSL | Unknown | — |

HT, high toxicity; MT, medium toxicity; LT, low toxicity; SSCP, Standards for Chinese medicinal materials in Sichuan Province (2010); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005); RBQC, Ru Bi Qing Capsule; LXBST, Lu Xian Bu Shen Tablet; 7-JDHXO, 7-Jie Du Huo Xue Ointment; SBZKG, Shen Bei Zhi Ke Granular; DLBSC, Dan Lv Bu Shen Capsule; YGT, YaGei Tablet; GTSL, Guan Tong Shu Oral liquid.

Table 7 Toxic medicines in YPMs.

| Scientific name | Pinyin name | Toxicity degree | Standard | YPM | Modern toxicology | References |
|---|---|---|---|---|---|---|
| Paris polyphylla Smith var. chinensis (Franch) Hara | Chonglou | LT | ChP | GFNC, NQSG, SYA, TSC, ZTL | — | — |
| Osmunda japonica Thunb. | Ziqiguanzhong | LT | ChP | SWYA | Unknown | — |
| Evodia rutaecarpa (Juss.) Benth. | Wuzhuyu | LT | ChP | GDQC, HWYP | LD50 = 2.70 mL/kg (volatile oil, mice, p.o.); one of the main target organs is the liver | [46] |
| Bufo bufo gargarizans Cantor | Chansu | MT | ChP | CLTC | Ventricular arrhythmias; increases the levels of Ca2+, CK, and LDH in the heart | [47] |
| Artemisia argyi Levl. et Vant. | Aiye | LT | ChP | KSG | LD50 = 80.2 g/kg (aqueous extract, mice, p.o.); LD50 = 1.67 mL/kg (volatile oil, mice, p.o.); MTD = 75.6 g/kg (ethanol extract, mice, p.o.) | [48] |
| Aconitum kusnezoffii Reichb. | Caowu | HT | ChP | TXT | Causes serious cardiac dysfunction and damages the nervous system; LD50 = 1.8 mg/kg (aconitine, mice, p.o.), 5.8 mg/kg (hypaconitine, mice, p.o.), and 1.9 mg/kg (mesaconitine, mice, p.o.) | [49, 50] |
| Papaver somniferum L. | Yingsuqiao | MT | ChP | KLT | Main toxic components are morphine and codeine; 60 mg of morphine causes poisoning and 250 mg leads to death | [51] |
| Arisaema erubescens (Wall.) Schott | Tiannanxing | MT | ChP | TXT | Produces folate deficiency and injury to the kidneys | [52] |
| Laggera pterodonta (DC.) Benth. | Choulingdan | MT | ChP | LL, SKCG | LD50 = 1.19 g/kg (water extract, mice, i.p.) | [53] |
| Prunus armeniaca L. var. ansu Maxim | Kuxinren | LT | ChP | SKCG, CLTC | — | — |
| Pinellia ternata (Thunb.) Breit | Banxia | MT | ChP | WFSC, ZXASG | — | — |
| Psammosilene tunicoides W. C. Wu et C. Y. Wu | Jintiesuo | LT | ChP | ZTL | LD50 is 4.8471 (mice, p.o.); toxic target organs include the lungs, spleen, and stomach | [54] |
| Boenninghausenia sessilicarpa Levl. | Shijiaocao | LT | SYNP | SAC, SKCG | The ether extract reduces activity in mice by intraperitoneal injection | [33] |
| Dysosma versipellis (Hance) M. Cheng ex Ying | Bajiaolian | LT | SYNP | ZTL, HJXJC, SLAC, WJHXZTT | LD50 = 0.493 ± 0.032 g/kg (mice, p.o.); toxic to the heart and central nervous system, with excitation followed by inhibition | [55] |
| Millettia bonatiana Pamp. | Dafahan | MT | SYNP | HSTT | Damages the stomach | [33] |
| Craibiodendron yunnanense W. W. Smith | Jinyezi | HT | SYNP | ZTL | Unknown | — |
| Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | MT | SYNP | ZTL, GFNC | — | — |
| Anemone rivularis Buch. Ham. ex DC. | Wuzhangcao | LT | SYNP | TYGT, YSL | Unknown | — |
| Delphinium yunnanense Franch. | Daotihu | MT | SGZP | WHXZTC | Unknown | — |
| Dioscorea bulbifera L. | Huangyaozi | LT | SGDP | FFLC, FFLG | LD50 = 25.49 g/kg (mice, i.p.); LD50 = 79.98, 250.3, or 544 g/kg (mice, p.o.); toxic target organs include the liver and kidney | [56, 57] |
| Clematis apiifolia var. argentilucida (H. Leveille & Vaniot) W. T. Wang | Shanmutong | LT | SHNP | NQSG | Unknown | — |
| Anisodus acutangulus C. Y. Wu et C. Chen | Sanfensan | HT | SYNP | TXT | Unknown | — |
| Datura stramonium L. | Mantuoluoye | MT | SYNP | YWNC | Shortness of breath and death after nerve stimulation | [33] |
| Aconitum brachypodum Diels | Xueshangyizhihao | HT | SHNP | ZTL | LD50 = 6766.928 and 5492.337 mg/kg (petroleum ether extract and n-butanol extract, mice, p.o.) | [58] |

HT, high toxicity; MT, medium toxicity; LT, low toxicity. *Where a herb has more than two origin species, only one is shown. SGZP, Standards for Chinese medicinal materials in Guizhou Province (2009); SGDP, Standards for Chinese medicinal materials in Guangdong Province (2011); SHNP, Standards for Chinese medicinal materials in Hunan Province (2009); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005). GFNC, Gu Feng Ning Capsule; NQSG, Niao Qing Shu Granular; SYA, Shang Yi Aerosol; TSC, Tong Shu Capsule; ZTL, Zhong Tong Liniment; SWYA, Shu Wei Yao Alcohol; GDQC, Gan Dan Qing Capsule; HWYP, Huoxiang Wan Ying Powder; CLTC, Chuan Luo Tong Capsule; KSG, Kang Shen Granular; TXT, Tian Xiang Tincture; KLT, Ke Tan Oral liquid; LL, Lingdancao Oral liquid; SKCG, Shijiaocao Ke Chuan Granular; WFSC, Wei Fu Shu Capsule; ZXASG, Zhi Xuan An Shen Granular; SAC, Shen An Capsule; HJXJC, Hong Jin Xiao Jie Capsule; SLAC, Shu Lie An Capsule; WJHXZTT, Wu Jin Huo Xue Zhi Tong Tablet; HSTT, Huzhang Shang Tong Tincture; TYGT, Tianhusui Yu Gan Tablet; YSL, Yan Shu Oral liquid; WHXZTC, Wujin Huo Xue Zhi Tong Capsule; FFLC, Fu Fang Luxiancao Capsule; FFLG, Fu Fang Luxiancao Granular; YWNC, Yun Wei Ning Capsule.

## 8. Concluding Remarks

Ethnomedicine is an important part of TCM that has a unique medical theoretical system and refers to a wide range of healthcare systems, structures, practices, beliefs, and therapeutic techniques that arise from indigenous cultural development. Thousands of years of ethnic amalgamation have produced diversity, integration, and differences among the traditional medicines of the different Chinese ethnic groups. Approximately 8,000 medicinal species are used by 40 ethnic minorities in China, accounting for over 70% of the Chinese Materia medica resources. Data from the National Medical Products Administration of China show that there are more than 600 types of EPMs [12]. The Chinese Pharmacopoeia began to cover DPMs with its 1977 edition, and some Miao patent medicines and YPMs were collected in the 2015 edition. Overall, a total of 39 CPMs were identified as EPMs, with 26 EPMs as prescription drugs and 13 EPMs as OTC drugs [11]. Prescriptions that are not approved by the government were not included in this review but are still in use in clinics in regions of China inhabited by ethnic groups.

This review focuses on the Dai and Yi traditional medicines of Yunnan Province because of their long histories and descriptions in the ancient medical literature. The earliest book of Yi traditional medicine that can be verified is the Yuanyang Yi Medicine Book, which was written in 957 AD and found in Yuanyang County of Yunnan Province in 1985 [14]. The earliest books of Dai traditional medicine that can be verified are Ge Ya San Ha Ya, written in 964–884 BC, and Dang Ha Ya Long, written in 1323 AD [61]. There are 1,666 Dai medicines [62] and nearly 1,400 Yi medicines [13], and 400 medicines are listed in the Yi Materia Medica [63]. There are 478 Yi medical formulas described in Chinese Yi Medicine Prescriptions Science [64], and 200 Dai medical formulas in Study on Dai Classical Prescriptions of China [65].
The numbers of folk formulas in Dai and Yi traditional medicine have not yet been recorded. As the example of Yunnan Baiyao mentioned above shows, a series of ethnomedicines in Yunnan have been successfully industrialized and modernized, promoting the modern vitality of ancient ethnomedicines and thus serving a wide population. The Tong Shu Capsule is a YPM produced by the Yunnan Baiyao Group that has recently been approved for phase II clinical research in the United States. Yunnan Province expects to produce TCMs, including ethnomedicines, with a value of 140 billion RMB in 2020, with average annual growth of more than 15%, accounting for 75% of its production [65].

Five key conclusions can be drawn from this investigation of Dai and Yi medicines.

First, except for the Yunnan Baiyao Group and the Dihon Pharmaceutical Company, most of the pharmaceutical manufacturers of EPMs in Yunnan Province are small enterprises, which limits their research and development capacity. A search of the China National Knowledge Infrastructure (CNKI, http://www.cnki.net) found 163 articles reporting investigations of the 28 DPMs reviewed here and 59 articles about Yunnan Baiyao Aerosol alone, although the latter is only one of the CPMs produced by the Yunnan Baiyao Group. In 2015, 100 million bottles of Yunnan Baiyao Aerosol were produced, with a value of more than 1.5 billion RMB. In the same year, the overall sales revenue of the Yunnan Baiyao Group was 20.74 billion RMB [7].

Second, the sales volumes of YPMs and DPMs cannot be obtained, and it is hard to determine the extent to which the traditional medicines used in YPMs and DPMs are collected in the wild. This is a challenge for the sustainable utilization of Chinese Materia medica resources.

Third, the use of toxic medicines in ethnomedicine is of concern. Herbal medicine containing aristolochic acid has been associated with nephropathy in Belgium [66], and the adverse events associated with the use of Xiao Chaihu Tang in Japan [67] have led to warnings about the safety of CPMs. Scientific evidence is needed to demonstrate the rationale and necessity of using toxic herbs in EPMs.

Fourth, the identification and usage of traditional medicines vary among ethnic minorities because of differences in their experiences of clinical practice. The survey of DPMs and YPMs showed differences in the number of animal sources of medicines used by the Yi and Dai people. Differences were also found in the ancient medical literature of the Yi and Dai minorities.

Fifth, Dai and Yi medical prescriptions were traditionally written in the Dai and Yi languages, but the current clinical indications of DPMs and YPMs are written in Chinese. Difficulties in translation have hampered evaluation of how these ethnic medicines are used. Obtaining accurate translations will be the next important task.

The sales volumes of DPMs and YPMs are not available because they are trade secrets. Because the descriptions of ethnic medical prescriptions in the ancient literature were written in the Yi and Dai languages, they are hard to comprehend. However, the medical practices and culture of ethnic minorities have existed in Yunnan for thousands of years and have resulted in written records of more than 1300 ethnic medicinal materials and nearly 30,000 folk prescriptions. This medical information has been passed on orally or via ancient documents written in various ethnic minority languages, such as the San Ma Tou Yi Medical Book and the Lao Wu Dou Yi Medical Book written in the late Qing Dynasty of China.
The ongoing scientific investigation and sustainable utilization of medicinal resources will help to increase the impact of the ethnomedicines of Yunnan Province on the improvement of human health.

---
*Source: 1023297-2020-08-14.xml*
1023297-2020-08-14_1023297-2020-08-14.md
52,298
Policies and Problems of Modernizing Ethnomedicine in China: A Focus on the Yi and Dai Traditional Medicines of Yunnan Province
Zhiyong Li; Caifeng Li; Xiaobo Zhang; Shihuan Tang; Hongjun Yang; Xiuming Cui; Luqi Huang
Evidence-Based Complementary and Alternative Medicine (2020)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1023297
1023297-2020-08-14.xml
--- ## Abstract Yunnan is a multiethnic province in southwest China, rich in Materia medica resources, and is popularly known as the kingdom of plants. Biomedicine and public health industry have been the industrial pillars of Yunnan since 2016, which is the important pharmaceutical industrial base for Dai and Yi medicine in China. This review of the Yunnan ethnic medicine industry describes some of the problems to be solved in the development of sustainable ethnomedicine in China. We investigated Chinese patent medicines (CPMs) declared as ethnomedicine on the drug instructions and identified 28 Dai patent medicines (DPMs) and 73 Yi patent medicines (YPMs) that were approved for clinical use in China. In further research, the clinical indications of these CPMs were determined, and the quality standard of medicinal materials and their usage frequencies in DPMs and YPMs were investigated. We also collected and analyzed the data on use of botanical and animal sources of medicines, the rare and endangered medicinal materials, and toxic medicines in DPMs and YPMs. The application of zootherapy in Yi traditional medicine was introduced from its abundant ancient documents and records; based on the “YaGei” theory in Dai traditional medicine, toxic medicines can be relatively safe in DPMs. However, for promoting the Yunnan traditional medicine industry, it is necessary to strengthen medical research to expand evidence-based clinical practice and balance ethnomedicine production and sustainable utilization of Materia medica resources, especially the animal sources of medicines, toxic medicines, and the protected wild resources reported in this survey. Only in this way can industrialization of ethnomedicine promote the improvement of human health. --- ## Body ## 1. Introduction Evidence of the first human use of plants as medicines was observed in the fossil record of the Middle Paleolithic period, which began approximately 60,000 years ago [1]. Traditional medical knowledge and practices developed in different civilizations by the trial-and-error use of local botanicals and other biomaterial resources that accumulated slowly over long periods of time [2]. The World Health Organization (WHO) estimates that herbal medicines currently serve the health needs of approximately 80% of the world’s population, especially millions of people living in the vast rural areas of developing countries [3]. The Chinese have one of the oldest and distinct medical systems in the world. Traditional Chinese Medicine (TCM) has a written history of nearly 3000 years and is widely practiced in China [4]. China is a multiracial country with 56 nationalities, 55 of which are officially recognized as ethnic minorities in 18 provinces of China. Each ethnic minority, e.g., the Tibetans, Mongols, Uygurs, Dai, Yi, and Miao, has its own traditional medicine, and each differs slightly in theory and practice from TCM. Ethnomedicine thus refers to the use of traditional medicine guided by the medical theory and practical experience of each ethnic minority [5]. Since 2017, the Law of the People’s Republic of China on Traditional Chinese Medicine has given ethnomedicine a relatively independent status although it is considered as part of TCM. The importance of ethnomedicine has been increasing in China since 1951 as discussed in detail below. 
Chinese Traditional Medicine Statistics published in 2015 by the National Administration of Traditional Chinese Medicine of China and the Investigation and Analysis of Quality Standards of Ethnomedicines in Nine Provinces of China published in 2015 by the National Medical Products Administration of China listed 161 pharmaceutical enterprises that produced 4317 ethnic patent medicines (EPMs) and nearly 4000 in-hospital preparations of ethnomedicines used in 253 ethnic hospitals. A total of 39 EPMs were included in Chinese Pharmacopoeia (2015 edition). The fourth national survey of Chinese Materia medica resources is underway with the objective of determining the status of the available resources and investigating the modern value of herbal medicine including ethnic and folk medicines [6]. However, the national application of ethnic medicine in China is a complex issue that involves public policy, ethnic culture, livelihood status, regional economies, the protection of wild resources, etc.Yunnan is a multiethnic province in southwest China. In addition to the Han nationality, there are 25 ethnic minorities with a population of more than 6,000, including the Yi, Hani, Bai, Dai, Zhuang, Miao, Hui, and Tibetan. The population of ethnic minorities is estimated at over 16 million, accounting for 33.4% of the provincial total population. The Dai and Yi traditional medicine are the representatives of ethnomedicine practiced in Yunnan. Tibetan medicine as practiced in Shangri-La will be described in a subsequent review. Yunnan Province is an important pharmaceutical industry center for Dai medicine and Yi medicine. For example, Yunnan Baiyao, a highly effective patent medicine has originated from the ancient Yi prescription [5]. Biomedicine and public health industry have been the major industries in Yunnan since 2016, and more than 2000 ethnic medicinal resources and more than 10,000 folk prescriptions are native to Yunnan [7]. This review focuses on the Yi and Dai traditional medicine in Yunnan and the potential problems to be encountered in the development of policies favorable to ethnomedicine development. ## 2. Historical Changes of Chinese Ethnomedicine Policies Since 1949, the Chinese government has successively introduced many policies to support and protect the development of ethnomedicine (Table1). The Ethnic Minorities Health Work Plan of China published in 1951 recommended that native doctors who used herbs to cure diseases should be united and supported to the greatest extent. The Decision on Health Reform and Development published in 1997 discussed how ethnomedicine should be mined, organized, summarized, and improved. The Decision on Further Strengthening Rural Health Work, published in 2002, promoted the development and organization of ethnomedicine resources and technologies in rural regions. For many years, ethnic folk doctors in rural areas provided care in very much the same way as barefoot doctors, a nickname for part-time paramedical workers in rural areas who were trained for simple diagnoses and treatments in the 1960s and 1970s. During that time, China’s health services could not cover all areas of the country, and the need of ethnic and folk medical practices was tacitly accepted. At present, the Tibetan, Mongolian, Uygur, Dai, Kazakh, Korean, Zhuang, and Hui medicine have set up physician certification systems for special skill or knowledge to support ethnic folk doctors obtaining legal medical qualification through certain procedures. 
The Outline of a Strategic Plan for Development of Traditional Chinese Medicine (2016–2030), published by the State Council of China, promotes the development of ethnomedicine.

Table 1 History of Chinese ethnomedicine policy.

| Time | Policy outline | Promulgator |
| --- | --- | --- |
| 1951 | Ethnic Minorities Health Work Plan of China | Unknown |
| 1984 | Several Opinions on Strengthening Ethnic Minorities Work | Ministry of Health and NEAC |
| 1997 | Decision on Health Reform and Development | CPC Central Committee and the State Council |
| 2002 | Decision on Further Strengthening Rural Health Work | CPC Central Committee and the State Council |
| 2003 | Regulations of the People's Republic of China on Traditional Chinese Medicine | The State Council |
| 2007 | Guiding Opinions on Strengthening the Development of Ethnomedicine | 11 ministries including NATCM |
| 2017 | The Law of the People's Republic of China on Traditional Chinese Medicine | NPC Standing Committee |
| 2018 | Some Opinions on Strengthening the Ethnomedicine Work in the New Era | 13 ministries including NATCM |

NEAC: National Ethnic Affairs Commission of PRC (China); CPC Central Committee: the Central Committee of the Communist Party; NATCM: National Administration of Traditional Chinese Medicine of PRC (China); NPC: National People's Congress.

The status of ethnomedicine in China experienced a cognitive change with the publication of the Regulations of the People's Republic of China on Traditional Chinese Medicine in 2003, which required that the administration of ethnomedicine comply with the regulations that apply to TCM and which established a close relationship between ethnomedicine and TCM. The Law of the People's Republic of China on Traditional Chinese Medicine currently defines ethnomedicine as one part of TCM, sharing a history and development with TCM that conform to the united national culture of China. Yunnan Province has published policies and plans for 2016–2030 to regulate the development of the rich medicine resources in the autonomous prefectures of Xishuangbanna Dai, Chuxiong Yi, and Diqing Tibetan [8].

## 3. Ethnic Hospitals and Pharmaceutical Enterprises in Yunnan

Yunnan Province has two Yi autonomous prefectures, Chuxiong Yi and Honghe Hani Yi, and two Dai autonomous prefectures, Xishuangbanna and Dehong Dai Jingpo. There are also 11 Yi autonomous counties and 7 Dai autonomous counties. The Yunnan Traditional Yi Medicine Hospital, the largest Yi medical hospital in China, is located in Chuxiong City, Chuxiong Yi Autonomous Prefecture. More than 10 counties have Yi medical hospitals or outpatient departments located in villages and towns of Yunnan. Each of the two Dai autonomous prefectures has a large traditional Dai medicine hospital: the hospital of Dai traditional medicine of Xishuangbanna Dai Autonomous Prefecture and the hospital of Dai traditional medicine of Dehong Dai Jingpo Autonomous Prefecture. There are at least 6 Dai hospitals in several counties.

A total of 42 corporations are licensed to produce Dai patent medicines (DPMs) and Yi patent medicines (YPMs) in China. Two corporations are located outside Yunnan Province. Twenty corporations are in Kunming City, nine in Chuxiong Yi Autonomous Prefecture, three in Yuxi City, and two in Dali Bai Autonomous Prefecture. Wenshan City, Zhaotong City, and Xishuangbanna Dai Autonomous Prefecture have one pharmaceutical manufacturer each.
The companies include the Yunnan Baiyao Group, which is best known for producing Baibaodan, the original name of Yunnan Baiyao, invented by Qu Huanzhang (AD 1880–1938). It also produces more than 300 other patent medicines in 19 dosage forms, including Shu Lie An Capsule, Qiancao Nao Tong Oral liquid, Gu Feng Ning Capsule, Shang Yi Aerosol, Tong Shu Capsule, and Zhong Tong Liniment. In 2014, Dihon Pharmaceutical Co. was purchased by Bayer, a large German pharmaceutical company, which marked a significant entry for Bayer into the TCM marketplace. Dihon produces Dan E Fu Kang Ointment, Gan Dan Qing Capsule, Yu Mai Kou Yan liquid, and Wei Fu Shu Capsule.

## 4. Clinical Indications of Yi and Dai Medicines

Based on the pharmaceutical instructions in which the properties of ethnic medicine were claimed, the clinical indications of DPMs and YPMs were surveyed and recorded, and 28 DPMs and 73 YPMs approved for clinical use in China under the drug regulatory laws were identified. Fourteen DPMs, such as Biao Re Qing Granule, Guan Tong Shu Oral liquid, and Hui Xue Sheng Capsule, have already been approved as over-the-counter (OTC) drugs, accounting for 50% of the DPMs. The YPMs included 24 prescriptions, such as Bai Bei Yi Fei Capsule, Chang Shu Zhi Xie Capsule, and Dan E Fu Kang Ointment, which have been approved as OTC drugs and account for 32.8% of the total YPMs. Information about these patent medicines is recorded in Tables S1 and S2. As shown in Figure 1, these DPMs and YPMs are used to treat respiratory, cardiovascular, mental, and neurological diseases, among others. One example is Dan Deng Tong Nao Capsule (DDTN); its component herb Erigeron breviscapus (Vaniot) Hand.-Mazz. (Dengzhanxixin) has been recorded in the pattra-leaf scriptures of Dai traditional medicine for 2500 years. DDTN combined with rehabilitation training can improve the recovery of neurological function and the quality of life of stroke patients with cerebral infarction [9]. DDTN has also been found to prevent cerebral injury in rats with middle cerebral artery occlusion-induced ischemic stroke by decreasing the intracellular Ca2+ concentration and inhibiting the release of excitatory amino acids [10].

Figure 1 Clinical indications of DPMs and YPMs. DPMs, Dai patent medicines; YPMs, Yi patent medicines.

## 5. Application of Quality Standards for Yi Medicine and Dai Medicine

In China, the quality standards of ethnic medicines and their patent medicines are based on the national standards included in the Chinese Pharmacopoeia, which has covered ethnomedicines since 1977. Previous research on EPMs in the Chinese Pharmacopoeia (2015 edition) found that some traditional medicines lack national quality standards: 71 traditional medicines used in the 39 EPMs are not listed in the Chinese Pharmacopoeia [11]. This practice (called "upside-down standards"), in which quality standards exist for Chinese patent medicines (CPMs) but not for the medicinal materials composing them, affects the safety of CPMs and the healthy development of the Chinese pharmaceutical industry.

Provincial standards issued by the Tibet, Xinjiang Uygur, Inner Mongolia, and Guangxi Zhuang Autonomous Regions and by Qinghai, Sichuan, Yunnan, and Guizhou provinces also apply to the regulation of the quality of ethnic medicines in China [12]. Academic-group and enterprise standards are also applicable to the quality of ethnomedicine.
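The standards just described form a precedence hierarchy: the national pharmacopoeia first, then provincial standards, then academic-group or enterprise standards. The following Python sketch illustrates one way to resolve the governing standard for a material under that hierarchy; the catalog contents and the function name are hypothetical and not drawn from any official data set.

```python
# Hypothetical catalogs; in practice these would be the ChP index,
# a provincial standard such as the SYNP, and enterprise registries.
NATIONAL = {"Gancao", "Sanqi", "Danggui"}
PROVINCIAL = {"Huobahuagen", "Baihuadan"}
ENTERPRISE = {"ExampleExtract"}

def governing_standard(material: str) -> str:
    """Return the highest-precedence standard covering a material."""
    if material in NATIONAL:
        return "national (ChP)"
    if material in PROVINCIAL:
        return "provincial (e.g., SYNP)"
    if material in ENTERPRISE:
        return "group/enterprise"
    return "no applicable quality standard"

print(governing_standard("Huobahuagen"))  # -> provincial (e.g., SYNP)
```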
The 28 DPMs identified in the survey include 101 traditional medicines with quality standards listed in the Chinese Pharmacopoeia (2015 edition). The quality standards of 30 traditional medicines are listed in the Standards for Chinese medicinal materials in Yunnan Province (SYNP) or other provincial quality standards (Figure 2). Four herbal medicines, including Aristolochia chuii Wu (Dabaijie), Michelia mahan C. Y. Wu (Mahan), Asparagus officinalis L. (Xiaobaibu), and the leaf and stem of Vitex trifolia L., have no applicable quality standards or have standards that are out of date. The 73 YPMs identified in the survey included 182 traditional medicines with quality standards in the Chinese Pharmacopoeia (2015 edition). The quality standards of 88 traditional medicines are included in the SYNP or other provincial pharmacopoeias. Eleven herbal medicines have no applicable quality standards or have standards that are out of date, including the root of Rosa odorata (Andr.) Sweet var. gigantea (Crép.) Rhed. et Wils. (Gugongguo), Fibraurea recisa Pierre (Dahuangteng), Dolichos falcata Klein (Damayao), Cyanotis arachnoidea G. B. Clarke (Lushuicao), Bulbophyllum reptans (Lindl.) Lindl. (Xiaolvji), Adenophora bulleyana Diels (Shashen), Cynoglossum officinale L. (Daotihu), Crepis lignea (Vant.) Babc. (Wanzhangsheng), Cymbopogon distans (Nees) Wats. (Yunxiangcao), Ziheche, and the extract of Hemsleya chinensis Cogn. ex Forbes et Hemsl. (Xuedan extract). More information is provided in Tables S3 and S4.

Figure 2 Quality standards of herbal medicines in DPMs and YPMs. ChP, Chinese Pharmacopoeia; SYNP, Standards for Chinese medicinal materials in Yunnan Province; SPOP, standards for Chinese medicinal materials in provinces other than Yunnan; NQS, no quality standard.

The usage frequencies of traditional medicines in DPMs and YPMs are shown in Figure 3. Glycyrrhiza uralensis Fisch (Gancao), Panax notoginseng (Burk.) F. H. Chen (Sanqi), Angelica sinensis (Oliv.) Diels (Danggui), and Astragalus membranaceus (Fisch.) Bge. var. mongholicus (Bge.) Hsiao and Astragalus membranaceus (Fisch.) Bge. (Huangqi) are the most frequently used genuine medicinal materials in Yunnan Province. The other medicines included in the SYNP as Dai medicines or Yi medicines are listed in Table 2.

Figure 3 Usage frequencies of herbal medicines in DPMs and YPMs. DPM, Dai patent medicine; YPM, Yi patent medicine; Gancao, Glycyrrhiza glabra L.; Xuedan extract, Hemsleya chinensis Cogn. ex Forbes et Hemsl.; Sanqi, Panax notoginseng (Burk.) F. H. Chen; Danggui, Angelica sinensis (Oliv.) Diels; Huangqi, Astragalus propinquus Schischkin; Jishiteng, Paederia scandens (Lour.) Merr.; Dengzhanxixin, Erigeron breviscapus (Vaniot) Hand.-Mazz.; Sharen, Amomum villosum Lour.; Huzhang, Polygonum cuspidatum Sieb. et Zucc.; Jinqiaomai, Fagopyrum dibotrys (D. Don) Hara; Chonglou, Paris polyphylla Smith var. chinensis (Franch) Hara; Gonglaomu, Mahonia bealei (Fort.) Carr.; Dahongpao, Campylotropis hirtella (Franchet) Schindler; Gegen, Pueraria lobata (Willd.) Ohwi; Zidanshen, Salvia yunnanensis C. H. Wright; Chuanxiong, Ligusticum chuanxiong Hort.; Chaihu, Bupleurum scorzonerifolium Willd; Yanhusuo, Corydalis yanhusuo W. T. Wang; Zhizi, Gardenia jasminoides Ellis; Baiji, Bletilla striata (Thunb.) Reichb. f.; Chenpi, Citrus reticulata Blanco; Honghua, Carthamus tinctorius L.

Table 2 Traditional medicines used in DPMs and YPMs and listed in the SYNP.

| No | Scientific name | Pinyin name | MP | EM | Frequency |
| --- | --- | --- | --- | --- | --- |
| 1 | Plumbago zeylanica Linn. | Baihuadan | Stem and leaf | Yi | 1 |
| 2 | Toddalia asiatica (L.) Lam. | Feilongzhangxue | Stem | Yi | 1 |
| 3 | Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | Root | Yi | 3 |
| 4 | Inula cappa (Buch.-Ham) DC. | Yang'erju | Whole plant | Yi | 2 |
| 5 | Geum aleppicum Thumb. var. Chinese Bolle | Wuqihuanyangcao | Whole plant | Yi | 3 |
| 6 | Rhodobryum giganteum (Hook.) Par. | Huixincao | Whole plant | Yi | 2 |
| 7 | Polygonum paleaceum Wall. ex Hook. | Caoxuejie | Rhizome | Yi | 2 |
| 8 | Polygala arillata Buch. Ham. ex D. Dom | Jigen | Roots and rhizome | Yi | 1 |
| 9 | Salvia yunnanensis C. H. Wright | Zidanshen | Root | Yi | 6 |
| 10 | Ampelopsis delavayana (Franch.) Planch. | Yuputaogen | Root | Yi | 3 |
| 11 | Swertia patens Burk. | Xiao'erfutongcao | Whole plant | Yi | 3 |
| 12 | Polygonum cuspidatum Sieb. et Zucc. | Huzhangye | Leaf | Yi | 1 |
| 13 | Cynodon dactylon (L.) Pers. | Qianxiancao | Whole plant | Yi | 1 |
| 14 | Potentilla fulgens Wall. ex Hook | Guanzhong | Root | Yi | 2 |
| 15 | Ainsliaea pertyoides Franch. var. albo-tomentosa Beauv. | Yexiahua | Whole plant | Yi | 1 |
| 16 | Valeriana jatamansi Jones | Matixiang | Roots and rhizome | Yi | 1 |
| 17 | Speranskia tuberculata (Bunge) Baillon | Tougucao | Aerial part | Yi | 3 |
| 18 | Arthromeris mairei (Brause) Ching | Diwugong | Rhizome | Yi | 1 |
| 19 | Schefflera venulosa (Wight et Arn.) Harms | Qiyelian | Whole plant, stem, and leaf | Yi | 3 |
| 20 | Boenninghausenia sessilicarpa Levl. | Shijiaocao | Whole plant | Yi | 2 |
| 21 | Oxalis corniculata Linn. | Zajiacao | Whole plant | Yi | 1 |
| 22 | Anemone rivularis Buch. Ham. ex DC. | Huzhangcao | Root | Yi | 2 |
| 23 | Opuntia stricta (Haw.) Haw. var. dillenii (Ker Gawl.) Benson. | Xianrencao | Stem | Yi | 1 |
| 24 | Dysosma versipellis (Hance) M. Cheng ex Ying | Bajiaolian | Rhizome | Yi | 4 |
| 25 | Jatropha curcas L. | Gaotong | Root bark, stem bark | Yi | 1 |
| 26 | Ficus tikoua Bur. | Dibanteng | Cane | Yi | 1 |
| 27 | Kadsura longipedunculata Finet et Gagnep. | Wuxiangxueteng | Cane | Yi | 3 |
| 28 | Leycesteria formosa Wall. var. stenosepala Rehd. | Dazuifeng | Aerial part | Yi | 1 |
| 29 | Anaphalis bulleyana (J. F. Jeffr.) Chang | Wuxiangcao | Whole plant | Yi | 1 |
| 30 | Craibiodendron yunnanense W. W. Smith | Jinyezi | Leaf | Yi | 1 |
| 31 | Phyllanthus urinaria L. | Yexiazhu | Aerial part | Dai | 2 |
| 32 | Brassica integrifolia (West) O. E. Schulz ex Urb. | Kucaizi | Seed | Dai | 1 |
| 33 | Zingiber purpureum Rosc. | Zisejiang | Rhizome | Dai | 1 |
| 34 | Polyrhachis dives Smith | Weimayi | Body | Dai | 4 |
| 35 | Phyllanthus niruri L. | Zhuzicao | Whole plant | Dai | 1 |
| 36 | Tacca chantrieri Andre | Jiangenshu | Stem tuber | Dai | 2 |
| 37 | Stephania epigaea H. S. Lo | Diburong | Root tuber | Dai | 1 |
| 38 | Streptocaulon juventas (Lour.) Merr. | Tengkushen | Root | Dai | 1 |
| 39 | Inula cappa (Buch.-Ham) DC | Yangerjugen | Root | Dai | 1 |
| 40 | Benincasa hispida (Thunb.) Cogn. | Kudonggua | Fruit | Dai | 1 |

DPMs, Dai patent medicines; YPMs, Yi patent medicines; SYNP, Standards for Chinese medicinal materials in Yunnan Province; MP, medicinal parts; EM, ethnic medicine.
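The usage frequencies reported in Figure 3 and in the Frequency column of Table 2 amount to counting how often each material appears across the surveyed prescriptions. The sketch below illustrates that tally in Python; the ingredient lists are invented for illustration and are not the surveyed formulations.

```python
from collections import Counter

# Hypothetical ingredient lists keyed by patent-medicine name (illustrative only)
prescriptions = {
    "YPM-A": ["Sanqi", "Gancao", "Zidanshen"],
    "YPM-B": ["Gancao", "Chonglou"],
    "DPM-A": ["Dengzhanxixin", "Gancao"],
}

# One count per preparation in which the material appears
usage = Counter(herb for herbs in prescriptions.values() for herb in herbs)
for herb, freq in usage.most_common():
    print(herb, freq)   # e.g., Gancao 3
```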
## 6. Application of Medicinal Resources in Yi and Dai Medicine

### 6.1. Botanical, Animal, and Mineral Sources of Medicines Used in DPMs and YPMs

Differences in geographical and climatic conditions are reflected in the distinct lifestyles, customs, cultures, and usage of medicinal resources by the residents of the various regions of China. In general, botanicals are the most widely used traditional medicines. DPMs and YPMs include 361 botanical medicines, 22 animal-source medicines, and 9 mineral medicines, as shown in Figure 4. YPMs contain more animal-source medicines than DPMs. The Yi people have a long history of hunting, and many ancient documents attest to their use of animal-source medicines. The Yi Nationality Offering Medicine Scriptures (Yi Zu Xian Yao Jing), written in the early Qing Dynasty, notes that up to 92.8% of its prescriptions contained animal-source medicines, divided into 12 types including insects, meat, bones, gallbladders, fat, blood, fish gallbladders, and hair. The Book of Good Medicines for Treating Diseases (Yi Bing Hao Yao Shu, AD 1737) describes 152 animal-source medicines, which accounted for 35.7% of Yi medicines [13].

Figure 4 The use of botanical, animal, and mineral resources in DPMs and YPMs. DPMs, Dai patent medicines; YPMs, Yi patent medicines.

Animals are therapeutic arsenals with a significant role in healing. Zootherapeutic remedies are derived from products of metabolism (e.g., corporal secretions and excrement) or from nonanimal materials such as nests or cocoons [14]. The reasonableness of zootherapy cannot be denied, but the evidence supporting its use should be strengthened by modern scientific research. The animal-source medicines in DPMs and YPMs are listed in Table 3. The use of some animal products in DPMs and YPMs is controversial, e.g., Cordyceps sinensis, Moschus deer musk, and bear bile, because these medicinal materials originate from protected wild animals or from harvesting activities that harm the ecological environment. Accordingly, the production and use of such animal-source medicines are restricted in China. China has joined and complied with many conventions on biological protection, such as CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) and the Convention on Biological Diversity, and has implemented domestic animal protection regulations. However, the medicinal value of these and other animal-source medicines cannot be ignored. Fortunately, substitutes are available through the cultivation of Cordyceps and the industrial production of artificial musk [15, 16]. Bile can be obtained from living, farmed bears, but this practice is ethically controversial [17]. Artificial bear bile has been reported to have anticonvulsant, sedative, and choleretic effects [18].

Table 3 Medicinal sources from the animals used in DPMs and YPMs that are listed in the SYNP.

| Scientific name | Pinyin name | MP | Standard | DPM | YPM |
| --- | --- | --- | --- | --- | --- |
| Cervus nippon Temminck | Lurong | Antler | ChP | 1 | 0 |
| Cryptotympana pustulata Fabricius | Chantui | Slough | ChP | 2 | 0 |
| Gallus gallus domesticus Brisson | Jineijing | Gizzard | ChP | 1 | 1 |
| Polyrhachis dives Smith | Heimayi | Body | SYNP | 1 | 4 |
| Gekko gecko Linnaeus | Gejie | Body | ChP | 0 | 4 |
| Pheretima aspergillum (E. Perrier) | Dilong | Body | ChP | 0 | 4 |
| Bufo bufo gargarizans Cantor | Chansu | Secretion | ChP | 0 | 1 |
| Aspongopus chinensis Dallas | Jiuxiangchong | Body | ChP | 0 | 2 |
| Selenarctos thibetanus Cuvier | Xiongdanfen | Bile | SYNP | 0 | 1 |
| Bombyx mori Linnaeus | Jiangchan | Body | ChP | 0 | 1 |
| Periplaneta japonica Linnaeus | Feilie | Body | SYNP | 0 | 1 |
| Sepiella maindroni de Rochebrune | Haipiaoqiao | Shell | ChP | 0 | 1 |
| Moschus berezovskii Flerov | Shexiang | Secretion | ChP | 0 | 1 |
| Armadillidium vulgare Latreille | Shufuchong | Body | SSDP | 0 | 2 |
| Cervus nippon Temminck | Lujiaoshuang | Antler colloid | ChP | 1 | 1 |
| Cordyceps sinensis (BerK.) Sacc. | Dongchongxiacao | Bacterial and insect complex | ChP | 2 | 2 |

EPM, ethnic patent medicine; MP, medicinal parts; ChP, Chinese Pharmacopoeia; SSDP, Standards for Chinese medicinal materials in Shandong Province (2012).
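The observation that YPMs draw on animal-source medicines more often than DPMs can be read off Table 3 by summing its two count columns; a minimal check with the counts transcribed from the table:

```python
# (DPM count, YPM count) per animal-source medicine, in Table 3 row order
counts = [(1, 0), (2, 0), (1, 1), (1, 4), (0, 4), (0, 4), (0, 1), (0, 2),
          (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 2), (1, 1), (2, 2)]

dpm_total = sum(d for d, _ in counts)
ypm_total = sum(y for _, y in counts)
print(dpm_total, ypm_total)  # 8 versus 26 uses: YPMs rely on animal materials more
```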
### 6.2. Medicinal Parts of Botanical Medicines

Yunnan is rich in Chinese Materia medica resources and is known as the kingdom of plants. The plant parts used in herbal medicines include seeds, berries, roots, leaves, fruits, barks, and flowers, as well as the whole plant itself. From ancient times to the present, people have used crude botanical materials as medicines to maintain vitality and cure disease [19]. The medicinal parts of the botanical medicines in DPMs and YPMs are shown in Figure 5. The plant parts included in DPMs and YPMs are similar, with roots and rhizomes, the whole plant, and fruits and seeds being the most frequent. The various parts of the medicinal plants contain the active components that are responsible for their effectiveness [20] and have physical properties that determine their names [21]. For example, Huangqin (Scutellaria baicalensis Georgi) is called "Rijishi" in the Yi language, in which "Ri" means herbaceous plant, "Ji" means root (the medicinal part of the plant), and "Shi" indicates that the color is yellow [22]. The continuing use of herbal medicines prepared from wild roots and rhizomes, fruits and seeds, and whole plants is not sustainable. The best strategy for balancing industrialization and resource protection is to replace wild resources with cultivated ones [23].

Figure 5 Relative contribution of the parts of medicinal plants to DPMs and YPMs.
## 7. Rare and Endangered Medicinal Materials

The rapidly increasing demand for CPMs is likely to challenge the sustainability of herbal resources in China. At present, 80% of the most frequently used species cannot meet medical demand, and 1,800–2,100 medicinal species are facing extinction [24]. In the China Plant Red Data Book published in 1992, 388 plant species were listed as threatened, with 121 species classed as endangered and needing first-grade national protection, 110 species as rare and needing second-grade national protection, and 157 species as vulnerable and needing third-grade national protection. Of those plant species, 77 are herbal medicines, accounting for 19.86% of the threatened species [25]. The national key protection list of wild animals in China includes 257 animal species.
The shortage of medicinal plants available to pharmaceutical companies can be partially relieved by the cultivation of at least 200 herbs, while some special herbs used in ethnomedicine are still obtained by continuous wild collection without planned scientific cultivation. The rare medicinal materials used in DPMs and YPMs are listed in Table 4.

Table 4 Rare medicinal materials used in DPMs and YPMs.

| Herbal name | Scientific name | NPWP | IUCN | Proprietary | NPWM | UF |
| --- | --- | --- | --- | --- | --- | --- |
| Gancao | Glycyrrhiza uralensis Fisch | II | LC | — | II | 26 |
| | Glycyrrhiza inflata Bat. | II | LC | — | II | |
| | Glycyrrhiza glabra L. | II | LC | — | II | |
| Renshen | Panax ginseng C. A. Mey | I | CR | — | II | 4 |
| Lianqiao | Forsythia suspense (Thunb.) Vahl. | — | — | — | III | 4 |
| Huangqin | Scutellaria baicalensis Georgi | — | — | — | III | 4 |
| Wuweizi | Schisandra chinensis (Turcz.) Baill. | II | LC | — | III | 1 |
| Lurong | Cervus nippon Temminck | — | — | — | I | 1 |
| | Cervus elaphus Linnaeus | — | — | — | I | |
| Rouchongrong | Cistanche deserticola Y. C. Ma | II | EN | — | III | 1 |
| | Cistanche tubulosa (Schenk) Wight | II | — | — | — | |
| Huangbai | Phellodendron chinense Schneid | — | — | — | II | 2 |
| Duzhong | Eucommia ulmoides Oliv. | — | — | — | II | 1 |
| Longdan | Gentiana manshurica Kitag. | — | — | — | III | 3 |
| | Gentiana scabra Bge | — | — | — | III | |
| | Gentiana triflora Pall. | — | — | — | III | |
| | Gentiana regescens Franch. | — | — | — | III | |
| Huanglian | Coptis chinensis Franch | — | — | Unique to China | II | 1 |
| | Coptis deltoidea C. Y. Cheng et Hsiao | — | VU | Unique to China | II | |
| | Coptis teetoides C. Y. Cheng. | — | — | — | II | |
| Houpu | Magnolia officinalis Rehd. et Wils | II | NT | Unique to China | II | 1 |
| | Magnolia officinalis Rehd. et Wils. var. biloba Rehd. et Wils | II | — | Unique to China | II | |
| Zicao | Arnebia euchroma (Royle) Johnst | — | — | — | III | 1 |
| Qingjiao | Gentiana macrophylla Pall. | — | — | — | III | 1 |
| | Gentiana macrophylla Maxim. | — | — | — | III | |
| | Gentiana crassicaulis Duthie ex Burk. | — | — | — | III | |
| | Gentiana dahurica Fisch | — | — | — | III | |
| Shexiang | Moschus berezovskii Flerov. | — | — | — | II | 1 |
| | Moschus sifanicus Przewalski. | — | — | — | II | |
| | Moschus moschiferus Linnaeus. | — | — | — | II | |
| Chonglou | Paris polyphylla Smith var. chinensis (Franch.) Hara | II | — | — | — | 7 |

NPWP, National Key Protected Wild Plants of China (August 4, 1999); NPWM, National Key Protected Species of Wild Medicinal Materials of China (Dec. 1, 1987); IUCN, International Union for Conservation of Nature (CR, critically endangered; LC, least concern; EN, endangered; VU, vulnerable; NT, near threatened); UF, usage frequency in DPMs and YPMs.

These plants are protected by the Chinese government and by international nongovernmental organizations such as the International Union for Conservation of Nature. Cistanche deserticola Y. C. Ma (Rouchongrong), Panax ginseng C. A. Mey (Renshen), Glycyrrhiza inflata Bat. (Gancao), and the other rare medicinal materials listed in the catalogs are protected and utilized sustainably in China. However, the number of endangered ethnic-specific medicines is far larger than that recorded in the catalogs; for example, more than 3000 tons of Rodgersia sambucifolia Hemsl. (Yantuo) are collected annually to produce YPMs. The availability of wild Rodgersia plants has been sharply reduced, and the resource is severely damaged in Luquan, Yongsheng, Yulong, Heqing, and Ninglang counties of Yunnan Province [26], while research on its cultivation has begun only in recent years. Figure 6 shows Rodgersia sambucifolia Hemsl. under cultivation at the Meizi test ground, which belongs to the Institute of Alpine Economics and Botany, Yunnan Academy of Agricultural Sciences. Consequently, 30 traditional medicines have been listed in the Rare Traditional Chinese Herbs of Yunnan Province in Urgent Needs (RTCHYN), and those used in DPMs and YPMs are shown in Table 5 [27].

Figure 6 Rodgersia sambucifolia Hemsl. cultivated in the Meizi test ground (Lijiang, Yunnan).

Figure 7 Toxic medicines used in DPMs and YPMs: (a), (b).
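Table 4's catalog columns support a simple cross-check of whether a sourced material carries any protection status. The sketch below encodes two rows transcribed from the table; the dictionary layout and the function are illustrative, not an official data structure.

```python
# (NPWP class, IUCN status, NPWM class); None = not listed in that catalog
catalogs = {
    "Panax ginseng C. A. Mey": ("I", "CR", "II"),     # Renshen, Table 4
    "Eucommia ulmoides Oliv.": (None, None, "II"),    # Duzhong, Table 4
}

def is_protected(species: str) -> bool:
    """True if the species appears in at least one protection catalog."""
    record = catalogs.get(species)
    return record is not None and any(entry is not None for entry in record)

print(is_protected("Panax ginseng C. A. Mey"))  # True
print(is_protected("Eucommia ulmoides Oliv."))  # True (NPWM class II)
```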
Table 5 Traditional medicines listed in the RTCHYN.

| Scientific name | Pinyin name | Medicinal parts | Regional distribution* | Standard | UF |
| --- | --- | --- | --- | --- | --- |
| Erigeron breviscapus (Vaniot) Hand.-Mazz. | Dengzhanxixin | Whole plant | Areas except southwest Yunnan | ChP | 7 |
| Cordyceps sinensis (BerK.) Sacc. | Dongchongxiacao | Bacterial and insect complex | Deqin, Shangri-La, Lijiang, Binchuan, Lvfeng, Guangtong | ChP | 4 |
| Dracaena cochinchinensis (Lour.) S. C. Chen | Longxuejie | Resin | Jinping, Menglian, Pu'er, Jinghong, Zhenkang | SGZP | 2 |
| Cyanotis arachnoidea C. B. Clarke | Lushuicao | Whole plant | Menghai, Menglian, Jinghong, Jingdong, Mengzi, Anning, Kunming, Pingbian | No | 1 |
| Swertia mileensis T. N. He et W. L. Shi | Qingyedan | Whole plant | Mile | ChP | 2 |
| Anisodus acutangulus C. Y. Wu et C. Chen | Sanfensan | Roots | Lijiang | SYNP | 1 |
| Hemsleya amabilis Diels | Xuedan | Roots | Kunming, Chongming, Binchuan, Eryuan, Dali, Heqing | No | 1 |
| Bergenia purpurascens (Hook. f. et Thoms.) Engl. var. delavayi (Franch.) Engl. et Irm. | Yanbaicai | Rhizome | Deqin, Weixi, Shangri-La, Lijiang, Dali, Qujing, Ludian, Zhaotong, Gongshan, Fugong | ChP | 1 |

RTCHYN, Rare Traditional Chinese Herbs of Yunnan Province in Urgent Needs. *Regional distribution is from the Flora of Yunnan (Science Press of China, 2006). UF, usage frequency; ChP, Chinese Pharmacopoeia; SGZP, Standards for Chinese medicinal materials in Guizhou Province (2009); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005).

Medicines with pharmacological activities that are present in traditional ethnomedicine are likely to be clinically useful, but they may also be toxic, especially if used incorrectly or in the wrong amounts. Unlike those for modern drugs, the efficacy and toxicity assessments of these ethnomedicines are based on traditional knowledge and clinical experience rather than on laboratory evaluation [28]. Toxicity associated with the use of Chinese ethnomedicine may occur because of the environment, religious beliefs, or medical practices. The Chinese Pharmacopoeia includes 83 traditional medicines that are considered toxic [29]; some medicines listed in provincial standards of herbal medicines are also considered toxic [30].

Ten toxic medicines are used in 11 DPMs; six of these are included in the Chinese Pharmacopoeia (2015 edition) and four in the SYNP. One toxic medicine is used in Dai traditional medicine, and two toxic medicines are used in Yi traditional medicine. Twenty-four toxic herbs are used in 40 YPMs; 12 of these toxic medicines are in the Chinese Pharmacopoeia (2015 edition) and 12 are in the SYNP. Four of the medicines are known as Yi medicines. Although some toxic ethnomedicines are used in DPMs and YPMs, the proprietary medicines are considered safe and are approved for use in China because pharmaceutical processing, combining, and decocting contribute to reducing toxicity and enhancing efficacy. In traditional Dai medicine (TDM), "YaGei" herbs are used to reduce toxicity. The "YaGei" detoxification theory is a unique component of TDM [31], and "YaGei" medicines are used as antidotes to relieve adverse reactions caused by food poisoning, drug poisoning, and other substances [32]. Dai people consume antidotes regularly to eliminate microtoxins from the body, reduce the chance of illness, and prolong life.

Because little pharmaceutical information has been disclosed and basic research is lacking, the safety information on these DPMs and YPMs, including their toxic medicines, is insufficient. The modern toxicological evidence on these toxic medicines is collected and summarized in Tables 6 and 7, focusing on Dai and Yi toxic medicines.
The root of Tripterygium hypoglaucum (Levl.) Hutch (Huobahuagen) soaked in wine as an oral medicine is an ancient Yi medicine described in the Ailao Materia Medica for the treatment of arthritis, joint swelling, pain, bruises, and sprains [59]. Boenninghausenia sessilicarpa Levl. (Shijiaocao) is described in the Materia Medica in South Yunnan (Dian Nan Ben Cao, AD 1396–1476), written by Lan Mao, as a bitter, pungent, and warm medicine used to treat chest pain, heartache, stomachache, and abdominal distension. Shijiaocao is also described in the Ailao Materia Medica as a treatment for sore throat, gastric pain, and dysentery. It is described as a cure for acute gastroenteritis, in combination with the parasite of Zanthoxylum bungeanum, in the Wa Die Yi Medical Book, which was written at the end of the Qing Dynasty [60]. Ancient documents thus describe the use of the toxic herbs included in the medical practice of the Yi, Dai, and other ethnic minorities in Yunnan. Modern toxicological study can ensure their safe and effective use, and further study is warranted to determine how toxicity can be reduced while the prescription remains effective.

Table 6 Toxic medicines in DPMs.

| Scientific name | Pinyin name | Toxicity degree | Standard | DPM | Modern toxicology | References |
| --- | --- | --- | --- | --- | --- | --- |
| Paris polyphylla Smith var. chinensis (Franch) Hara | Chonglou | LT | ChP | RBQC | Toxic to the digestive system; cardiotoxicity and neurotoxicity; LD50 = 2.68 g/kg (mice, p.o.) | [33] |
| Curculigo orchioides Gaertn. | Xianmao | MT | ChP | LXBST | LD50 = 215.9 g/kg (ethanol extract, rats, p.o.); damages the liver, kidney, and reproductive organs with oral administration of 120 g/kg (ethanol extract, rats, 6 months) | [34] |
| Cnidium monnieri (L.) Cuss. | Shechuangzi | LT | ChP | LXBST | Nausea and vomiting, decreased spontaneous activity, shortness of breath, unstable gait, and tremor (ethanol extract); LD50 = 17.45 g/kg (mice, p.o.); MTD = 1.50 g/kg; LD50 = 3.45 g/kg (osthol, mice, p.o.) | [35–37] |
| Zanthoxylum nitidum (Roxb.) DC. | Liangmianzhen | LT | ChP | 7-JDHXO | Nitidine chloride damages liver and kidney cells and decreases the heart rate of zebrafish | [38] |
| Pinellia ternate (Thunb.) Breit. | Banxia | MT | ChP | SBZKG | LD50 = 42.7 ± 1.27 g/kg (mice, p.o.); damages the kidney and liver; causes serious damage to the gastric mucosa; significant toxicity in pregnant mice and embryos (total alkaloids) | [39] |
| Prunus armeniaca L. var. ansu Maxim | Kuxinren | LT | ChP | SBZKG | LD50 of amygdalin is 25 g/kg (mice, i.v.) or 887 mg/kg (mice, p.o.); hydrocyanic acid produced from amygdalin inhibits the activity of cytochrome oxidase, leading to inhibition of cell respiration and cell death | [40] |
| Plumbago zeylanica Linn. | Baihuadan | LT | SYNP | DLBSC | Skin redness, swelling, and peeling on contact; antiovulation activity in female rats (alcohol extract) | [41, 42] |
| Tacca chantrieri Andre | Jiangenshu | MT | SYNP | YGT | Diarrhea and vomiting in mild intoxication; intestinal mucosal exfoliation and hemorrhage in severely poisoned patients | [43] |
| Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | LT | SYNP | GTSL | LD50 = 79 g/kg (male mice, p.o.) and 100 g/kg (female mice, p.o.); reversible antifertility effect | [44, 45] |
| Erythrina variegata L. var. orientalis (L.) Merr | Haitongpi | MT | SSCP | GTSL | Unknown | — |

HT, high toxicity; MT, medium toxicity; LT, low toxicity; SSCP, Standards for Chinese medicinal materials in Sichuan Province (2010); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005); RBQC, Ru Bi Qing Capsule; LXBST, Lu Xian Bu Shen Tablet; 7-JDHXO, 7-Jie Du Huo Xue Ointment; SBZKG, Shen Bei Zhi Ke Granular; DLBSC, Dan Lv Bu Shen Capsule; YGT, YaGei Tablet; GTSL, Guan Tong Shu Oral liquid.

Table 7 Toxic medicines in YPMs.

| Scientific name | Pinyin name | Toxicity degree | Standard | YPM | Modern toxicology | References |
| --- | --- | --- | --- | --- | --- | --- |
| Paris polyphylla Smith var. chinensis (Franch) Hara | Chonglou | LT | ChP | GFNC, NQSG, SYA, TSC, ZTL | — | — |
| Osmunda japonica Thunb. | Ziqiguanzhong | LT | ChP | SWYA | Unknown | — |
| Evodia rutaecarpa (Juss.) Benth. | Wuzhuyu | LT | ChP | GDQC, HWYP | LD50 = 2.70 mL/kg (volatile oil, mice, p.o.); one of the main target organs is the liver | [46] |
| Bufo bufo gargarizans Cantor | Chansu | MT | ChP | CLTC | Ventricular arrhythmias; increases the levels of Ca2+, CK, and LDH in the heart | [47] |
| Artemisia argyi Levl. et Vant. | Aiye | LT | ChP | KSG | LD50 = 80.2 g/kg (aqueous extract, mice, p.o.); LD50 = 1.67 mL/kg (volatile oil, mice, p.o.); MTD = 75.6 g/kg (ethanol extract, mice, p.o.) | [48] |
| Aconitum kusnezoffii Reichb. | Caowu | HT | ChP | TXT | Causes serious cardiac dysfunction and damages the nervous system; LD50 = 1.8 mg/kg (aconitine, mice, p.o.), 5.8 mg/kg (hypaconitine, mice, p.o.), and 1.9 mg/kg (mesaconitine, mice, p.o.) | [49, 50] |
| Papaver somniferum L. | Yingsuqiao | MT | ChP | KLT | Main toxic components are morphine and codeine; 60 mg of morphine causes poisoning and 250 mg leads to death | [51] |
| Arisaema erubescens (Wall.) Schott. | Tiannanxing | MT | ChP | TXT | Produces folate deficiency and injury to the kidneys | [52] |
| Laggera pterodonta (DC.) Benth. | Choulingdan | MT | ChP | LL, SKCG | LD50 = 1.19 g/kg (water extract, mice, i.p.) | [53] |
| Prunus armeniaca L. var. ansu Maxim | Kuxinren | LT | ChP | SKCG, CLTC | — | — |
| Pinellia ternate (Thunb.) Breit | Banxia | MT | ChP | WFSC, ZXASG | — | — |
| Psammosilene tunicoides W. C. Wu et C. Y. Wu | Jintiesuo | LT | ChP | ZTL | LD50 = 4.8471 (mice, p.o.); toxic target organs include the lungs, spleen, and stomach | [54] |
| Boenninghausenia sessilicarpa Levl. | Shijiaocao | LT | SYNP | SAC, SKCG | The ether extract reduces activity in mice after intraperitoneal injection | [33] |
| Dysosma versipellis (Hance) M. Cheng ex Ying | Bajiaolian | LT | SYNP | ZTL, HJXJC, SLAC, WJHXZTT | LD50 = 0.493 ± 0.032 g/kg (mice, p.o.); toxic to the heart and central nervous system, with excitation followed by inhibition | [55] |
| Millettia bonatiana Pamp. | Dafahan | MT | SYNP | HSTT | Damages the stomach | [33] |
| Craibiodendron yunnanense W. W. Smith | Jinyezi | HT | SYNP | ZTL | Unknown | — |
| Tripterygium hypoglaucum (Levl.) Hutch | Huobahuagen | MT | SYNP | ZTL, GFNC | — | — |
| Anemone rivularis Buch. Ham. ex DC. | Wuzhangcao | LT | SYNP | TYGT, YSL | Unknown | — |
| Delphinium yunnanense Franch. | Daotihu | MT | SGZP | WHXZTC | Unknown | — |
| Dioscorea bulbifera L. | Huangyaozi | LT | SGDP | FFLC, FFLG | LD50 = 25.49 g/kg (mice, i.p.); LD50 = 79.98, 250.3, or 544 g/kg (mice, p.o.); toxic target organs include the liver and kidney | [56, 57] |
| Clematis apiifolia var. argentilucida (H. Leveille & Vaniot) W. T. Wang | Shanmutong | LT | SHNP | NQSG | Unknown | — |
| Anisodus acutangulus C. Y. Wu et C. Chen | Sanfensan | HT | SYNP | TXT | Unknown | — |
| Datura stramonium L. | Mantuoluoye | MT | SYNP | YWNC | Shortness of breath and death after nerve stimulation | [33] |
| Aconitum brachypodum Diels. | Xueshangyizhihao | HT | SHNP | ZTL | LD50 = 6766.928 and 5492.337 mg/kg (petroleum ether and n-butanol extracts, mice, p.o.) | [58] |

HT, high toxicity; MT, medium toxicity; LT, low toxicity. *Where a herb has more than two source species, only one is shown. SGZP, Standards for Chinese medicinal materials in Guizhou Province (2009); SGDP, Standards for Chinese medicinal materials in Guangdong Province (2011); SHNP, Standards for Chinese medicinal materials in Hunan Province (2009); SYNP, Standards for Chinese medicinal materials in Yunnan Province (2005). GFNC, Gu Feng Ning Capsule; NQSG, Niao Qing Shu Granular; SYA, Shang Yi Aerosol; TSC, Tong Shu Capsule; ZTL, Zhong Tong Liniment; SWYA, Shu Wei Yao Alcohol; GDQC, Gan Dan Qing Capsule; HWYP, Huoxiang Wan Ying Powder; CLTC, Chuan Luo Tong Capsule; KSG, Kang Shen Granular; TXT, Tian Xiang Tincture; KLT, Ke Tan Oral liquid; LL, Lingdancao Oral liquid; SKCG, Shijiaocao Ke Chuan Granular; WFSC, Wei Fu Shu Capsule; ZXASG, Zhi Xuan An Shen Granular; SAC, Shen An Capsule; HJXJC, Hong Jin Xiao Jie Capsule; SLAC, Shu Lie An Capsule; WJHXZTT, Wu Jin Huo Xue Zhi Tong Tablet; HSTT, Huzhang Shang Tong Tincture; TYGT, Tianhusui Yu Gan Tablet; YSL, Yan Shu Oral liquid; WHXZTC, Wujin Huo Xue Zhi Tong Capsule; FFLC, Fu Fang Luxiancao Capsule; FFLG, Fu Fang Luxiancao Granular; YWNC, Yun Wei Ning Capsule.
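Read as data, Tables 6 and 7 support simple safety screens, such as flagging every preparation that contains a high-toxicity (HT) ingredient. The sketch below uses three rows transcribed from Table 7; the data structure itself is illustrative, not a published data set.

```python
# (Scientific name, toxicity degree, preparations) -- rows from Table 7
rows = [
    ("Aconitum kusnezoffii Reichb.", "HT", {"TXT"}),
    ("Craibiodendron yunnanense W. W. Smith", "HT", {"ZTL"}),
    ("Papaver somniferum L.", "MT", {"KLT"}),
]

def preparations_with_ht(table):
    """Collect preparations containing at least one high-toxicity herb."""
    flagged = set()
    for _name, degree, preparations in table:
        if degree == "HT":
            flagged |= preparations
    return flagged

print(sorted(preparations_with_ht(rows)))  # ['TXT', 'ZTL']
```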
## 8. Concluding Remarks

Ethnomedicine is an important part of TCM that has a unique medical theoretical system and refers to a wide range of healthcare systems, structures, practices, beliefs, and therapeutic techniques that arise from indigenous cultural development. Thousands of years of ethnic amalgamation have produced diversity, integration, and differences among the traditional medicines of China's different ethnic groups. Approximately 8,000 medicinal species are used by 40 ethnic minorities in China, accounting for over 70% of Chinese Materia medica resources. Data from the National Medical Products Administration of China show that there are more than 600 types of EPMs [12]. The Chinese Pharmacopoeia began to cover DPMs in its 1977 edition, and some Miao patent medicines and YPMs have been included since the 2015 edition. Overall, 39 CPMs were identified as EPMs, of which 26 are prescription drugs and 13 are OTC drugs [11]. Prescriptions not approved by the government were not included in this review but are still used in clinics in regions of China inhabited by ethnic minorities.

This review focuses on Dai and Yi traditional medicines in Yunnan Province because of their long histories and descriptions in the ancient medical literature. The earliest book of Yi traditional medicine that can be verified is the Yuanyang Yi Medicine Book, which was written in 957 AD and found in Yuanyang County of Yunnan Province in 1985 [14]. The earliest books of Dai traditional medicine that can be verified are Ge Ya San Ha Ya, written in 964–884 BC, and Dang Ha Ya Long, written in 1323 AD [61]. There are 1,666 Dai medicines [62] and nearly 1,400 Yi medicines [13], and 400 medicines are listed in the Yi Materia Medica [63]. There are 478 Yi medical formulas described in Chinese Yi Medicine Prescriptions Science [64], and 200 Dai medical formulas in Study on Dai Classical Prescriptions of China [65]. The numbers of Dai and Yi folk formulas have not yet been recorded. As with the example of Yunnan Baiyao mentioned above, a series of ethnomedicines in Yunnan has been successfully industrialized and modernized, renewing the vitality of ancient ethnomedicines and thus serving a wide population.
The Tong Shu Capsule is a YPM produced by the Yunnan Baiyao Group that has recently been approved for phase II clinical research in the United States. Yunnan Province expects to produce TCMs, including ethnomedicines, with a value of 140 billion RMB in 2020 and an average annual growth of more than 15%, accounting for 75% of its production [65].

Five key conclusions can be drawn from this investigation of Dai and Yi medicines.

First, except for the Yunnan Baiyao Group and the Dihon Pharmaceutical Company, most of the pharmaceutical manufacturers of EPMs in Yunnan Province are small enterprises, which limits research and development capacity. A search of the China National Knowledge Infrastructure (CNKI, http://www.cnki.net) found 163 articles reporting investigations of the 28 DPMs reviewed here, but 59 articles about Yunnan Baiyao Aerosol alone, although it is only one of the CPMs produced by the Yunnan Baiyao Group. In 2015, 100 million bottles of Yunnan Baiyao Aerosol were produced, with a value of more than 1.5 billion RMB. In the same year, the overall sales revenue of the Yunnan Baiyao Group was 20.74 billion RMB [7].

Second, the sales volumes of YPMs and DPMs are not known, so it is hard to determine the extent to which the traditional medicines used in them are collected in the wild. This poses a challenge to the sustainable utilization of Chinese Materia medica resources.

Third, the use of toxic medicines in ethnomedicine is of concern. Herbal medicine containing aristolochic acid has been associated with nephropathy in Belgium [66], and adverse events associated with the use of Xiao Chaihu Tang in Japan [67] have led to warnings about the safety of CPMs. Scientific evidence is needed to demonstrate the rationale and necessity of using toxic herbs in EPMs.

Fourth, the identification and usage of traditional medicines vary among ethnic minorities because of differences in clinical experience. The survey of DPMs and YPMs showed differences in the number of animal-source medicines used by the Yi and Dai people. Such differences are also found in the ancient medical literature of the Yi and Dai minorities.

Fifth, Dai and Yi medical prescriptions were traditionally written in the Dai and Yi languages, but the current clinical indications of DPMs and YPMs are written in Chinese. Difficulties in translation have hampered evaluation of how these ethnic medicines are used. Efforts to obtain accurate translations will be an important next step.

The sales volumes of DPMs and YPMs are not available because they are trade secrets. Because the descriptions of ethnic medical prescriptions in the ancient literature were written in the Yi and Dai languages, they are hard to comprehend. However, the medical practices and culture of ethnic minorities have existed in Yunnan for thousands of years and have resulted in written records of more than 1300 ethnic medicinal materials and nearly 30,000 folk prescriptions. The medical information has been passed on orally or via ancient documents written in various ethnic minority languages, such as the San Ma Tou Yi Medical Book and the Lao Wu Dou Yi Medical Book written in the late Qing Dynasty of China. The ongoing scientific investigation and sustainable utilization of medicine resources will help to increase the impact of the ethnomedicines of Yunnan Province on the improvement of human health.

---

*Source: 1023297-2020-08-14.xml*
2020
# Feasibility of Bariatric Surgery as a Strategy for Secondary Prevention in Cardiovascular Disease: A Report from the Swedish Obese Subjects Trial

**Authors:** Lotta Delling; Kristjan Karason; Torsten Olbers; David Sjöström; Björn Wahlstrand; Björn Carlsson; Lena Carlsson; Kristina Narbro; Jan Karlsson; Carl Johan Behre; Lars Sjöström; Kaj Stenlöf

**Journal:** Journal of Obesity (2010)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2010/102341

---

## Abstract

Aims. Evaluation of bariatric surgery as secondary prevention in obese patients with ischemic heart disease (IHD). Methods. Analysis of data from 4047 subjects in the Swedish Obese Subjects (SOS) study. Thirty-five patients with IHD were treated with bariatric surgery (n=21) or conventional treatment (n=14). Mean follow-up was 10.8 years. Results. Bariatric surgery resulted in sustained weight loss during the study period. After 2 years, the surgery group displayed significant reductions in cardiovascular risk factors, relief from cardiorespiratory symptoms, increments in physical activity, and improved quality of life. After 10 years, recovery from hypertension, diabetes, physical inactivity, and depression was still more common in the surgery group. There were no signs of increased cardiovascular morbidity or mortality in the surgery group. Conclusion. Bariatric surgery appears to be a safe and feasible treatment to achieve long-term weight loss and improvement in cardiovascular risk factors, symptoms, and quality of life in obese subjects with IHD.

---

## Body

## 1. Introduction

Obesity, together with the associated clustering of cardiovascular risk factors, is a strong promoter of cardiovascular morbidity and mortality [1, 2]. Weight control is considered a cornerstone of primary prevention aimed at reducing the overall incidence of cardiovascular disease. Obesity is also frequently targeted in secondary preventive programs intended to improve outcomes in patients with already established cardiovascular disease [3, 4]. One major problem with standard strategies is that weight loss is difficult to achieve with conventional methods, and the results are often temporary.

Bariatric surgery has emerged as an effective treatment option to obtain large and sustained weight loss in obese subjects [5]. Surgically induced weight loss has been shown to improve or prevent many of the obesity-related cardiovascular risk factors, including hypertension, dyslipidemia, diabetes, and obstructive sleep apnea [1, 2, 5–9]. In addition, surgical intervention has been shown to restrain the progression rate [10, 11] and in some cases even reverse [12] the development of early atherosclerosis. More recently, bariatric surgery has been demonstrated to reduce overall and cardiovascular mortality when applied as a primary preventive strategy in morbid obesity [6].

Despite these encouraging findings, the use of bariatric surgery in patients with established cardiovascular disease has been limited. One probable explanation is concern about increased perioperative risk in this patient population, but another reason could be growing scepticism towards weight control as a secondary preventive measure. Uncertainty has arisen since several large epidemiological studies have revealed an inverse relationship between BMI and outcome in patients with ischemic heart disease [13].
An apparent "protective quality" of obesity has been demonstrated in patients with acute coronary syndromes and in those undergoing coronary artery bypass grafting [14–16]. On the other hand, it has been pointed out that the so-called "obesity paradox" may just as well be related to the adverse prognosis of patients with disease-related cachexia. In any case, the controversy remains and calls for controlled intervention studies.

Bearing this in mind, the present study aimed to evaluate the safety and feasibility of bariatric surgery as a preventive measure in obese subjects with ischemic heart disease. This was performed by analysing data from the Swedish Obese Subjects (SOS) controlled surgical intervention trial.

## 2. Methods

### 2.1. The SOS Study

Briefly, obese patients (BMI ≥ 38 kg/m2 for women and BMI ≥ 34 kg/m2 for men) between 37 and 60 years of age were assigned to either bariatric surgery or conventional obesity treatment, as described in earlier studies [6]. Surgical intervention consisted of gastric banding, vertical banded gastroplasty, or gastric bypass, whereas control treatment involved conventional lifestyle recommendations. Exclusion criteria were minimal and allowed inclusion of patients with a coronary event occurring more than 6 months before enrollment. The study complied with the Declaration of Helsinki and was approved by the regional ethics review boards.

### 2.2. Present Study Group

In the total SOS study cohort of 4047 subjects, 62 patients reported a history of myocardial infarction at the time of screening. After evaluation of ECG recordings and hospital records, a prior coronary event, defined as myocardial infarction, unstable angina, or prior revascularization, could be verified in 37 of these subjects. Two patients were excluded from the present report due to early dropout, resulting in a final study group of 35 subjects (11 women and 24 men). Of these subjects, 21 underwent bariatric surgery and 14 received conventional treatment. Patients were evaluated at inclusion and again after 2 and 10 years. The average follow-up period was 10.8 years (range 6.3–17.4 years). One subject declined the two-year evaluation but participated in the 10-year follow-up. Twenty-one patients completed the 10-year follow-up (7 patients had died, 3 patients had not attained 10 years of follow-up, 3 patients had withdrawn their consent, and 1 patient had emigrated).

### 2.3. Clinical and Laboratory Assessments

At each visit, measurements of body weight and height were obtained and blood pressure was recorded. Blood samples were drawn in the morning after 10–12 hours of fasting. Blood glucose and serum lipids were analysed by enzymatic techniques (accredited according to European Norm 45001).

### 2.4. Cardiovascular Risk Factors

Hypertension was defined as systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or treatment with antihypertensive medication. Dyslipidemia was classified as total cholesterol ≥ 5.2 mmol/L, triglycerides ≥ 2.8 mmol/L, or current lipid-lowering medication. The criteria for diabetes were fasting glucose ≥ 6.1 mmol/L or treatment with insulin or oral hypoglycemic agents.

### 2.5. Cardiorespiratory Symptoms, Physical Activity, and Quality of Life

Patients completed a questionnaire at inclusion and again after 2 and 10 years of follow-up. They were asked about the occurrence of chest pain and breathlessness and whether a family member or other person had observed pauses in breathing during sleep.
Subjects were also asked to grade their level of physical activity during working and leisure time and their health-related quality of life (HRQOL). The HRQOL evaluation included questions regarding current health perception, social interaction, obesity-related problems, overall mood, anxiety, and depression.

### 2.6. Adverse Events

Information regarding gastrointestinal and cardiovascular adverse events was obtained from self-administered questionnaires and verified by cross-checking hospital records. A cardiovascular event was defined as hospitalisation or death due to cardiovascular disease. Information about perioperative complications was obtained from surgical trial reports and discharge reports filled in by the surgeon. Information on cause of death was acquired from registries provided by the Swedish National Board of Health and Welfare.

### 2.7. Statistical Methods

Data are summarised as means (±SD) for continuous variables and percentages for categorical data. Differences between groups and changes from baseline were evaluated with paired t-tests for continuous variables, with Fisher's exact test or McNemar's test for categorical variables, and with Pitman's nonparametric test for quality-of-life data. Data on gastrointestinal and cardiovascular adverse events, as well as mortality, are presented in a descriptive manner.
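To make the risk-factor definitions in Section 2.4 and the tests named in Section 2.7 concrete, here is a minimal Python sketch; the variable names are hypothetical and the numbers are purely illustrative, not study data.

```python
from scipy import stats

def has_hypertension(sbp: float, dbp: float, treated: bool) -> bool:
    # SBP >= 140 mmHg, DBP >= 90 mmHg, or antihypertensive medication
    return sbp >= 140 or dbp >= 90 or treated

def has_dyslipidemia(chol: float, tg: float, treated: bool) -> bool:
    # Total cholesterol >= 5.2 mmol/L, triglycerides >= 2.8 mmol/L, or medication
    return chol >= 5.2 or tg >= 2.8 or treated

def has_diabetes(fasting_glucose: float, treated: bool) -> bool:
    # Fasting glucose >= 6.1 mmol/L or insulin/oral hypoglycemic treatment
    return fasting_glucose >= 6.1 or treated

# Paired t-test for a within-group change in a continuous variable
baseline = [122.0, 131.5, 118.2, 140.7]   # illustrative weights, kg
year_two = [98.4, 103.0, 95.1, 112.9]
t_stat, p_cont = stats.ttest_rel(baseline, year_two)

# Fisher's exact test for a between-group 2x2 comparison
# rows: surgery/control; columns: recovered / not recovered (made-up counts)
odds_ratio, p_cat = stats.fisher_exact([[8, 5], [2, 7]])
print(t_stat, p_cont, odds_ratio, p_cat)
```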
## 3. Results

### 3.1. Demographics

At baseline, the surgery group (n=21) and control group (n=14) were comparable with respect to age (50.9 ± 5.7 versus 53.2 ± 4.9 years), gender distribution (33% versus 40% females), and body weight (122 ± 15 versus 115 ± 18 kg).

Bariatric surgery resulted in sustained weight loss after 2 and 10 years (surgery: −21.2% after 2 years and −13.8% after 10 years; controls: −2.4% after 2 years and −3.8% after 10 years; P<.001) (Figure 1). Changes in body weight, BMI, and waist circumference are shown in Table 1.

Table 1 Anthropometrics and prevalence of cardiovascular risk factors (%) in the surgery and control groups at baseline and changes after 2 and 10 years of follow-up. Only the patients evaluable at each time point are included in the statistical calculations.

| | Group | Baseline (surgery n=21, control n=14) | Change at 2 years (surgery n=21, control n=13) | Change at 10 years (surgery n=13, control n=8) |
| --- | --- | --- | --- | --- |
| Weight, kg | Surgery | 122.8 ± 15 | −26.3 ± 14.7*** | −17.3 ± 13.1* |
| | Control | 115.3 ± 18 | −2.3 ± 5.2 | −4.3 ± 5.2 |
| BMI, kg/m2 | Surgery | 40.6 ± 4.3 | −8.6 ± 4.8*** | −5.6 ± 4.2* |
| | Control | 38.0 ± 4.5 | −0.8 ± 1.8 | −1.5 ± 2.0 |
| Waist circumference, cm | Surgery | 128.3 ± 8.3 | −21.2 ± 12.5 | −12.9 ± 12.2 |
| | Control | 123.5 ± 9.1 | −12.9 ± 12.2 | −3.7 ± 6.0 |
| Current smoker, % | Surgery | 52.4 | −20.0* | −18.2 |
| | Control | 50.0 | −14.8 | −22.2 |
| Hypertension, % | Surgery | 57.1 | −15.0*** | −23.1* |
| | Control | 53.8 | 21.2 | 0 |
| Dyslipidemia, % | Surgery | 95.2 | −28.5*** | −69.2 |
| | Control | 92.9 | 0 | −22.2 |
| Diabetes, % | Surgery | 52.4 | −14.3*** | −7.7*** |
| | Control | 50.0 | 0 | 11.1 |

P-values denote differences in the effects of treatment between the two groups from baseline to 2 and 10 years of follow-up. *P<.05, ***P<.001.
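The relative weight changes quoted above are consistent with the absolute values in Table 1; a quick arithmetic check (the small discrepancy is expected because a mean of per-patient percentages is not exactly the ratio of the group means):

```python
baseline_kg = 122.8   # surgery group baseline weight (Table 1)
delta_2y_kg = -26.3   # surgery group mean change at 2 years (Table 1)
print(f"{100 * delta_2y_kg / baseline_kg:.1f}%")  # -21.4%, close to the reported -21.2%
```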
*P<.05, ***P<.001.

Figure 1. Mean weight change (%) at 2, 6, and 10 years of follow-up in the surgery and control groups. Filled squares: surgery; open circles: controls. ***P<.001, *P<.05.

### 3.2. Cardiovascular Risk Factors

At baseline, the prevalence of hypertension, dyslipidemia, and diabetes was similar in the two study groups. After 2 years, the surgery group displayed significant improvements in all of these cardiovascular risk factors compared with control subjects. After 10 years, recovery from hypertension and diabetes was still more prevalent among surgically treated patients (Table 1).

### 3.3. Cardiorespiratory Symptoms and Physical Activity

At baseline, the prevalence of sleep-disordered breathing was similar in the two study groups, whereas the surgery group reported lower frequencies of chest pain and breathlessness and a higher degree of physical inactivity. After two years of follow-up, surgical patients displayed significant improvements in all four conditions compared with control subjects. After 10 years, a reversal of physical inactivity was still more common in the surgery group (Table 2).

Table 2. Chest pain, breathlessness, sleep apnea, and physical inactivity (%) in the surgery and control groups at baseline and changes in prevalence after 2 and 10 years of follow-up. Only patients evaluable at a given time point are included in the statistical calculations (baseline: surgery n=21, control n=14; 2 years: surgery n=21, control n=13; 10 years: surgery n=13, control n=8).

| Variable | Group | Baseline | Change at 2 years | Change at 10 years |
|---|---|---|---|---|
| Chest pain, % | Surgery | 38.1 | −18.1*** | −36.4 |
| | Control | 84.6 | −7.7 | −33.3 |
| Breathlessness, % | Surgery | 61.9 | −51.9** | −45.4 |
| | Control | 84.6 | 23.1 | 25 |
| Sleep apnea, % | Surgery | 52.4 | −42.1*** | −54.5 |
| | Control | 53.8 | 0 | −66.7 |
| Physical inactivity, % | Surgery | 47.6 | −12.6*** | −18.2** |
| | Control | 30.8 | 23.0 | 11.1 |

P values denote differences in treatment effect between the two groups from baseline to 2 and 10 years of follow-up. **P<.01, ***P<.001.

### 3.4. Health-Related Quality of Life

HRQOL was similar in both study groups at baseline, except for obesity-related problems, which were reported more often in the surgery group. After 2 years, the surgery group displayed reductions in obesity-related problems and improvements in social interaction and depression scores compared with controls. After 10 years, recovery from depression and obesity-related problems was still more frequent in the surgery group (Table 3).

Table 3. Health-related quality of life in the surgery and control groups at baseline and changes after 2 and 10 years of follow-up. Only patients evaluable at a given time point are included in the statistical calculations (baseline: surgery n=21, control n=14; 2 years: surgery n=21, control n=13; 10 years: surgery n=13, control n=8).
| Variable | Group | Baseline | Change at 2 years | Change at 10 years |
|---|---|---|---|---|
| Current health perceptions | Surgery | 41.3 ± 19.8 | 21.8 ± 30.3 | −0.8 ± 14.5 |
| | Control | 35.1 ± 24.2 | 6.6 ± 16.2 | 6.8 ± 29.3 |
| Social interaction | Surgery | 15.5 ± 15.6 | −9.9 ± 12.6* | −3.4 ± 8.9 |
| | Control | 15.8 ± 11.1 | 0.4 ± 9.9 | −9.7 ± 12.0 |
| Obesity-related problems scale | Surgery | 48.4 ± 31.3 | −35.1 ± 26.5*** | −31.0 ± 26.7** |
| | Control | 28.3 ± 27.7 | 8.7 ± 19.6 | −0.8 ± 14.6 |
| Overall mood | Surgery | 2.85 ± 0.52 | 0.23 ± 0.41 | 0.09 ± 0.34 |
| | Control | 2.79 ± 0.63 | −0.01 ± 0.55 | 0.18 ± 0.32 |
| Anxiety | Surgery | 6.5 ± 4.4 | −1.7 ± 3.5 | −1.2 ± 2.8 |
| | Control | 7.5 ± 4.8 | −1.0 ± 2.7 | −2.4 ± 4.2 |
| Depression | Surgery | 5.2 ± 2.8 | −1.9 ± 2.7* | −0.7 ± 2.6* |
| | Control | 5.1 ± 2.7 | 0.5 ± 2.2 | 0.1 ± 2.8 |

P values denote differences in treatment effect between the two groups from baseline to 2 and 10 years of follow-up. *P<.05, **P<.01, ***P<.001. Current health perceptions: scale 0–100; high scores represent well-being. Social interaction and obesity-related problems scales: 0–100; high scores indicate dysfunction. Overall mood: scale 1–4; high scores represent well-being. Anxiety and depression: scale 0–21; high scores represent symptoms.

### 3.5. Adverse Events

Among patients who underwent bariatric surgery there were no postoperative deaths. One patient bled 1300 mL during surgery; otherwise, no perioperative complications were reported. Frequent adverse events included nausea and/or abdominal pain, which led to unscheduled gastroscopy in 12 patients (57%). Serious adverse events requiring surgical or endoscopic treatment occurred in 3 patients (14%) and consisted of pouch stenosis (1), pouch dilatation (1), and incisional hernia (1). No significant differences were observed between the surgery and control groups with respect to cardiovascular event rates, including myocardial infarction (42.9% versus 38.5%), coronary revascularisation (47.6% versus 53.8%), and total cardiovascular events (61.9% versus 69.2%). Mean time to first event was 5.7 years in the surgery group and 5.5 years in the control group. During the follow-up period, 6 patients (29%) in the surgery group died, compared with 5 patients (38.5%) in the control group. The most common cause of death was cardiovascular (66.7% versus 80%).
## 4. Discussion

The effect of obesity and weight loss on secondary outcomes in ischemic heart disease remains unclear. Contrary to intuition, many studies report a protective effect of obesity on prognosis in IHD populations [13]. This "obesity paradox" has been described for acute coronary syndromes [17, 18], percutaneous coronary intervention [14, 16], and coronary artery bypass grafting [15]. In view of these findings, the current recommendations on weight control in patients with coronary heart disease have been questioned.

In small cohorts, we and others [2, 19] now provide data indicating that bariatric surgery can be a safe method to attain sustained weight loss also in obese subjects with established IHD. Among patients treated with surgery, there were no signs of increased short-term or long-term cardiovascular morbidity or mortality, and postoperative complications were comparable with those previously reported in obese patients free from cardiovascular disease.

The beneficial effects of bariatric surgery in the present study were in line with those previously observed [6, 8, 20, 21].
After two years, surgically induced weight loss was associated with favourable effects on multiple cardiovascular risk factors, including abdominal obesity, hypertension, diabetes, and dyslipidemia. Patients also experienced significant relief from symptoms of chest pain and breathlessness and a reduction in sleep-disordered breathing. Physical activity during leisure time increased, and several aspects of quality of life improved. After 10 years, recovery from hypertension, diabetes, physical inactivity, and depression was still more common in surgically treated patients.

In view of the findings of the present study, previous concerns about increased perioperative risk associated with bariatric surgery appear to be unwarranted. Further, the widespread effects of surgical obesity treatment on symptoms and risk factors make it an attractive option for secondary prevention in patients with ischemic heart disease. Still, risk factor improvements following bariatric surgery did not translate into reduced clinical endpoints when the surgery and control groups were compared. It is possible that the small study sample precluded the detection of an actual difference in event rates between the two study groups, and larger cohort studies are needed to elucidate the effect of bariatric surgery on clinical outcome.

The main limitation of the present study is its small sample size, which precludes firm conclusions with respect to cardiovascular outcome. Its strength, on the other hand, is the long-term follow-up of prospectively collected data, which makes it reasonable to conclude that the operative procedure is safe in patients with ischemic heart disease. Another weakness is the nonrandomized design of the study; despite this, the two groups were quite similar with respect to baseline demographics, so the conclusions regarding improvements in cardiovascular risk factors, symptoms, and quality of life following surgery seem valid.

In this study, most patients were treated with minimally invasive surgical techniques with a known low complication rate (gastric banding or vertical banded gastroplasty). In studies using gastric bypass as the surgical method, a higher peri- and postoperative complication rate could be expected.

## 5. Conclusion

Taken together, we have provided data that support the safety and feasibility of bariatric surgery in obese patients with IHD. This is encouraging for future controlled studies prospectively evaluating the long-term effects of bariatric surgery in this patient population. Future trials should aim to explore bariatric surgery in obese patients with IHD and metabolic complications.

---
*Source: 102341-2010-08-12.xml*
# Leveled Multi-Hop Multi-Identity Fully Homomorphic Encryption

**Authors:** Wen Liu; Fuqun Wang; Xiaodan Jin; Kefei Chen; Zhonghua Shen

**Journal:** Security and Communication Networks (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1023439

---

## Abstract

Gentry, Sahai, and Waters (CRYPTO 2013) proposed the notion of multi-identity fully homomorphic encryption (MIFHE), which allows homomorphic evaluation of data encrypted under multiple identities. Subsequently, Clear and McGoldrick (CANS 2014, CRYPTO 2015) proposed leveled MIFHE candidates. However, the proposed MIFHE schemes are either based on iO, which is a nonstandard assumption, or single hop; that is, an arbitrary "evaluated" ciphertext under a set of identities is difficult to evaluate further when new ciphertexts are encrypted under additional identities. To overcome these drawbacks, we propose a leveled multi-hop MIFHE scheme. In a multi-hop MIFHE scheme, one can evaluate a group of ciphertexts under a set of identities to obtain an "evaluated" ciphertext, which can be further evaluated with other ciphertexts encrypted under additional identities. We also show that the proposed MIFHE scheme is secure against selective-identity chosen-plaintext attacks (IND-sID-CPA) under the learning with errors (LWE) assumption.

---

## Body

## 1. Introduction

The idea of homomorphic encryption was first proposed by Rivest et al. [1] in 1978, but how to construct a scheme with homomorphic properties long remained a difficult problem for cryptographers. With the advent of the information age and the development of cloud computing, solving this problem became particularly urgent. It was not until 2009 that Gentry proposed the first fully homomorphic encryption (FHE) scheme, based on ideal lattices, allowing anyone without the secret key to compute any efficiently computable function over encrypted data [2]. Because FHE makes it possible to use cloud computing without compromising security, it quickly became a hot research topic [3–7].

All of the above FHE schemes are single-key homomorphic; that is, they are only suitable for homomorphic evaluation of ciphertexts encrypted under a single key. However, in many realistic scenarios, the ciphertexts to be evaluated are encrypted under multiple different keys. Therefore, at STOC 2012, López-Alt, Tromer, and Vaikuntanathan [8] proposed the first construction of multi-key fully homomorphic encryption (MKFHE), based on NTRU, which enables the evaluation of data encrypted under different keys. Subsequently, a large number of works improved MKFHE, including single-hop schemes [9, 10], multi-hop schemes with bootstrapping [11–16], and multi-hop schemes without bootstrapping [17].

Although (MK)FHE schemes have extensive applications, they require complex certificate management in practice. To simplify certificate management, Naccache [18] introduced the notion of identity-based fully homomorphic encryption (IBFHE), in which there is no user-specific key that the evaluator must use. In particular, in an IBFHE scheme, homomorphic operations on data encrypted under a single identity can be performed by any evaluator holding only the public parameters. In 2013, Gentry et al. [7] constructed an IBFHE scheme from GSW-FHE. That IBFHE scheme, however, only allows homomorphic evaluation of data encrypted under a single identity, not under multiple identities.
Clear and McGoldrick [19] gave an MIFHE scheme based on indistinguishability obfuscation (iO) [20] to overcome the single-identity restriction at CANS 2014. Then, at CRYPTO 2015, they [9] proposed a leveled MIFHE candidate under LWE in the random oracle model. However, the latter scheme needs to fix the number of users participating in a homomorphic evaluation in advance, and new users cannot be added during the computation; that is, it is single hop. In 2017, Canetti et al. [21] proposed two MIFHE schemes. The first combines MKFHE and identity-based encryption (IBE) on the fly; consequently, the ciphertext expansion depends on the number of ciphertexts, so the scheme is not compact. The second is nonleveled but uses iO. In 2020, with the help of an MKFHE scheme, Pal and Dutta [22] extended IBE to a CCA1-secure MIFHE scheme. However, their transformation uses witness pseudorandom functions (WPRFs), a nonstandard assumption. Recently, Shen et al. [23] proposed a compressible MIFHE scheme based on [9, 10, 24]. The scheme is selectively secure under the LWE assumption and achieves an optimal compression rate, but it is single hop.

Thus, it is interesting to construct a compact MIFHE scheme with multi-hop homomorphism under a standard assumption, where one can evaluate a group of ciphertexts under a set of identities to obtain an "evaluated" ciphertext that can be further evaluated with other ciphertexts encrypted under additional identities.

### 1.1. Contribution

We propose a leveled multi-hop MIFHE scheme adapted from GPV-FHE [25], following the construction method of PS-MKFHE, the leveled multi-hop MKFHE scheme built by Peikert and Shiehian [17]. We show that it is compact and secure against IND-sID-CPA attacks under the LWE assumption in the random oracle model. In our construction, we use a fully homomorphic commitment to commit to the plaintext bit, which supports the homomorphic operations. Additionally, by combining our scheme with the transformation from MIFHE to nonadaptive chosen-ciphertext attack (CCA1) secure FHE proposed by Canetti et al. [21], we can obtain a CCA1-secure FHE scheme with multi-hop homomorphism. Finally, we note that our construction can be applied in the ring setting, as in [9], for shorter parameters.

### 1.2. Technical Overview

It is well known that the efficient multi-hop MKFHE schemes obtained by bootstrapping are difficult to generalize to MIFHE, because the public homomorphic evaluation key cannot be extracted: it is a ciphertext of the identity secret key $\mathrm{SK}_{id}$, and $\mathrm{SK}_{id}$ is not even generated before decryption. Therefore, we focus on PS-MKFHE [17], which does not use bootstrapping. Our key observation is that we can construct a multi-hop MIFHE scheme following the ideas introduced by Peikert and Shiehian [17]. They built multi-hop MKFHE schemes to overcome the single-hop drawback of the MKFHE schemes of [9, 10]. In their first construction, a ciphertext consists of $(C, F, D)$, where $C$ is a GSW-FHE ciphertext [7] that encrypts a message $\mu$, $F$ is a fully homomorphic commitment [26] to the same message, and $D$ is a special encryption, under the same key as $C$, of the commitment randomness implied in $F$. Here, $(D, F)$ provides the power to expand $C$ to $C'$ with respect to additional keys (using a part of the public key) while preserving an invariant that can be used further. In total, after expanding or computing, the form $(C, F, D)$ remains.
This is the reason the multi-hop computation with respect to additional keys is supported. However, it is nontrivial to construct a multi-hop MIFHE scheme in this way because PS-MKFHE [17] is built from GSW-FHE [7] and not from GPV-FHE [26]. For simplicity, we now informally describe a multi-hop MKFHE scheme from GPV-FHE [26], which can be converted into a multi-hop MIFHE scheme (see Section 3).

In a multi-hop MKFHE scheme based on GPV-FHE, a ciphertext under a secret key $t \in \mathbb{Z}^{mk}$ consists of four components $(C, D, E, F)$:

(1) a GPV-FHE ciphertext $C \in \mathbb{Z}_q^{mk \times mk\ell}$ that encrypts $\mu$ under $t$;
(2) a GPV-FHE-style fully homomorphic commitment $D \in \mathbb{Z}_q^{m \times m\ell}$ to the same message $\mu$, with underlying commitment randomness $(R, X)$;
(3) a special encryption $E \in \mathbb{Z}_q^{nm\ell \times m\ell}$ under $t$ of the former part $R$ of the commitment randomness;
(4) another special encryption $F \in \mathbb{Z}_q^{m^2 \times m\ell}$ under $t$ of the latter part $X$ of the commitment randomness.

To expand $C$ to an additional secret key $\tilde{t} \in \mathbb{Z}_q^m$, we define

$$C' = \begin{bmatrix} C & U \\ 0 & D \end{bmatrix} \in \mathbb{Z}_q^{m(k+1) \times m(k+1)\ell}, \tag{1}$$

where $U$ is derived from $E$. The commitment $D$ is preserved, and $E, F$ are padded with zeros to fit the long secret key $(t, \tilde{t})$. Moreover, the homomorphic evaluation can be designed simply, as in GPV-FHE (see Section 3 for more details about our MIFHE scheme; a small numerical sketch of this block construction follows at the end of this section).

### 1.3. Paper Organization

First, we recall some notions, definitions, and facts in Section 2. Then, we propose our MIFHE scheme and show that it is IND-sID-CPA secure in Section 3. Finally, we conclude in Section 4.
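As referenced above, the following is a minimal numpy sketch of the block structure in equation (1), under toy parameters. The matrices `C`, `D`, and `U` are random placeholders with the stated dimensions, not actual GPV-FHE ciphertexts or commitments; only the dimension bookkeeping of the expansion step is illustrated.

```python
# Illustrative sketch of the ciphertext-expansion block structure in (1).
# C, D, U are random placeholders with the stated shapes (U plays the role
# of the matrix derived from E); no real encryption happens here.
import numpy as np

q = 8
ell = int(np.ceil(np.log2(q)))  # ℓ = ⌈log q⌉
m, k = 3, 2

rng = np.random.default_rng(1)
C = rng.integers(0, q, size=(m * k, m * k * ell))   # ciphertext under k keys
D = rng.integers(0, q, size=(m, m * ell))           # homomorphic commitment
U = rng.integers(0, q, size=(m * k, m * ell))       # derived from E

# C' = [[C, U], [0, D]] ∈ Z_q^{ m(k+1) x m(k+1)ℓ }
C_expanded = np.block([
    [C, U],
    [np.zeros((m, m * k * ell), dtype=int), D],
])
assert C_expanded.shape == (m * (k + 1), m * (k + 1) * ell)
```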
## 2. Preliminaries

Let us start with the notation that will be used throughout the study. We use bold uppercase letters (e.g., $A, B$) to represent matrices and bold lowercase letters (e.g., $a, b$) to represent column vectors. We use $a_i$ to denote the $i$th entry of $a$ and $a_{i,j}$ to denote the $(i,j)$ entry of $A$. We write $A \| B$ for the concatenation of two matrices and $(a, b)$ for the concatenation of two column vectors. Let $\lambda$ denote the security parameter. We define $[n] = \{1, 2, \ldots, n\}$ for any positive integer $n$. Let $\mathrm{negl}(\lambda)$ denote a negligible function, one that grows more slowly than $\lambda^{-c}$ for every constant $c > 0$ and all sufficiently large $\lambda$. An event occurs with overwhelming probability if it occurs with probability at least $1 - \mathrm{negl}(\lambda)$.

### 2.1. Basic Notions

#### 2.1.1. Approximations

Recently, Peikert and Shiehian [17] suggested a simple notation to indicate that the two sides of the "noisy equations" used extensively in lattice-based cryptography are approximately equal up to an additive error. We follow their notation, writing

$$x \approx y \quad (\text{error } E) \tag{2}$$

to indicate that $x = y + e$ for some $e \in [-E, E]$; the notation extends naturally to vectors and matrices using the infinity norm.

#### 2.1.2. Tensor Product

For matrices $A \in \mathbb{Z}^{m \times n}$ and $B \in \mathbb{Z}^{s \times t}$, the tensor product $A \otimes B$ is an $ms \times nt$ matrix consisting of $m \times n$ blocks, whose $(i,j)$ block is $a_{i,j} \cdot B$. In this work, we widely use the mixed-product property: for any matrices $A, B, C, D$ with compatible dimensions,

$$(A \otimes B) \cdot (C \otimes D) = (A \cdot C) \otimes (B \cdot D). \tag{3}$$
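As a quick sanity check, the mixed-product property (3) can be verified numerically. The sketch below uses small random integer matrices whose shapes are chosen only so that both products $A \cdot C$ and $B \cdot D$ are defined.

```python
# Verify the mixed-product property (3) on random integer matrices:
# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), provided AC and BD are defined.
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(0, 5, size=(2, 3))
C = rng.integers(0, 5, size=(3, 4))   # A @ C is defined
B = rng.integers(0, 5, size=(2, 2))
D = rng.integers(0, 5, size=(2, 3))   # B @ D is defined

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.array_equal(lhs, rhs)
```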
### 2.2. Background on Lattices

#### 2.2.1. Lattices

For a matrix A ∈ ℤq^(n×m), we define the q-ary integer lattice

(4) $\Lambda^{\perp}(A) = \{x \in \mathbb{Z}^m : Ax = 0 \bmod q\}$.

For a vector u ∈ ℤq^n, we define the coset (or "shifted" lattice)

(5) $\Lambda_u^{\perp}(A) = \{x \in \mathbb{Z}^m : Ax = u \bmod q\}$.

#### 2.2.2. LWE

The learning with errors (LWE) problem was first introduced by Regev [27] as an extension of "learning parity with noise." In this study, we use the decisional learning with errors (DLWE) problem, which is equivalent to LWE for certain parameters.

Definition 1 (Decisional Learning with Errors (DLWE)). For positive integers n, m, q and an error distribution χ over ℤ, the DLWE(n, m, q, χ) problem is to distinguish with nonnegligible advantage between (A, b = Aᵀ·t + e), where A ←$ ℤq^(n×m), t ←$ ℤq^n, e ← χ^m, and (A, b) sampled uniformly at random from ℤq^(n×m) × ℤq^m.

Let χ be the discrete Gaussian distribution over ℤ with parameter r = 2√n; this is a well-established instantiation of LWE. A sample drawn from this distribution has magnitude bounded by r√n = Θ(n) except with probability at most 2^(−n). For this parameterization and any m = poly(λ), LWE is, via a quantum reduction, at least as hard as certain worst-case problems (e.g., the shortest independent vectors problem) on n-dimensional lattices with approximation factor Õ(q√n) [27]. Classical reductions are also known for subexponential modulus q [28] and polynomial modulus q [29]. In this work, we rely on the tensor form of LWE, denoted TLWE, and the matrix form of LWE, denoted MLWE, defined as follows.

Definition 2 (The Matrix Form of Learning with Errors (MLWE)). For positive integers n, m, k, q and an error distribution χ over ℤ, the MLWE(n, m, k, q, χ) problem is to distinguish with nonnegligible advantage between (A, B = Aᵀ·T + E), where A ←$ ℤq^(n×m), T ←$ ℤq^(n×k), E ← χ^(m×k), and (A, B) sampled uniformly at random from ℤq^(n×m) × ℤq^(m×k).

Definition 3 (The Tensor Form of Learning with Errors (TLWE)). For positive integers n, m, k, t, q and an error distribution χ over ℤ, the TLWE(n, m, k, t, q, χ) problem is to distinguish with nonnegligible advantage between (A, B = (It ⊗ A)ᵀ·T + E), where A ←$ ℤq^(n×m), T ←$ ℤq^(nt×k), E ← χ^(mt×k), It denotes the t-dimensional identity matrix, and (A, B) sampled uniformly at random from ℤq^(n×m) × ℤq^(mt×k).

By a standard hybrid argument, MLWE is equivalent to DLWE with at most a factor-k loss in the distinguishing advantage, and TLWE is equivalent to MLWE with at most a factor-t² loss in the distinguishing advantage.
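As a concrete illustration of Definition 1, the following toy sketch generates one DLWE instance. The dimensions, modulus, and the rounded-Gaussian stand-in for χ are all illustrative assumptions and are far too small to be secure.

```python
# A toy DLWE instance (Definition 1): distinguish (A, b = A^T t + e mod q)
# from (A, u) with u uniform. All parameters here are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 64, 4099            # toy dimension, sample count, modulus
sigma = 3.2                       # toy error width standing in for chi

A = rng.integers(0, q, (n, m))                        # A <- Z_q^{n x m}
t = rng.integers(0, q, n)                             # secret t <- Z_q^n
e = np.rint(rng.normal(0.0, sigma, m)).astype(int)    # e <- chi^m
b = (A.T @ t + e) % q                                 # the LWE vector

u = rng.integers(0, q, m)         # the uniform alternative
# DLWE asks to tell (A, b) apart from (A, u) with nonnegligible advantage.
```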
#### 2.2.3. Lattice Trapdoor

We recall some facts about trapdoor generation and preimage sampling algorithms with important properties [30]. Since the implementation details are not strictly necessary for this work, we omit them. Note that there are improved algorithms, but only within an O(log λ) factor [31].

Lemma 1 (see [30]). Let n = poly(λ), let q ≥ 3 be odd, and let m = ⌈6n log q⌉. There is a probabilistic polynomial-time (PPT) algorithm TrapGen(n, m, q) that outputs a pair A ∈ ℤq^(n×m), TA ∈ ℤ^(m×m) such that A is statistically close to a uniform matrix in ℤq^(n×m) and TA is a basis for Λ^⊥(A) satisfying ‖TA‖ ≤ O(n log q) and ‖T̃A‖ ≤ O(√(n log q)) with overwhelming probability in λ.

Lemma 2 (see [30]). Let n = poly(λ) be a positive integer, let q be a prime, and let m ≥ 2n log q. Then, for all but a 2q^(−n) fraction of all A ∈ ℤq^(n×m) and for any r ≥ ω(√(log m)), the distribution of the syndrome u = As mod q is statistically close to uniform over ℤq^n, where s ← D(ℤ^m, r).

Lemma 3 (see [30]). Let (A ∈ ℤq^(n×m), TA ∈ ℤ^(m×m)) ← TrapGen(n, m, q) as in Lemma 1. Then, for a parameter r ≥ ‖T̃A‖·ω(√(log m)) and a uniformly random vector u ∈ ℤq^n, there is a PPT algorithm that outputs a vector s ∈ Λu^⊥(A) sampled from a distribution statistically close to D(Λu^⊥(A), r); thus, As = u whenever Λu^⊥(A) is not empty.

#### 2.2.4. Gadget Matrices and Bit Decomposition

We recall the useful notion of a gadget matrix, first introduced in [31], used to decompose vectors or matrices over ℤq into short vectors or matrices over ℤ. For an integer q, let ℓ = ⌈log q⌉. The gadget vector gᵀ = (1, 2, 2², …, 2^(ℓ−1)) and the bit decomposition function g^(−1): ℤq → {0,1}^ℓ are defined so that g^(−1) outputs the binary column vector consisting of the binary representation of its argument; hence gᵀ·g^(−1)(a) = a for any a ∈ ℤq. More generally, for any positive integer k, we define Gk = Ik ⊗ gᵀ ∈ ℤq^(k×kℓ), where Ik denotes the k-dimensional identity matrix. For any t, the general bit decomposition function G^(−1): ℤq^(k×t) → {0,1}^(kℓ×t) outputs a binary kℓ × t matrix (by invoking g^(−1) entrywise), so that G·G^(−1)(A) = A for every A ∈ ℤq^(k×t). For brevity, we often write G^(−T)(A) = G^(−1)(Aᵀ).
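The gadget machinery is used throughout Section 3, so a small executable sketch of gᵀ, g^(−1), Gk, and G^(−1) may be useful. The helper names (g_inv, G, G_inv) and the toy modulus are our own assumptions for exposition.

```python
# A sketch of the gadget machinery of Section 2.2.4 with a toy modulus.
import numpy as np

q = 97
ell = int(np.ceil(np.log2(q)))             # l = ceil(log q) = 7
g = np.array([2**i for i in range(ell)])   # g^T = (1, 2, 4, ..., 2^{l-1})

def g_inv(a):
    """Binary column of length l with g^T . g_inv(a) = a."""
    return np.array([(int(a) >> i) & 1 for i in range(ell)])

def G(k):
    """G_k = I_k tensor g^T, a k x k*l gadget matrix."""
    return np.kron(np.eye(k, dtype=int), g)

def G_inv(A):
    """Binary (k*l) x t matrix with G_k @ G_inv(A) = A (mod q)."""
    k, t = A.shape
    return np.vstack([
        np.column_stack([g_inv(A[i, j]) for j in range(t)])
        for i in range(k)
    ])

A = np.random.default_rng(2).integers(0, q, (3, 4))
assert all(int(g @ g_inv(a)) == a for a in range(q))   # g^T . g^{-1}(a) = a
assert np.array_equal(G(3) @ G_inv(A) % q, A)          # G . G^{-1}(A) = A
```

The final assertion is exactly the identity G·G^(−1)(A) = A that is used to control error growth in MIFHE.NAND below.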
### 2.3. Multi-Identity Fully Homomorphic Encryption

We begin with the definition of leveled multi-hop MIFHE, which is adapted and summarized from the definitions of single-hop MIFHE in [9], single-hop MKFHE in [10], and multi-hop MKFHE in [17]. Here, we require a bound d on the NAND circuit depth and a bound L on the number of identities in one evaluation, and we focus on bit encryption, as in [17]. A ciphertext is called a "fresh" ciphertext if it is generated by the encryption algorithm Enc defined below (i.e., it corresponds to a single identity), an "expanded" ciphertext if it is the output of the expansion algorithm Expand (which relates it to multiple identities), and an "evaluated" ciphertext if it is the output of the homomorphic evaluation algorithm Eval.

Definition 4. A leveled multi-hop multi-identity fully homomorphic encryption scheme consists of six PPT algorithms (Setup, Extract, Enc, Expand, Eval, Dec), defined as follows (an interface-level sketch in code follows at the end of Section 2.3):
(i) Setup(1^λ, 1^d, 1^L): on input a security parameter λ, a bound d on the NAND circuit depth, and a bound L on the number of identities involved in one evaluation, generate a master public key MPK and a master secret key MSK, and output (MPK, MSK). The security parameter λ also defines an identity space ℐ.
(ii) Extract(MPK, MSK, id): on input MPK, MSK, and an identity id ∈ ℐ, extract a user-specific secret key SKid and output it.
(iii) Enc(MPK, id, μ ∈ {0,1}): on input MPK, an identity id ∈ ℐ, and a bit μ ∈ {0,1}, output a "fresh" ciphertext (id; c).
(iv) Expand(MPK, idk+1, (id1, id2, …, idk; c)): on input MPK, an identity idk+1, and any ("fresh," "expanded," or "evaluated") ciphertext (id1, id2, …, idk; c) under k identities id1, id2, …, idk, compute and output an "expanded" ciphertext (id1, id2, …, idk, idk+1; c′) under the k + 1 identities id1, id2, …, idk, idk+1.
(v) Eval(MPK, f, c1, c2, …, cN): on input MPK, a NAND circuit f, and N ciphertexts c1, c2, …, cN, output an "evaluated" ciphertext cf.
(vi) Dec(SKid1, SKid2, …, SKidk, (id1, id2, …, idk; c)): on input k secret keys SKid1, SKid2, …, SKidk corresponding to the identities id1, id2, …, idk and any ciphertext (id1, id2, …, idk; c), output a bit μ.

We emphasize that we homomorphically evaluate any NAND circuit gate by gate, as described in [17], which makes the evaluation multi-hop, as in previous multi-key FHE schemes [8, 17].

#### 2.3.1. Correctness

A leveled multi-hop MIFHE scheme is correct if the following holds. For all positive integers λ, d, L, for every NAND circuit f of depth at most d with N input wires, for every function π: [N] → [L] (which assigns each input wire to a key pair), and for every μ ∈ {0,1}^N, the following experiment succeeds with overwhelming probability: generate (MPK, MSK) ← Setup(1^λ, 1^d, 1^L); generate identity key pairs (PKid_π(i), SKid_π(i)) ← Extract(MPK, MSK, id_π(i)) for every i ∈ [N]; generate ciphertexts ci ← Enc(MPK, id_π(i), μi ∈ {0,1}) for every i ∈ [N]; compute cf ← Eval(MPK, f, c1, c2, …, cN) (which may invoke the algorithm Expand); and finally check whether Dec({SKid_π(i)}_(i∈[N]), cf) = f(μ1, μ2, …, μN).

#### 2.3.2. Compactness

A leveled multi-hop MIFHE scheme is compact if there exists a polynomial p(·,·,·) such that |cf| ≤ p(λ, d, L) in the experiment of Definition 4. In other words, the length of cf is independent of both f and N but can depend polynomially on λ, d, and L.

#### 2.3.3. Security

The security game of MIFHE is the same as that of IBE; there is no reference to the expansion and evaluation algorithms because they are public and do not affect security. In this study, we focus on the game of semantic security under selective-identity, chosen-plaintext attacks (IND-sID-CPA) for MIFHE, played between a challenger C and a PPT attacker A, defined as follows:
(i) Initial Stage. Attacker A is given the bound d on the NAND circuit depth and the bound L on the number of identities, and outputs a target identity id∗.
(ii) Setup. Challenger C runs Setup(1^λ, 1^d, 1^L) to generate (MPK, MSK) and sends MPK to attacker A.
(iii) Query Stage 1. Adversary A adaptively issues queries on any identity id such that id ≠ id∗. Challenger C runs Extract(MPK, MSK, id) to obtain the identity secret key skid corresponding to id and sends skid back to A.
(iv) Challenge. Challenger C selects a uniformly random bit μ∗ ←$ {0,1}, computes a challenge ciphertext c∗ ← Enc(MPK, id∗, μ∗), and sends it to attacker A.
(v) Query Stage 2. Adversary A issues additional adaptive identity secret key queries, and challenger C responds as in Query Stage 1.
(vi) Output. The attacker outputs a guess μ′ ∈ {0,1} and wins if μ′ = μ∗.

The advantage of the attacker in the above IND-sID-CPA security game is defined as |Pr[μ′ = μ∗] − 1/2|, where the probability is taken over the random bits used by all algorithms in the game.

Definition 5. A leveled multi-identity fully homomorphic encryption scheme is IND-sID-CPA secure if any PPT attacker has at most a negligible advantage in the IND-sID-CPA security game defined above.
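For readers who prefer code, here is the interface-level skeleton of the six algorithms of Definition 4 referenced above. Every name, type, and signature is an expository assumption, not the concrete scheme, which follows in Section 3.

```python
# An illustrative interface skeleton for Definition 4 (assumed names only).
from typing import Any, List, Tuple

class LeveledMultiHopMIFHE:
    def setup(self, lam: int, d: int, L: int) -> Tuple[Any, Any]:
        """Return (MPK, MSK); d bounds NAND depth, L bounds identities."""
        raise NotImplementedError

    def extract(self, mpk: Any, msk: Any, identity: str) -> Any:
        """Derive and return the identity secret key SK_id."""
        raise NotImplementedError

    def enc(self, mpk: Any, identity: str, mu: int) -> Any:
        """Encrypt a bit mu; return a 'fresh' ciphertext (id; c)."""
        raise NotImplementedError

    def expand(self, mpk: Any, new_identity: str, ct: Any) -> Any:
        """Turn a ciphertext under k identities into one under k + 1."""
        raise NotImplementedError

    def eval(self, mpk: Any, f: Any, cts: List[Any]) -> Any:
        """Evaluate the NAND circuit f gate by gate over cts."""
        raise NotImplementedError

    def dec(self, sks: List[Any], ct: Any) -> int:
        """Decrypt using the secret keys of all identities under ct."""
        raise NotImplementedError
```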
## 3. Multi-Identity Fully Homomorphic Encryption

### 3.1. MIFHE Scheme

In this section, we describe the proposed MIFHE scheme. We present one additional algorithm, MIFHE.NAND, to help explain MIFHE.Eval. We parameterize the system by the dimension n = poly(λ), the modulus q, and the error distribution χ of the underlying LWE problem; we set m = ⌈6n log q⌉, r = O(√(n log q))·ω(√(log m)), and B = Θ(r√n). For worst-case security, we set χ to be the standard discrete Gaussian distribution over ℤ with parameter 2√n, which implies that samples drawn from χ have magnitude bounded by E = Θ(n) except with probability 2^(−n). The modulus q is set in Section 3.2 based on the bound on the maximum depth of the supported circuits and the bound on the number of identities.
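Under this parameterization, a small sketch shows how the derived quantities relate; the concrete n and q below are placeholders, since the true q is fixed only by the noise analysis of Section 3.2.

```python
# A sketch of the derived parameters of Section 3.1 (toy values assumed).
import math

n = 256                                  # lattice dimension (illustrative)
q = 2**40                                # provisional modulus (illustrative)
ell = math.ceil(math.log2(q))            # l = ceil(log q) = 40
m = math.ceil(6 * n * math.log2(q))      # m = ceil(6 n log q) = 61440
print(f"l = {ell}, m = {m}")             # a fresh C lives in Z_q^(m x m*l)
```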
The scheme is described as follows:

(i) MIFHE.Setup(1^λ, 1^d, 1^L): on input the security parameter λ, the bound d on the NAND circuit depth, and the bound L on the number of identities in one evaluation, do:
(1) Run algorithm TrapGen(n, m − 1, q) to generate a uniformly random matrix A1 ∈ ℤq^(n×(m−1)) with a short basis TA1 ∈ ℤ^((m−1)×(m−1)) for Λ^⊥(A1) such that ‖T̃A1‖ ≤ O(√(n log q)).
(2) Choose a vector a ←$ ℤq^n and set A = A1‖a ∈ ℤq^(n×m).
(3) Output MPK = A = A1‖a as the master public key and MSK = TA1 as the master secret key.

(ii) MIFHE.Extract(MPK, MSK, id): on input MPK, MSK, and an identity id ∈ ℐ, do:
(1) If (id, u, SKid) ∈ storage from a previous query on identity id, return SKid. Otherwise, compute u = H(id) ∈ ℤq^n, where H is a hash function modeled as a random oracle.
(2) Run SamplePre(A1, TA1, u − a, r) to output a vector s ∈ ℤ^(m−1) such that A1s = u − a. Set the user-specific secret key SKid = t = (s, 1), and store (id, u, t) locally. Note that At = u and ‖t‖∞ ≤ B.
(3) Output SKid = t.

(iii) MIFHE.Enc(MPK, id, μ ∈ {0,1}): on input the master public key MPK, an identity id ∈ ℐ, and a message bit μ ∈ {0,1}, do:
(1) Set u = H(id) ∈ ℤq^n and compute B = A − enᵀ ⊗ u ∈ ℤq^(n×m), where en is the nth standard unit (column) vector. (Remark: observe that Bt = 0.)
(2) Choose a uniformly random matrix Q ←$ ℤq^(n×mℓ) and a discrete Gaussian matrix W ← χ^(m×mℓ), and define

(6) C = BᵀQ + W + μGm ∈ ℤq^(m×mℓ).

Note that C is precisely a GPV-FHE ciphertext [26] encrypting μ under the secret key t. In particular,

(7) tᵀC = tᵀW + μtᵀGm ≈ μtᵀGm (error βC).

(3) Choose a matrix R ←$ ℤq^(n×mℓ) and a discrete Gaussian matrix X ← χ^(m×mℓ), and define

(8) D = AᵀR + X + μGm ∈ ℤq^(m×mℓ).

Here, D is regarded as a commitment to the message μ under the commitment randomness (R, X).
(4) Choose a matrix S ←$ ℤq^(n²ℓ×mℓ) and a discrete Gaussian matrix Y ← χ^(nmℓ×mℓ), and define

(9) E = (Inℓ ⊗ Bᵀ)S + Y + R ⊗ g ⊗ em ∈ ℤq^(nmℓ×mℓ).

Note that

(10) (Inℓ ⊗ tᵀ)·E = (Inℓ ⊗ tᵀ)·Y + R ⊗ g ≈ R ⊗ g (error βE).

Therefore, E can be regarded as a kind of encryption of R, the former part of the commitment randomness used in D (the tensor product with g, which corresponds to the bit decomposition appearing in the expansion algorithm, is vital to control the error growth).
(5) Choose a uniformly random matrix T ←$ ℤq^(nm×mℓ) and a discrete Gaussian matrix Z ← χ^(m²×mℓ), and define

(11) F = (Im ⊗ Bᵀ)T + Z + X ⊗ em ∈ ℤq^(m²×mℓ).

Note that

(12) (Im ⊗ tᵀ)·F = (Im ⊗ tᵀ)·Z + X ≈ X (error βF).

Therefore, F can be regarded as a kind of encryption of X, the latter part of the commitment randomness used in D.
(6) Output the "fresh" ciphertext (id; C, D, E, F) for identity id.

(iv) MIFHE.Expand(MPK, idk+1, (id1, id2, …, idk; C, D, E, F)): on input MPK, an identity idk+1, and a ciphertext (id1, id2, …, idk; C ∈ ℤq^(mk×mkℓ), D ∈ ℤq^(m×mℓ), E ∈ ℤq^(nmkℓ×mℓ), F ∈ ℤq^(m²k×mℓ)) encrypting μ under the identities id1, id2, …, idk, do:
(1) Set uk+1 = H(idk+1).
(2) Define

(13) $C' = \begin{pmatrix} C & U \\ 0 & D \end{pmatrix} \in \mathbb{Z}_q^{m(k+1) \times m(k+1)\ell}$,

where

(14) U = (Gn^(−T)(−uk+1) ⊗ Imk)·E ∈ ℤq^(mk×mℓ).

(3) Leave the commitment and its randomness unchanged: D′ = D, R′ = R, and X′ = X.
(4) Define

(15) E′ = (Inℓ ⊗ [Imk; 0m×mk])·E ∈ ℤq^(nm(k+1)ℓ×mℓ),

where [Imk; 0m×mk] denotes Imk stacked on top of the zero matrix 0m×mk; that is, E is padded with zeros to fit the longer secret key.
(5) Similarly, define

(16) F′ = (Im ⊗ [Imk; 0m×mk])·F ∈ ℤq^(m²(k+1)×mℓ).

(6) Output (id1, id2, …, idk+1; C′, D′, E′, F′) as the "expanded" ciphertext under the identities id1, id2, …, idk+1.

(v) MIFHE.NAND((id1, …, idk; C1, D1, E1, F1), (id1, …, idk; C2, D2, E2, F2)): on input two ciphertexts that encrypt μ1, μ2 under the same identities id1, id2, …, idk, do:
(1) Define

(17) CNAND = Gmk − C1·Gmk^(−1)(C2) ∈ ℤq^(mk×mkℓ).

(2) Define

(18) DNAND = Gm − D1·Gm^(−1)(D2) ∈ ℤq^(m×mℓ).

(3) Define

(19) ENAND = −E1·Gm^(−1)(D2) + (Inℓ ⊗ C1)·Gnmkℓ^(−1)(E2) ∈ ℤq^(nmkℓ×mℓ).

(4) Define

(20) FNAND = −F1·Gm^(−1)(D2) + (Im ⊗ C1)·Gm²k^(−1)(F2) ∈ ℤq^(m²k×mℓ).

(5) Finally, output (id1, id2, …, idk; CNAND, DNAND, ENAND, FNAND) as the "evaluated" NAND ciphertext.

(vi) MIFHE.Eval(MPK, f, c1, c2, …, cN): on input MPK, a NAND circuit f: {0,1}^N → {0,1}, and any N ciphertexts c1, c2, …, cN, homomorphically compute f over c1, c2, …, cN gate by gate by invoking MIFHE.Expand and MIFHE.NAND, and output an "evaluated" ciphertext cf.

(vii) MIFHE.Dec(SKid1, …, SKidk, (id1, …, idk; C, D, E, F)): on input the secret keys SKid1, …, SKidk and a ciphertext (id1, …, idk; C ∈ ℤq^(mk×mkℓ), D ∈ ℤq^(m×mℓ), E ∈ ℤq^(nmkℓ×mℓ), F ∈ ℤq^(m²k×mℓ)) under the identities id1, …, idk, let t ∈ ℤ^(mk) be the (column) concatenation of the secret keys SKid1, …, SKidk, and compute

(21) tᵀC ≈ μtᵀGmk (error βC).

If βC < q/4, where q is set in the next section, we can recover μ from the last entry of the vector μtᵀGmk: if this entry is closer to 0, output 0; otherwise, output 1.
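To see the C-component of MIFHE.NAND (equation (17)) in action, the following toy sketch evaluates it on noiseless stand-ins for ciphertexts, reusing q, ell, G, and G_inv from the gadget sketch in Section 2.2.4 (an assumption of this sketch). Real ciphertexts carry noise, which this sketch deliberately omits.

```python
# Toy check of C_NAND = G - C1 · G^{-1}(C2) (equation (17)) on noiseless
# "ciphertexts" C_i = mu_i * G (mod q). With noise removed, the output is
# exactly (1 - mu1*mu2) * G, i.e., an encoding of NAND(mu1, mu2).
import numpy as np

mk = 3                                 # stands in for m*k (illustrative)
for mu1 in (0, 1):
    for mu2 in (0, 1):
        C1 = (mu1 * G(mk)) % q         # noiseless stand-in ciphertexts
        C2 = (mu2 * G(mk)) % q
        C_nand = (G(mk) - C1 @ G_inv(C2)) % q
        expected = ((1 - mu1 * mu2) * G(mk)) % q
        assert np.array_equal(C_nand, expected)
```

The same cancellation, C1·G^(−1)(C2) = μ1·C2 up to noise, is what drives the error analysis in Section 3.2.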
### 3.2. Analyzing the Noise Growth and Setting the Parameters

Now, we justify the definitions above and analyze the noise growth in MIFHE.Expand and MIFHE.NAND in order to set the parameters; we then instantiate the parameters and ensure the correctness of MIFHE. First, for the MIFHE.Expand algorithm described in the previous section, we carry out the following analysis (a shape-level sketch follows this list):

(1) We have

(22) $C' = \begin{pmatrix} C & U \\ 0 & D \end{pmatrix} \in \mathbb{Z}_q^{m(k+1) \times m(k+1)\ell}$.

Given t ∈ ℤ^(mk), the (column) concatenation of the secret keys corresponding to the identities id1, …, idk, and a new secret key tk+1 for idk+1, we set t′ = (t, tk+1) and then have

(23) t′ᵀ·C′ = (tᵀ·C, tᵀ·U + tk+1ᵀ·D)
≈ (μtᵀ·Gmk, tᵀ·U + tk+1ᵀ·D) (error βC)
= (μtᵀ·Gmk, (Gn^(−T)(−uk+1) ⊗ tᵀ)·E + tk+1ᵀ·D)
= (μtᵀ·Gmk, Gn^(−T)(−uk+1)·(Inℓ ⊗ tᵀ)·E + tk+1ᵀ·D)
= (μtᵀ·Gmk, Gn^(−T)(−uk+1)·((Inℓ ⊗ tᵀ)·Y + Gnᵀ·R) + tk+1ᵀ·D)
≈ (μtᵀ·Gmk, Gn^(−T)(−uk+1)·Gnᵀ·R + tk+1ᵀ·D) (error nℓβE)
= (μtᵀ·Gmk, −uk+1ᵀ·R + tk+1ᵀ·(AᵀR + X + μGm))
= (μtᵀ·Gmk, −uk+1ᵀ·R + uk+1ᵀ·R + tk+1ᵀ·X + μtk+1ᵀ·Gm)
= (μtᵀ·Gmk, tk+1ᵀ·X + μtk+1ᵀ·Gm)
≈ (μtᵀ·Gmk, μtk+1ᵀ·Gm) (error mBE)
= μt′ᵀ·Gm(k+1),

which shows that (7) is preserved. In total, the error in the "expanded" ciphertext C′ satisfies

(24) βC′ = βC + nℓβE + mBE.

(2) D′ = D visibly satisfies equation (8), and the error in the "expanded" ciphertext D′ is

(25) βD′ = βD.

(3) We have

(26) E′ = (Inℓ ⊗ [Imk; 0m×mk])·E ∈ ℤq^(nm(k+1)ℓ×mℓ).

It is immediate that

(27) (Inℓ ⊗ t′ᵀ)·E′ = (Inℓ ⊗ tᵀ)·E ≈ R ⊗ g = R′ ⊗ g (error βE).

Thus, (10) is preserved under expansion.

(4) We have

(28) F′ = (Im ⊗ [Imk; 0m×mk])·F ∈ ℤq^(m²(k+1)×mℓ).

As in the previous step, (12) is also preserved under expansion:

(29) (Im ⊗ t′ᵀ)·F′ = (Im ⊗ tᵀ)·F ≈ X = X′ (error βF).
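At the level of shapes, the expansion step (13)/(22) just assembles a block upper-triangular matrix; the sketch below (all dimensions are toy placeholders, and the zero blocks stand in for actual ciphertext data) shows the bookkeeping behind the dimension claims above.

```python
# Shape-level sketch of expansion (13)/(22): C' = [[C, U], [0, D]].
# Sizes are toy placeholders; U would be (G_n^{-T}(-u_{k+1}) ⊗ I_mk) · E.
import numpy as np

m, k, ell = 4, 2, 5                        # illustrative dimensions only
C = np.zeros((m * k, m * k * ell), int)    # ciphertext under k identities
U = np.zeros((m * k, m * ell), int)        # bridge block derived from E
D = np.zeros((m, m * ell), int)            # the (unchanged) commitment

C_exp = np.block([
    [C, U],
    [np.zeros((m, m * k * ell), int), D],
])
assert C_exp.shape == (m * (k + 1), m * (k + 1) * ell)
```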
Second, we analyze the MIFHE.NAND algorithm.

(1) We have

(30) CNAND = Gmk − C1·Gmk^(−1)(C2) ∈ ℤq^(mk×mkℓ).

With t ∈ ℤ^(mk), the (column) concatenation of the secret keys corresponding to the identities id1, …, idk, we have

(31) tᵀ·CNAND = tᵀ·Gmk − tᵀ·C1·Gmk^(−1)(C2)
≈ tᵀ·Gmk − μ1tᵀ·Gmk·Gmk^(−1)(C2) (error mkℓ·βC1)
= tᵀ·Gmk − μ1tᵀ·C2
≈ tᵀ·Gmk − μ1μ2tᵀ·Gmk (error βC2)
= (1 − μ1μ2)tᵀ·Gmk,

which shows that (7) is preserved. In total, the error in the NAND ciphertext CNAND satisfies

(32) βCNAND = mkℓ·βC1 + βC2.

(2) We have

(33) DNAND = Gm − D1·Gm^(−1)(D2) ∈ ℤq^(m×mℓ),

and the commitment randomness is

(34) RNAND = −R1·Gm^(−1)(D2) − μ1R2, XNAND = −X1·Gm^(−1)(D2) − μ1X2.

By direct computation, we see that (8) is preserved:

(35) DNAND = Gm − D1·Gm^(−1)(D2)
= Gm − (AᵀR1 + X1 + μ1Gm)·Gm^(−1)(D2)
= Gm − (AᵀR1 + X1)·Gm^(−1)(D2) − μ1D2
= Gm − (AᵀR1 + X1)·Gm^(−1)(D2) − μ1(AᵀR2 + X2 + μ2Gm)
= Aᵀ·(−R1·Gm^(−1)(D2) − μ1R2) + (−X1·Gm^(−1)(D2) − μ1X2) + (1 − μ1μ2)Gm
= Aᵀ·RNAND + XNAND + (1 − μ1μ2)Gm.

(3) We have

(36) ENAND = −E1·Gm^(−1)(D2) + (Inℓ ⊗ C1)·Gnmkℓ^(−1)(E2) ∈ ℤq^(nmkℓ×mℓ).

To see that (10) holds for ENAND, first note that

(37) (Inℓ ⊗ tᵀ)·E1·Gm^(−1)(D2) ≈ (R1 ⊗ g)·Gm^(−1)(D2) (error mℓ·βE1) = (R1·Gm^(−1)(D2)) ⊗ g.

Second, note that

(38) (Inℓ ⊗ tᵀ)·(Inℓ ⊗ C1)·Gnmkℓ^(−1)(E2) = (Inℓ ⊗ tᵀ·C1)·Gnmkℓ^(−1)(E2)
≈ (Inℓ ⊗ μ1tᵀ·Gmk)·Gnmkℓ^(−1)(E2) (error nmkℓ²·βC1)
= μ1(Inℓ ⊗ tᵀ ⊗ gᵀ)·Gnmkℓ^(−1)(E2)
= μ1(Inℓ ⊗ tᵀ)·(Inmkℓ ⊗ gᵀ)·Gnmkℓ^(−1)(E2)
= μ1(Inℓ ⊗ tᵀ)·E2 ≈ μ1R2 ⊗ g (error βE2).

Finally, combining the above,

(39) (Inℓ ⊗ tᵀ)·ENAND ≈ (−R1·Gm^(−1)(D2) − μ1R2) ⊗ g = RNAND ⊗ g (error nmkℓ²·βC1 + mℓ·βE1 + βE2),

which shows that (10) holds.

(4) We have

(40) FNAND = −F1·Gm^(−1)(D2) + (Im ⊗ C1)·Gm²k^(−1)(F2) ∈ ℤq^(m²k×mℓ).

To see that (12) holds for FNAND, first note that

(41) (Im ⊗ tᵀ)·F1·Gm^(−1)(D2) ≈ X1·Gm^(−1)(D2) (error mℓ·βF1).

Second, note that

(42) (Im ⊗ tᵀ)·(Im ⊗ C1)·Gm²k^(−1)(F2) = (Im ⊗ tᵀ·C1)·Gm²k^(−1)(F2)
≈ (Im ⊗ μ1tᵀ·Gmk)·Gm²k^(−1)(F2) (error m²kℓ·βC1)
= μ1(Im ⊗ tᵀ ⊗ gᵀ)·Gm²k^(−1)(F2)
= μ1(Im ⊗ tᵀ)·(Im²k ⊗ gᵀ)·Gm²k^(−1)(F2)
= μ1(Im ⊗ tᵀ)·F2 ≈ μ1X2 (error βF2).

Finally, combining the above,

(43) (Im ⊗ tᵀ)·FNAND ≈ −X1·Gm^(−1)(D2) − μ1X2 = XNAND (error m²kℓ·βC1 + mℓ·βF1 + βF2),

which shows that (12) holds.

Then, as in [17], we instantiate the parameters by bounding the worst-case error growth when homomorphically computing a depth-d NAND circuit over up to L distinct identities. For a ciphertext (id1, …, idk; C, D, E, F) with commitment randomness (R, X), define the max error

(44) β = max(βC, βE, βF, BE).

With the bounds above, for any ciphertext with errors bounded by β, its "expanded" ciphertext has max error at most (nℓ + m + 1)·β = poly(n, ℓ)·β. Similarly, when we homomorphically compute a NAND gate on two ciphertexts with errors bounded by β, the result has max error at most (m²Lℓ + mℓ + 1)·β = poly(n, ℓ, L)·β. Thus, after evaluating any depth-d NAND circuit on "fresh" ciphertexts under L distinct keys, the result has max error at most

(45) poly(n, ℓ, L)^(d+L).

Thus, we can set q = 4·poly(n, ℓ, L)^(d+L) for correctness of decryption. Recall that ℓ = Θ(log q) = Õ(d + L); therefore, poly(n, ℓ, L)^(d+L) = poly(n, d, L)^(d+L). Thus, the security of our scheme corresponds to a worst-case n-dimensional lattice problem with approximation factor poly(n, d, L)^(d+L). Finally, the compactness requirement is satisfied because any ciphertext in our construction has size bounded by poly(n, d, L) = poly(λ, d, L).
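The recurrences above can be turned into a quick back-of-the-envelope estimate of the modulus size. The sketch below hard-codes the displayed worst-case growth factors and uses toy values for n, d, and L (and a provisional ℓ, which in the paper must be chosen consistently with q); all concrete numbers are assumptions for illustration only.

```python
# Rough modulus sizing from the worst-case growth factors of Section 3.2:
# Expand multiplies the max error by at most (n*l + m + 1), and a NAND gate
# by at most (m^2*L*l + m*l + 1); decryption needs q > 4 * final_error.
import math

n, d, L = 128, 8, 4              # toy dimension, circuit depth, identities
ell = 32                         # provisional l = ceil(log q)
m = 6 * n * ell                  # m = 6 n log q (up to rounding)

expand = n * ell + m + 1                 # per-Expand growth factor
nand = m * m * L * ell + m * ell + 1     # per-NAND growth factor

beta = 1.0                       # normalized fresh max error
beta *= float(nand) ** d         # depth-d worst-case NAND growth
beta *= float(expand) ** L       # up to L expansions
print(f"q needs roughly {math.log2(4 * beta):.0f} bits")
```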
### 3.3. Security

Now, we prove that the proposed scheme MIFHE is IND-sID-CPA secure under the hardness of the DLWE assumption in the random oracle model.

Theorem 1. The multi-hop multi-identity fully homomorphic encryption scheme MIFHE constructed in Section 3.1 is IND-sID-CPA secure in the random oracle model, assuming that the DLWE(n, q, χ) assumption holds.

Proof. We prove the security of the proposed scheme MIFHE using a sequence of hybrid games. The first game is the real IND-sID-CPA security game of Definition 5, and the last is an ideal game in which the challenge ciphertext (apart from the challenge identity id∗) is uniformly random and independent of the challenge bit μ∗. We proceed through the following hybrid games.

Game 0: this is the original IND-sID-CPA security game described in Definition 5. Recall that id∗ ∈ ℐ is the target identity, that is, the identity that attacker A plans to attack, and that the challenge ciphertext is (id∗; C∗, D∗, E∗, F∗) encrypting μ∗.

Game 1: in this game, we change the way the master public key A is generated and the way hash (random oracle) queries and identity secret key queries are answered, as follows.
(1) Uniformly select at random a matrix A1 ←$ ℤq^(n×(m−1)) and a vector a ←$ ℤq^n, and set A = A1‖a ∈ ℤq^(n×m).
(2) Uniformly select at random a vector uid∗ ←$ ℤq^n.
(3) When attacker A issues a hash query H on an identity id ∈ ℐ, do:
(a) If id = id∗, return uid∗.
(b) Otherwise, if (id, uid, tid) ∈ store, return uid.
(c) Otherwise, sample a vector sid ← D(ℤ^(m−1), r), compute uid = A1sid + a, set tid = (sid, 1), and store (id, uid, tid) locally. Finally, return uid.
(4) When attacker A issues an identity secret key query on an identity id ∈ ℐ with id ≠ id∗, we assume without loss of generality that A has already queried H on id, and we return tid, where (id, uid, tid) ∈ store.

Game 2: this game is the same as Game 1 except that (C∗, E∗, F∗), part of the challenge ciphertext, is selected as a tuple of uniformly random independent elements of ℤq^(m×mℓ) × ℤq^(nmℓ×mℓ) × ℤq^(m²×mℓ).

Game 3: this game is the same as Game 2 except that D∗ is selected as a uniformly random element of ℤq^(m×mℓ). Game 3 is the ideal game. We now show the indistinguishability of consecutive hybrid games.

Lemma 4. Game 0 and Game 1 are statistically indistinguishable.

Proof. We show that Game 0 is statistically close to Game 1 by verifying, step by step using Lemmas 1–3, that the changes are undetectable by any attacker. First, note that while the former part A1 of the master public key is generated by running algorithm TrapGen(n, m − 1, q) (with a trapdoor TA1) in Game 0, A1 is sampled from the uniform distribution over ℤq^(n×(m−1)) in Game 1. By Lemma 1, A1 in Game 0 is distributed statistically close to uniform over ℤq^(n×(m−1)), as in Game 1. Second, regarding the simulation of hash queries H, we distinguish two cases:
(1) If id = id∗, return uid∗, which was sampled uniformly at random from ℤq^n. This perfectly simulates the hash query H(id∗).
(2) Otherwise, sample a Gaussian vector sid ← D(ℤ^(m−1), r), compute uid = A1sid + a, and return uid. Here, uid is distributed statistically close to uniform over ℤq^n, since A1sid is distributed statistically close to uniform over ℤq^n by Lemma 2 and a ←$ ℤq^n.

Finally, note that the identity secret key of id is tid = (sid, 1), where sid is generated by running algorithm SamplePre(A1, TA1, uid − a, r) in Game 0, while in Game 1 it is tid = (sid, 1), where sid is sampled from D(ℤ^(m−1), r) such that uid = A1sid + a. By Lemma 3, sid in Game 0 is distributed statistically close to D(Λ(uid−a)^⊥(A1), r) with A1sid = uid − a. Thus, the identity secret key tid in Game 0 is distributed statistically close to that in Game 1. Therefore, Game 0 and Game 1 are statistically indistinguishable.

Lemma 5. Game 1 and Game 2 are computationally indistinguishable.

Proof. The computational indistinguishability of Game 1 and Game 2 follows from the assumed intractability of DLWE (and hence MLWE and TLWE). To show this, we present a simulator S that draws samples and forms (B, C∗, E∗, F∗); it simulates Game 1 when the samples are DLWE samples and Game 2 when they are uniformly random. S proceeds as follows:
(1) Draw sufficiently many samples and form (B, C∗, E∗, F∗). Uniformly select at random a vector uid∗ ←$ ℤq^n, and let A = B + enᵀ ⊗ uid∗ ∈ ℤq^(n×m).
(2) Answer hash queries and identity secret key queries exactly as in Game 1, and generate D∗ exactly as in MIFHE.Enc.
Therefore, if (B, C∗, E∗, F∗) is formed from DLWE samples, then S perfectly simulates Game 1; in contrast, if (B, C∗, E∗, F∗) is uniformly random, then S perfectly simulates Game 2.

Lemma 6. Game 2 and Game 3 are computationally indistinguishable.

Proof. As in the proof of Lemma 5, the computational indistinguishability of Game 2 and Game 3 follows from the assumed intractability of DLWE (or MLWE). To show this, we present a simulator S that draws samples and forms (A, D∗); it simulates Game 2 when the samples are DLWE samples and Game 3 when they are uniformly random. S proceeds as follows:
(1) Draw sufficiently many samples and form (A, D∗). Uniformly choose at random a vector uid∗ ←$ ℤq^n, and let B = A − enᵀ ⊗ uid∗ ∈ ℤq^(n×m).
(2) Answer hash queries and identity secret key queries exactly as in Game 2, and choose (C∗, E∗, F∗) uniformly at random.

Therefore, if (A, D∗) is formed from DLWE samples, then S perfectly simulates Game 2; in contrast, if (A, D∗) is uniformly random, then S perfectly simulates Game 3.

There is no information about the message μ∗ in Game 3. Moreover, Game 0 and Game 3 are computationally indistinguishable by Lemmas 4–6. Therefore, if DLWE is hard, attacker A has only a negligible advantage, which completes the proof of the IND-sID-CPA security of MIFHE.
## 4. Conclusion and Open Problem

We present a multi-hop MIFHE scheme that is IND-sID-CPA secure in the random oracle model under the standard LWE assumption. However, the proposed MIFHE scheme is only leveled homomorphic.
## 4. Conclusion and Open Problem

We presented a multi-hop MIFHE scheme that is IND-sID-CPA secure in the random oracle model under the standard LWE assumption. However, the proposed MIFHE scheme is only leveled homomorphic. Therefore, it remains an interesting open problem to construct a nonleveled multi-hop MIFHE scheme (i.e., one with no a priori bound on the depth of the circuits) under standard assumptions such as LWE, without unfalsifiable assumptions such as iO or WPRF.

---

*Source: 1023439-2022-03-22.xml*
--- ## Abstract Gentry, Sahai, and Waters (CRYPTO 2013) proposed the notion of multi-identity fully homomorphic encryption (MIFHE), which allows homomorphic evaluation of data encrypted under multiple identities. Subsequently, Clear and McGoldrick (CANS 2014, CRYPTO 2015) proposed leveled MIFHE candidates. However, the proposed MIFHE is either based oniO, which is a nonstandard assumption or single hop; that is, an arbitrary “evaluated” ciphertext under a set of identities is difficult to further evaluate when new ciphertexts are encrypted under additional identities. To overcome these drawbacks, we propose a leveled multi-hop MIFHE scheme. In a multi-hop MIFHE scheme, one can evaluate a group of ciphertexts under a set of identities to obtain an “evaluated” ciphertext, which can be further evaluated with other ciphertexts encrypted under additional identities. We also show that the proposed MIFHE scheme is secure against selective identity and chosen-plaintext attacks (IND-sID-CPA) under the learning with errors (LWE) assumption. --- ## Body ## 1. Introduction The idea of homomorphic encryption was first proposed by Rivest et al. [1] in 1978. How to construct a scheme with homomorphic properties is used to be a difficult problem for cryptographers. With the advent of the information age and the development of cloud computing technology, it is particularly urgent to solve this problem. It was not until 2009 that Gentry proposed the first fully homomorphic encryption (FHE) system based on ideal lattices, allowing anyone without a secret key to compute any efficiently computable function over encrypted data [2]. Because FHE is suitable to apply cloud computing without compromising security, it has quickly become a research hot topic [3–7].All of the above FHE schemes are single-key homomorphic, that is, only suitable for the homomorphic evaluation of ciphertext encrypted under a single key. However, in many realistic scenarios, the ciphertext of homomorphic encryption is usually encrypted under multiple different keys. Therefore, at STOC 2012, López-Alt, Tromer, and Vaikuntanathan [8] proposed the first cryptographic construction of multi-key full homomorphic encryption (MKFHE) based on NTRU cryptography, which enables the evaluation of data encrypted under different keys. Subsequently, a large number of articles appeared to improve MKFHE, including single hop only [9, 10], multi-hop with bootstrapping [11–16], and multi-hop without bootstrapping [17].Although (MK) FHEs have extensive applications, they require complex certificate management in implementation. To simplify the certificate management, Naccache [18] introduced a notion of identity-based fully homomorphic encryption (IBFHE), where there is no user-specific key that the evaluator must use. In particular, in an IBFHE scheme, data encrypted under a single identity can perform homomorphic operations by any evaluator with only public parameters. In 2013, Gentry et al. [7] constructed an IBFHE scheme from GSW-FHE. The IBFHE scheme only allows homomorphic evaluation of encrypted data under single identity but not multiple identities. Clear and McGoldrick [19] gave an MIFHE based on the indistinguishability obfuscation (iO) [20] to overcome the disadvantage of single identity at CANS 2014. Then, they [9] proposed a leveled MIFHE candidate under LWE in the random oracle model at CRYPTO 2015. 
However, the later scheme needs to set the number of users participating in homomorphic evaluation in advance, and new users cannot be added to the operation process, which is single hop in MIFHE cryptography. In 2017, Canetti et al. [21] proposed two MIFHE schemes. The first combines MKFHE and identity-based encryption (IBE) on the fly. Therefore, the ciphertext extension depends on the number of ciphertexts, which is not compact. The second is nonleveled, but uses iO. In 2020, with the help of an MKFHE, Pal and Dutta [22] extended IBE to a CCA1 secure MIFHE scheme. However, their extension process uses witness pseudorandom function (WPRF), which is a nonstandard assumption. Recently, Shen et al. [23] proposed a compressible MIFHE scheme based on [9, 10, 24]. The scheme is selectively secure under the LWE assumption and can reach an optimal compression rate, but it is single hop.Thus, it is interesting to construct a compact MIFHE scheme with the multi-hop homomorphism under standard assumption, where one can evaluate a group of ciphertexts under a set of identities to obtain an “evaluated” ciphertext, which can be further evaluated with other ciphertexts encrypted under additional identities. ### 1.1. Contribution We propose a leveled multi-hop MIFHE scheme adapted from the GPV-FHE [25], following the construction method of the PS-MKFHE, which is a leveled multi-hop MKFHE scheme built by Peikert and Shiehian [17]. We show that it is compact and secure against the IND-sID-CPA attack under the LWE assumption in the random oracle model. In our construction, we use a fully homomorphic commitment to commit the plaintext bit to help homomorphic operations. Additionally, by combining the transformation of MIFHE to the nonadaptive chosen-ciphertext attack (CCA1) secure FHE proposed by Canetti et al. [21], we can obtain a CCA1 secure FHE with multi-hop homomorphism. Finally, we note that our construction can be applied to the ring setting as [9] for shorter parameters. ### 1.2. Technical Overview It is well known that the efficient multi-hop MKFHE obtained by bootstrapping is difficult to generalize MIFHE, because the public homomorphic evaluation key cannot be extracted from identity secret keySKid as it is a ciphertext of SKid and even the SKid is not generated before decryption. Therefore, we focus on the PS-MKFHE [17], which does not use bootstrapping. Our key observation is that we can construct a multi-hop MIFHE scheme following the ideas introduced by Peikert and Shiehian [17]. They built multi-hop MKFHE schemes to overcome the single hop drawback of the MKFHE schemes [9, 10]. In their first construction, a ciphertext consists of C,F,D, where C is a GSW-FHE ciphertext [7] that encrypts message μ, F is a fully homomorphic commitment [26] to the same message, and D is a special encryption of the commitment randomness implied in F under the same key to C. Here, D,F provides the power of expanding C to C′ with additional keys (with a part of the public key) and preserves some invariant, which can be further used. In total, after expanding or computing, the form of C,F,D remains. This finding is the reason to support the multi-hop computation with respect to additional keys.However, it is nontrivial to construct a multi-hop MIFHE scheme because PS-MKFHE [17] is built from GSW-FHE [7] but not from GPV-FHE [26]. 
For simplicity, we now informally describe a multi-hop MKFHE from GPV-FHE [26], which can be converted into a multi-hop MIFHE scheme (see Section 3).In a multi-hop MKFHE scheme based on GPV-FHE, a ciphertext under a secret keyt∈ℤmk consists of four components C,D,E,F:(1) A GPV-FHE ciphertextC∈ℤqmk×mkℓ that encrypts μ under t.(2) A GPV-FHE style fully homomorphic commitmentD∈ℤqm×mℓ to the same message μ with underlying commitment randomness R,X.(3) A special encryptionE∈ℤqnmℓ×mℓ under t of the former part of the commitment randomness R.(4) Another special encryptionF∈ℤqm2×mℓ under t of the latter part of the commitment randomness X.To expandC to an additional secret key t˜∈ℤqm, we define(1)C′=CU0D∈ℤqmk+1×mk+1ℓ,where U is derived from E. The commitment D is preserved, and E,F are padded with zeros to fit the long secret key t,t˜. Moreover, the homomorphic evaluation can be simply designed as GPV-FHE (see Section 3 for more details about our MIFHE scheme). ### 1.3. Paper Organization First, we recall some notions, definitions, and facts in Section2. Then, we propose our MIFHE scheme that satisfies IND-sID-CPA secure in Section 3. In the end, we conclude in Section 4. ## 1.1. Contribution We propose a leveled multi-hop MIFHE scheme adapted from the GPV-FHE [25], following the construction method of the PS-MKFHE, which is a leveled multi-hop MKFHE scheme built by Peikert and Shiehian [17]. We show that it is compact and secure against the IND-sID-CPA attack under the LWE assumption in the random oracle model. In our construction, we use a fully homomorphic commitment to commit the plaintext bit to help homomorphic operations. Additionally, by combining the transformation of MIFHE to the nonadaptive chosen-ciphertext attack (CCA1) secure FHE proposed by Canetti et al. [21], we can obtain a CCA1 secure FHE with multi-hop homomorphism. Finally, we note that our construction can be applied to the ring setting as [9] for shorter parameters. ## 1.2. Technical Overview It is well known that the efficient multi-hop MKFHE obtained by bootstrapping is difficult to generalize MIFHE, because the public homomorphic evaluation key cannot be extracted from identity secret keySKid as it is a ciphertext of SKid and even the SKid is not generated before decryption. Therefore, we focus on the PS-MKFHE [17], which does not use bootstrapping. Our key observation is that we can construct a multi-hop MIFHE scheme following the ideas introduced by Peikert and Shiehian [17]. They built multi-hop MKFHE schemes to overcome the single hop drawback of the MKFHE schemes [9, 10]. In their first construction, a ciphertext consists of C,F,D, where C is a GSW-FHE ciphertext [7] that encrypts message μ, F is a fully homomorphic commitment [26] to the same message, and D is a special encryption of the commitment randomness implied in F under the same key to C. Here, D,F provides the power of expanding C to C′ with additional keys (with a part of the public key) and preserves some invariant, which can be further used. In total, after expanding or computing, the form of C,F,D remains. This finding is the reason to support the multi-hop computation with respect to additional keys.However, it is nontrivial to construct a multi-hop MIFHE scheme because PS-MKFHE [17] is built from GSW-FHE [7] but not from GPV-FHE [26]. 
For simplicity, we now informally describe a multi-hop MKFHE from GPV-FHE [26], which can be converted into a multi-hop MIFHE scheme (see Section 3).In a multi-hop MKFHE scheme based on GPV-FHE, a ciphertext under a secret keyt∈ℤmk consists of four components C,D,E,F:(1) A GPV-FHE ciphertextC∈ℤqmk×mkℓ that encrypts μ under t.(2) A GPV-FHE style fully homomorphic commitmentD∈ℤqm×mℓ to the same message μ with underlying commitment randomness R,X.(3) A special encryptionE∈ℤqnmℓ×mℓ under t of the former part of the commitment randomness R.(4) Another special encryptionF∈ℤqm2×mℓ under t of the latter part of the commitment randomness X.To expandC to an additional secret key t˜∈ℤqm, we define(1)C′=CU0D∈ℤqmk+1×mk+1ℓ,where U is derived from E. The commitment D is preserved, and E,F are padded with zeros to fit the long secret key t,t˜. Moreover, the homomorphic evaluation can be simply designed as GPV-FHE (see Section 3 for more details about our MIFHE scheme). ## 1.3. Paper Organization First, we recall some notions, definitions, and facts in Section2. Then, we propose our MIFHE scheme that satisfies IND-sID-CPA secure in Section 3. In the end, we conclude in Section 4. ## 2. Preliminaries Let us start with the following notations that will be used throughout the study. We use the bold uppercase letters (e.g.,A,B) to represent matrices. Similarly, the bold lowercase letters (e.g., a,b) represent column vectors. We use ai to denote the i entry of a and ai,j to denote the i,j entry of A. A‖B is used to denote the concatenation of two matrices. Similarly, a,b is used to denote the concatenation of two column vectors. Let λ denote the security parameter. We define n=1,2,…,n for any positive integer n. Let negl λ denote a negligible function that grows slower than λ−c for any constant c>0 and any sufficiently large value of λ. An event occurs with overwhelming probability; i.e., it occurs with a probability of at least 1−neglλ. ### 2.1. Basic Notions #### 2.1.1. Approximations Very recently, Peikert and Shiehian [17] suggested a simple method to indicate that the two sides in some “noisy equations” extensively used in lattice-based cryptography were approximately equal to an additive error. We will follow their notation using:(2)x≈yerrorE.To indicate thatx=y+e for some e∈−E,E, the notation can be naturally expanded to the vector or matrix type using the infinite norm. #### 2.1.2. Tensor Product For matricesA∈ℤm×n,B∈ℤs×t, tensor product A⊗B is an ms×nt matrix, which consists of m×n blocks, whose i,j block is ai,j·B.In this work, we widely use the mixed-product property: for any matricesA,B,C,D with compatible dimensions, it holds that:(3)A⊗B·C⊗D=A·C⊗B·D. ### 2.2. Background on Lattices #### 2.2.1. Lattices For matrixA∈ℤqn×m, we define the q-ary integer lattice in this way:(4)Λ⊥A=x∈ℤm:Ax=0modq.For vectoru∈ℤqn, we now define the coset (or “shifted” lattice) in this way:(5)Λu⊥A=x∈ℤm:Ax=umodq. #### 2.2.2. LWE The learning with errors (LWE) problem was first introduced by Regev [27] as an extension of the “learning parity with noise.” In this study, we define the decisional learning with errors (DLWE) problem that is equivalent to LWE for certain parameters as follows.Definition 1. 
(Decisional Learning with Errors (DLWE)) For positive integers n,m,q and an error distribution χ over ℤ, the DLWEn,m,q,χ problem is to distinguish with nonnegligible advantage between A,b=AT·t+e, where A←$ℤqn×m,t←$ℤqn,e←χmb2−4ac, and A,b sampled uniformly at random from ℤqn×m×ℤqm.Letχ be a discrete Gaussian distribution over ℤ with parameter r=2n, and it is a well-established instantiation of LWE. A sample drawn from this distribution has magnitude bounded by rn=Θn except with a probability of at most 2−n. For this parameterization and any m=polyλ, it is easy to see LWE by quantum reduction at least as difficult as certain worst-case difficult problems (e.g., the shortest independent vector problem) on n-dimensional lattices with approximate factor O˜qn [27]. Classical reductions are also famous for subexponential modulus q [28] and polynomial q [29].In this work, we rely on the tensor form of LWE denoted by TLWE and the matrix form of LWE denoted by MLWE, whose specific definitions are introduced below.Definition 2. (The Matrix Form of Learning with Errors (MLWE)) For positive integers n,m,k,q and an error distribution χ over ℤ, the MLWEn,m,k,q,χ problem is to distinguish with nonnegligible advantage between A,B=AT·T+E, where A←$ℤqn×m,T←$ℤqn×k,E←χm×k, and A,B sampled uniformly at random from ℤqn×m×ℤqm×k.Definition 3. (The Tensor Form of Learning with Errors (TLWE)) For positive integers n,m,k,t,q and an error distribution χ over ℤ, the TLWEn,m,k,t,q,χ problem is to distinguish with nonnegligible advantage between A,B=It⊗AT·T+E, where A←$ℤqn×m,T←$ℤqnt×k,E←χmt×k, and It denotes the t-dimensional identity matrix, and A,B sampled uniformly at random from ℤqn×m×ℤqmt×k.According to the standard mixed argument, we can get that MLWE is equivalent to DLWE with at mostk factor loss in the distinguishing advantage, and TLWE is equivalent to MLWE with at most t2 factor loss in the distinguishing advantage. #### 2.2.3. Lattice Trapdoor We recall some cryptographic facts about trapdoor generation and preimage sampling algorithms with important properties [30]. Since all the details of implementation are not strictly necessary in this work, we ignore them. Note that there are improved algorithms but only within Ologλ factor [31].Lemma 1 (see [30]). Letn=polyλ,q≥3be odd, andm=⌈6nlogq⌉. There is a probabilistic polynomial timePPTalgorithmTrapGenn,m,qthat outputs a pairA∈ℤqn×m,TA∈ℤm×m, such thatAis statistically close to a uniform matrix inℤqn×m, andTAis a basis forΛ⊥AsatisfyingTA≤OnlogqandT˜A≤Onlogqwith overwhelming probability inλ.Lemma 2 (see [30]). Letn=polyλbe a positive integer,qbe a prime, andm≥2nlogq. Then, for all but a2q−nfraction of allA∈ℤqn×mand for anyr≥ωlogm, the distribution of the syndromeu=Asmodqis statistically close to uniform overℤqn, wheres←Dℤm,r.Lemma 3 (see [30]). SetA∈ℤqn×m,TA∈ℤm×m←TrapGenn,m,qfrom Lemma 1. Then, for a parameterr≥T˜A·ωlogmand a uniformly random vectoru∈ℝn, there is a PPT algorithm, which outputs vectors∈Λu⊥Asampled from a statistically similar distribution toDΛu⊥A,r; thus,As=uwheneverΛu⊥Ais not empty. #### 2.2.4. Gadget Matrices and Bit Decomposition We recall a useful notion ofgadget matrix, which was first introduced in [31], to decompose vectors or matrices over ℤq into short vectors or matrices over ℤ.For integerq, let ℓ=⌈logq⌉. 
Gadget matrix gT=1,2,22,…,2ℓ−1 and bit decomposition function g−1:ℤq⟶0,1ℓ are defined, which outputs a binary column vector that consists of the binary representation of its argument, such that gT·g−1a=a, for any a∈ℤq.More generally, for any positive integerk, Gk=Ik⊗gT∈ℤqk×kℓ is defined, where Ik denotes the k-dimensional identity matrix. For any t, the general bit decomposition function G−1:ℤqk×t⟶0,1kℓ×t outputs a binary kℓ×tmatrix (invoking g−1), such that G·G−1A=A, for A∈ℤqk×t. Additionally, we often write G−TA=G−1AT for simplicity. ### 2.3. Multi-Identity Fully Homomorphic Encryption We begin with the definition of the leveled multi-hop MIFHE, which is adapted and summarized from the definition of the single-hop MIFHE in [9], definition of the single-hop MKFHE in [10], and definition of the multi-hop MKFHE in [17]. Here, we require a bound d on the NAND circuit depth and a bound L on the number of identities in one evaluation, and we mainly focus on the bit encryption scheme and [17].Now, a ciphertext is called a “fresh” ciphertext if it is generated by the encryption algorithmEnc defined below (i.e., it corresponds to a single identity), an “expanded” ciphertext if it is the output of expansion algorithm Expand (which relates to multiple identities), or an “evaluated” ciphertext if it is the output of homomorphic evaluation algorithm Eval.Definition 4. A leveled multi-hop multi-identity fully homomorphic encryption scheme consists of sixPPT algorithms Setup,Extract,Enc,Expand,Eval,andDec defined as follows:(i) Setup1λ,1d,1L: on inputting a security parameter λ, a bound d on the NAND circuit depth, and a bound L on the number of identities involved in one evaluation, generate a master public key MPK and a master secret key MSK, and then output MPK,MSK. Here, the security parameter λ also defines an identity space ℐ(ii) ExtractMPK,MSK,id: on inputting the MPK, MSK, and identity id∈ℐ, extract a user-specific secret key SKid, and output it(iii) EncMPK,id,μ∈0,1: on inputting the MPK, identity id∈ℐ, and bit μ∈0,1, output a “fresh” ciphertext id;c(iv) ExpandMPK,idk+1,id1,id2,…,idk;c: on inputting the MPK, identity idk+1, and any (“fresh,” “expanded,” or “evaluated”) ciphertext id1,id2,…,idk;c under k identities id1,id2,…,idk, compute and output an “expanded” ciphertext id1,id2,…,idk,idk+1;c′ under k+1 identities id1,id2,…,idk,idk+1(v) EvalMPK,f,c1,c2,…,cN: on inputting MPK, an NAND circuit f, and N ciphertexts c1,c2,…,cN, output an “evaluated” ciphertext cf(vi) Dec(SKid1,SKid2,…,SKidk,id1,id2,…,idk;c): on inputting k secret keys SKid1,SKid2,…,SKidk, which correspond to identities id1,id2,…,idk and any ciphertext id1,id2,…,idk;c, output a bit μWe underline that we will homomorphically evaluate anyNAND circuit gate by gate as described in [17], which indicates that the evaluation is multi-hop as previous multi-key FHE schemes [8, 17]. #### 2.3.1. Correctness A leveled multi-hop MIFHE scheme is correct if it satisfies the following conditions. For all positive integersλ,d,L, for every NAND circuit f of depth at most d with N input wires, for every function π:N⟶L (which relates each input wire to a key pair), and for every μ∈0,1N, the following experiment succeeds with overwhelming probability: MPK,MSK←Setup1λ,1d,1L, generate identity key pairs PKidπi,SKidπi←ExtractMPK,MSK,idπi for every i∈N, generate ciphertext ci←EncMPK,idπi,μi∈0,1 for every i∈N, compute cf←EvalMPK,f,c1,c2,…,cN (may invoke algorithm Expand), and finally check whether DecSKidπii∈N,cf=fμ1,μ2,…,μN. #### 2.3.2. 
Compactness A leveled multi-hop MIFHE scheme is compact if there exists a polynomialp·,·,· such that cf≤pλ,d,L in the experiment from Definition 4. In other words, the length of cf is independent of both f and N but can depend polynomially on λ,d, and L. #### 2.3.3. Security The security game of MIFHE is the same as that of IBE, but there is no reference to the expansion algorithm and evaluation algorithm because they are public and do not impact the security. In this study, we will mainly focus on the semantically secure under selective identity and chosen-plaintext attack (IND-sID-CPA) security game for MIFHE between a challengerC and a PPT attacker A, which is defined as follows:(i) Initial Stage. AttackerA is given bound d on the NAND circuit depth and bound L on the number of identities and outputs target identity id∗.(ii) Setup. ChallengerC runs Setup1λ,1d,1L to generate MPK,MSK and sends MPK to attacker A.(iii) Query Stage 1. AdversaryA adaptively issues a query on any identity id such that id≠id∗. Challenger C runs ExtractMPK,MSK,id to obtain identity secret key skid that corresponds to id and sends skid back to A.(iv) Challenge. ChallengerC selects a uniformly random bit μ∗←$0,1, computes a challenge ciphertext c∗←EncMPK,id∗,μ∗, and sends it to attacker A.(v) Query Stage 2. AdversaryA issues additional adaptive identity secret key queries, and challenger C responds as in query stage 1.(vi) Output. The attacker outputs a guessμ′∈0,1 and wins if μ′=μ∗.The advantage of the attacker in the above IND-sID-CPA security game is defined asPrμ′=μ∗−1/2, where the probability is taken over the random bits used by all algorithms in the game.Definition 5. A leveled multi-identity fully homomorphic encryption scheme isIND-sID-CPA secure if any PPT attacker has at most a negligible advantage in the IND-sID-CPA security game defined above. ## 2.1. Basic Notions ### 2.1.1. Approximations Very recently, Peikert and Shiehian [17] suggested a simple method to indicate that the two sides in some “noisy equations” extensively used in lattice-based cryptography were approximately equal to an additive error. We will follow their notation using:(2)x≈yerrorE.To indicate thatx=y+e for some e∈−E,E, the notation can be naturally expanded to the vector or matrix type using the infinite norm. ### 2.1.2. Tensor Product For matricesA∈ℤm×n,B∈ℤs×t, tensor product A⊗B is an ms×nt matrix, which consists of m×n blocks, whose i,j block is ai,j·B.In this work, we widely use the mixed-product property: for any matricesA,B,C,D with compatible dimensions, it holds that:(3)A⊗B·C⊗D=A·C⊗B·D. ## 2.1.1. Approximations Very recently, Peikert and Shiehian [17] suggested a simple method to indicate that the two sides in some “noisy equations” extensively used in lattice-based cryptography were approximately equal to an additive error. We will follow their notation using:(2)x≈yerrorE.To indicate thatx=y+e for some e∈−E,E, the notation can be naturally expanded to the vector or matrix type using the infinite norm. ## 2.1.2. Tensor Product For matricesA∈ℤm×n,B∈ℤs×t, tensor product A⊗B is an ms×nt matrix, which consists of m×n blocks, whose i,j block is ai,j·B.In this work, we widely use the mixed-product property: for any matricesA,B,C,D with compatible dimensions, it holds that:(3)A⊗B·C⊗D=A·C⊗B·D. ## 2.2. Background on Lattices ### 2.2.1. Lattices For matrixA∈ℤqn×m, we define the q-ary integer lattice in this way:(4)Λ⊥A=x∈ℤm:Ax=0modq.For vectoru∈ℤqn, we now define the coset (or “shifted” lattice) in this way:(5)Λu⊥A=x∈ℤm:Ax=umodq. 
### 2.2.2. LWE The learning with errors (LWE) problem was first introduced by Regev [27] as an extension of the “learning parity with noise.” In this study, we define the decisional learning with errors (DLWE) problem that is equivalent to LWE for certain parameters as follows.Definition 1. (Decisional Learning with Errors (DLWE)) For positive integers n,m,q and an error distribution χ over ℤ, the DLWEn,m,q,χ problem is to distinguish with nonnegligible advantage between A,b=AT·t+e, where A←$ℤqn×m,t←$ℤqn,e←χmb2−4ac, and A,b sampled uniformly at random from ℤqn×m×ℤqm.Letχ be a discrete Gaussian distribution over ℤ with parameter r=2n, and it is a well-established instantiation of LWE. A sample drawn from this distribution has magnitude bounded by rn=Θn except with a probability of at most 2−n. For this parameterization and any m=polyλ, it is easy to see LWE by quantum reduction at least as difficult as certain worst-case difficult problems (e.g., the shortest independent vector problem) on n-dimensional lattices with approximate factor O˜qn [27]. Classical reductions are also famous for subexponential modulus q [28] and polynomial q [29].In this work, we rely on the tensor form of LWE denoted by TLWE and the matrix form of LWE denoted by MLWE, whose specific definitions are introduced below.Definition 2. (The Matrix Form of Learning with Errors (MLWE)) For positive integers n,m,k,q and an error distribution χ over ℤ, the MLWEn,m,k,q,χ problem is to distinguish with nonnegligible advantage between A,B=AT·T+E, where A←$ℤqn×m,T←$ℤqn×k,E←χm×k, and A,B sampled uniformly at random from ℤqn×m×ℤqm×k.Definition 3. (The Tensor Form of Learning with Errors (TLWE)) For positive integers n,m,k,t,q and an error distribution χ over ℤ, the TLWEn,m,k,t,q,χ problem is to distinguish with nonnegligible advantage between A,B=It⊗AT·T+E, where A←$ℤqn×m,T←$ℤqnt×k,E←χmt×k, and It denotes the t-dimensional identity matrix, and A,B sampled uniformly at random from ℤqn×m×ℤqmt×k.According to the standard mixed argument, we can get that MLWE is equivalent to DLWE with at mostk factor loss in the distinguishing advantage, and TLWE is equivalent to MLWE with at most t2 factor loss in the distinguishing advantage. ### 2.2.3. Lattice Trapdoor We recall some cryptographic facts about trapdoor generation and preimage sampling algorithms with important properties [30]. Since all the details of implementation are not strictly necessary in this work, we ignore them. Note that there are improved algorithms but only within Ologλ factor [31].Lemma 1 (see [30]). Letn=polyλ,q≥3be odd, andm=⌈6nlogq⌉. There is a probabilistic polynomial timePPTalgorithmTrapGenn,m,qthat outputs a pairA∈ℤqn×m,TA∈ℤm×m, such thatAis statistically close to a uniform matrix inℤqn×m, andTAis a basis forΛ⊥AsatisfyingTA≤OnlogqandT˜A≤Onlogqwith overwhelming probability inλ.Lemma 2 (see [30]). Letn=polyλbe a positive integer,qbe a prime, andm≥2nlogq. Then, for all but a2q−nfraction of allA∈ℤqn×mand for anyr≥ωlogm, the distribution of the syndromeu=Asmodqis statistically close to uniform overℤqn, wheres←Dℤm,r.Lemma 3 (see [30]). SetA∈ℤqn×m,TA∈ℤm×m←TrapGenn,m,qfrom Lemma 1. Then, for a parameterr≥T˜A·ωlogmand a uniformly random vectoru∈ℝn, there is a PPT algorithm, which outputs vectors∈Λu⊥Asampled from a statistically similar distribution toDΛu⊥A,r; thus,As=uwheneverΛu⊥Ais not empty. ### 2.2.4. 
Gadget Matrices and Bit Decomposition We recall a useful notion ofgadget matrix, which was first introduced in [31], to decompose vectors or matrices over ℤq into short vectors or matrices over ℤ.For integerq, let ℓ=⌈logq⌉. Gadget matrix gT=1,2,22,…,2ℓ−1 and bit decomposition function g−1:ℤq⟶0,1ℓ are defined, which outputs a binary column vector that consists of the binary representation of its argument, such that gT·g−1a=a, for any a∈ℤq.More generally, for any positive integerk, Gk=Ik⊗gT∈ℤqk×kℓ is defined, where Ik denotes the k-dimensional identity matrix. For any t, the general bit decomposition function G−1:ℤqk×t⟶0,1kℓ×t outputs a binary kℓ×tmatrix (invoking g−1), such that G·G−1A=A, for A∈ℤqk×t. Additionally, we often write G−TA=G−1AT for simplicity. ## 2.2.1. Lattices For matrixA∈ℤqn×m, we define the q-ary integer lattice in this way:(4)Λ⊥A=x∈ℤm:Ax=0modq.For vectoru∈ℤqn, we now define the coset (or “shifted” lattice) in this way:(5)Λu⊥A=x∈ℤm:Ax=umodq. ## 2.2.2. LWE The learning with errors (LWE) problem was first introduced by Regev [27] as an extension of the “learning parity with noise.” In this study, we define the decisional learning with errors (DLWE) problem that is equivalent to LWE for certain parameters as follows.Definition 1. (Decisional Learning with Errors (DLWE)) For positive integers n,m,q and an error distribution χ over ℤ, the DLWEn,m,q,χ problem is to distinguish with nonnegligible advantage between A,b=AT·t+e, where A←$ℤqn×m,t←$ℤqn,e←χmb2−4ac, and A,b sampled uniformly at random from ℤqn×m×ℤqm.Letχ be a discrete Gaussian distribution over ℤ with parameter r=2n, and it is a well-established instantiation of LWE. A sample drawn from this distribution has magnitude bounded by rn=Θn except with a probability of at most 2−n. For this parameterization and any m=polyλ, it is easy to see LWE by quantum reduction at least as difficult as certain worst-case difficult problems (e.g., the shortest independent vector problem) on n-dimensional lattices with approximate factor O˜qn [27]. Classical reductions are also famous for subexponential modulus q [28] and polynomial q [29].In this work, we rely on the tensor form of LWE denoted by TLWE and the matrix form of LWE denoted by MLWE, whose specific definitions are introduced below.Definition 2. (The Matrix Form of Learning with Errors (MLWE)) For positive integers n,m,k,q and an error distribution χ over ℤ, the MLWEn,m,k,q,χ problem is to distinguish with nonnegligible advantage between A,B=AT·T+E, where A←$ℤqn×m,T←$ℤqn×k,E←χm×k, and A,B sampled uniformly at random from ℤqn×m×ℤqm×k.Definition 3. (The Tensor Form of Learning with Errors (TLWE)) For positive integers n,m,k,t,q and an error distribution χ over ℤ, the TLWEn,m,k,t,q,χ problem is to distinguish with nonnegligible advantage between A,B=It⊗AT·T+E, where A←$ℤqn×m,T←$ℤqnt×k,E←χmt×k, and It denotes the t-dimensional identity matrix, and A,B sampled uniformly at random from ℤqn×m×ℤqmt×k.According to the standard mixed argument, we can get that MLWE is equivalent to DLWE with at mostk factor loss in the distinguishing advantage, and TLWE is equivalent to MLWE with at most t2 factor loss in the distinguishing advantage. ## 2.2.3. Lattice Trapdoor We recall some cryptographic facts about trapdoor generation and preimage sampling algorithms with important properties [30]. Since all the details of implementation are not strictly necessary in this work, we ignore them. Note that there are improved algorithms but only within Ologλ factor [31].Lemma 1 (see [30]). 
Letn=polyλ,q≥3be odd, andm=⌈6nlogq⌉. There is a probabilistic polynomial timePPTalgorithmTrapGenn,m,qthat outputs a pairA∈ℤqn×m,TA∈ℤm×m, such thatAis statistically close to a uniform matrix inℤqn×m, andTAis a basis forΛ⊥AsatisfyingTA≤OnlogqandT˜A≤Onlogqwith overwhelming probability inλ.Lemma 2 (see [30]). Letn=polyλbe a positive integer,qbe a prime, andm≥2nlogq. Then, for all but a2q−nfraction of allA∈ℤqn×mand for anyr≥ωlogm, the distribution of the syndromeu=Asmodqis statistically close to uniform overℤqn, wheres←Dℤm,r.Lemma 3 (see [30]). SetA∈ℤqn×m,TA∈ℤm×m←TrapGenn,m,qfrom Lemma 1. Then, for a parameterr≥T˜A·ωlogmand a uniformly random vectoru∈ℝn, there is a PPT algorithm, which outputs vectors∈Λu⊥Asampled from a statistically similar distribution toDΛu⊥A,r; thus,As=uwheneverΛu⊥Ais not empty. ## 2.2.4. Gadget Matrices and Bit Decomposition We recall a useful notion ofgadget matrix, which was first introduced in [31], to decompose vectors or matrices over ℤq into short vectors or matrices over ℤ.For integerq, let ℓ=⌈logq⌉. Gadget matrix gT=1,2,22,…,2ℓ−1 and bit decomposition function g−1:ℤq⟶0,1ℓ are defined, which outputs a binary column vector that consists of the binary representation of its argument, such that gT·g−1a=a, for any a∈ℤq.More generally, for any positive integerk, Gk=Ik⊗gT∈ℤqk×kℓ is defined, where Ik denotes the k-dimensional identity matrix. For any t, the general bit decomposition function G−1:ℤqk×t⟶0,1kℓ×t outputs a binary kℓ×tmatrix (invoking g−1), such that G·G−1A=A, for A∈ℤqk×t. Additionally, we often write G−TA=G−1AT for simplicity. ## 2.3. Multi-Identity Fully Homomorphic Encryption We begin with the definition of the leveled multi-hop MIFHE, which is adapted and summarized from the definition of the single-hop MIFHE in [9], definition of the single-hop MKFHE in [10], and definition of the multi-hop MKFHE in [17]. Here, we require a bound d on the NAND circuit depth and a bound L on the number of identities in one evaluation, and we mainly focus on the bit encryption scheme and [17].Now, a ciphertext is called a “fresh” ciphertext if it is generated by the encryption algorithmEnc defined below (i.e., it corresponds to a single identity), an “expanded” ciphertext if it is the output of expansion algorithm Expand (which relates to multiple identities), or an “evaluated” ciphertext if it is the output of homomorphic evaluation algorithm Eval.Definition 4. A leveled multi-hop multi-identity fully homomorphic encryption scheme consists of sixPPT algorithms Setup,Extract,Enc,Expand,Eval,andDec defined as follows:(i) Setup1λ,1d,1L: on inputting a security parameter λ, a bound d on the NAND circuit depth, and a bound L on the number of identities involved in one evaluation, generate a master public key MPK and a master secret key MSK, and then output MPK,MSK. 
Here, the security parameter λ also defines an identity space ℐ(ii) ExtractMPK,MSK,id: on inputting the MPK, MSK, and identity id∈ℐ, extract a user-specific secret key SKid, and output it(iii) EncMPK,id,μ∈0,1: on inputting the MPK, identity id∈ℐ, and bit μ∈0,1, output a “fresh” ciphertext id;c(iv) ExpandMPK,idk+1,id1,id2,…,idk;c: on inputting the MPK, identity idk+1, and any (“fresh,” “expanded,” or “evaluated”) ciphertext id1,id2,…,idk;c under k identities id1,id2,…,idk, compute and output an “expanded” ciphertext id1,id2,…,idk,idk+1;c′ under k+1 identities id1,id2,…,idk,idk+1(v) EvalMPK,f,c1,c2,…,cN: on inputting MPK, an NAND circuit f, and N ciphertexts c1,c2,…,cN, output an “evaluated” ciphertext cf(vi) Dec(SKid1,SKid2,…,SKidk,id1,id2,…,idk;c): on inputting k secret keys SKid1,SKid2,…,SKidk, which correspond to identities id1,id2,…,idk and any ciphertext id1,id2,…,idk;c, output a bit μWe underline that we will homomorphically evaluate anyNAND circuit gate by gate as described in [17], which indicates that the evaluation is multi-hop as previous multi-key FHE schemes [8, 17]. ### 2.3.1. Correctness A leveled multi-hop MIFHE scheme is correct if it satisfies the following conditions. For all positive integersλ,d,L, for every NAND circuit f of depth at most d with N input wires, for every function π:N⟶L (which relates each input wire to a key pair), and for every μ∈0,1N, the following experiment succeeds with overwhelming probability: MPK,MSK←Setup1λ,1d,1L, generate identity key pairs PKidπi,SKidπi←ExtractMPK,MSK,idπi for every i∈N, generate ciphertext ci←EncMPK,idπi,μi∈0,1 for every i∈N, compute cf←EvalMPK,f,c1,c2,…,cN (may invoke algorithm Expand), and finally check whether DecSKidπii∈N,cf=fμ1,μ2,…,μN. ### 2.3.2. Compactness A leveled multi-hop MIFHE scheme is compact if there exists a polynomialp·,·,· such that cf≤pλ,d,L in the experiment from Definition 4. In other words, the length of cf is independent of both f and N but can depend polynomially on λ,d, and L. ### 2.3.3. Security The security game of MIFHE is the same as that of IBE, but there is no reference to the expansion algorithm and evaluation algorithm because they are public and do not impact the security. In this study, we will mainly focus on the semantically secure under selective identity and chosen-plaintext attack (IND-sID-CPA) security game for MIFHE between a challengerC and a PPT attacker A, which is defined as follows:(i) Initial Stage. AttackerA is given bound d on the NAND circuit depth and bound L on the number of identities and outputs target identity id∗.(ii) Setup. ChallengerC runs Setup1λ,1d,1L to generate MPK,MSK and sends MPK to attacker A.(iii) Query Stage 1. AdversaryA adaptively issues a query on any identity id such that id≠id∗. Challenger C runs ExtractMPK,MSK,id to obtain identity secret key skid that corresponds to id and sends skid back to A.(iv) Challenge. ChallengerC selects a uniformly random bit μ∗←$0,1, computes a challenge ciphertext c∗←EncMPK,id∗,μ∗, and sends it to attacker A.(v) Query Stage 2. AdversaryA issues additional adaptive identity secret key queries, and challenger C responds as in query stage 1.(vi) Output. The attacker outputs a guessμ′∈0,1 and wins if μ′=μ∗.The advantage of the attacker in the above IND-sID-CPA security game is defined asPrμ′=μ∗−1/2, where the probability is taken over the random bits used by all algorithms in the game.Definition 5. 
A leveled multi-identity fully homomorphic encryption scheme isIND-sID-CPA secure if any PPT attacker has at most a negligible advantage in the IND-sID-CPA security game defined above. ## 2.3.1. Correctness A leveled multi-hop MIFHE scheme is correct if it satisfies the following conditions. For all positive integersλ,d,L, for every NAND circuit f of depth at most d with N input wires, for every function π:N⟶L (which relates each input wire to a key pair), and for every μ∈0,1N, the following experiment succeeds with overwhelming probability: MPK,MSK←Setup1λ,1d,1L, generate identity key pairs PKidπi,SKidπi←ExtractMPK,MSK,idπi for every i∈N, generate ciphertext ci←EncMPK,idπi,μi∈0,1 for every i∈N, compute cf←EvalMPK,f,c1,c2,…,cN (may invoke algorithm Expand), and finally check whether DecSKidπii∈N,cf=fμ1,μ2,…,μN. ## 2.3.2. Compactness A leveled multi-hop MIFHE scheme is compact if there exists a polynomialp·,·,· such that cf≤pλ,d,L in the experiment from Definition 4. In other words, the length of cf is independent of both f and N but can depend polynomially on λ,d, and L. ## 2.3.3. Security The security game of MIFHE is the same as that of IBE, but there is no reference to the expansion algorithm and evaluation algorithm because they are public and do not impact the security. In this study, we will mainly focus on the semantically secure under selective identity and chosen-plaintext attack (IND-sID-CPA) security game for MIFHE between a challengerC and a PPT attacker A, which is defined as follows:(i) Initial Stage. AttackerA is given bound d on the NAND circuit depth and bound L on the number of identities and outputs target identity id∗.(ii) Setup. ChallengerC runs Setup1λ,1d,1L to generate MPK,MSK and sends MPK to attacker A.(iii) Query Stage 1. AdversaryA adaptively issues a query on any identity id such that id≠id∗. Challenger C runs ExtractMPK,MSK,id to obtain identity secret key skid that corresponds to id and sends skid back to A.(iv) Challenge. ChallengerC selects a uniformly random bit μ∗←$0,1, computes a challenge ciphertext c∗←EncMPK,id∗,μ∗, and sends it to attacker A.(v) Query Stage 2. AdversaryA issues additional adaptive identity secret key queries, and challenger C responds as in query stage 1.(vi) Output. The attacker outputs a guessμ′∈0,1 and wins if μ′=μ∗.The advantage of the attacker in the above IND-sID-CPA security game is defined asPrμ′=μ∗−1/2, where the probability is taken over the random bits used by all algorithms in the game.Definition 5. A leveled multi-identity fully homomorphic encryption scheme isIND-sID-CPA secure if any PPT attacker has at most a negligible advantage in the IND-sID-CPA security game defined above. ## 3. Multi-Identity Fully Homomorphic Encryption ### 3.1. MIFHE Scheme In this section, we will describe the proposed MIFHE scheme. We present one more algorithmMIFHE.NAND to help understand MIFHE.Eval.We parameterize the system by dimensionn=polyλ, modulus q, and error distribution χ for the underlying LWE problem; we set m=⌈6nlogq⌉, r=Onlogq·ωlogm, and B=Θrn. For the worst-case security, we set χ to be the standard discrete Gaussian distribution over ℤ with parameter 2n, which implies that the samples drawn from χ have magnitudes bounded by E=Θn except with probability 2−n. Modulus q is set in the following Section 3.2 based on the bound on the maximum depth of the supported circuit and the bound of the number of identities. 
The scheme is described as follows:(i) MIFHE.Setup1λ,1d,1L: On inputting security parameter λ, bound d on the NAND circuit depth, and bound L on the number of identities in one evaluation, do(1) Run algorithm TrapGenn,m−1,q to generate a uniformly random matrix A1∈ℤqn×m−1 with a short basis TA1∈ℤm−1×m−1 for Λq⊥A1 such that T˜A1≤Onlogq.(2) Choose a vectora←$ℤqn and set A=A1‖a∈ℤqn×m.(3) OutputMPK=A=A1‖a as the master public key and output MSK=TA1 as the master secret key.(ii) MIFHE.ExtractMPK,MSK,id: on inputting MPK, MSK, and identity id∈ℐ, do:(1) Ifid,u,SKid∈storage is from a previous inquiry on identity id, then return SKid. Otherwise, compute u=Hid∈ℤn, where H is a hash function modeled as a random oracle.(2) RunSamplePreA1,TA1,u−a,r to output a vector s∈ℤm−1 such that A1s=u−a. Set user-specific secret key SKid=t=s,1, and store id,u,t locally. Note that At=u and t∞≤B.(3) OutputSKid=t.(iii) MIFHE.EncMPK,id,μ∈0,1: on inputting master public key MPK, identity id∈ℐ, and bit message μ∈0,1, do:(1) Setu=Hid∈ℤn and compute B=A−enT⊗u∈ℤqn×m, where en is the nth standard unit (column) vector. (Remark: observe that Bt=0).(2) Choose a uniformly random matrixQ←$ℤqn×mℓ and a discrete Gaussian matrix W←χm×mℓ, and define the following:(6)C=BTQ+W+μGm∈ℤqm×mℓ.Note thatC is nicely a GPV-FHE ciphertext [26] encrypting μ under the secret key t. In particular,(7)tTC=tTW+μtTGm≈μtTGm.errorβC(3) Choose a matrixR←$ℤqn×mℓ and a discrete Gaussian matrix X←χm×mℓ, and define the following:(8)D=ATR+X+μGm∈ℤqm×mℓ.Here,D is regarded as a commitment to the message μ under commitment randomness R,X.(4) Choose a matrixS←$ℤqn2ℓ×mℓ and a discrete Gaussian matrix Y←χnmℓ×mℓ, and define the following:(9)E=Inℓ⊗BTS+Y+R⊗g⊗em∈ℤqnmℓ×mℓ.Note that(10)Inℓ⊗tT·E=Inℓ⊗tT·Y+R⊗g⊗em≈R⊗g.errorβETherefore,E is regarded as a sort of encryption of R (the tensor product with g corresponding to some bit decomposition appeared in expansion algorithm is vital to control the error growth), the former part of the commitment randomness used in D.(5) Choose a uniformly random matrixT←$ℤqnm×mℓ and a discrete Gaussian matrix Z←χm2×mℓ, and define the following:(11)F=Im⊗BTT+Z+X⊗em∈ℤqm2×mℓ.Note that:(12)Im⊗tT·F=Im⊗tT·Z+X⊗em≈X.errorβF.Therefore,F is regarded as a sort of encryption of X, the latter part of commitment randomness used in D.(6) Output a “fresh” ciphertextid;C,D,E,F to identity id.(iv) MIFHE.ExpandMPK,idk+1,id1,id2,…,idk;C,D,E,F: on inputting MPK, identity idk+1, and ciphertext id1,id2,…,idk;C∈ℤqmk×mkℓ,D∈ℤqm×mℓ,E∈ℤqnmkℓ×mℓ,F∈ℤqm2kℓ×mℓ encrypting μ under identities id1,id2,…,idk, do:(1) Setuk+1=Hidk+1.(2) We define the following:(13)C′=CU0D∈ℤqmk+1×mk+1ℓ,  where U is defined as follows:(14)U=Gn−T−uk+1⊗Imk·E∈ℤqmk×mℓ.(3) We leave the commitment and its randomness unchanged:D′=D,R′=R, and X′=X.(4) We define the following:(15)E′=Inℓ⊗Imk0m×mk·E∈ℤqnmk+1ℓ×mℓ.(5) Similarly, we define the following:(16)F′=Im⊗Imk0m×mk·F∈ℤqm2k+1×mℓ.(6) Outputid1,id2,…,idk+1;C′,D′,E′,F′ as the “expanded” ciphertext to identities id1,id2,…, idk+1.(v) MIFHE.NAND(id1,id2,…,idk;C1,D1,E1,F1,id1,id2,…,idk;C2,D2,E2,F2): on inputting two ciphertexts id1,id2,…,idk;C1,D1,E1,F1,id1,id2,…,idk;C2,D2,E2,F2 that encrypt μ1,μ2 under identities id1,id2,…,idk, do:(1) We define the following:(17)CNAND=Gmk−C1·Gmk−1C2∈ℤqmk×mkℓ.(2) We define the following:(18)DNAND=Gm−D1·Gm−1D2∈ℤqm×mℓ.(3) We define the following:(19)ENAND=−E1·Gm−1D2+Inℓ⊗C1·Gnmkℓ−1E2∈ℤqnmkℓ×mℓ.(4) We define the following:(20)FNAND=−F1·Gm−1D2+Im⊗C1·Gm2k−1F2∈ℤqm2k×mℓ.(5) Finally, outputid1,id2,…,idk;CNAND,DNAND,ENAND,FNAND as the 
“evaluated” NAND ciphertext.(vi) MIFHE.EvalMPK,f,c1,c2,…,cN: on inputting MPK, NAND circuit f:0,1N⟶0,1, and any N ciphertexts c1,c2,…,cN, compute homomorphically f over any N ciphertexts c1,c2,…,cN gate by gate by invoking MIFHE.Expand and MIFHE.NAND, and output an “evaluated” ciphertext cf.(vii) MIFHE.Dec(SKid1,SKid2,…,SKidk,id1,id2,…,idk;C,D,E,F): on inputting the secret keys SKid1,SKid2,…,SKidk and a ciphertext id1,id2,…,idk;C∈ℤqmk×mkℓ,D∈ℤqm×mℓ,E∈ℤqnmkℓ×mℓ,F∈ℤqm2k×mℓ under identities id1,id2,…,idk, let t∈ℤmk be the (column) concatenation of the secret keys SKid1,SKid2,…,SKidk, and compute:(21)tTC≈μtTGmk.errorβCIf βC<q/4, where q is set in the next section, we can recover μ from the last term of vector μtTGmk: if this term is closer to 0, output 0; otherwise, output 1. ### 3.2. Analyzing the Noise Growth and Setting the Parameters Now, we provide the reasons for definitions and analyze the noise growth inMIFHE.Expand and MIFHE.NAND to easily set the parameters. We instantiate the parameters and ensure correctness of MIFHE.First, as described in the previous section in theMIFHE.Expand algorithm, let us do the following analysis:(1) We have the following:(22)C′=CU0D∈ℤqmk+1×mk+1ℓ.Givent∈ℤmk that is the (column) concatenation of the secret keys corresponding to identities id1,id2,…,idk, respectively, and a new secret key tk+1, which is the secret key of idk+1, we set t′=t,tk+1 and then have the following:(23)t′T·C′=tT·C,tT·U+tk+1T·D,≈μtT·Gmk,tT·U+tk+1T·DerrorβC=μtT·Gmk,tT·Gn−T−uk+1⊗Imk·E+tk+1T·D=μtT·Gmk,Gn−T−uk+1⊗tT·E+tk+1T·D=μtT·Gmk,Gn−T−uk+1·Inℓ⊗tT·E+tk+1T·D=μtT·Gmk,Gn−T−uk+1·Inℓ⊗tT·Y+GnT·R+tk+1T·D≈μtT·Gmk,Gn−T−uk+1·GnT·R+tk+1T·DerrornℓβE=μtT·Gmk,−uk+1T·R+tk+1T·D=μtT·Gmk,−uk+1T·R+tk+1T·ATR+X+μGm=μtT·Gmk,−uk+1T·R+uk+1T·R+tk+1T·X+μtk+1T·Gm=μtT·Gmk,tk+1T·X+μtk+1T·Gm≈μtT·Gmk,μtk+1T·GmerrormBE=μt′T·Gmk+1, which indicates that (7) holds. In general, the error implied in the “expanded” ciphertext C′ is as follows:(24)βC′=βC+nℓβE+mBE.(2) This visibly satisfies equation (8), and the error implied in the “expanded” ciphertext D′ is as follows:(25)βD′=βD.(3) We have the following:(26)E′=Inℓ⊗Imk0m×mk·E∈ℤqnmk+1ℓ×mℓ.It is obvious that:(27)Inℓ⊗t′T·E′=Inℓ⊗tT·E≈R⊗g=R′⊗g.errorβE.Thus, (10) is kept up to expansion.(4) We have the following:(28)F′=Im⊗Imk0m×mk·F∈ℤqm2k+1×mℓ.Similar to the above step, (12) is also kept up to expansion as follows:(29)Im⊗t′T·F′=Im⊗tT·F≈X=X′.errorβF.Second, we analyze theMIFHE.NAND algorithm.(1) We have the following:(30)CNAND=Gmk−C1·Gmk−1C2∈ℤqmk×mkℓ.Witht∈ℤmk, which is the (column) concatenation of the secret keys that correspond to identities id1,id2,…,idk, we have the following:(31)tT·CNAND=tT·Gmk−C1·Gmk−1C2=tT·Gmk−tT·C1·Gmk−1C2≈tT·Gmk−μ1tT·Gmk·Gmk−1C2errormkℓ·βC1=tT·Gmk−μ1tT·C2≈tT·Gmk−μ1μ2tT·GmkerrorβC2=1−μ1μ2tT·Gmk, which indicates that (7) holds. 
In total, the error implied in the NAND ciphertext CNAND is as follows:(32)βCNAND=mkℓ·βC1+βC2.(2) We have the following:(33)DNAND=Gm−D1·Gm−1D2∈ℤqm×mℓ,and the commitment randomness is as follows:(34)RNAND=−R1·Gm−1D2−μ1R2,XNAND=−X1·Gm−1D2−μ1X2.By simply computing, we can see that (8) is preserved:(35)DNAND=Gm−D1·Gm−1D2=Gm−ATR1+X1+μ1Gm·Gm−1D2=Gm−ATR1+X1·Gm−1D2−μ1D2=Gm−ATR1+X1·Gm−1D2−μ1ATR2+X2+μ2Gm=AT·−R1·Gm−1D2−μ1R2+−X1·Gm−1D2−μ1X2+1−μ1μ2Gm=AT·RNAND+XNAND+1−μ1μ2Gm.(3) We have the following:(36)ENAND=−E1·Gm−1D2+Inℓ⊗C1·Gnmkℓ−1E2∈ℤqnmkℓ×mℓ.To see that (10) holds for ENAND, first note that:(37)Inℓ⊗tT·E1·Gm−1D2=Inℓ⊗tT·E1·Gm−1D2≈R1⊗g·Gm−1D2errormℓ·βE1=R1·Gm−1D2⊗g.Second, note that:(38)Inℓ⊗tT·Inℓ⊗C1·Gnmkℓ−1E2=Inℓ⊗tT·C1·Gnmkℓ−1E2≈Inℓ⊗μ1tT·Gmk·Gnmkℓ−1E2errornmkℓ2·βC1=μ1Inℓ⊗tT⊗gT·Gnmkℓ−1E2=μ1Inℓ⊗tT·Inmkℓ⊗gT·Gnmkℓ−1E2=μ1Inℓ⊗tT·E2≈μ1R2⊗g⋅errorβE2.Finally, from the above, we have the following:(39)Inℓ⊗tT·ENAND≈−R1·Gm−1D2−μ1R2⊗gerrornmkℓ2·βC1+mℓ·βE1+βE2=RNAND⊗g, which indicates that (10) holds.(4) We have the following:(40)FNAND=−F1·Gm−1D2+Im⊗C1·Gm2k−1F2∈ℤqm2k×mℓ.To see that (12) holds for FNAND, first, note that:(41)Im⊗tT·F1·Gm−1D2=Im⊗tT·F1·Gm−1D2≈X1·Gm−1D2⋅errormℓ·βF1.Second, note that:(42)Im⊗tT·Im⊗C1·Gm2k−1F2=Im⊗tT·C1·Gm2k−1F2≈Im⊗μ1tT·Gmk·Gm2k−1F2errorm2kℓ·βC1=μ1Im⊗tT⊗gT·Gm2k−1F2=μ1Im⊗tT·Im2k⊗gT·Gm2k−1F2=μ1Im⊗tT·F2≈μ1X2⋅errorβF2.Finally, from the above, we have the following:(43)Im⊗tT·FNAND≈−X1·Gm−1D2−μ1X2errorm2kℓ·βC1+mℓ·βF1+βF2=XNAND, which indicates that (12) holds.Then, as [17], we now instantiate the parameters by bounding the worst-case error growth when homomorphically computing a depth dNAND circuit for up to L distinct identities. For a ciphertext id1,id2,…,idk;C,D,E,F with commitment randomness R,X, the max error is defined:(44)β=maxβC,βE,βF,BE.With the bounds from the above, for any ciphertext with errors bounded byβ, its “expanded” ciphertext has a max error of at most nℓ+m+1·β=polyn,ℓ·β. Similarly, when we homomorphically compute an NAND gate of two ciphertexts with errors bounded by β, the result has a max error of at most m2Lℓ+mℓ+1·β=polyn,ℓ,L·β. Thus, after computing any depth dNAND circuit on “fresh” ciphertexts under L distinct keys, then the result has a max error of at most:(45)polyn,ℓ,Ld+L.Thus, we can set4·polyn,ℓ,Ld+L=q for the correctness of decryption. Recall that ℓ=Θlogq=O˜d+L; therefore, polyn,ℓ,Ld+L=polyn,d,Ld+L. Thus, the security of our scheme corresponds to a worst-case n-dimensional lattice problem with an approximation factor of polyn,d,Ld+L.Finally, the compactness requirement is satisfied because any ciphertext in our construction is bounded bypolyn,d,L=polyλ,d,L. ### 3.3. Security Now, we prove that the proposed scheme MIFHE is IND-sID-CPA secure under the hardness of the DLWE assumption in the random oracle model.Theorem 1. The multi-hop multi-identity fully homomorphic encryption scheme MIFHE, which was constructed in Section 3.1, is IND-sID-CPA secure in the random oracle model assuming that the DLWEn,q,χassumption holds.Proof. We prove the security of the proposed scheme MIFHE using a sequence of hybrid games. The first one of this game is the real IND-sID-CPA security game in Definition5, and the last one is the ideal game, where the challenge ciphertext (except challenge identity id∗) is uniformly random and independent of the challenge bit μ∗. We proceed by considering a sequence of hybrid games as follows:Game 0: This is the original game described in Definition5, and it is IND-sID-CPA security. 
Recall that id∗∈ℐ is the target identity; that is, attacker A plans to attack id∗, and the challenge ciphertext is id∗;C∗,D∗,E∗,F∗ encrypting μ∗.Game 1: In this game, we change the methods of generating master public keyA, answering hash (random oracle) queries and answering identity secret key queries as follows.(1) Uniformly select at random a matrixA1←$ℤqn×m−1 and a vector a←$ℤqn, and set A=A1‖a∈ℤqn×m.(2) Uniformly select at random a vectoruid∗←$ℤqn.(3) When attackerA issues a hash query H on identity id∈ℐ, do:(a) Ifid=id∗, return uid∗.(b) Otherwise, ifid,uid,tid∈store, return uid.(c) Otherwise, sample vectorsid∈Dℤm−1,r, and compute uid=A1sid+a, set tid=sid,1, and store id,uid,tid locally. Finally, return uid.(4) When attackerA issues an identity secret key query on identity id∈ℐ, where id≠id∗, without loss of generality, we assume that A has queried H on id and return tid, where id,uid,tid∈store.Game 2: this game is the same as Game 1 except thatC∗,E∗,F∗, which is a part of the challenge ciphertext, is selected as uniformly random independent elements in ℤqm×mℓ×ℤqnmℓ×mℓ×ℤqm2×mℓ.Game 3: this game is the same as Game 2, except thatD∗ is selected as a uniformly random element in ℤqm×mℓ. In fact, Game 3 is the ideal game. We show the indistinguishability among all sequential hybrid games.Lemma 4. Game 0 and Game 1 are statistically indistinguishable.Proof. We show that Game 0 is statistically close to Game 1 by analyzing that the changes are undetectable by any attacker between them step by step using Lemmas1–3. First, note that while the former part of master public keyA1 is generated by running algorithm TrapGenn,m−1,q (with a trapdoor TA1) in Game 0, A1 is sampled from the uniform distribution over ℤqn×m−1 in Game 1. By Lemma 1, A1 in Game 0 is distributed statistically close to a uniform distribution over ℤqn×m−1 as in Game 1. Second, in regard to the simulation of hash queryH, we discuss two cases:(1) Ifid=id∗, then return uid∗, which was uniformly sampled from ℤqn at random. This perfectly simulates hash query Hid∗.(2) Otherwise, sample a Gaussian vectorsid∈Dℤm−1,r, compute uid=A1sid+a, and return uid. Here, uid is distributed statistically close to the uniform distribution over ℤqn, since A1sid is distributed statistically close to uniform distribution over ℤqn by Lemma 2 and a←$ℤqn. Finally, note that the identity secret key ofid is tid=sid,1, where sid is generated by running algorithm SamplePreA1,TA1,uid−a,r in Game 0, and the identity secret key of id is tid=sid,1, where sid is sampled from Dℤm−1,r such that uid=A1sid+a in Game 1. By Lemma 3, sid in Game 0 is distributed statistically close to DΛuid−a⊥A1,r such that A1sid=uid−a. Thus, the identity secret key tid in Game 0 is distributed statistically close to that in Game 1. Therefore, Game 0 and Game 1 are statistically indistinguishable.Lemma 5. Game 1 and Game 2 are computationally indistinguishable.Proof. The computational indistinguishability between Game 1 and Game 2 follows the assumed intractability of DLWE (and MLWE and TLWE). To show this behavior, we present a simulatorS that can draw samples and form B,C∗,E∗,F∗; it simulates Game 1 when the samples are DLWE samples; it simulates Game 2 when they are uniformly random samples. S proceeds as follows:(1) Draw sufficient samples, and formB,C∗,E∗,F∗. Uniformly select at random a vector uid∗←$ℤqn, and let A=B+enT⊗uid∗∈ℤqn×m.(2) The hash query and identity secret key query are identical to those in Game 1. GenerateD∗ exactly as in MIFHE.Enc. 
Therefore, ifB,C∗,E∗,F∗ is sampled from DLWE, then S perfectly simulates Game 1. In contrast, if B,C∗,E∗,F∗ is uniformly random, then S simulates perfectly Game 2previous.Lemma 6. Game 2 and Game 3 are computationally indistinguishable.Proof. Similar to the previous proof of Lemma5, the computational indistinguishability between Game 2 and Game 3 follows the assumed intractability of DLWE (or MLWE). To show this effect, we present a simulator S that can draw samples and form A,D∗; it simulates Game 2 when the samples are DLWE samples; it simulates Game 3 when they are uniformly random samples. S proceeds as follows:(1) Draw sufficient samples, and formA,D∗. Uniformly choose at random a vector uid∗←$ℤqn, and let B=A−enT⊗uid∗∈ℤqn×m.(2) The hash query and identity secret key query are identical to those in Game 2. Uniformly choose randomC∗,E∗,F∗. Therefore, ifA,D∗ is sampled from DLWE, then S perfectly simulates Game 2. In contrast, if A,D∗ is uniformly random, then S perfectly simulates Game 3.There is no information of messageμ∗ in Game 3. Additionally, Game 0 and Game 3 are computationally indistinguishable by Lemmas 4–6. Therefore, if the DLWE is difficult, attacker A only has negligible advantage, which completes the proof of the IND-sID-CPA security of MIFHE. ## 3.1. MIFHE Scheme In this section, we will describe the proposed MIFHE scheme. We present one more algorithmMIFHE.NAND to help understand MIFHE.Eval.We parameterize the system by dimensionn=polyλ, modulus q, and error distribution χ for the underlying LWE problem; we set m=⌈6nlogq⌉, r=Onlogq·ωlogm, and B=Θrn. For the worst-case security, we set χ to be the standard discrete Gaussian distribution over ℤ with parameter 2n, which implies that the samples drawn from χ have magnitudes bounded by E=Θn except with probability 2−n. Modulus q is set in the following Section 3.2 based on the bound on the maximum depth of the supported circuit and the bound of the number of identities. The scheme is described as follows:(i) MIFHE.Setup1λ,1d,1L: On inputting security parameter λ, bound d on the NAND circuit depth, and bound L on the number of identities in one evaluation, do(1) Run algorithm TrapGenn,m−1,q to generate a uniformly random matrix A1∈ℤqn×m−1 with a short basis TA1∈ℤm−1×m−1 for Λq⊥A1 such that T˜A1≤Onlogq.(2) Choose a vectora←$ℤqn and set A=A1‖a∈ℤqn×m.(3) OutputMPK=A=A1‖a as the master public key and output MSK=TA1 as the master secret key.(ii) MIFHE.ExtractMPK,MSK,id: on inputting MPK, MSK, and identity id∈ℐ, do:(1) Ifid,u,SKid∈storage is from a previous inquiry on identity id, then return SKid. Otherwise, compute u=Hid∈ℤn, where H is a hash function modeled as a random oracle.(2) RunSamplePreA1,TA1,u−a,r to output a vector s∈ℤm−1 such that A1s=u−a. Set user-specific secret key SKid=t=s,1, and store id,u,t locally. Note that At=u and t∞≤B.(3) OutputSKid=t.(iii) MIFHE.EncMPK,id,μ∈0,1: on inputting master public key MPK, identity id∈ℐ, and bit message μ∈0,1, do:(1) Setu=Hid∈ℤn and compute B=A−enT⊗u∈ℤqn×m, where en is the nth standard unit (column) vector. (Remark: observe that Bt=0).(2) Choose a uniformly random matrixQ←$ℤqn×mℓ and a discrete Gaussian matrix W←χm×mℓ, and define the following:(6)C=BTQ+W+μGm∈ℤqm×mℓ.Note thatC is nicely a GPV-FHE ciphertext [26] encrypting μ under the secret key t. 
In particular,
(7) tᵀC = tᵀW + μtᵀG_m ≈ μtᵀG_m   (error β_C).
(3) Choose a matrix R ←$ ℤ_q^{n×mℓ} and a discrete Gaussian matrix X ← χ^{m×mℓ}, and define
(8) D = AᵀR + X + μG_m ∈ ℤ_q^{m×mℓ}.
Here, D is regarded as a commitment to the message μ under the commitment randomness (R, X).
(4) Choose a matrix S ←$ ℤ_q^{n²ℓ×mℓ} and a discrete Gaussian matrix Y ← χ^{nmℓ×mℓ}, and define
(9) E = (I_{nℓ} ⊗ Bᵀ)S + Y + R ⊗ g ⊗ e_m ∈ ℤ_q^{nmℓ×mℓ}.
Note that
(10) (I_{nℓ} ⊗ tᵀ)·E = (I_{nℓ} ⊗ tᵀ)·(Y + R ⊗ g ⊗ e_m) ≈ R ⊗ g   (error β_E).
Therefore, E is regarded as a kind of encryption of R, the former part of the commitment randomness used in D; the tensor product with g, which corresponds to the bit decomposition appearing in the expansion algorithm, is vital for controlling the error growth.
(5) Choose a uniformly random matrix T ←$ ℤ_q^{nm×mℓ} and a discrete Gaussian matrix Z ← χ^{m²×mℓ}, and define
(11) F = (I_m ⊗ Bᵀ)T + Z + X ⊗ e_m ∈ ℤ_q^{m²×mℓ}.
Note that
(12) (I_m ⊗ tᵀ)·F = (I_m ⊗ tᵀ)·(Z + X ⊗ e_m) ≈ X   (error β_F).
Therefore, F is regarded as a kind of encryption of X, the latter part of the commitment randomness used in D.
(6) Output the “fresh” ciphertext (id; C, D, E, F) to identity id.

(iv) MIFHE.Expand(MPK, id_{k+1}, (id₁, id₂, …, id_k; C, D, E, F)): on input MPK, an identity id_{k+1}, and a ciphertext (id₁, …, id_k; C ∈ ℤ_q^{mk×mkℓ}, D ∈ ℤ_q^{m×mℓ}, E ∈ ℤ_q^{nmkℓ×mℓ}, F ∈ ℤ_q^{m²k×mℓ}) encrypting μ under the identities id₁, …, id_k, do:
(1) Set u_{k+1} = H(id_{k+1}).
(2) Define
(13) C′ = [C U; 0 D] ∈ ℤ_q^{m(k+1)×m(k+1)ℓ}
(a block matrix with C and U in the top row and 0 and D in the bottom row), where
(14) U = (G_n^{−T}(−u_{k+1}) ⊗ I_{mk})·E ∈ ℤ_q^{mk×mℓ}.
(3) Leave the commitment and its randomness unchanged: D′ = D, R′ = R, and X′ = X.
(4) Define
(15) E′ = (I_{nℓ} ⊗ [I_{mk}; 0_{m×mk}])·E ∈ ℤ_q^{nm(k+1)ℓ×mℓ}.
(5) Similarly, define
(16) F′ = (I_m ⊗ [I_{mk}; 0_{m×mk}])·F ∈ ℤ_q^{m²(k+1)×mℓ}.
(6) Output (id₁, id₂, …, id_{k+1}; C′, D′, E′, F′) as the “expanded” ciphertext under the identities id₁, id₂, …, id_{k+1}.

(v) MIFHE.NAND((id₁, …, id_k; C₁, D₁, E₁, F₁), (id₁, …, id_k; C₂, D₂, E₂, F₂)): on input two ciphertexts that encrypt μ₁ and μ₂ under the same identities id₁, …, id_k, do:
(1) Define
(17) C_NAND = G_{mk} − C₁·G_{mk}^{−1}(C₂) ∈ ℤ_q^{mk×mkℓ}.
(2) Define
(18) D_NAND = G_m − D₁·G_m^{−1}(D₂) ∈ ℤ_q^{m×mℓ}.
(3) Define
(19) E_NAND = −E₁·G_m^{−1}(D₂) + (I_{nℓ} ⊗ C₁)·G_{nmkℓ}^{−1}(E₂) ∈ ℤ_q^{nmkℓ×mℓ}.
(4) Define
(20) F_NAND = −F₁·G_m^{−1}(D₂) + (I_m ⊗ C₁)·G_{m²k}^{−1}(F₂) ∈ ℤ_q^{m²k×mℓ}.
(5) Finally, output (id₁, …, id_k; C_NAND, D_NAND, E_NAND, F_NAND) as the “evaluated” NAND ciphertext.

(vi) MIFHE.Eval(MPK, f, c₁, c₂, …, c_N): on input MPK, a NAND circuit f : {0,1}^N → {0,1}, and N ciphertexts c₁, c₂, …, c_N, homomorphically compute f over c₁, …, c_N gate by gate by invoking MIFHE.Expand and MIFHE.NAND, and output the “evaluated” ciphertext c_f.

(vii) MIFHE.Dec(SK_{id₁}, …, SK_{id_k}, (id₁, …, id_k; C, D, E, F)): on input the secret keys SK_{id₁}, …, SK_{id_k} and a ciphertext (id₁, …, id_k; C ∈ ℤ_q^{mk×mkℓ}, D ∈ ℤ_q^{m×mℓ}, E ∈ ℤ_q^{nmkℓ×mℓ}, F ∈ ℤ_q^{m²k×mℓ}) under the identities id₁, …, id_k, let t ∈ ℤ^{mk} be the (column) concatenation of the secret keys SK_{id₁}, …, SK_{id_k}, and compute
(21) tᵀC ≈ μtᵀG_{mk}   (error β_C).
If β_C < q/4, where q is set in the next section, we can recover μ from the last entry of the vector μtᵀG_{mk}: if this entry is closer to 0 than to q/2, output 0; otherwise, output 1.

## 3.2. Analyzing the Noise Growth and Setting the Parameters

We now justify the definitions above and analyze the noise growth in MIFHE.Expand and MIFHE.NAND, which makes it easy to set the parameters.
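The homomorphic operations above repeatedly use the bit-decomposition operator G^{−1}(·), which maps a matrix over ℤ_q to a 0/1 matrix satisfying G·G^{−1}(M) = M. A minimal sketch with our own naming and toy sizes:

```python
import numpy as np

def g_inverse(M, q, ell):
    """Bit-decompose each entry of M (values in [0, q)) into ell bits, so that
    G @ g_inverse(M) == M for the gadget matrix G = I kron (1, 2, ..., 2^(ell-1))."""
    M = np.asarray(M) % q
    rows, cols = M.shape
    out = np.zeros((rows * ell, cols), dtype=np.int64)
    for j in range(ell):
        out[j::ell] = (M >> j) & 1        # bit j of every entry
    return out

q, ell, m = 2**10, 10, 3
g = 2 ** np.arange(ell)
G = np.kron(np.eye(m, dtype=np.int64), g)         # m x m*ell
M = np.random.default_rng(1).integers(0, q, size=(m, 5))
assert np.all(G @ g_inverse(M, q, ell) % q == M)  # G * G^{-1}(M) = M
```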
We instantiate the parameters and ensure the correctness of MIFHE.

First, for the MIFHE.Expand algorithm of the previous section, we carry out the following analysis.

(1) We have
(22) C′ = [C U; 0 D] ∈ ℤ_q^{m(k+1)×m(k+1)ℓ}.
Given t ∈ ℤ^{mk}, the (column) concatenation of the secret keys corresponding to the identities id₁, …, id_k, and the new secret key t_{k+1} of id_{k+1}, set t′ = (t, t_{k+1}). Then
(23) t′ᵀ·C′ = (tᵀ·C, tᵀ·U + t_{k+1}ᵀ·D)
≈ (μtᵀ·G_{mk}, tᵀ·U + t_{k+1}ᵀ·D)   (error β_C)
= (μtᵀ·G_{mk}, tᵀ·(G_n^{−T}(−u_{k+1}) ⊗ I_{mk})·E + t_{k+1}ᵀ·D)
= (μtᵀ·G_{mk}, (G_n^{−T}(−u_{k+1}) ⊗ tᵀ)·E + t_{k+1}ᵀ·D)
= (μtᵀ·G_{mk}, G_n^{−T}(−u_{k+1})·(I_{nℓ} ⊗ tᵀ)·E + t_{k+1}ᵀ·D)
= (μtᵀ·G_{mk}, G_n^{−T}(−u_{k+1})·((I_{nℓ} ⊗ tᵀ)·Y + G_nᵀ·R) + t_{k+1}ᵀ·D)
≈ (μtᵀ·G_{mk}, G_n^{−T}(−u_{k+1})·G_nᵀ·R + t_{k+1}ᵀ·D)   (error nℓβ_E)
= (μtᵀ·G_{mk}, −u_{k+1}ᵀ·R + t_{k+1}ᵀ·D)
= (μtᵀ·G_{mk}, −u_{k+1}ᵀ·R + t_{k+1}ᵀ·(AᵀR + X + μG_m))
= (μtᵀ·G_{mk}, −u_{k+1}ᵀ·R + u_{k+1}ᵀ·R + t_{k+1}ᵀ·X + μt_{k+1}ᵀ·G_m)
= (μtᵀ·G_{mk}, t_{k+1}ᵀ·X + μt_{k+1}ᵀ·G_m)
≈ (μtᵀ·G_{mk}, μt_{k+1}ᵀ·G_m)   (error mBE)
= μt′ᵀ·G_{m(k+1)},
which shows that the invariant (7) is preserved under expansion. In total, the error in the “expanded” ciphertext C′ is
(24) β_{C′} = β_C + nℓβ_E + mBE.

(2) D′ = D plainly satisfies equation (8), and the error in the “expanded” D′ is
(25) β_{D′} = β_D.

(3) We have
(26) E′ = (I_{nℓ} ⊗ [I_{mk}; 0_{m×mk}])·E ∈ ℤ_q^{nm(k+1)ℓ×mℓ}.
Since t′ᵀ·[I_{mk}; 0_{m×mk}] = tᵀ, it is immediate that
(27) (I_{nℓ} ⊗ t′ᵀ)·E′ = (I_{nℓ} ⊗ tᵀ)·E ≈ R ⊗ g = R′ ⊗ g   (error β_E).
Thus (10) is preserved under expansion.

(4) We have
(28) F′ = (I_m ⊗ [I_{mk}; 0_{m×mk}])·F ∈ ℤ_q^{m²(k+1)×mℓ}.
As in the previous step, (12) is also preserved under expansion:
(29) (I_m ⊗ t′ᵀ)·F′ = (I_m ⊗ tᵀ)·F ≈ X = X′   (error β_F).

Second, we analyze the MIFHE.NAND algorithm.

(1) We have
(30) C_NAND = G_{mk} − C₁·G_{mk}^{−1}(C₂) ∈ ℤ_q^{mk×mkℓ}.
With t ∈ ℤ^{mk}, the (column) concatenation of the secret keys corresponding to the identities id₁, …, id_k, we have
(31) tᵀ·C_NAND = tᵀ·G_{mk} − tᵀ·C₁·G_{mk}^{−1}(C₂)
≈ tᵀ·G_{mk} − μ₁tᵀ·G_{mk}·G_{mk}^{−1}(C₂)   (error mkℓ·β_{C₁})
= tᵀ·G_{mk} − μ₁tᵀ·C₂
≈ tᵀ·G_{mk} − μ₁μ₂tᵀ·G_{mk}   (error β_{C₂})
= (1 − μ₁μ₂)tᵀ·G_{mk},
so the invariant (7) holds for C_NAND with the message 1 − μ₁μ₂ = NAND(μ₁, μ₂). In total, the error in the NAND ciphertext C_NAND is
(32) β_{C_NAND} = mkℓ·β_{C₁} + β_{C₂}.

(2) We have
(33) D_NAND = G_m − D₁·G_m^{−1}(D₂) ∈ ℤ_q^{m×mℓ},
with the commitment randomness
(34) R_NAND = −R₁·G_m^{−1}(D₂) − μ₁R₂,  X_NAND = −X₁·G_m^{−1}(D₂) − μ₁X₂.
A direct computation shows that (8) is preserved:
(35) D_NAND = G_m − D₁·G_m^{−1}(D₂)
= G_m − (AᵀR₁ + X₁ + μ₁G_m)·G_m^{−1}(D₂)
= G_m − (AᵀR₁ + X₁)·G_m^{−1}(D₂) − μ₁D₂
= G_m − (AᵀR₁ + X₁)·G_m^{−1}(D₂) − μ₁(AᵀR₂ + X₂ + μ₂G_m)
= Aᵀ·(−R₁·G_m^{−1}(D₂) − μ₁R₂) + (−X₁·G_m^{−1}(D₂) − μ₁X₂) + (1 − μ₁μ₂)G_m
= Aᵀ·R_NAND + X_NAND + (1 − μ₁μ₂)G_m.

(3) We have
(36) E_NAND = −E₁·G_m^{−1}(D₂) + (I_{nℓ} ⊗ C₁)·G_{nmkℓ}^{−1}(E₂) ∈ ℤ_q^{nmkℓ×mℓ}.
To see that (10) holds for E_NAND, first note that
(37) (I_{nℓ} ⊗ tᵀ)·E₁·G_m^{−1}(D₂) ≈ (R₁ ⊗ g)·G_m^{−1}(D₂) = (R₁·G_m^{−1}(D₂)) ⊗ g   (error mℓ·β_{E₁}).
Second, note that
(38) (I_{nℓ} ⊗ tᵀ)·(I_{nℓ} ⊗ C₁)·G_{nmkℓ}^{−1}(E₂) = (I_{nℓ} ⊗ tᵀ·C₁)·G_{nmkℓ}^{−1}(E₂)
≈ (I_{nℓ} ⊗ μ₁tᵀ·G_{mk})·G_{nmkℓ}^{−1}(E₂)   (error nmkℓ²·β_{C₁})
= μ₁(I_{nℓ} ⊗ tᵀ ⊗ gᵀ)·G_{nmkℓ}^{−1}(E₂)
= μ₁(I_{nℓ} ⊗ tᵀ)·(I_{nmkℓ} ⊗ gᵀ)·G_{nmkℓ}^{−1}(E₂)
= μ₁(I_{nℓ} ⊗ tᵀ)·E₂
≈ μ₁R₂ ⊗ g   (error β_{E₂}).
Finally, from the above,
(39) (I_{nℓ} ⊗ tᵀ)·E_NAND ≈ (−R₁·G_m^{−1}(D₂) − μ₁R₂) ⊗ g = R_NAND ⊗ g   (error nmkℓ²·β_{C₁} + mℓ·β_{E₁} + β_{E₂}),
which shows that (10) holds.

(4) We have
(40) F_NAND = −F₁·G_m^{−1}(D₂) + (I_m ⊗ C₁)·G_{m²k}^{−1}(F₂) ∈ ℤ_q^{m²k×mℓ}.
To see that (12) holds for F_NAND, first note that
(41) (I_m ⊗ tᵀ)·F₁·G_m^{−1}(D₂) ≈ X₁·G_m^{−1}(D₂)   (error mℓ·β_{F₁}).
Second, note that
(42) (I_m ⊗ tᵀ)·(I_m ⊗ C₁)·G_{m²k}^{−1}(F₂) = (I_m ⊗ tᵀ·C₁)·G_{m²k}^{−1}(F₂)
≈ (I_m ⊗ μ₁tᵀ·G_{mk})·G_{m²k}^{−1}(F₂)   (error m²kℓ·β_{C₁})
= μ₁(I_m ⊗ tᵀ ⊗ gᵀ)·G_{m²k}^{−1}(F₂)
= μ₁(I_m ⊗ tᵀ)·(I_{m²k} ⊗ gᵀ)·G_{m²k}^{−1}(F₂)
= μ₁(I_m ⊗ tᵀ)·F₂
≈ μ₁X₂   (error β_{F₂}).
Finally, from the above,
(43) (I_m ⊗ tᵀ)·F_NAND ≈ −X₁·G_m^{−1}(D₂) − μ₁X₂ = X_NAND   (error m²kℓ·β_{C₁} + mℓ·β_{F₁} + β_{F₂}),
which shows that (12) holds.
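The error recursions just derived, (24) for expansion and (32), (39), (43) for NAND, can be tracked mechanically. Here is a small sketch, with all names ours and purely illustrative magnitudes, that propagates the triple (β_C, β_E, β_F):

```python
def expand(beta, n, m, ell, B, E):
    """Recursions (24), (25), (27), (29): adding one identity leaves beta_E and
    beta_F unchanged and grows beta_C by n*ell*beta_E + m*B*E."""
    bC, bE, bF = beta
    return (bC + n * ell * bE + m * B * E, bE, bF)

def nand(b1, b2, k, n, m, ell):
    """Recursions (32), (39), (43) for a homomorphic NAND at k identities."""
    bC = m * k * ell * b1[0] + b2[0]
    bE = n * m * k * ell**2 * b1[0] + m * ell * b1[1] + b2[1]
    bF = m**2 * k * ell * b1[0] + m * ell * b1[2] + b2[2]
    return (bC, bE, bF)

# toy magnitudes, far from real parameters, just to watch the growth
n, m, ell, B, E = 64, 1024, 32, 100, 16
beta = (m * B * E,) * 3            # fresh errors are at most m*B*E
for _ in range(3):                 # expand to 4 identities
    beta = expand(beta, n, m, ell, B, E)
print(nand(beta, beta, 4, n, m, ell))
```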
Then, following [17], we instantiate the parameters by bounding the worst-case error growth when homomorphically computing a NAND circuit of depth d over up to L distinct identities. For a ciphertext (id₁, …, id_k; C, D, E, F) with commitment randomness (R, X), define the max error
(44) β = max{β_C, β_E, β_F, BE}.
By the bounds above, for any ciphertext with errors bounded by β, its “expanded” ciphertext has max error at most (nℓ + m + 1)·β = poly(n, ℓ)·β. Similarly, when we homomorphically compute a NAND gate of two ciphertexts with errors bounded by β, the result has max error at most (m²Lℓ + mℓ + 1)·β = poly(n, ℓ, L)·β. Thus, after computing any depth-d NAND circuit on “fresh” ciphertexts under L distinct keys, the result has a max error of at most
(45) poly(n, ℓ, L)^{d+L}.
We can therefore set q = 4·poly(n, ℓ, L)^{d+L} for the correctness of decryption. Recall that ℓ = Θ(log q) = Õ(d + L); therefore, poly(n, ℓ, L)^{d+L} = poly(n, d, L)^{d+L}. Thus, the security of our scheme corresponds to a worst-case n-dimensional lattice problem with an approximation factor of poly(n, d, L)^{d+L}.

Finally, the compactness requirement is satisfied because the size of any ciphertext in our construction is bounded by poly(n, d, L) = poly(λ, d, L).
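Because ℓ = Θ(log q) appears on both sides of the constraint q = 4·poly(n, ℓ, L)^{d+L}, the modulus can be found by a short fixed-point iteration on the bit length of q. The sketch below uses the per-expansion and per-gate factors quoted above; the inputs are illustrative, not the paper's concrete choices:

```python
def pick_modulus(n, m, L, d):
    """Fixed-point iteration for ell = bit length of q, with
    q = 4 * (n*ell + m + 1)**L * (m**2*L*ell + m*ell + 1)**d."""
    ell = 32                                  # initial guess
    for _ in range(64):
        q = 4 * (n * ell + m + 1) ** L * (m ** 2 * L * ell + m * ell + 1) ** d
        new_ell = q.bit_length()
        if new_ell == ell:                    # monotone iteration has stabilized
            break
        ell = new_ell
    return q, ell

q, ell = pick_modulus(n=64, m=1024, L=4, d=16)   # illustrative inputs
assert ell == q.bit_length()
```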
## 3.3. Security

We now prove that the proposed scheme MIFHE is IND-sID-CPA secure under the hardness of the DLWE assumption in the random oracle model.

Theorem 1. The multi-hop multi-identity fully homomorphic encryption scheme MIFHE constructed in Section 3.1 is IND-sID-CPA secure in the random oracle model, assuming that the DLWE_{n,q,χ} assumption holds.

Proof. We prove the security of MIFHE using a sequence of hybrid games. The first is the real IND-sID-CPA security game of Definition 5, and the last is the ideal game, in which the challenge ciphertext (apart from the challenge identity id∗) is uniformly random and independent of the challenge bit μ∗. The sequence of hybrid games is as follows.

Game 0: the original IND-sID-CPA game described in Definition 5. Recall that id∗ ∈ ℐ is the target identity, that is, attacker A commits to attacking id∗, and the challenge ciphertext is (id∗; C∗, D∗, E∗, F∗), encrypting μ∗.

Game 1: in this game, we change how the master public key A is generated and how hash (random oracle) queries and identity secret key queries are answered, as follows.
(1) Uniformly select at random a matrix A₁ ←$ ℤ_q^{n×(m−1)} and a vector a ←$ ℤ_q^n, and set A = [A₁ ∥ a] ∈ ℤ_q^{n×m}.
(2) Uniformly select at random a vector u_{id∗} ←$ ℤ_q^n.
(3) When attacker A issues a hash query H on an identity id ∈ ℐ, do:
(a) if id = id∗, return u_{id∗};
(b) otherwise, if (id, u_id, t_id) ∈ store, return u_id;
(c) otherwise, sample a vector s_id ∈ D_{ℤ^{m−1},r}, compute u_id = A₁s_id + a, set t_id = (s_id, 1), store (id, u_id, t_id) locally, and finally return u_id.
(4) When attacker A issues an identity secret key query on an identity id ∈ ℐ with id ≠ id∗ (without loss of generality, A has already queried H on id), return t_id, where (id, u_id, t_id) ∈ store.

Game 2: the same as Game 1, except that (C∗, E∗, F∗), part of the challenge ciphertext, is selected as uniformly random independent elements of ℤ_q^{m×mℓ} × ℤ_q^{nmℓ×mℓ} × ℤ_q^{m²×mℓ}.

Game 3: the same as Game 2, except that D∗ is selected as a uniformly random element of ℤ_q^{m×mℓ}. Game 3 is the ideal game. We now show the indistinguishability of all consecutive hybrid games.

Lemma 4. Game 0 and Game 1 are statistically indistinguishable.

Proof. We show that Game 0 is statistically close to Game 1 by checking, step by step with Lemmas 1–3, that the changes are undetectable by any attacker. First, note that while the left part A₁ of the master public key is generated by running TrapGen(n, m−1, q) (with a trapdoor T_{A₁}) in Game 0, A₁ is sampled from the uniform distribution over ℤ_q^{n×(m−1)} in Game 1. By Lemma 1, A₁ in Game 0 is distributed statistically close to uniform over ℤ_q^{n×(m−1)}, as in Game 1. Second, regarding the simulation of the hash query H, we distinguish two cases:
(1) if id = id∗, the returned u_{id∗} was sampled uniformly at random from ℤ_q^n, which perfectly simulates the hash query H(id∗);
(2) otherwise, a Gaussian vector s_id ∈ D_{ℤ^{m−1},r} is sampled and u_id = A₁s_id + a is returned; here u_id is distributed statistically close to the uniform distribution over ℤ_q^n, since A₁s_id is statistically close to uniform over ℤ_q^n by Lemma 2 and a ←$ ℤ_q^n.
Finally, note that the identity secret key of id is t_id = (s_id, 1), where s_id is generated by running SamplePre(A₁, T_{A₁}, u_id − a, r) in Game 0, whereas in Game 1 it is t_id = (s_id, 1) with s_id sampled from D_{ℤ^{m−1},r} such that u_id = A₁s_id + a. By Lemma 3, s_id in Game 0 is distributed statistically close to D_{Λ^⊥_{u_id−a}(A₁),r}, conditioned on A₁s_id = u_id − a. Thus, the identity secret key t_id in Game 0 is distributed statistically close to that in Game 1. Therefore, Game 0 and Game 1 are statistically indistinguishable.

Lemma 5. Game 1 and Game 2 are computationally indistinguishable.

Proof. The computational indistinguishability of Game 1 and Game 2 follows from the assumed intractability of DLWE. To show this, we present a simulator S that draws samples and forms (B, C∗, E∗, F∗); it simulates Game 1 when the samples are DLWE samples and Game 2 when they are uniformly random. S proceeds as follows:
(1) draw sufficiently many samples and form (B, C∗, E∗, F∗); uniformly select at random a vector u_{id∗} ←$ ℤ_q^n, and let A = B + e_nᵀ ⊗ u_{id∗} ∈ ℤ_q^{n×m};
(2) answer hash queries and identity secret key queries exactly as in Game 1, and generate D∗ exactly as in MIFHE.Enc.
Therefore, if (B, C∗, E∗, F∗) is sampled from DLWE, then S perfectly simulates Game 1; in contrast, if (B, C∗, E∗, F∗) is uniformly random, then S perfectly simulates Game 2.

Lemma 6. Game 2 and Game 3 are computationally indistinguishable.

Proof. As in the proof of Lemma 5, the computational indistinguishability of Game 2 and Game 3 follows from the assumed intractability of DLWE. We present a simulator S that draws samples and forms (A, D∗); it simulates Game 2 when the samples are DLWE samples and Game 3 when they are uniformly random. S proceeds as follows:
(1) draw sufficiently many samples and form (A, D∗); uniformly choose at random a vector u_{id∗} ←$ ℤ_q^n, and let B = A − e_nᵀ ⊗ u_{id∗} ∈ ℤ_q^{n×m};
(2) answer hash queries and identity secret key queries exactly as in Game 2, and choose (C∗, E∗, F∗) uniformly at random.
Therefore, if (A, D∗) is sampled from DLWE, then S perfectly simulates Game 2; in contrast, if (A, D∗) is uniformly random, then S perfectly simulates Game 3.

Game 3 contains no information about the message μ∗. Moreover, Game 0 and Game 3 are computationally indistinguishable by Lemmas 4–6. Therefore, if DLWE is hard, attacker A has only negligible advantage, which completes the proof of the IND-sID-CPA security of MIFHE.
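The common core of the proofs of Lemmas 5 and 6 is the embedding of the challenge samples into the master public key via a rank-one shift by the programmed hash value u_{id∗}. Below is a schematic sketch of the Lemma 5 simulator; the function name is ours, and reading the paper's e_nᵀ ⊗ u_{id∗} as a shift of a single column of B is our interpretation:

```python
import numpy as np

def lemma5_simulator(B, C_star, E_star, F_star, q, rng):
    """Schematic reduction for Lemma 5.  (B, C*, E*, F*) are either DLWE
    samples or uniformly random; the simulator embeds B as the master public
    key shifted by the programmed hash value u_id*, and never needs a trapdoor."""
    n, m = B.shape
    u_star = rng.integers(0, q, size=n)        # H(id*) is programmed to u_star
    A = B.copy()
    A[:, -1] = (A[:, -1] + u_star) % q         # rank-one shift of one column of B
    # Hash and secret-key queries are then answered as in Game 1; D* is
    # generated exactly as in MIFHE.Enc, and (C*, D*, E*, F*) is the challenge.
    return A, (C_star, E_star, F_star)
```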
## 4. Conclusion and Open Problem

We have presented a multi-hop MIFHE scheme that is IND-sID-CPA secure in the random oracle model under the standard LWE assumption. However, the proposed MIFHE scheme is only leveled homomorphic. It is therefore an interesting open problem to construct a nonleveled multi-hop MIFHE scheme (i.e., one with no a priori bound on the depth of the circuits) under standard assumptions such as LWE (without unfalsifiable iO or WPRF).

--- *Source: 1023439-2022-03-22.xml*
2022
# Optimizing Schedules of Rail Train Circulations by Tabu Search Algorithm

**Authors:** Mingming Chen; Huimin Niu

**Journal:** Mathematical Problems in Engineering (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/102346

---

## Abstract

This paper develops an integer programming model for the scheduling problem of train circulations on an intercity rail line. The model, which minimizes the sum of the interval times between any two consecutive tasks, characterizes the train operation process. Two main constraints, the time-shift constraint and the equilibrium constraint, are considered to obtain feasible and practical train schedules. A heuristic procedure using a tabu search algorithm is designed to solve the model, introducing a penalty function and a neighborhood search method with trip exchange and insert strategies. A computational experiment performed on test instances provided by two major stations on the Beijing–Tianjin Intercity Railway in China illustrates the proposed model and algorithm.

---

## Body

## 1. Introduction

The transit scheduling problem is a major area in operations research because of the complexity of the problems that arise from various transit modes, such as airlines, railways, maritime transport, and urban transit. Vehicle scheduling and crew scheduling are the two main problems in this area. Traditionally, the two problems are solved separately: first the vehicle scheduling problem and then the crew scheduling problem [1–3].

Recently, many researchers have addressed the two problems simultaneously. In [4], a single-depot case with a homogeneous fleet of vehicles was considered, and an exact approach was proposed to solve the simultaneous vehicle and crew scheduling problem in urban mass transit systems. An integrated approach to solving a vehicle scheduling problem and a crew scheduling problem on a single bus route was presented in [5]. An integrated vehicle and crew scheduling problem was described in [6] using an integer linear programming formulation that combines a multicommodity network flow model with a set partitioning/covering model. An approach was presented in [7] to solve the bus crew scheduling problem considering early, day, and late duty modes with time-shift and work intensity constraints. The authors in [8] proposed an integrated vehicle-crew-roster model with a days-off pattern, which aims to simultaneously determine minimum-cost vehicle and daily crew schedules.

In the field of rail transit, the train and crew scheduling problems are key steps in the rail operational process. The train scheduling problem assigns trains to a set of trips generated by a train timetable; the crew scheduling problem assigns crews to trains operating at a given schedule. The authors in [9] proposed a phase-regular scheduling method that applies a regular train-departing interval and the same train length within each period under period-dependent demand. In [10], a binary integer programming model incorporating passenger loading and departure events was built to optimize the passenger train timetable in a heavily congested urban rail corridor. The authors in [11] established an optimization model based on maximum passenger satisfaction for train operations at the junction station of passenger dedicated lines.
The authors in [12] described research in progress to determine the minimum circulation of trains needed to execute a given timetable with given bounds on demand and capacities.

Train and crew scheduling is an NP-hard problem. Generally, the difficulties stem from a large set of complex and conflicting restrictions that any solution must satisfy. Most of these restrictions are reflected in a sizable number of operational conditions involving the trips of daily train timetables, train numbers, train capacities, crew numbers, and constraints related to time-shift, equilibrium, and work intensity. The authors in [13] proposed an algorithm based on local optimality criteria in the event of a potential crossing conflict to solve the train scheduling problem. A model designed to optimize train schedules on single-line rail corridors was described in [14]. In [15], a multiobjective optimization model was developed for the passenger train scheduling problem on a railroad network that includes single and multiple tracks, as well as multiple platforms with different train capacities. To minimize shortages in capacity during rush hours, the authors in [16] described a model for finding an optimal allocation of train types and subtypes to lines.

Meanwhile, various optimization models relating to many aspects of train and crew schedules in railways have been studied extensively. The column generation approach is an effective algorithm for solving these problems. For example, the authors in [17] developed a column generation approach for a rail crew rescheduling problem, and the authors in [18] presented a column generation approach based on a decomposition algorithm that achieves high-quality solutions at reasonable runtimes.

In recent years, a number of studies have paid more attention to heuristic algorithms for the train scheduling problem. An algorithm combining a compact assignment matrix with an operational time strategy was proposed in [19]. The authors in [20] developed two solution approaches based on a space-time network representation to operate a predetermined set of train duties while satisfying the strict day-off requirement for railway crews. On the premise of unfixed train-use sections, the authors in [21] built an optimized train operation and maintenance planning model, solved by an algorithm with a penalty function and a 3-opt neighborhood structure. A particle swarm optimization algorithm with a local search heuristic was presented in [22] to solve the crew scheduling problem.

In most of the above studies, the train scheduling model is built around factors such as train numbers, train capacities, and the interval times of trains. Moreover, the interval time often includes night waiting time, which allows a train to conduct the trip tasks of the following day when the shortest layover time is less than the interval time of two consecutive trips. In fact, most trains run on intercity lines with high frequency, much like bus transit, and the origin and destination stations are generally equipped with train bases. In this paper, night waiting time is therefore neglected, and the train scheduling problem is based solely on the train timetable of a single day.

This paper is organized as follows. In Section 2, the rail train scheduling problem and an optimization model that minimizes the total interval-time cost are described.
A tabu search algorithm is presented in Section 3. In Section 4, a numerical example illustrates the application of the model and algorithm. The last section draws conclusions and discusses future research directions.

## 2. Rail Train Scheduling Model

### 2.1. Problem Description

This paper considers train scheduling on a bidirectional intercity railway line with several stations. The location and number of trains available at each station are known. Every day, the trains are dispatched from designated stations to perform a set of trips. For each trip, the departure and arrival times and locations, determined by the train timetable, are also known. Figure 1 shows a simple train timetable with three stations and 10 trips.

Figure 1 A simple train timetable.

Train scheduling aims to assign a number of timetabled trips to a set of trains so as to minimize total train operation costs while satisfying a range of constraints, including labor union agreements, government regulations, and company policy. For clarity, the following assumptions are made.

(1) Any two consecutive trips assigned to a train must have compatible terminals. Deadhead trips are not considered in this paper, so the arrival location of a trip must be the same as the departure location of the next trip (e.g., Trips 1 and 6).

(2) Any two consecutive trips assigned to a train must be compatible in time. A lower bound, called the layover time, applies when a train arrives at a terminus; during this time, the train waits for passengers to alight and board, turns around, and so on. If the interval time between two trips is less than the shortest layover time, pairing the two trips is infeasible for any train. If the shortest layover time is 15 minutes, for example, Trips 4 and 5 are not compatible. (A sketch of this compatibility test is given after Figure 2.)

(3) The train scheduling problem is solved as a daily problem: every train schedule is obtained from a daily train timetable. For two consecutive trips, the departure time of the latter trip must not be earlier than the arrival time of the former trip (e.g., Trips 1 and 2).

(4) The number of required trains cannot exceed a prescribed maximum number of trains.

Figure 2 illustrates train schedules corresponding to the data of the train timetable in Figure 1. Three trains are arranged for 12 trips. Each row corresponds to the trip tasks of a train; for instance, the trip tasks of the first train (no. 1) form a chain C-A, A-C, C-B, B-C, and C-A. The columns correspond to the trips in the timetable. Notably, the number of columns differs between trains, and for any particular train, the departure station of the first trip and the arrival station of the last trip are not necessarily the same.

Figure 2 Illustration of train schedules.
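The following is a minimal sketch of the compatibility test implied by assumptions (1) and (2); the Trip record and all names are ours, and times are expressed in minutes after midnight:

```python
from typing import NamedTuple

T0 = 15  # shortest layover time (minutes), as assumed above

class Trip(NamedTuple):
    origin: str
    dest: str
    dep: int  # departure time, minutes after midnight
    arr: int  # arrival time, minutes after midnight

def compatible(i: Trip, j: Trip) -> bool:
    """Trip j may follow trip i on the same train iff the terminals match
    (assumption (1)) and the layover is at least T0 (assumption (2))."""
    return i.dest == j.origin and j.dep - i.arr >= T0

# a 33-minute trip followed by one departing 17 minutes after arrival
t1 = Trip("C", "A", 6 * 60 + 25, 6 * 60 + 58)
t2 = Trip("A", "C", 7 * 60 + 15, 7 * 60 + 48)
assert compatible(t1, t2)
```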
### 2.2. Objective Function

The train is an important and expensive piece of equipment in intercity railways. To complete the train timetable, the required number of trains should be as small as possible. Furthermore, the interval time of trains, a key factor in measuring circulation efficiency, can be described as follows: a train runs in a section and reaches an intermediate station in one trip; it operates and waits at the station and then departs for the following trip task. The interval time thus includes the operation time and the waiting time at the station. When the train arrives at and departs from a terminus, the interval time is called the turn-back time. Therefore, the objective of rail train scheduling is to determine the minimum number of trains required while ensuring the minimum cost of interval time.

Since the sum of the interval time and the running time of the trains is proportional to the number of trains, and the running time of each train is constant, minimizing the number of trains is equivalent to minimizing the total interval-time cost. In this paper, the objective is therefore to minimize the total interval time, formulated as

(1) min Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij·y_ijk,

where m is the number of trains required in a day, n is the number of trips provided by the daily train timetable, and c_ij is the interval time between trip i and trip j. y_ijk is a binary variable indicating the status of trips i and j on train k: y_ijk = 1 if trip j is the next trip after trip i conducted by train k, and y_ijk = 0 otherwise.

The interval time between any two trips in the timetable is determined by trip-related factors such as the origin station, the destination station, and the arrival and departure times at the two stations. For any two trips, if the destination station of the former trip differs from the origin station of the latter trip, or if the interval time is less than the shortest layover time T₀, then the two trips cannot be assigned to the same train; that is, they cannot satisfy the time-shift constraint. In this case, the interval time is set to a sufficiently large positive number M. If the two trips meet the time-shift constraint, the interval time is d_j − a_i. Therefore, the interval time c_ij between trips i and j is

(2) c_ij = d_j − a_i, if z_i = s_j and d_j − a_i ≥ T₀; c_ij = M, if z_i ≠ s_j, or if z_i = s_j and d_j − a_i < T₀,

where z_i is the destination station of trip i, s_j is the origin station of trip j, a_i is the arrival time of trip i at its destination station, and d_j is the departure time of trip j at its origin station. (A small sketch constructing c_ij is given after the constraints below.)

### 2.3. Constraints

(1) To ensure that each trip is conducted by exactly one train:
(3) Σ_{k=1}^{m} x_ik = 1, ∀i.

(2) The relationship between the decision variable x_ik and the auxiliary variable y_ijk is
(4) y_ijk = x_ik·x_jk, ∀i, j, k.

(3) Two consecutive trips of the same train must satisfy the time-shift constraint: the arrival time of the former trip at its destination station must precede the departure time of the latter trip at its origin station by at least T₀:
(5) I(d_j − a_i − T₀) ≥ y_ijk, ∀i, j, k,
where I(x) is the sign function
(6) I(x) = 1 if x ≥ 0, and I(x) = 0 if x < 0.

(4) To keep the running mileage of different trains balanced, the actual operation times of different trains must not differ significantly, since each train runs at essentially constant speed. The total operation time G_k of train k is
(7) G_k = Σ_{i=1}^{n} x_ik(a_i − d_i).
The difference between the maximum and minimum operation times must not exceed the maximum allowed deviation T₁ between any two trains:
(8) max_{k,k′} |G_k − G_k′| ≤ T₁.

(5) The decision variable x_ik indicates whether trip i is conducted by train k.
x_ik = 1 if trip i is conducted by train k, and x_ik = 0 otherwise; thus
(9) x_ik ∈ {0, 1}, ∀i, k.
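The interval-time matrix of equation (2) follows directly; here is a short sketch reusing the Trip record and T0 from the sketch in Section 2.1, with M standing in for the "infinite" penalty:

```python
M = 10**6  # the large penalty of equation (2) for incompatible trip pairs

def interval_matrix(trips):
    """Interval-time matrix c_ij of equation (2)."""
    n = len(trips)
    c = [[M] * n for _ in range(n)]
    for i, ti in enumerate(trips):
        for j, tj in enumerate(trips):
            if i != j and ti.dest == tj.origin and tj.dep - ti.arr >= T0:
                c[i][j] = tj.dep - ti.arr   # feasible pair: actual interval time
    return c
```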
## 3. Algorithm Design
The tabu search algorithm uses a neighborhood search procedure to move iteratively from one potential solution to an improved one until a stopping criterion is satisfied. Tabu search is a metaheuristic local search algorithm for combinatorial optimization problems. Its major advantages are simplicity, speed, and flexibility; since the scheduling model for rail train circulations in this paper is a complex zero-one programming problem, tabu search is well suited to it. The main components of the algorithm are designed as follows.

### 3.1. Expression of Solution

A two-dimensional integer array encodes a solution of the train scheduling problem: rows represent trains and columns represent trips, with trips numbered by departure time in ascending order. For example, for the train operation data in Figure 2, the trip chains are train 1: 1-6-7-8-9, train 2: 2-5-10, and train 3: 3-4-11-12. The expression of the solution is shown in Figure 3.

Figure 3 Expression of solution.

Based on the values in the two-dimensional array, decoding is the inverse of encoding. For example, Figure 3 contains 3 trains and 12 trips; the trips conducted by train 2 are 2, 5, and 10, so x_22 = 1, x_52 = 1, x_10,2 = 1, y_2,5,2 = 1, and y_5,10,2 = 1. The other trains are decoded in the same way.

### 3.2. Generation of Initial Solution

The initial solution is the starting point of the search; a good initial solution lets the algorithm reach a near-optimal solution quickly. The time-shift constraint must be satisfied while generating it. The procedure is as follows (a compact sketch follows Step 5).

Step 1 (initialization). Set P_k = ∅ (the set of trips to be conducted by train k) for all k. Let i = 1, k = 1, and a_0 = −T₀.

Step 2. Determine the train number k′ and the preceding trip number i′ for trip i. If P_k = ∅, let k′ = k and go to Step 4. Otherwise, set i′ = min{α_l | l ∈ {1, 2, …, k}}, where α_l = max{s | s ∈ P_l}, let k′ be the index l ∈ {1, 2, …, k} with α_l = i′, and go to Step 3.

Step 3. Verify the time-shift constraint. If d_i − a_{i′} ≥ T₀ and s_i = z_{i′}, go to Step 4. Otherwise, set k ← k + 1 and go to Step 2.

Step 4. Let P_{k′} = P_{k′} ∪ {i} and i ← i + 1; go to Step 5.

Step 5. If i > n, the algorithm ends and the results are obtained. Otherwise, go to Step 2.
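Here is a compact rendering of the greedy construction of Steps 1–5, reusing trips and compatible from the sketch in Section 2.1. It is a slight, clearly labeled variant: it tries every open train rather than only the one with the smallest last trip:

```python
def initial_solution(trips):
    """Greedy construction in the spirit of Steps 1-5 above."""
    order = sorted(range(len(trips)), key=lambda i: trips[i].dep)
    trains = []  # each train is a chain of trip indices
    for i in order:
        candidates = [k for k, chain in enumerate(trains)
                      if compatible(trips[chain[-1]], trips[i])]
        if candidates:
            # attach to the compatible train whose last trip arrives earliest
            k = min(candidates, key=lambda k: trips[trains[k][-1]].arr)
            trains[k].append(i)
        else:
            trains.append([i])   # open a new train
    return trains
```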
### 3.3. Neighborhood Structure

The neighborhood structure uses trip exchange and insert strategies between different trains. The exchange strategy works as follows: a single exchange point is selected on the trip chains of two parent solutions, and the trip numbers at that point are swapped to produce the children. For example, exchanging trip 7 of train 1 (1-6-7-8-9) with trip 5 of train 2 (2-5-10) yields a new solution in which the trip chain of train 1 becomes 1-6-5-8-9 and that of train 2 becomes 2-7-10, as shown in Figure 4.

Figure 4 Trips exchange strategy.

The insert strategy works as follows: two consecutive trips of train 1 are inserted into the trip chain of train 2, at the position determined by departure time in ascending order, yielding two new trip chains. For example, if trips 5 and 7 of train 1 (1-5-7-8-10) are inserted into the trip chain of train 2 (2-6-9), the trip chain of train 1 becomes 1-8-10 and that of train 2 becomes 2-5-7-6-9, as shown in Figure 5. An exchange or insert operation is rejected if the resulting trip chain of any train violates the time-shift constraint.

Figure 5 Trips insert strategy.

### 3.4. Evaluation of Solution

To guide the iterative search toward better solutions, each solution must be evaluated by computing the objective value while accounting for the constraints. Since the initial solution satisfies the time-shift constraint and every new solution generated by the neighborhood search preserves it, only the equilibrium constraint needs to be penalized. (A sketch follows at the end of this section.)

The parameter α is the penalty factor and is set to a large positive number. If a solution satisfies the equilibrium constraint, the fitness value equals the objective value; otherwise, the fitness value is much larger than the objective value, meaning the decision variables do not form a feasible solution. The fitness function is

(10) f = Z + α·max{max_{k,k′} |G_k − G_k′| − T₁, 0},

where Z = Σ_{k=1}^{m} Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij·y_ijk is the objective value.

### 3.5. Other Parameters

The tabu list records the transformed (exchange or insert) nodes, and the tabu tenure is fixed. The aspiration criterion is based on the evaluation value: a tabu move is accepted if it yields a solution better than the best found so far. The stopping criterion is based on the fitness value: if the best value does not improve after a given number of iterations, the algorithm stops.
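The penalized evaluation (10) and one neighborhood move can be sketched as follows; T1 and the penalty factor α take the values used in Section 4, the move helper is our naming, and the matrix c comes from interval_matrix above:

```python
import copy

T1, ALPHA = 90, 10_000   # equilibrium bound and penalty factor (Section 4)

def operation_time(chain, trips):
    return sum(trips[i].arr - trips[i].dep for i in chain)

def fitness(trains, trips, c):
    """Penalized evaluation (10): total interval time plus the weighted
    violation of the equilibrium constraint (8)."""
    Z = sum(c[ch[a]][ch[a + 1]] for ch in trains for a in range(len(ch) - 1))
    G = [operation_time(ch, trips) for ch in trains]
    return Z + ALPHA * max(max(G) - min(G) - T1, 0)

def exchange(trains, k1, p1, k2, p2):
    """Trips-exchange move: swap one trip between two trains' chains."""
    nb = copy.deepcopy(trains)
    nb[k1][p1], nb[k2][p2] = nb[k2][p2], nb[k1][p1]
    return nb
```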
## 4. Numerical Example
The Beijing–Tianjin intercity rail line is a major railway serving passengers who travel between the cities of Beijing and Tianjin in China. The line runs from Beijing South Railway Station to Tianjin Railway Station, has a total length of 119.4 km, and carries 148 timetabled trips (74 in each direction) every day. The main information about each trip, namely the origin and destination stations and the departure and arrival times, is presented in Table 1.

Table 1 Departure and arrival times of trips.

| No. | DT | AT | No. | DT | AT | No. | DT | AT | No. | DT | AT |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 6:25 | 6:58 | 38 | 10:40 | 11:13 | 75 | 14:40 | 15:13 | 112 | 18:25 | 18:58 |
| 2 | 6:30 | 7:03 | 39 | 10:45 | 11:23 | 76 | 14:25 | 14:58 | 113 | 18:30 | 19:08 |
| 3 | 6:40 | 7:18 | 40 | 10:45 | 11:18 | 77 | 14:50 | 15:23 | 114 | 18:35 | 19:08 |
| 4 | 6:45 | 7:23 | 41 | 11:00 | 11:33 | 78 | 14:35 | 15:13 | 115 | 18:50 | 19:28 |
| 5 | 7:10 | 7:43 | 42 | 10:55 | 11:28 | 79 | 15:00 | 15:33 | 116 | 19:05 | 19:38 |
| 6 | 7:05 | 7:38 | 43 | 11:20 | 11:53 | 80 | 14:45 | 15:18 | 117 | 19:00 | 19:33 |
| 7 | 7:20 | 7:58 | 44 | 11:10 | 11:48 | 81 | 15:05 | 15:43 | 118 | 19:15 | 19:53 |
| 8 | 7:25 | 8:03 | 45 | 11:25 | 11:58 | 82 | 14:55 | 15:33 | 119 | 19:10 | 19:43 |
| 9 | 7:40 | 8:13 | 46 | 11:25 | 11:58 | 83 | 15:15 | 15:48 | 120 | 19:30 | 20:03 |
| 10 | 7:35 | 8:08 | 47 | 11:30 | 12:08 | 84 | 15:15 | 15:48 | 121 | 19:30 | 20:03 |
| 11 | 7:55 | 8:28 | 48 | 11:35 | 12:08 | 85 | 15:20 | 15:53 | 122 | 19:40 | 20:13 |
| 12 | 7:45 | 8:18 | 49 | 11:50 | 12:23 | 86 | 15:25 | 15:58 | 123 | 19:40 | 20:18 |
| 13 | 8:00 | 8:38 | 50 | 11:55 | 12:33 | 87 | 15:35 | 16:13 | 124 | 19:55 | 20:28 |
| 14 | 7:55 | 8:33 | 51 | 12:00 | 12:33 | 88 | 15:35 | 16:13 | 125 | 20:05 | 20:38 |
| 15 | 8:10 | 8:43 | 52 | 12:20 | 12:53 | 89 | 15:50 | 16:23 | 126 | 20:10 | 20:43 |
| 16 | 8:20 | 8:53 | 53 | 12:20 | 12:53 | 90 | 15:50 | 16:23 | 127 | 20:15 | 20:48 |
| 17 | 8:25 | 9:03 | 54 | 12:30 | 13:03 | 91 | 15:55 | 16:28 | 128 | 20:30 | 21:03 |
| 18 | 8:30 | 9:08 | 55 | 12:25 | 12:58 | 92 | 16:05 | 16:43 | 129 | 20:20 | 20:53 |
| 19 | 8:40 | 9:13 | 56 | 12:40 | 13:18 | 93 | 16:05 | 16:38 | 130 | 20:40 | 21:18 |
| 20 | 8:45 | 9:18 | 57 | 12:45 | 13:23 | 94 | 16:15 | 16:48 | 131 | 20:30 | 21:08 |
| 21 | 8:45 | 9:18 | 58 | 12:55 | 13:28 | 95 | 16:15 | 16:48 | 132 | 20:50 | 21:23 |
| 22 | 9:00 | 9:38 | 59 | 13:10 | 13:43 | 96 | 16:20 | 16:53 | 133 | 20:55 | 21:28 |
| 23 | 9:05 | 9:38 | 60 | 13:05 | 13:38 | 97 | 16:30 | 17:08 | 134 | 21:00 | 21:33 |
| 24 | 9:20 | 9:53 | 61 | 13:20 | 13:53 | 98 | 16:30 | 17:03 | 135 | 21:05 | 21:38 |
| 25 | 9:10 | 9:43 | 62 | 13:10 | 13:43 | 99 | 16:45 | 17:18 | 136 | 21:15 | 21:53 |
| 26 | 9:30 | 10:03 | 63 | 13:25 | 13:58 | 100 | 16:40 | 17:18 | 137 | 21:20 | 21:53 |
| 27 | 9:25 | 10:03 | 64 | 13:15 | 13:48 | 101 | 17:05 | 17:43 | 138 | 21:30 | 22:03 |
| 28 | 9:40 | 10:13 | 65 | 13:50 | 14:28 | 102 | 17:00 | 17:38 | 139 | 21:35 | 22:08 |
| 29 | 9:35 | 10:08 | 66 | 13:30 | 14:03 | 103 | 17:25 | 17:58 | 140 | 21:40 | 22:13 |
| 30 | 9:50 | 10:28 | 67 | 14:05 | 14:38 | 104 | 17:20 | 17:53 | 141 | 21:50 | 22:23 |
| 31 | 9:55 | 10:28 | 68 | 13:40 | 14:13 | 105 | 17:35 | 18:13 | 142 | 21:55 | 22:28 |
| 32 | 10:00 | 10:33 | 69 | 14:10 | 14:43 | 106 | 17:25 | 17:58 | 143 | 22:00 | 22:33 |
| 33 | 10:10 | 10:48 | 70 | 13:45 | 14:23 | 107 | 17:50 | 18:23 | 144 | 22:10 | 22:43 |
| 34 | 10:10 | 10:43 | 71 | 14:25 | 15:03 | 108 | 17:40 | 18:18 | 145 | 22:15 | 22:48 |
| 35 | 10:20 | 10:53 | 72 | 14:05 | 14:38 | 109 | 18:00 | 18:38 | 146 | 22:25 | 22:58 |
| 36 | 10:25 | 11:03 | 73 | 14:35 | 15:08 | 110 | 18:05 | 18:43 | 147 | 22:45 | 23:18 |
| 37 | 10:35 | 11:08 | 74 | 14:15 | 14:48 | 111 | 18:15 | 18:48 | 148 | 23:00 | 23:33 |

Notes: “DT” stands for departure time and “AT” for arrival time. Trips with odd numbers run from Tianjin Station to Beijing Station, and trips with even numbers run from Beijing Station to Tianjin Station.

The shortest layover time for trains at the switchback station is 15 minutes, and the maximum deviation of operation time between any two trains is set to 90 minutes. The parameters of the tabu search algorithm are as follows: the tabu tenure is 6, the penalty factor is 10,000, and the number of iterations without improvement before stopping is 100. The optimal solution, computed by a VC++ program, is shown in Table 2; its objective value is 5,099 minutes.

Table 2 The optimal results for train scheduling.

| Train number | Trips | Operation time (min) |
|---|---|---|
| 01 | 1-16-31-46-61-76-91-106-121-136 | 335 |
| 02 | 2-17-32-47-62-77-96-107-122-137 | 340 |
| 03 | 3-18-33-48-63-78-93-108-123-138 | 360 |
| 04 | 4-19-38-49-64-85-100-109-126-139 | 345 |
| 05 | 5-20-35-50-65-80-95-110-125-140 | 345 |
| 06 | 6-21-42-51-66-81-92-111-130-141 | 345 |
| 07 | 7-22-37-52-67-82-97-112-127-142 | 350 |
| 08 | 8-23-34-53-72-83-98-113-134-143 | 340 |
| 09 | 9-24-39-54-69-84-99-114-129-144 | 335 |
| 10 | 10-25-40-55-70-79-94-115-128-145 | 340 |
| 11 | 11-26-41-56-71-86-101-116-131-146 | 350 |
| 12 | 12-29-44-59-74-89-104-117-124 | 302 |
| 13 | 13-28-43-58-73-88-103-118-133-148 | 345 |
| 14 | 14-27-36-57-68-87-102-119-132-147 | 360 |
| 15 | 15-30-45-60-75-90-105-120-135 | 307 |

Table 2 indicates that 15 trains are required in the two directions. The maximum train operation time is 360 minutes (trains 03 and 14), and the minimum is 302 minutes (train 12). The deviation between trains 03 and 12 is 58 minutes, which is less than the specified 90 minutes, so the equilibrium constraint is satisfied.
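As a quick check of the equilibrium constraint, the spread of the operation times reported in Table 2 can be recomputed directly:

```python
# Operation times copied from Table 2 (trains 01-15)
op_times = [335, 340, 360, 345, 345, 345, 350, 340, 335, 340, 350, 302, 345, 360, 307]
spread = max(op_times) - min(op_times)
assert spread == 58 <= 90   # the 58-minute deviation satisfies the 90-minute bound
```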
## 5. Conclusions

In this paper, we present an optimized scheduling method for rail train circulations on a bidirectional intercity railway line. A binary integer programming model that minimizes the train fleet and the total interval time characterizes the scheduling process. To obtain a practical schedule, the time-shift and equilibrium constraints are also considered, and a tabu search algorithm is developed to solve the proposed model.

Computational results on the Beijing–Tianjin trip data show that the algorithm produces high-quality train scheduling solutions. Furthermore, the method applies broadly to rail train circulations characterized by high-density trips and large fleets. Because trip numbers in the two directions are unbalanced during the peak periods of an intercity railway line, future research may focus on rail train circulations with a deadhead strategy.

--- *Source: 102346-2013-12-09.xml*
102346-2013-12-09_102346-2013-12-09.md
37,180
Optimizing Schedules of Rail Train Circulations by Tabu Search Algorithm
Mingming Chen; Huimin Niu
Mathematical Problems in Engineering (2013)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2013/102346
102346-2013-12-09.xml
--- ## Abstract This paper develops an integer programming model for the scheduling problem in train circulations on an intercity rail line. The model that aims to minimize the sum of interval time for any two consecutive tasks is proposed to characterize the train operation process. Two main constraints, namely, time-shift and equilibrium constraint, are considered to get the feasible and practical solution of train schedules. A heuristic procedure using tabu search algorithm is also designed to solve the model by introducing the penalty function and a neighborhood search method with the trip exchange and insert strategy. A computational experiment performed on test instances provided by two major stations on the Beijing–Tianjin Intercity Railway in China illustrates the proposed model and algorithm. --- ## Body ## 1. Introduction Transit scheduling problem is a major area in operations research because of the complexity of problems that arise from various transit modes, such as airlines, railways, maritime, and urban transit. Vehicle scheduling and crew scheduling are two main problems that arise in this area. Generally, these two problems are considered separately, where the first is the vehicle scheduling problem and the second is the crew scheduling problem [1–3].Recently, most scholars have focused on the two problems simultaneously. In [4], a single depot case with a homogeneous fleet of vehicles was considered and an exact approach was proposed to solve the simultaneous vehicle and crew scheduling problem in urban mass transit systems. An integrated approach to solve a vehicle scheduling problem and a crew scheduling problem on a single bus route was presented in [5]. An integrated vehicle and crew scheduling problem was described using an integer linear programming formulation combining a multicommodity network flow model with a set partitioning/covering model in [6]. An approach was presented to solve the bus crew scheduling problem that considers early, day, and late duty modes with time-shift and work intensity constraints in [7]. The authors in [8] proposed an integrated vehicle-crew-roster model with days-off pattern, which aimed to simultaneously determine minimum cost vehicle and daily crew schedules.In the field of rail transit, train and crew scheduling problems are key steps in the rail operational process. The train scheduling problem involves assigning trains to a set of trips generated by a train timetable. The crew scheduling problem involves assigning crews to trains that operate at a given schedule. The authors in [9] proposed a phase-regular scheduling method and applied a regular train-departing interval and the same train length for each period under the period-dependent demand conditions. In [10], a binary integer programming model incorporated with passenger loading and departure events was built to optimize the passenger train timetable in a heavily congested urban rail corridor. The authors in [11] established an optimization model based on maximum passenger satisfaction for train operations at the junction station in passenger dedicated lines. The authors in [12] described research in progress that would determine the minimum circulation of trains needed to execute a given timetable with given bounds on demand and capacities.Train and crew scheduling is an NP hard problem. Generally, the difficulties stem from a large set of complex and conflicting restrictions that must be satisfied by any solution. 
Most of these restrictions are reflected in a sizable number of operational conditions that involve trips in daily train timetables, train numbers, train capacities, crew numbers, and certain constraints related to time-shift, equilibrium, and work intensity. The authors in [13] proposed an algorithm which was based on local optimality criteria in the event of a potential crossing conflict to solve the train scheduling problem. A model designed to optimize train schedules on single line rail corridors was described in [14]. In [15], a multiobjective optimization model was developed for the passenger train-scheduling problem on a railroad network which included single and multiple tracks, as well as multiple platforms with different train capacities. To minimize shortages in capacity during rush hours, the authors in [16] described a model that could be used to find an optimal allocation of train types and subtypes for lines.Meanwhile, various optimization models that relate to many aspects of train and crew schedules in railways are being studied extensively. The column generation approach is an effective algorithm for solving these problems. For example, the authors in [17] developed a column generation approach for a rail crew rescheduling problem. The authors in [18] presented a column generation approach based on the decomposition algorithm, which would achieve high-quality solutions at reasonable runtimes.In recent years, a number of studies have paid more attention to developing a heuristic algorithm for the train scheduling problem. An algorithm that combined a compact assign and a matrix, as well as an operational time strategy was proposed in [19]. The authors in [20] developed two-solution approaches based on a space-time network representation that would operate a predetermined set of train duties to satisfy the strict day-off requirement for crew in railways. On the premise of unfixed train used sections, the authors in [21] built an optimized train operation and maintenance planning model using an algorithm with a penalty function and a 3-opt neighborhood structure to solve the model. A particle swarm optimization algorithm with a local search heuristic was presented to solve the crew scheduling problem in [22].For an overview of the above papers, most studies on constructing the train scheduling model have been paid to the factors associated with train numbers, train capacities, interval time of trains, and so on. Moreover, interval time often includes night waiting time, which allows a train to conduct the trip tasks for the following day when the shortest layover time is less than the interval time of two consecutive trips. In fact, most trains run on the intercity line with high frequency, which is highly similar to bus transit. The origin and destination stations are generally equipped with train bases. In this paper, the factor of night waiting time is neglected, and the train scheduling problem is merely based on the train timetables in one day.This paper is organized as follows. In Section2, the rail train scheduling problem and an optimization model that minimizes the total interval time cost are described. A tabu search algorithm is presented in Section 3. In Section 4, a numerical example is provided to illustrate the application of the model and algorithm. The last section draws conclusions and discusses future research directions. ## 2. Rail Train Scheduling Model ### 2.1. 
### 2.1. Problem Description

This paper considers train scheduling on a bidirectional intercity railway line with several stations. The location and number of trains available at each station are known. Every day, trains are dispatched from designated stations to perform a set of trips. For each trip, the departure and arrival times and locations, determined by the train timetable, are also known. Figure 1 shows a simple train timetable with three stations and 12 trips.

Figure 1: A simple train timetable.

Train scheduling aims to assign a number of timetabled trips to a set of trains with the objective of minimizing total train operation costs while satisfying a range of constraints, including labor union agreements, government regulations, and company policy. For clarity, the following assumptions are made.

(1) Any two consecutive trips assigned to a train must have compatible terminals. Deadhead trips are not considered in this paper, so the arrival location of a trip must be the same as the departure location of the next trip (e.g., Trips 1 and 6).

(2) Any two consecutive trips assigned to a train must be compatible in time. A lower bound, called the layover time, applies whenever a train arrives at a terminus; during this time the train waits for passengers to alight and board, turns around, and so on. If the interval time between two trips is shorter than the shortest layover time, the pair cannot be assigned to any train. If the shortest layover time is 15 minutes, for instance, then Trips 4 and 5 are not compatible.

(3) The train scheduling problem is solved as a daily problem in which every train schedule is derived from a daily train timetable. For two consecutive trips, the departure time of the latter trip must not be earlier than the arrival time of the former trip (e.g., Trips 1 and 2).

(4) The number of trains used cannot exceed the prescribed maximum number of trains.

Figure 2 illustrates train schedules corresponding to the timetable data in Figure 1. Three trains are arranged for 12 trips. Each row corresponds to the trip tasks of one train; for instance, the trip tasks of the first train (no. 1) form a chain C-A, A-C, C-B, B-C, C-A. The columns correspond to the trips in the timetable; note that the number of columns differs from train to train. For the trip tasks of a particular train, the departure station of the first trip and the arrival station of the last trip are not necessarily the same.

Figure 2: Illustration of train schedules.
### 2.2. Objective Function

Trains are among the most expensive pieces of equipment used on intercity railways, so a timetable should be covered by as few trains as possible. Furthermore, the interval time of trains is a key factor in measuring circulation efficiency. It can be described as follows: a train runs in a section and reaches an intermediate station at the end of one trip; it is serviced and waits at the station, and then departs for the following trip task. The interval time comprises this operation time and waiting time at the station. When a train arrives at and departs from a terminus, the interval time is called the turn-back time. The objective of rail train scheduling is therefore to use the minimum number of trains while minimizing the cost of interval time.

Since the total of interval time and running time grows with the number of trains in use, and the total running time is fixed by the timetable, minimizing the number of trains is equivalent to minimizing the total interval time. In this paper, the objective is to minimize the total interval time:

$$\min \sum_{k=1}^{m} \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} \, y_{ijk}, \qquad (1)$$

where $m$ is the number of trains required in a day, $n$ is the number of trips provided by the train timetable in a day, and $c_{ij}$ is the interval time between trip $i$ and trip $j$. The binary variable $y_{ijk}$ indicates whether trips $i$ and $j$ are conducted consecutively by train $k$: $y_{ijk}=1$ if trip $j$ is the next trip after trip $i$ on train $k$, and $y_{ijk}=0$ otherwise.

The interval time between any two trips is determined by trip attributes such as the origin station, the destination station, and the arrival and departure times at the two stations. For any two trips, if the destination station of the former differs from the origin station of the latter, or if the interval time is less than the shortest layover time $T_0$, then the two trips cannot be assigned to the same train because they violate the time-shift constraint; in this case the interval time is set to a sufficiently large positive number $M$. If the pair satisfies the time-shift constraint, the interval time equals $d_j - a_i$. Hence the interval time $c_{ij}$ between trips $i$ and $j$ is

$$c_{ij} =
\begin{cases}
d_j - a_i, & z_i = s_j \text{ and } d_j - a_i \ge T_0, \\
M, & z_i \ne s_j \text{ or } \left(z_i = s_j \text{ and } d_j - a_i < T_0\right),
\end{cases} \qquad (2)$$

where $z_i$ is the destination station of trip $i$, $s_j$ is the origin station of trip $j$, $a_i$ is the arrival time of trip $i$ at its destination station, and $d_j$ is the departure time of trip $j$ at its origin station.

### 2.3. Constraints

(1) Each trip must be conducted by exactly one train:

$$\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i. \qquad (3)$$

(2) The relationship between the decision variable $x_{ik}$ and the auxiliary variable $y_{ijk}$ is

$$y_{ijk} = x_{ik} \cdot x_{jk}, \quad \forall i, j, k. \qquad (4)$$

(3) Two consecutive trips conducted by the same train must satisfy the time-shift constraint: the arrival time at which the former trip ends at its destination station must precede the departure time at which the latter trip starts at its origin station, with a difference of at least $T_0$:

$$I\left(d_j - a_i - T_0\right) \ge y_{ijk}, \quad \forall i, j, k, \qquad (5)$$

where $I(x)$ is the indicator function

$$I(x) =
\begin{cases}
1, & x \ge 0, \\
0, & x < 0.
\end{cases} \qquad (6)$$

(4) To balance the running mileage among trains, the actual operation times of different trains should not vary significantly, since every train runs at roughly the same speed. The total operation time $G_k$ of train $k$ is

$$G_k = \sum_{i=1}^{n} x_{ik} \left(a_i - d_i\right). \qquad (7)$$

The difference between the operation times of any two trains must therefore not exceed a prescribed bound $T_1$:

$$\max_{k, k'} \left| G_k - G_{k'} \right| \le T_1. \qquad (8)$$

(5) The decision variable $x_{ik}$ indicates whether trip $i$ is conducted by train $k$: $x_{ik}=1$ if trip $i$ is conducted by train $k$, and $x_{ik}=0$ otherwise. Thus

$$x_{ik} \in \{0, 1\}, \quad \forall i, k. \qquad (9)$$
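To make formula (2) concrete, the following Python sketch computes the interval time of an ordered pair of trips. The `Trip` class, its field names, the minutes-after-midnight time encoding, and the value chosen for `M` are illustrative assumptions, not part of the paper's formulation.

```python
from dataclasses import dataclass

T0 = 15        # shortest layover time in minutes (the value used in Section 4)
M = 10**6      # assumed large constant marking an incompatible trip pair

@dataclass
class Trip:
    s: str     # origin station s_i
    z: str     # destination station z_i
    d: int     # departure time d_i, in minutes after midnight
    a: int     # arrival time a_i, in minutes after midnight

def interval_time(i: Trip, j: Trip) -> int:
    """c_ij of formula (2): d_j - a_i when trip j can follow trip i
    on the same train, and M otherwise."""
    if i.z == j.s and j.d - i.a >= T0:
        return j.d - i.a
    return M   # mismatched terminals, or layover shorter than T0

# Example with Table 1 data: trip 1 (6:25 -> 6:58, Tianjin to Beijing)
# followed by trip 8 (7:25 -> 8:03, Beijing to Tianjin).
t1 = Trip(s="Tianjin", z="Beijing", d=6 * 60 + 25, a=6 * 60 + 58)
t8 = Trip(s="Beijing", z="Tianjin", d=7 * 60 + 25, a=8 * 60 + 3)
print(interval_time(t1, t8))   # 27 minutes
```

In a full implementation, these pairwise values would populate the matrix $c$ used by objective (1).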
## 3. Algorithm Design

Tabu search is a metaheuristic local search algorithm for combinatorial optimization problems. It uses a neighborhood search procedure to move iteratively from one potential solution to an improved one until a stopping criterion is satisfied. Its main advantages are simplicity, speed, and flexibility. Since the scheduling model for rail train circulations in this paper is a complex zero-one programming problem, tabu search is well suited to solving it. The main components of the algorithm are designed as follows.

### 3.1. Expression of Solution

A two-dimensional integer array encoding is used for the train scheduling problem. Rows represent trains, and columns represent trips; trips are numbered in ascending order of departure time. For example, for the train operation data in Figure 2, the trip chains of the trains are: train 1: 1-6-7-8-9, train 2: 2-5-10, and train 3: 3-4-11-12. The expression of the solution is shown in Figure 3.

Figure 3: Expression of the solution.

Decoding is the inverse of encoding, based on the values in the two-dimensional array. For example, Figure 3 contains 3 trains and 12 trips. The trips conducted by train 2 are 2, 5, and 10, so the corresponding variables are $x_{2,2}=1$, $x_{5,2}=1$, $x_{10,2}=1$, $y_{2,5,2}=1$, and $y_{5,10,2}=1$. The other trains are decoded in the same way.

### 3.2. Generation of Initial Solution

The initial solution is the starting point of the search; a good initial solution enables the algorithm to reach a high-quality solution quickly. The initial solution must satisfy the time-shift constraint. The construction procedure is as follows; a code sketch is given after the steps.

Step 1 (initialization). Set $P_k = \emptyset$ for all $k$, where $P_k$ is the set of trips conducted by train $k$. Let $i = 1$, $k = 1$, and $a_0 = -T_0$.

Step 2. Determine the train number $k'$ and its last trip number $i'$ for trip $i$. If $P_k = \emptyset$, let $k' = k$ and go to Step 4. Otherwise, let $i' = \min\{\alpha_l \mid l \in \{1, 2, \dots, k\}\}$ and $k' = \{l \mid i' = \alpha_l,\ l \in \{1, 2, \dots, k\}\}$, where $\alpha_l = \max\{s \mid s \in P_l\}$, and go to Step 3.

Step 3. Verify the time-shift constraint. If $d_i - a_{i'} \ge T_0$ and $s_i = z_{i'}$, go to Step 4. Otherwise, set $k \leftarrow k + 1$ and go to Step 2.

Step 4. Let $P_{k'} = P_{k'} \cup \{i\}$ and $i \leftarrow i + 1$; go to Step 5.

Step 5. If $i > n$, the algorithm ends and the results are obtained. Otherwise, go to Step 2.
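The steps above can be sketched directly in Python. This is a minimal reading of the procedure, reusing the hypothetical `Trip` fields from the earlier sketch: each trip, taken in departure-time order, is appended to the train whose last trip has the smallest number; if that train fails the check in Step 3, a new train is opened, mirroring the loop between Steps 2 and 3.

```python
def initial_solution(trips, T0=15):
    """Greedy construction of Section 3.2 (a sketch, assuming `trips`
    is a list of Trip objects sorted by departure time, as in the
    encoding of Section 3.1). Returns a list of trip chains P_k,
    each a list of 0-based trip indices."""
    trains = []                                   # Step 1: no open trains yet
    for i, trip in enumerate(trips):
        if trains:
            # Step 2: train k' whose last trip number alpha_l is minimal
            kp = min(range(len(trains)), key=lambda k: trains[k][-1])
            last = trips[trains[kp][-1]]
            # Step 3: time-shift and terminal compatibility
            if trip.d - last.a >= T0 and trip.s == last.z:
                trains[kp].append(i)              # Step 4: extend chain P_k'
                continue
        trains.append([i])                        # open a new train (k <- k + 1)
    return trains                                 # Step 5: all n trips assigned
```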
### 3.3. Neighborhood Structure

The neighborhood structure uses trip exchange and insert strategies between different trains. The exchange strategy works as follows: one exchange point is selected on each of two trip chains, and the trip numbers at those points are swapped, producing two new chains. For example, when trip 7 of train 1 (1-6-7-8-9) is exchanged with trip 5 of train 2 (2-5-10), the trip chain of train 1 becomes 1-6-5-8-9 and that of train 2 becomes 2-7-10, as shown in Figure 4.

Figure 4: Trip exchange strategy.

The insert strategy works as follows: two consecutive trips from train 1 are inserted into the trip chain of train 2, with the insertion point determined by the ascending order of departure times in the chain of train 2, yielding two new trip chains. For example, if trips 5 and 7 of train 1 (1-5-7-8-10) are inserted into the trip chain of train 2 (2-6-9), the trip chain of train 1 becomes 1-8-10 and that of train 2 becomes 2-5-7-6-9, as shown in Figure 5. An exchange or insert operation is rejected if either resulting trip chain violates the time-shift constraint.

Figure 5: Trip insert strategy.
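A minimal sketch of the two moves follows, again using the hypothetical `Trip` fields from the earlier sketches. The feasibility check and the placement-by-departure-time rule are as stated in this section; treating the number of moved trips as a parameter is an assumption made for generality. A candidate move is kept only if both resulting chains pass `chain_feasible`.

```python
def chain_feasible(chain, trips, T0=15):
    """Time-shift check for a whole trip chain: consecutive trips must
    have matching terminals and a layover of at least T0 minutes."""
    return all(
        trips[j].d - trips[i].a >= T0 and trips[i].z == trips[j].s
        for i, j in zip(chain, chain[1:])
    )

def exchange_move(chain1, chain2, p1, p2):
    """Exchange strategy: swap the trips at positions p1 and p2
    between two trip chains, returning two new chains."""
    c1, c2 = chain1[:], chain2[:]
    c1[p1], c2[p2] = c2[p2], c1[p1]
    return c1, c2

def insert_move(chain1, chain2, p, trips, count=2):
    """Insert strategy: move `count` consecutive trips starting at
    position p of chain1 into chain2, placed by departure time."""
    c1, c2 = chain1[:], chain2[:]
    moved = c1[p:p + count]
    del c1[p:p + count]
    pos = next((q for q, j in enumerate(c2) if trips[j].d > trips[moved[0]].d),
               len(c2))
    c2[pos:pos] = moved
    return c1, c2
```

With the Figure 5 data, `insert_move([1,5,7,8,10], [2,6,9], 1, trips)` (0-based trip indices) reproduces the chains 1-8-10 and 2-5-7-6-9.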
### 3.4. Evaluation of Solution

To search for better solutions during the iterative process, each solution must be evaluated by computing the objective value while accounting for the constraints. Since the initial solution satisfies the time-shift constraint and every new solution generated in the neighborhood search preserves it, only the equilibrium constraint needs to be enforced through a penalty.

The parameter $\alpha$ serves as the penalty factor and is set to a large positive number. If a solution satisfies the equilibrium constraint, the fitness value equals the objective value; otherwise, the fitness value becomes much larger than the objective value, indicating that the corresponding assignment of decision variables is infeasible. The fitness function is

$$f = Z + \alpha \cdot \max\left\{ \max_{k, k'} \left| G_k - G_{k'} \right| - T_1,\ 0 \right\}, \qquad (10)$$

where $Z = \sum_{k=1}^{m} \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} \, y_{ijk}$ is the value of the objective function.

### 3.5. Other Parameters

The tabu list records the transformed (exchanged or inserted) nodes, and the tabu tenure is fixed. The aspiration criterion is based on the evaluation value: a tabu move is accepted if it yields a solution better than the best one found so far. The stopping criterion is based on the fitness value: if the best value does not improve after a given number of iterations, the algorithm terminates.
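Putting the pieces together, the following sketch implements the fitness function (10) and a compact version of the search loop with the tabu list, aspiration criterion, and stopping rule described above. It builds on the earlier sketches (`exchange_move`, `chain_feasible`); restricting the neighborhood to exchange moves and recording the swapped trip pair as the tabu node are simplifying assumptions. Here `c` is the interval-time matrix of formula (2).

```python
import itertools

def fitness(trains, trips, c, alpha=10_000, T1=90):
    """Fitness function (10): total interval time Z plus alpha times
    the violation of the equilibrium constraint (8)."""
    Z = sum(c[ch[i]][ch[i + 1]] for ch in trains for i in range(len(ch) - 1))
    G = [sum(trips[t].a - trips[t].d for t in ch) for ch in trains]
    return Z + alpha * max(max(G) - min(G) - T1, 0)

def neighbors(trains, trips, T0=15):
    """Feasible exchange neighbors (insert moves handled analogously)."""
    for k1, k2 in itertools.combinations(range(len(trains)), 2):
        for p1 in range(len(trains[k1])):
            for p2 in range(len(trains[k2])):
                c1, c2 = exchange_move(trains[k1], trains[k2], p1, p2)
                if chain_feasible(c1, trips, T0) and chain_feasible(c2, trips, T0):
                    move = (trains[k1][p1], trains[k2][p2])  # swapped trip pair
                    cand = [ch[:] for ch in trains]
                    cand[k1], cand[k2] = c1, c2
                    yield move, cand

def tabu_search(start, trips, c, tabu_len=6, patience=100):
    """Tabu loop with the aspiration and stopping rules of Section 3.5."""
    best, best_f = start, fitness(start, trips, c)
    cur, tabu, stall, it = start, {}, 0, 0
    while stall < patience:
        it += 1
        cands = [(fitness(nb, trips, c), move, nb)
                 for move, nb in neighbors(cur, trips)
                 if tabu.get(move, 0) < it                # move is not tabu ...
                 or fitness(nb, trips, c) < best_f]       # ... or aspiration applies
        if not cands:
            break
        f, move, cur = min(cands, key=lambda t: t[0])
        tabu[move] = it + tabu_len                        # forbid this move node
        if f < best_f:
            best, best_f, stall = cur, f, 0
        else:
            stall += 1
    return best, best_f
```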
## 4. Numerical Example

The Beijing-Tianjin intercity rail line is a major railway serving passengers traveling between Beijing and Tianjin in China. The line starts at Beijing South Railway Station and ends at Tianjin Railway Station; it is 119.4 km long and carries 74 trips in each direction (148 trips in total) every day. The main information about each trip, such as origin and destination stations and departure and arrival times, is presented in Table 1.

Table 1: Departure and arrival times of trips.

| No. | DT | AT | No. | DT | AT | No. | DT | AT | No. | DT | AT |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 6:25 | 6:58 | 38 | 10:40 | 11:13 | 75 | 14:40 | 15:13 | 112 | 18:25 | 18:58 |
| 2 | 6:30 | 7:03 | 39 | 10:45 | 11:23 | 76 | 14:25 | 14:58 | 113 | 18:30 | 19:08 |
| 3 | 6:40 | 7:18 | 40 | 10:45 | 11:18 | 77 | 14:50 | 15:23 | 114 | 18:35 | 19:08 |
| 4 | 6:45 | 7:23 | 41 | 11:00 | 11:33 | 78 | 14:35 | 15:13 | 115 | 18:50 | 19:28 |
| 5 | 7:10 | 7:43 | 42 | 10:55 | 11:28 | 79 | 15:00 | 15:33 | 116 | 19:05 | 19:38 |
| 6 | 7:05 | 7:38 | 43 | 11:20 | 11:53 | 80 | 14:45 | 15:18 | 117 | 19:00 | 19:33 |
| 7 | 7:20 | 7:58 | 44 | 11:10 | 11:48 | 81 | 15:05 | 15:43 | 118 | 19:15 | 19:53 |
| 8 | 7:25 | 8:03 | 45 | 11:25 | 11:58 | 82 | 14:55 | 15:33 | 119 | 19:10 | 19:43 |
| 9 | 7:40 | 8:13 | 46 | 11:25 | 11:58 | 83 | 15:15 | 15:48 | 120 | 19:30 | 20:03 |
| 10 | 7:35 | 8:08 | 47 | 11:30 | 12:08 | 84 | 15:15 | 15:48 | 121 | 19:30 | 20:03 |
| 11 | 7:55 | 8:28 | 48 | 11:35 | 12:08 | 85 | 15:20 | 15:53 | 122 | 19:40 | 20:13 |
| 12 | 7:45 | 8:18 | 49 | 11:50 | 12:23 | 86 | 15:25 | 15:58 | 123 | 19:40 | 20:18 |
| 13 | 8:00 | 8:38 | 50 | 11:55 | 12:33 | 87 | 15:35 | 16:13 | 124 | 19:55 | 20:28 |
| 14 | 7:55 | 8:33 | 51 | 12:00 | 12:33 | 88 | 15:35 | 16:13 | 125 | 20:05 | 20:38 |
| 15 | 8:10 | 8:43 | 52 | 12:20 | 12:53 | 89 | 15:50 | 16:23 | 126 | 20:10 | 20:43 |
| 16 | 8:20 | 8:53 | 53 | 12:20 | 12:53 | 90 | 15:50 | 16:23 | 127 | 20:15 | 20:48 |
| 17 | 8:25 | 9:03 | 54 | 12:30 | 13:03 | 91 | 15:55 | 16:28 | 128 | 20:30 | 21:03 |
| 18 | 8:30 | 9:08 | 55 | 12:25 | 12:58 | 92 | 16:05 | 16:43 | 129 | 20:20 | 20:53 |
| 19 | 8:40 | 9:13 | 56 | 12:40 | 13:18 | 93 | 16:05 | 16:38 | 130 | 20:40 | 21:18 |
| 20 | 8:45 | 9:18 | 57 | 12:45 | 13:23 | 94 | 16:15 | 16:48 | 131 | 20:30 | 21:08 |
| 21 | 8:45 | 9:18 | 58 | 12:55 | 13:28 | 95 | 16:15 | 16:48 | 132 | 20:50 | 21:23 |
| 22 | 9:00 | 9:38 | 59 | 13:10 | 13:43 | 96 | 16:20 | 16:53 | 133 | 20:55 | 21:28 |
| 23 | 9:05 | 9:38 | 60 | 13:05 | 13:38 | 97 | 16:30 | 17:08 | 134 | 21:00 | 21:33 |
| 24 | 9:20 | 9:53 | 61 | 13:20 | 13:53 | 98 | 16:30 | 17:03 | 135 | 21:05 | 21:38 |
| 25 | 9:10 | 9:43 | 62 | 13:10 | 13:43 | 99 | 16:45 | 17:18 | 136 | 21:15 | 21:53 |
| 26 | 9:30 | 10:03 | 63 | 13:25 | 13:58 | 100 | 16:40 | 17:18 | 137 | 21:20 | 21:53 |
| 27 | 9:25 | 10:03 | 64 | 13:15 | 13:48 | 101 | 17:05 | 17:43 | 138 | 21:30 | 22:03 |
| 28 | 9:40 | 10:13 | 65 | 13:50 | 14:28 | 102 | 17:00 | 17:38 | 139 | 21:35 | 22:08 |
| 29 | 9:35 | 10:08 | 66 | 13:30 | 14:03 | 103 | 17:25 | 17:58 | 140 | 21:40 | 22:13 |
| 30 | 9:50 | 10:28 | 67 | 14:05 | 14:38 | 104 | 17:20 | 17:53 | 141 | 21:50 | 22:23 |
| 31 | 9:55 | 10:28 | 68 | 13:40 | 14:13 | 105 | 17:35 | 18:13 | 142 | 21:55 | 22:28 |
| 32 | 10:00 | 10:33 | 69 | 14:10 | 14:43 | 106 | 17:25 | 17:58 | 143 | 22:00 | 22:33 |
| 33 | 10:10 | 10:48 | 70 | 13:45 | 14:23 | 107 | 17:50 | 18:23 | 144 | 22:10 | 22:43 |
| 34 | 10:10 | 10:43 | 71 | 14:25 | 15:03 | 108 | 17:40 | 18:18 | 145 | 22:15 | 22:48 |
| 35 | 10:20 | 10:53 | 72 | 14:05 | 14:38 | 109 | 18:00 | 18:38 | 146 | 22:25 | 22:58 |
| 36 | 10:25 | 11:03 | 73 | 14:35 | 15:08 | 110 | 18:05 | 18:43 | 147 | 22:45 | 23:18 |
| 37 | 10:35 | 11:08 | 74 | 14:15 | 14:48 | 111 | 18:15 | 18:48 | 148 | 23:00 | 23:33 |

Notes: "DT" stands for departure time and "AT" stands for arrival time. Trips with odd numbers run from Tianjin Station to Beijing Station, and trips with even numbers run from Beijing Station to Tianjin Station.

The shortest layover time for trains at a turn-back station is 15 minutes, and the maximum allowed deviation in operation time between any two trains is 90 minutes. The tabu search parameters are as follows: the tabu tenure is 6, the penalty factor is 10,000, and the number of iterations allowed without improvement is 100.
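As an illustration of how the sketches fit together with the parameter values just listed, the following hypothetical driver wires them to trip data in the form of Table 1; it is not the VC++ program used in the paper, and `trips` must be supplied by the caller.

```python
def run_example(trips):
    """Hypothetical driver using the earlier sketches with the
    Section 4 parameters (T0 = 15, T1 = 90, alpha = 10,000,
    tabu tenure 6, patience 100)."""
    n = len(trips)
    # Interval-time matrix c_ij of formula (2).
    c = [[interval_time(trips[i], trips[j]) for j in range(n)] for i in range(n)]
    start = initial_solution(trips, T0=15)
    best, best_f = tabu_search(start, trips, c, tabu_len=6, patience=100)
    print(len(best), "trains, fitness", best_f)
    return best
```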
The solution, computed by a program written in VC++, is shown in Table 2; the objective value of the best solution found is 5,099 minutes.

Table 2: Optimal results for train scheduling.

| Train no. | Trips | Operation time (min) |
|---|---|---|
| 01 | 1-16-31-46-61-76-91-106-121-136 | 335 |
| 02 | 2-17-32-47-62-77-96-107-122-137 | 340 |
| 03 | 3-18-33-48-63-78-93-108-123-138 | 360 |
| 04 | 4-19-38-49-64-85-100-109-126-139 | 345 |
| 05 | 5-20-35-50-65-80-95-110-125-140 | 345 |
| 06 | 6-21-42-51-66-81-92-111-130-141 | 345 |
| 07 | 7-22-37-52-67-82-97-112-127-142 | 350 |
| 08 | 8-23-34-53-72-83-98-113-134-143 | 340 |
| 09 | 9-24-39-54-69-84-99-114-129-144 | 335 |
| 10 | 10-25-40-55-70-79-94-115-128-145 | 340 |
| 11 | 11-26-41-56-71-86-101-116-131-146 | 350 |
| 12 | 12-29-44-59-74-89-104-117-124 | 302 |
| 13 | 13-28-43-58-73-88-103-118-133-148 | 345 |
| 14 | 14-27-36-57-68-87-102-119-132-147 | 360 |
| 15 | 15-30-45-60-75-90-105-120-135 | 307 |

Table 2 indicates that 15 trains are required for the two directions. The maximum train operation time is 360 minutes (trains 03 and 14), and the minimum is 302 minutes (train 12). The deviation between trains 03 and 12 is 58 minutes, which is less than the specified bound of 90 minutes, so the equilibrium constraint is satisfied.

## 5. Conclusions

This paper presents an optimized scheduling method for rail train circulations on a bidirectional intercity railway line. A binary integer programming model is proposed that minimizes the train fleet size and the total interval time, thereby capturing the scheduling process. To produce practical schedules, time-shift and equilibrium constraints are also imposed. A tabu search algorithm is developed to solve the proposed model.

Computational results on trip data from the Beijing-Tianjin line show that the algorithm produces high-quality train scheduling solutions. The method can be applied broadly to rail train circulations characterized by high-frequency trips and large fleets. Because trip numbers in the two directions are unbalanced during the peak periods of an intercity railway line, future research may focus on rail train circulations with a deadheading strategy.

---

*Source: 102346-2013-12-09.xml*